orion at pixi.com
Thu Aug 14 01:23:27 New Zealand Standard Time 1997
On Wednesday, August 13, 1997, clawrenc at cup.hp.com wrote:
> In <199708080243.QAA24348 at mail.pixi.com>, on 08/07/97
> at 09:38 PM, "Dan Armstrong" <orion at pixi.com> said:
> >I would like to finally introduce myself.
> Welcome. (Tardy I know)
> >For storing objects I use a tree that is twenty levels high, based on
> >the following pattern. Each level in the tree, except for the last,
> >holds four smaller pieces of the tree. If any of those four are
> >null, then there aren't any objects in the area covered by that
> >piece. The last level in the tree is either null if nothing is
> >there, or points to the head object of a linked list of every item
> >that is at that location.
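The quoted tree layout can be sketched roughly as follows (a minimal Python sketch; the class and method names are my own, and a plain list stands in for the linked list at each leaf):

```python
LEVELS = 20  # depth of the tree; coordinates are 20 bits each

class Node:
    __slots__ = ("children",)
    def __init__(self):
        # Four smaller pieces of the tree; None means nothing in that area.
        self.children = [None, None, None, None]

class World:
    def __init__(self):
        self.root = Node()

    def insert(self, x, y, obj):
        node = self.root
        for level in range(LEVELS - 1, 0, -1):
            # One bit of x and one bit of y pick a quadrant at each level.
            q = (((x >> level) & 1) << 1) | ((y >> level) & 1)
            if node.children[q] is None:
                node.children[q] = Node()
            node = node.children[q]
        q = ((x & 1) << 1) | (y & 1)
        if node.children[q] is None:
            node.children[q] = []  # last level: list of objects here
        node.children[q].append(obj)

    def objects_at(self, x, y):
        node = self.root
        for level in range(LEVELS - 1, 0, -1):
            q = (((x >> level) & 1) << 1) | ((y >> level) & 1)
            node = node.children[q]
            if node is None:
                return []  # a null branch: nothing in this area
        leaf = node.children[((x & 1) << 1) | (y & 1)]
        return leaf or []
```

Open areas cost nothing here, since empty branches stay null all the way down, which is presumably what compresses so well in the database.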
> What do you do when Bubba dumps 50,000 individual pieces of gold,
> pebbles etc all at the same location? Have a list 50,000 items long?
Gold already clumps together. It would be stored as one object of gold
containing 50,000 pieces. I will code other small objects which don't
need any individuality in a similar manner. In the event that Bubba
does manage to gather 50,000 distinct objects in one place, then I
will have a linked list of 50,000 items to go through.
I am considering having a maximum number of objects at any given
location. Maybe I could make it so that if you stack too much stuff in
one place, it falls to the sides. In any event, I would like to see Bubba
gather enough objects to cause mischief. If he does, the database
as it is will suffer.
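The clumping idea might look something like this sketch (the FUNGIBLE set and the (kind, count) pair representation are invented for illustration, not taken from the post):

```python
# Kinds of object that need no individuality and can merge into one stack.
FUNGIBLE = {"gold", "pebble"}

def drop(items_here, kind, count=1):
    """Add `count` of `kind` to the (kind, count) pairs at one location."""
    if kind in FUNGIBLE:
        for i, (k, n) in enumerate(items_here):
            if k == kind:
                items_here[i] = (k, n + count)  # merge into existing stack
                return
    # Non-fungible objects keep their individuality as separate entries.
    items_here.append((kind, count))
```

With this, Bubba's 50,000 gold pieces are one list entry, and only genuinely distinct objects lengthen the list.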
> >Open areas are compressed well by the database, as are groups of
> >objects. I've discovered ways of efficiently handling sound and
> >finding all objects within a given area, but what I am currently
> >working on is figuring out how the look command will work.
> >Every Living has a minimum size of object that they can see, a
> >maximum distance they can see, and a height for their point of view.
> Ergo Bubba can't see mount Kraktoa erupting if it is just beyond his
> vision range? Wouldn't it make more sense to have the CanISeeIt()
> function use a perspective scale such that the size of object/event
> available to be seen ranges from very tiny close up to huge at great
> distances?
I'm trying to decide on a way to reduce the area of searching for objects
that Bubba might be able to see. I was thinking of the sight distance
as being a horizon for the look processing.
> >If I am three feet tall, and you are six, you will be able to see
> >over a five foot wall, while I cannot. If you lift me up, then I can
> >see over. If a dragon is flying 100 meters in the air, and I am
> >three feet tall, stepping right up to the wall I will not see the
> >dragon flying. If I then take a couple steps back from the wall, I
> >will see the dragon flying beyond the wall.
> How do you determine the fact of visual obstruction?
> How about the case (mapped from overhead):
>
>    X          Y
>      #########
>           Z
>
> The #'s are an infinitely high opaque wall.
> Z is a dragon.
> X is a player who should be able to just see a fragment of the
> dragon (not all) about the end of the wall.
> Y can't see the dragon at all as the wall hides it.
> The dragon can just see X's head.
> The dragon can't see Y.
> Now how about if the wall is translucent:
> Q can see both X and Y dimly thru the wall.
> Z of course can see Q.
> X and Y can't see Q.
> Next up is transparent.
I am trying not to be too exact. I should be able to get the effects of
hiding behind objects without determining how much of the object is
visible. For simplicity, I am saying that if any part of an object is
visible, then the whole object is visible.
I haven't even considered transparency. If I were to support it, I would
have it be a characteristic of the object which determines whether it
blocks vision of what is behind it. Similar to invisibility, except the
object itself is still seen.
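The wall-and-dragon example quoted earlier reduces to a similar-triangles check. This sketch is my own (not the server's code), and assumes all arguments use consistent units, e.g. feet:

```python
def can_see_over(eye_height, wall_dist, wall_height,
                 target_dist, target_height):
    """True if a straight sightline from the eye to the target clears
    the top of a wall standing between them."""
    if target_dist <= wall_dist:
        return True  # target is in front of the wall
    # Height of the eye-to-target line where it crosses the wall.
    line_at_wall = (eye_height
                    + (target_height - eye_height) * wall_dist / target_dist)
    return line_at_wall >= wall_height
```

Stepping back from the wall increases wall_dist relative to target_dist, which raises the sightline at the wall; that is exactly why backing up lets the three-foot viewer see the flying dragon.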
> >The processing of objects and their positions is not a big problem...
> Curious: How do you do your range generation? How do you determine
> what objects to exclude on the basis of visual interference? Ray
The actual coordinates of objects are not stored on the objects
themselves; instead, they are calculated from the database. While
searching from largest to smallest contained squares, the routines
which find the objects keep track of the positions of the objects
found. At that point, there is a collection of objects and their
current coordinates. From there it is a little Pythagoras, and the
range is done.
If the absolute coordinate of a single object is desired, it can be
determined by looking back up from smallest to largest containing
squares. This is rather easy, because each step up determines one
bit each of the x and y coordinates, ending up with full 20-bit
coordinates.
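Under an assumed quadrant encoding (high bit of the quadrant index for x, low bit for y; the post does not specify one), the walk through the containing squares and the Pythagoras step could look like:

```python
import math

def path_to_coords(quadrants):
    """quadrants: 20 quadrant indices (0-3), root first.
    Each index contributes one bit of x and one bit of y."""
    x = y = 0
    for q in quadrants:
        x = (x << 1) | (q >> 1)  # high bit of the quadrant -> next x bit
        y = (y << 1) | (q & 1)   # low bit of the quadrant -> next y bit
    return x, y

def grid_range(a, b):
    """'A little Pythagoras': straight-line distance between two points."""
    (ax, ay), (bx, by) = a, b
    return math.hypot(ax - bx, ay - by)
```

Twenty quadrant choices yield the full 20-bit coordinate pair, so positions never need to be stored on the objects at all.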
To determine which objects to exclude due to visual interference, I am
taking the approach that the player's vision is a single plane. I start
with the closest objects and work my way outward, marking off the
area of the player's plane of vision that is used. All objects will
be handled rectangularly, for calculation's sake. An object will be
seen if at least one point of the player's vision plane references
that object. The definition of the vision plane will determine how
much processing will be required and how exact the representation
of the objects will be.
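Collapsing the vision plane to one dimension of angular intervals gives one possible sketch of this near-to-far sweep (the names and the interval representation are my own, not the poster's):

```python
def visible_objects(objects):
    """objects: (name, distance, lo_angle, hi_angle, opaque) tuples.
    Returns the names of objects with any part still uncovered,
    processing nearest objects first."""
    covered = []  # (lo, hi) angle intervals already blocked
    seen = []
    for name, dist, lo, hi, opaque in sorted(objects, key=lambda o: o[1]):
        if any_uncovered(lo, hi, covered):
            seen.append(name)  # any visible part => whole object is seen
        if opaque:
            covered.append((lo, hi))  # it now blocks what lies behind it
    return seen

def any_uncovered(lo, hi, covered):
    """True if [lo, hi] is not wholly inside the union of covered spans."""
    gaps = [(lo, hi)]
    for clo, chi in covered:
        gaps = [g for seg in gaps for g in clip(seg, clo, chi)]
    return any(h > l for l, h in gaps)

def clip(seg, clo, chi):
    """Subtract the covered span [clo, chi] from segment seg."""
    lo, hi = seg
    out = []
    if lo < clo:
        out.append((lo, min(hi, clo)))
    if hi > chi:
        out.append((max(lo, chi), hi))
    return out
```

This matches the stated simplification: if any point of the plane still references an object, the whole object is reported as visible.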
I intend for the client to do the work of converting objects and
positions into intelligent English, but I need to do some checking
server-side in order to not send too much information across the
network.
> >...but how to give the player an intelligent description of what they
> >see is difficult. I could simply list everything from closest to
> >farthest, but that would not be very intelligent.
> Order the list of ranged objects by range, group them with the list by
> proximity, group the proximate groups by angular visual proximity (ie
> direction), order the resultant groups by interest level (volcanoes
> are interesting, fleas farting are not), cut off all groups beneath a
> minimum interest level, generate text moving from the highest interest
> group on down.
I believe that John G has already posted mentioning that we might use
slightly different font sizes to represent the importance of the
information. In any event, things will need a measure of importance,
so a player can type look and see something like:
You see a wide river running down the side of a snow covered
mountain. A small rodent is nibbling on some acorns, a bumblebee
is flying near a flower and a giant spider is scurrying towards you.
Every aspect of looking is still being thought out. Nothing has been put
to code or set in stone. I am still looking for the approach that will
lead to a playable game: one which is not too slow or too detailed in
listing objects, but does not leave out relevant information.
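The interest-ordering that was suggested above (minus the proximity and direction grouping) can be sketched as follows, with a made-up threshold and made-up interest scores:

```python
MIN_INTEREST = 3  # invented cutoff: anything below this goes unmentioned

def describe(sightings):
    """sightings: (name, range, interest) tuples. Returns the names worth
    mentioning, most interesting first, nearer things breaking ties."""
    worth = [s for s in sightings if s[2] >= MIN_INTEREST]
    worth.sort(key=lambda s: (-s[2], s[1]))  # interest down, range up
    return [name for name, _, _ in worth]
```

Text generation would then walk this list from the top, so volcanoes lead the description and farting fleas never reach the player.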