[MUD-Dev] Persistent storage.... My current idea.

Chris Gray cg at ami-cg.GraySage.Edmonton.AB.CA
Sat Apr 4 11:33:42 New Zealand Daylight Time 1998

[Cynbe, as forwarded by Chris L:]

:Meanwhile, I keep a cache buffer in ram of recently accessed objects:
:I'm not comfortable with the time and space overhead of LRU management,
:so instead I manage them essentially as a FIFO queue that objects
:cycle through once loaded into cache:  They are loaded at one end
:and deleted at the other.  Since deleted objects will still be in the
:OS file system cache in general, frequently used objects will get
:'read' back in very quickly, giving (I hope, no performance studies
:yet) much of the performance of an LRU system in practice, without
:much of the overhead.
:I allocate a (very) carefully tuned hashtable about 1/8 the size of
:the ram buffer to map abstract pointers to current addresses: This
:mapping takes about 15 risc cycles, which may sound like a lot
:compared to single-cycle access via a hard pointer, but (given a
:little intelligent coding, like looking up a string address just
:once when copying it, not once per byte) turns out according to gprof
:to be consuming only about 5% of my CPU time when Muq is running: I
:feel this is a very reasonable price to pay for disk-based
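
The FIFO scheme Cynbe describes can be sketched roughly as follows (a
Python stand-in with illustrative names, not Muq's actual code): objects
are appended at one end and evicted from the other, with no reordering
on access, so a hit costs nothing in bookkeeping.

```python
from collections import deque

class FifoCache:
    def __init__(self, slots):
        self.slots = slots
        self.queue = deque()       # oldest object id at the left end

    def touch(self, obj_id):
        """Return True on a hit; on a miss, 'load' the object,
        evicting the oldest entry if the cache is full."""
        if obj_id in self.queue:
            return True            # hit: no LRU-style reshuffle
        if len(self.queue) == self.slots:
            self.queue.popleft()   # evict the oldest, FIFO-style
        self.queue.append(obj_id)  # a real system would read from disk here
        return False

cache = FifoCache(slots=4)
for obj in (1, 2, 3, 4):
    cache.touch(obj)
print(cache.touch(1))   # True: still cached
cache.touch(5)          # evicts 1, the oldest, even though just touched
print(cache.touch(1))   # False: must be reloaded
```

Note the last two lines: a frequently used object can still be evicted,
which is exactly where Cynbe is betting on the OS file cache to make the
reload cheap.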

If you keep an index table in memory all the time, then there will also
be no more than one disk read to get any DB entry. This is all somewhat
similar to what I do in AmigaMUD, but different enough that I think it's
time (it being a weekend also gives me the time) to give an overview of my setup.

I made a very early decision which many people would disagree with, but
which has worked out not badly. That decision was to encode the type of
an object in the references to that object, rather than in the object
itself or in fields beside the reference. So, my 32 bit reference
consists of a six bit type code in the upper bits, and a 26 bit index
number in the rest. That limits me to 64 million DB objects, which is
enough for now. The big advantage of this scheme is that all "values"
are the same size: 32 bit ints, fixed-point numbers, in-memory pointers,
and typed DB references all take the same 32 bits. This allowed me to
have a 'value' union, which can be used to contain anything needed. So,
for example, each local variable occupies 4 bytes, regardless of its type.
In many situations the type field is redundant (since my in-MUD language
is strongly typed), but in others it isn't.
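
The packing described above can be sketched like this (the type code 5
below is an arbitrary illustration; the real type assignments are
AmigaMUD's own):

```python
TYPE_BITS = 6
INDEX_BITS = 26
INDEX_MASK = (1 << INDEX_BITS) - 1    # low 26 bits: the DB index number

def make_ref(type_code, index):
    """Pack a 6-bit type code and a 26-bit index into one 32-bit value."""
    assert 0 <= type_code < (1 << TYPE_BITS) and 0 <= index <= INDEX_MASK
    return (type_code << INDEX_BITS) | index

def ref_type(ref):
    return ref >> INDEX_BITS          # type travels with the reference

def ref_index(ref):
    return ref & INDEX_MASK           # index into the DB's index file

r = make_ref(5, 123456)               # type code 5 is hypothetical
print(hex(r), ref_type(r), ref_index(r))
```

Since every such reference, like every int or pointer, fits in 32 bits,
a single tagged word (the 'value' union) can hold any of them.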

I use two disk files for the DB. One is the index file, and one is the
raw data file. After a few header words, the index file is an array of small
structs, containing a 16 bit slot length, a 16 bit 'used' length, and
a 32 bit file offset. By having the full size of this struct be a power
of two, indexing it as an array was fast on my early target machines,
which didn't have a 32 bit hardware multiply. The 26 bit index number of
a DB reference indexes this index file, which is kept in RAM at all times.
That yields the file offset of the actual data. The high bit of the
file offset is set if the file slot referenced by this index entry is
currently free. Having the 'used' length present allows me to find space
in the in-memory cache that is no larger than actually needed, and also
reduces the actual disk-read length to the minimum. Having the slot
length present sometimes allows entries to be replaced in-place if they
don't grow too much. An in-memory table contains pointers to some of the
free file slots, which allows best-fit allocation from among that set.
It is replenished by occasionally scanning the entire index, and freed
DB slots are put into the table so that they are re-used preferentially.
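
The 8-byte index entry might look like this (byte order, field order,
and the header size are assumptions for illustration, not AmigaMUD's
actual file layout):

```python
import struct

# 16-bit slot length, 16-bit used length, 32-bit file offset.
ENTRY = struct.Struct("<HHI")        # 8 bytes: a power of two
FREE_BIT = 0x80000000                # high bit of the offset marks a free slot

def pack_entry(slot_len, used_len, offset, free=False):
    return ENTRY.pack(slot_len, used_len, offset | (FREE_BIT if free else 0))

def unpack_entry(raw):
    slot_len, used_len, word = ENTRY.unpack(raw)
    return slot_len, used_len, word & (FREE_BIT - 1), bool(word & FREE_BIT)

def entry_file_position(db_index, header_bytes=8):
    # Power-of-two entry size: indexing is a shift, not a multiply,
    # which mattered on machines without 32 bit hardware multiply.
    return header_bytes + (db_index << 3)

raw = pack_entry(512, 300, 0x1000)
print(unpack_entry(raw))             # -> (512, 300, 4096, False)
```

The separate slot and used lengths are what enable both the minimal
disk reads and the occasional in-place replacement mentioned above.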

Backing up the actual database file is an in-memory non-write-through
cache. It is maintained in LRU order via a linked list (so the
LRU maintenance is quite cheap). There is also a list of free cache
slots. The cache is flushed to disk as a whole when top-level MUD code
requests it, or when the DB code cannot find a free or non-dirty cache
slot that is big enough for a current request.
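
A rough Python stand-in for that policy (class and method names are mine,
and an OrderedDict plays the role of the linked list): dirty entries are
written back only on a flush, and a flush of the whole cache happens when
no clean victim can be found.

```python
from collections import OrderedDict

class WriteBackCache:
    def __init__(self, capacity, write_fn):
        self.slots = OrderedDict()     # key -> (value, dirty); oldest first
        self.capacity = capacity
        self.write_fn = write_fn       # writes one dirty entry to disk

    def get(self, key):
        value, dirty = self.slots.pop(key)   # unlink (assumes key resident)...
        self.slots[key] = (value, dirty)     # ...relink at the MRU end
        return value

    def put(self, key, value, dirty=True):
        if key in self.slots:
            del self.slots[key]
        elif len(self.slots) >= self.capacity:
            self._evict_one()
        self.slots[key] = (value, dirty)

    def _evict_one(self):
        for k, (v, d) in self.slots.items():     # oldest first
            if not d:
                del self.slots[k]                # cheap: clean, just drop it
                return
        self.flush()                             # all dirty: flush everything,
        del self.slots[next(iter(self.slots))]   # then evict the LRU entry

    def flush(self):
        for k, (v, d) in self.slots.items():
            if d:
                self.write_fn(k, v)
                self.slots[k] = (v, False)       # now clean

writes = []
c = WriteBackCache(2, lambda k, v: writes.append(k))
c.put("a", 1)
c.put("b", 2)
c.put("c", 3)           # no clean victim: flushes a and b, evicts a
print(writes)           # -> ['a', 'b']
```

The real system also sizes slots by entry length (via the 'used' length
in the index), which this sketch omits.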

Providing a quick short-circuit to the LRU mechanism is a map table,
indexed by the DB index modulo the table size. The vast majority of
DB lookups hit in this mapping table. Such lookups probably take on
the order of 30 instructions, including some statistics code and the
LRU maintenance.
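
The fast path amounts to one modulo, one compare, and a return,
something like this sketch (table size and names are illustrative):

```python
MAP_SIZE = 1024                  # illustrative; the real size is tuned

map_table = [None] * MAP_SIZE    # each entry: (db_index, cache_slot)

def map_lookup(db_index):
    entry = map_table[db_index % MAP_SIZE]
    if entry is not None and entry[0] == db_index:
        return entry[1]          # fast path: most lookups end here
    return None                  # fall back to the LRU list, then disk

map_table[123 % MAP_SIZE] = (123, "slot-for-123")
print(map_lookup(123))               # hit
print(map_lookup(123 + MAP_SIZE))    # same bucket, different index: miss
```

Only one entry lives in each bucket, so a colliding index simply misses
and takes the slower path; no chaining is needed on the fast path.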

The system also has some higher-level caches, such as for MUD-code
routines. I use a reference-counting scheme to know when DB entries can
be freed, so there is potentially a lot of DB-writing to modify them. To
counter this, I have another mapping table, the entries of which store
the current ref-count, along with the DB's idea of the ref-count. These
mappings are also non-write-through, thus eliminating most of the DB-writes
for ref-count fiddles. As an example, consider what happens when something
being carried by a player is dropped. In my system, it is added to the
contents list of the current location, and removed from the list of things
being carried by the player. This is two ref-count changes, but they
cancel out, a situation handled nicely by the ref-count mapping table,
resulting in no dirtying of the DB entry for the item itself.
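
The drop example can be sketched like this (a Python illustration with
hypothetical names; the real table is the modulo-indexed kind described
above): each entry tracks both the live count and the count the on-disk
DB believes, and only entries where the two differ are ever written back.

```python
class RefCounts:
    def __init__(self):
        self.counts = {}   # db_index -> [current, db_known]

    def adjust(self, db_index, delta, db_value):
        # db_value is the DB's stored count, consulted on first touch
        entry = self.counts.setdefault(db_index, [db_value, db_value])
        entry[0] += delta

    def flush(self, write_fn):
        """Write back only entries whose counts actually differ."""
        for idx, (current, known) in self.counts.items():
            if current != known:
                write_fn(idx, current)
                self.counts[idx] = [current, current]

rc = RefCounts()
rc.adjust(42, +1, db_value=3)    # item added to the room's contents list
rc.adjust(42, -1, db_value=3)    # item removed from the player's inventory
changed = []
rc.flush(lambda idx, count: changed.append(idx))
print(changed)                   # -> [] : the changes cancelled, no DB write
```

The two adjustments cancel, so the item's DB entry is never dirtied,
which is the whole point of keeping the counts out of line.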

Whew! That's likely far more detail than anyone wanted!

Chris Gray   cg at ami-cg.GraySage.Edmonton.AB.CA

More information about the MUD-Dev mailing list