[MUD-Dev] Re: MUD Development Digest

J C Lawrence claw at under.engr.sgi.com
Wed Apr 22 11:40:15 New Zealand Standard Time 1998


On Sat, 4 Apr 1998 07:04:14 PST8PDT 
Cat <cat at bga.com> wrote:

> I presume the comments about disk-based muds running faster than
> memory-based ones are including the tacit assumption that one is
> talking about muds that don't have the option of running on a
> machine with a large surplus of RAM?  

Yes, but only just.

Aside: I've done a lot of work on large RAM machines (the machine I'm
typing on now has 256Meg RAM, which is piddley compared to the 32Gig
RAM sitting behind me).

Even given a technical surplus of RAM (ie physical RAM is larger than
the total working set of the system), many systems will continue to
swap.  Why?  File/disk caching, long-idle memory pages, etc -- the OS
typically thinks it knows better than mere applications where best
performance lies.  Yes, this can be turned off in a variety of
fashions.  The two most common means are to mark particular memory
pages as non-swappable (not supported by all OSes), or to instruct
the entire OS not to swap (scalability and load problems).
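The first approach -- pinning pages -- can be sketched with the POSIX
mlockall() call (a minimal sketch; the function name pin_process_memory
is my own, and the call needs appropriate privilege, so failure is
reported rather than treated as fatal):

```c
/* Sketch: pin the whole process image so the OS won't page it out.
 * Requires privilege (root, or CAP_IPC_LOCK on modern Linux); on
 * failure we just report and carry on. */
#include <stdio.h>
#include <sys/mman.h>

int pin_process_memory(void)
{
    /* MCL_CURRENT pins pages already mapped; MCL_FUTURE also pins
     * any mappings made later (heap growth, stacks, mmaps). */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");    /* typically EPERM without privilege */
        return -1;
    }
    return 0;
}
```

Not all OSes of the day support this; where they don't, the only lever
left is the system-wide one, with the scalability costs noted above.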

However, a disk-based DB with intelligent caching can still
out-perform an all-in-RAM system in a few cases, highly dependent on
working set characteristics (no, this ain't common, but I've seen it
in practice).  The base cause is still the same: memory page faults
-- except that this time the expense is not disk/swap IO, but memory
model context.  (The heap fragmentation you referred to later is a
prime form.)  A well-written caching system will keep all cached items
proximate in memory, thus minimising the memory context
shifts (highly dependent on CPU and MMU memory architecture).
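The "proximate in memory" point is the key trick.  A sketch of the
idea (my own hypothetical design, not any particular server's code):
keep all cached objects in one contiguous slab of fixed-size slots, so
walking the hot set stays inside a small, cache/TLB-friendly region
instead of chasing pointers across a fragmented heap.  Eviction here
is a trivial round-robin clock; a real cache would do better.

```c
/* Sketch: a cache whose entries all live in one contiguous slab. */
#include <stdlib.h>
#include <string.h>

#define SLOT_SIZE 256      /* payload bytes per cached object      */
#define NSLOTS    1024     /* one 256KB slab holds the hot set     */

typedef struct {
    long key;              /* object id; -1 marks an empty slot    */
    char data[SLOT_SIZE];
} slot_t;

typedef struct {
    slot_t *slab;          /* single contiguous allocation         */
    size_t  hand;          /* next eviction victim (clock hand)    */
} cache_t;

cache_t *cache_new(void)
{
    cache_t *c = malloc(sizeof *c);
    c->slab = malloc(NSLOTS * sizeof(slot_t));
    for (size_t i = 0; i < NSLOTS; i++)
        c->slab[i].key = -1;
    c->hand = 0;
    return c;
}

/* Look up an object; NULL on miss.  (Linear scan for brevity --
 * a real cache would hash the key to a slot.) */
char *cache_get(cache_t *c, long key)
{
    for (size_t i = 0; i < NSLOTS; i++)
        if (c->slab[i].key == key)
            return c->slab[i].data;
    return NULL;
}

/* Install an object, evicting whatever the clock hand points at. */
char *cache_put(cache_t *c, long key, const char *data, size_t len)
{
    slot_t *s = &c->slab[c->hand];
    c->hand = (c->hand + 1) % NSLOTS;
    s->key = key;
    size_t n = len < SLOT_SIZE ? len : SLOT_SIZE;
    memcpy(s->data, data, n);
    return s->data;
}
```

The point is not the lookup strategy but the layout: every pointer
returned lands inside the same slab, so the working set never
fragments no matter how long the server runs.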

> I run a commercial project,
> and it seems the only option for really optimal performance is to
> make sure you have enough RAM to keep everything in memory that you
> need.  

<nod> 

In the general case I doubt that DB<->server IO is the likely
bottleneck, especially given expectable 'net bandwidths to the
consumer in the next 5 years.  Physical IO to the machine is much more
likely to be the real plague.  (What!  You want X,000+ active socket
connections with good bandwidth to all?  Get an AS/400 or Sys360....It
still amazes me that commercial MUD offerings are even attempting to
build on PC hardware given its atrocious IO hardware).

It's probably a safe prediction in a commercial setting that worlds and
world databases will grow large.  How large is LambdaMOO's core image
now?  (I have no idea).  Add a fully rendered world of many times that
size and I would have no problem seeing the core image exceeding
100Gig, and RAM remains expensive at those levels -- especially for a
startup.

> I also don't do a bunch of dynamic stuff - I prefer to do all
> mallocs and loading of maps and objects at startup, and keep it
> there.  

Sooth.  Custom heap managers are your friend.  My own heap manager is
based in spirit on LambdaMOO's:

  I pre-allocate large pools of memory blocks of sizes of various
powers of 2.  All later allocation requests are rounded up to the next
larger or matching power of 2, and a free block from that pool is
handed back.  Minimum block size is 64 bytes (I think, could be 512).
Pools start by doubling their size when they run out of space, until
they reach a total of 1024 allocations (actually it's per-pool
configurable), and then proceed to grow at a rate of 1024 blocks per
growth spurt (also configurable).
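A minimal sketch of that scheme (names like pool_alloc/pool_free are
my own, and this is a bare-bones reading of the description above --
no thread safety, no slab bookkeeping for teardown):

```c
/* Power-of-2 pool allocator: 64-byte minimum block, pools double
 * until 1024 blocks, then grow 1024 blocks per spurt. */
#include <stdlib.h>

#define MIN_SHIFT   6          /* 2^6  = 64-byte minimum block     */
#define MAX_SHIFT   20         /* 2^20 = largest pooled size       */
#define NPOOLS      (MAX_SHIFT - MIN_SHIFT + 1)
#define LINEAR_GROW 1024       /* blocks per spurt after the cap   */

typedef union block {
    union block *next;         /* free-list link while unallocated */
    char payload[1];
} block_t;

typedef struct {
    block_t *free_list;
    size_t   nblocks;          /* total blocks carved for this pool */
} pool_t;

static pool_t pools[NPOOLS];

/* Round a request up to the pool index of the next power of 2. */
static int pool_index(size_t size)
{
    int shift = MIN_SHIFT;
    while (((size_t)1 << shift) < size && shift <= MAX_SHIFT)
        shift++;
    return (shift > MAX_SHIFT) ? -1 : shift - MIN_SHIFT;
}

/* Carve a fresh slab of blocks and thread them onto the free list. */
static int pool_grow(int idx)
{
    size_t bsize = (size_t)1 << (idx + MIN_SHIFT);
    size_t count = pools[idx].nblocks == 0 ? 8
                 : pools[idx].nblocks < LINEAR_GROW ? pools[idx].nblocks
                 : LINEAR_GROW;        /* double, then +1024 per spurt */
    char *slab = malloc(bsize * count);
    if (!slab)
        return -1;
    for (size_t i = 0; i < count; i++) {
        block_t *b = (block_t *)(slab + i * bsize);
        b->next = pools[idx].free_list;
        pools[idx].free_list = b;
    }
    pools[idx].nblocks += count;
    return 0;
}

void *pool_alloc(size_t size)
{
    int idx = pool_index(size);
    if (idx < 0)
        return malloc(size);   /* oversize: fall through to libc  */
    if (!pools[idx].free_list && pool_grow(idx) != 0)
        return NULL;
    block_t *b = pools[idx].free_list;
    pools[idx].free_list = b->next;
    return b;
}

void pool_free(void *p, size_t size)
{
    int idx = pool_index(size);
    if (idx < 0) { free(p); return; }
    block_t *b = p;
    b->next = pools[idx].free_list;
    pools[idx].free_list = b;
}
```

The win is that alloc/free become a couple of pointer operations, and
blocks of a given size class stay clustered in the slabs they were
carved from, rather than scattering across the general heap.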

> Anyway, my server can currently handle over 150 people quite well
> with a memory footprint of under 32 megabytes.  

I would have expected world size to be the greater memory consumer.
Users merely consume IO.  How large is your world in terms of the time
required for a user to traverse its greatest dimension?  How detailed
is your world in terms of LOD?

> I was planning
> to set up a separate process that does all disk writes, and have the
> other processes dump data to it and then go on about their business.

Not a bad idea, as it allows the maintenance of logical consistency
for the permanent storage to be extracted from, and no longer be
dependent (per se) on, the main server.

> But a friend of mine told me about how they eliminated their
> disk-writing bottlenecks when they purchased a RAID array for the
> server, which essentially does the same caching of writes into RAM,
> only it's done for you, in hardware.

Be careful here.  All RAID is not created equal.  All RAID adaptors
and controllers are not created equal.  OS drivers for RAID hardware
(where applicable) are also often very unequal.

Some permutations:

  RAID in standalone hardware.  Good examples are HP's NIKE (Data
General) and Edison boxes.  To the outside world (ie your machine) it's
just a honking big disk, or a collection of disks.  All the RAID
smarts are done in the disk tower, configured via serial terminal or
LCD panel.

  RAID in the SCSI adaptor.  DTK make the fastest, bar none, RAID SCSI
adaptors for PC's (cf SmartRAID).  Their SmartCache cards are no
slouches either.  In this model, configuration of the card or its
drivers via software on the host is responsible for the RAID model.
Outside the card it's just a bunch of dumb disks.  DTK's OS/2 drivers
positively scream.  I assume their MS drivers are of comparable
quality.  The Linux, *BSD, UnixWare and SCO drivers however are so
poor as to be pathetic.  Buslogic's RAID cards, which otherwise don't
compare to DTK's, far outperform DTK under Linux due to driver
quality.

  RAID in software.  You use a dumb SCSI card, and dumb disks, and
then use intelligence in the SCSI drivers to do RAID via software on
the host.  Linux supports this fairly well.  There are obvious
performance and stability penalties.

Cost generally decreases as you descend the above list.

There are also various forms of RAID, each with its own performance
characteristics, from RAID 0 (simple striping, which offers the
fastest possible read/write speeds but no redundancy) and RAID 1
(mirroring, which gives fast reads at the cost of writing everything
twice), to RAID 5 (striping with parity), which tends to give fair
read speeds but very poor write speeds.  Other RAID levels have other
characteristics.

Caching can be thrown at the problem at any level, from caches on the
physical drive  (the HP C3010 drives I'm trying to shift now (got 100
of them) have 256K of dual ported cache on the actual drive) to cache
on the RAID box (a typical NIKE will have 32Meg of cache or more),
cache on the SCSI adaptor (DTK cards can carry up to 64Meg of cache),
or OS/driver level caching using main system RAM.

<chortle> I spent 18 months at HP working on their HA products, RAID
arrays (NIKE, Edison, Icicle etc), EMC towers (anybody want 40
terabytes in a box the size of a 'fridge?) etc.

> I realize that purchasing a RAID array isn't an option for most
> hobby projects.  

RAID is getting cheaper.  DTK continues to make the fastest RAID (and
non-RAID) cards for PC's, bar none (SmartRAID and SmartCache
respectively).  Drives are now almost throw-away items:  I have stacks 
and stacks of 2Gig HP C3010's I'm shifting for $75ea...  They make
cute fast little RAID towers.

> Still, I would think that disk-based servers would
> be faster for some muds, not for all muds.  (Depending on memory
> footprint, configuration of the machine in question, and whether
> it's running other things besides the mud or not.)

True.

--
J C Lawrence                               Internet: claw at null.net
(Contractor)                               Internet: coder at ibm.net
---------(*)                     Internet: claw at under.engr.sgi.com
...Honourary Member of Clan McFud -- Teamer's Avenging Monolith...

--
MUD-Dev: Advancing an unrealised future.


