[MUD-Dev] about MOO

Dan Root dar at thekeep.org
Wed Dec 1 20:43:30 New Zealand Daylight Time 1999


In message <024e01bf3b6a$49269260$18095381 at POINTSMAN>, "Jay Carlson" writes:
>> Indeed, but as you mention below, one of MOO's strengths is that it's got
>> a rather large base of people who know the internal coding language and
>> have a fair grasp of how the system works overall.
>
>Reasonable-sounding proposals for changes to the mainline MOO language often
>founder on two design criteria:
>
>1) They must not require beginners to learn complex things to produce
>reasonably correct code.  Fancy security models, mutable values, and
>programmer-visible multithreading fail this, even though they appeal to the
>CS weenie in all of us (even the ones saying no!)

Ick.  Agreed fully here.  One of the strengths of MOO and CoolMUD both is
the relative simplicity of their languages.  As much as it's nice to be
able to implement entire USENET servers and clients and an NFS protocol
stack in the language, most users don't need that power, and flounder in
the face of it.  This is one of my personal issues with both ColdX and the
LPC-based servers.  I want to be able to focus on the issues of the game,
not of keeping track of sockets.

>2) You Must Not Break LambdaCore.  Or JHCore.  A naive linecount of JHCore
>and the C server source shows JHCore winning, 47kloc to 36k.  This is just
>peanuts compared to LambdaMOO itself, at ~900kloc.  Sure, a lot of
>LambdaMOO's code is just cut&paste, but you'd still have to check it.  So
>any semantic changes that require manual review of any decent fraction of
>this code are just right out.  Oh, and I'm not counting any non-code object
>components in this; if changes in the language required significant code
>changes, there'd also be a lot of documentation to rewrite.  And then
>there's LambdaMOO politics and accounting to consider as well---this has
>significantly complicated lightweight object proposals.

This is one of the reasons why, were I to actually write the code, I'd be
looking more at adding the MOO features to CoolMUD than vice versa.
Inertia can be very hard to overcome, even in the name of progress.

>If you manage your cache in terms of the number of objects loaded into it,
>you will lose badly.  Object sizes vary wildly.

True enough.  Look at something like $spell in LambdaCore.

But even still, it's not terribly hard to do a scheme where individual
attributes on an object are stored separately, with an 'index' base object
used to allow for things like run-time attribute inheritance.  This is what
I'm looking at for my own server design.  Uber does something like this.
Since it uses a b-tree database as the backend for storage, each object is
stored individually, but you can address the whole object by looking at the
subgroup of the tree that is prefixed by a particular object number.
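A minimal sketch of that per-attribute layout, with a sorted key list
standing in for the b-tree's ordered index.  All the names here are
hypothetical; this is my reading of the scheme, not Uber's actual code:

```python
# Per-attribute storage keyed by (object number, attribute name).
# A sorted key list stands in for the b-tree's ordered index.
import bisect

class AttrStore:
    def __init__(self):
        self._keys = []   # sorted list of (objnum, attr) keys
        self._data = {}   # key -> value

    def put(self, objnum, attr, value):
        key = (objnum, attr)
        if key not in self._data:
            bisect.insort(self._keys, key)
        self._data[key] = value

    def get(self, objnum, attr):
        # Fetch one attribute without faulting in the whole object.
        return self._data.get((objnum, attr))

    def load_object(self, objnum):
        # Address the whole object as the key range prefixed by objnum.
        lo = bisect.bisect_left(self._keys, (objnum, ''))
        out = {}
        for key in self._keys[lo:]:
            if key[0] != objnum:
                break
            out[key[1]] = self._data[key]
        return out

store = AttrStore()
store.put(17, 'name', 'teapot')
store.put(17, 'description', 'short and stout')
store.put(42, 'name', 'towel')
print(store.load_object(17))   # only object #17's attributes come back
```

Fetching a single attribute touches one key, while faulting in a whole
object is just a range scan over the (objnum, *) prefix, which is exactly
the operation a b-tree makes cheap.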

>> CoolMUD's database does this, and seems to do a pretty decent job.  That
>> said, more than something like a mail index, the place you'll lose most
>> heavily is on MOO's inheritance.  Most game cores (Lambda and JHM both
>> suffer from this) have fairly deep and sometimes rather broad object
>> inheritance trees.  It's possible to have a significant portion of your
>> cache filled with nothing but items that are inherited in your active
>> objects.
>
>On LambdaMOO, there are only ~2300 objects with children out of ~80k.  That
>doesn't seem that excessive to me.

I'd been led to believe by a friend who played Lambda far more than I that
the numbers were rather different.  Given hard(er) stats, I'm inclined to
agree with you.

>> OTOH, it's certainly possible to cache quite a bit and still see a
>> significant reduction in active memory usage.  LambdaMOO was using upwards
>> of 256 megs of memory at one point, and I'm sure that number isn't going
>> down.
>
>The LambdaMOO machine has 256M of physical memory.  The highwater process
>size sits somewhere between 330M and 360M, depending on how fragmented it's
>gotten.  The initial highwater mark is misleading, because there's some
>bookkeeping done during db load that's free'd in one big chunk before the
>server starts running.
>
>Memory usage went down as of about a year ago due to some hacks Ben and I
>did.  The size of the database, which is otherwise *roughly* correlated with
>memory size, is held constant by various population control measures.

I have to idly wonder what the size of the database would be if no control
measures were in place.  Is it reasonable, or even possible, to support an
environment where there are no (or next to no) restrictions on creation and
building?

>> If even 1/10th of your objects were simultaneously active, and that
>> many more needed for the inheritence of the active objects, that's still
>> reducing your active memory image by nearly 4/5ths.
>
>But *which* 1/10th?  Again, object sizes vary wildly.
>
>Anyway, all other things held constant, reducing LambdaMOO's memory size by
>4/5 this way would be a *lose*.  The machine has a certain amount of
>physical memory, and treating the 200M freed up this way as merely disk
>buffers to thrash objects in and out of by expensive marshalling and
>unmarshalling seems...unproductive.  The only win I can see is improvement
>in locality, but you could lose that too if you weren't really careful.

I suppose this depends on what your ratio of active to inactive objects is,
and how expensive your data marshalling routines are.

The couple of servers I've run have all been quite small by comparison to
Lambda, but the trend I saw was always that no more than 10% of my userbase
was logged in at once (particularly if you include "inactive" players), and
that as a whole they concentrated in no more than a dozen different
locations, with between 5 and 8% of the random miscellaneous objects being
present in those locations and the pathways between them.

In a server with 9000 objects total, this generally worked out to under 600
being active at once.  It's hard to compare directly to Lambda, since none
of these were MOO-based, but the idea remains the same: a relatively small
percentage of objects are active at the same time, even in peak usage
conditions.  I have no figures for how much inheritance would inflate
that, but given the numbers you quote above, I'd be surprised if it went
over 1000.  That's still about 10% of the total base.

If I could reduce the size of my server to even 20% of the size of the
database (e.g. assuming that larger objects tend to end up in the cache, as
they're more likely to be inherited), as a rule this means I can put 5
different servers on the same machine without bringing it thrashing to its
knees. :)
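Spelled out as arithmetic (all figures are the assumptions above, not
measurements from any real server):

```python
# Back-of-envelope cache sizing, using the guesses from this post.
total_objects   = 9000
active          = 600     # observed peak working set
with_parents    = 1000    # guess after inheritance inflation

active_fraction = with_parents / total_objects
print(f"active fraction: {active_fraction:.0%}")   # about 11% of the base

cache_fraction  = 0.20    # assume big, oft-inherited objects dominate the cache
servers_per_box = int(round(1 / cache_fraction))
print(f"servers per machine: {servers_per_box}")   # 5
```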

>I know, the point is not to hold all other things constant :-).  But once
>that restriction is lifted, I'd just go get a ~$3k 512M dual P2/450 to
>replace the SC1000 it's running on now and see what happens before spending
>any more sleepless weekends working on performance.

Ah, but efficiency is always the holy grail of us CS-theory-weenies, right?
We like spending months thinking about these problems. :)  Throwing
hardware at it is the MIS solution. ;)

>> Cool and MOO's languages are not quite identical, but close enough that
>> anyone using one can use the other, more or less.  The primary differences
>> are around the features of the languages.  Cool has tables/dictionaries,
>> which MOO doesn't, so MOO makes up by having some more powerful list
>> manipulation features instead, as an example.
>
>The mainline MOO server is getting dictionaries merged in in the near
>future.

Any ideas on the timeline for this?

	-DaR
--
/* Dan Root   -   XTEA cipher */  static unsigned D=0x9E3779B9,l=0xC6EF3720,s;
/* t=64bit text, k=128bit key */  #define m(x,y) ((x<<4^x>>5)+(x^s)+k[s>>y&3])
void enc(int*t,int*k){for(s=0;s!=+l;){t[0]+=m(t[1],0);s+=D;t[1]+=m(t[0],11);}}
void dec(int*t,int*k){for(s=-l;s!=0;){t[1]-=m(t[0],11);s-=D;t[0]-=m(t[1],0);}}



_______________________________________________
MUD-Dev maillist  -  MUD-Dev at kanga.nu
http://www.kanga.nu/lists/listinfo/mud-dev


