[MUD-Dev] Re: TECH: Distributed Muds
J C Lawrence
claw at 2wire.com
Wed Apr 18 19:06:34 New Zealand Standard Time 2001
On Wed, 18 Apr 2001 18:52:11 -0400
Derek Snider <derek at idirect.com> wrote:
> According to Vincent Archer:
>> Hmmm, no way. If you do so, you waste a lot of capacity. Since
>> you're running on a distributed system, you cannot lend one spare
>> processor to another zone - well, not in a geographical discrete
>> model like the one outlined above. So you end up with processors
>> that remain idle 90% of the time, but can support a population 10
>> times bigger than what you usually have.
> What is wrong with that? It's called "being prepared".
> From the many years of server administration I've been involved
> with, servers generally run at an average of 80-95% idle.
This is especially true for those problem spaces where the cost of
failure in the pessimal cases is sufficiently extreme. A simple
example is brokerages, whose systems need to be able to withstand
extreme fluctuations in market transaction rates. Charles Schwab's
systems, for instance, croaked on Black Monday, leaving them open to
mucho $$$ in customer suits, lost transactions, etc.
> You always provision for bursts of activity. CPU power is a
> relatively cheap asset.
CPU is often not the biggest question. Data flow models (and their
defined latencies), contention points and transactional rates are the
biggies. The problem is not speeding up the rate at which the system
spins, but reducing the time in which a given transaction can be
processed and minimising the possible points of contention of that
transaction with others.
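To make the contention point concrete, here's a minimal sketch (all
names and the stripe count are invented for illustration): rather than
one global lock serialising every transaction, stripe the shared state
across several locks so that transactions touching different objects
rarely contend at all.

```python
import threading

# Hypothetical sketch: stripe the world state across several locks.
# Transactions that hash to different stripes never block each other.
NUM_STRIPES = 16

class StripedStore:
    """Object store with per-stripe locking (names are illustrative)."""

    def __init__(self):
        self._locks = [threading.Lock() for _ in range(NUM_STRIPES)]
        self._data = [dict() for _ in range(NUM_STRIPES)]

    def _stripe(self, key):
        return hash(key) % NUM_STRIPES

    def update(self, key, fn):
        # Only transactions hashing to the same stripe contend here;
        # everything else proceeds in parallel.
        i = self._stripe(key)
        with self._locks[i]:
            self._data[i][key] = fn(self._data[i].get(key))
```

The point is not the locking primitive but the shape: the fewer
transactions that can possibly collide, the lower your worst-case
latency, regardless of raw CPU speed.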
> I'm not too clear on what UO's model is/was, only what a good
> working model would be. The above model allows you to take
> advantage of a large TCP Window between the network servers and the
> zone servers.
Or, given that you control the wire and everything else between the
two, to do your own protocol on top of IP (a fairly common approach).
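A sketch of what "your own protocol on top of IP" might look like at
its simplest: a small fixed header framed over UDP. The field layout
here (version, message type, sequence number, payload length) is
invented for the example, not any particular game's wire format.

```python
import struct

# Tiny custom framing: version, msg type, sequence number, payload
# length, followed by the raw payload. Network byte order throughout.
HEADER = struct.Struct("!BBIH")  # B=version, B=type, I=seq, H=length

def encode(msg_type, seq, payload):
    """Frame a payload for transmission (e.g. via a UDP socket)."""
    return HEADER.pack(1, msg_type, seq, len(payload)) + payload

def decode(datagram):
    """Split a received datagram back into header fields and payload."""
    version, msg_type, seq, length = HEADER.unpack_from(datagram)
    payload = datagram[HEADER.size:HEADER.size + length]
    return version, msg_type, seq, payload
```

Since you control both endpoints, you can tune the header, the
retransmission policy, and the congestion behaviour to the game's
latency profile rather than inheriting TCP's.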
> If you are concerned about wasting CPU power, you might want to
> consider setting up a server cluster (ie: Linux Beowulf Cluster) so
> that you can effectively have one huge mega-server that distributes
> tasks evenly over all CPUs.
Beowulf is really only suited to tasks which divide cleanly and have
very small to no IPC requirements (ie no shared task data).
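The kind of workload that does fit is the embarrassingly parallel
scatter/gather job. A toy sketch (the computation is a stand-in; a
single-machine process pool stands in for cluster nodes):

```python
from multiprocessing import Pool

def simulate(seed):
    # Stand-in for an expensive, independent computation: each task
    # is pure and shares no data with its peers.
    x = seed
    for _ in range(1000):
        x = (x * 1103515245 + 12345) % (2 ** 31)
    return x

def run_farm(seeds, workers=4):
    # Scatter inputs, gather results: the only communication is at
    # the start and end, which is exactly the Beowulf sweet spot.
    with Pool(workers) as pool:
        return pool.map(simulate, seeds)
```

A shared-world MUD is the opposite shape: every transaction may touch
state another node holds, so the IPC cost dominates.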
> I'm not certain that this is the best solution for high-speed
> real-time interactive games, but with the proper configuration,
> equipment and software, it just might be.
Unfortunately shared world systems tend to have both high
transactional rates and large shared working sets, which make them
specifically unsuited to Beowulf setups. Beowulf makes an excellent
compute farm for cleanly divisible problems with long run times --
which is a very different problem space.
J C Lawrence claw at kanga.nu
--=| A man is as sane as he is dangerous to his environment |=--
MUD-Dev mailing list
MUD-Dev at kanga.nu