[MUD-Dev] Re: TECH: Distributed Muds

J C Lawrence claw at 2wire.com
Wed Apr 18 16:36:03 New Zealand Standard Time 2001


On Wed, 18 Apr 2001 10:40:12 +0200 
Vincent Archer <archer at nevrax.com> wrote:
> According to Derek Snider:

>> If you have a game that you expect to handle 100,000 players, and
>> you have 500 "zones", you cannot expect that each zone will have a
>> maximum of 2,000 players.  If you want your game to handle 100,000
>> players, you'd better make sure each zone (especially starting
>> zones, hometowns and popular locations) can handle at least 25% of
>> your game capacity

> Hmmm, no way. If you do so, you waste a lot of capacity. Since
> you're running on a distributed system, you cannot lend one spare
> processor to another zone - well, not in a geographical discrete
> model like the one outlined above. So you end up with processors
> that remain idle 90% of the time, but can support a population 10
> times bigger than what you usually have.

You are arguing scaling models, a field that is fairly well known and
documented.  At this level you can roughly consider that there are two
divisions in models:

  1) Approaches which attempt to factor the problem space
  predictively at design time into manageable chunks which are then
  distributed.

  2) Approaches which attempt to factor the problem space at runtime
  based on current load definitions into manageable distributions
  across available resources (as also computed at runtime).

Both work.  In an ideal world with perfect prediction, the two will be
effectively identical.  The problem is that prediction is never
perfect, so you engage in a game of "good enough" approximations.  #1
is simpler.  #2 is more promising, but at the expense of engineering
effort, complexity, and (often) system stability.
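As a toy illustration of the difference (all names, numbers, and the
greedy heuristic here are my own invention, not anyone's shipped
design): model #1 fixes the zone-to-server mapping at design time,
while model #2 recomputes it from observed load.

```python
# Toy sketch of the two factoring models; zone names, server names,
# and the heuristic are invented for illustration.

def static_partition(zones, servers):
    """Model #1: fixed design-time assignment (round-robin by index).
    Each server must be provisioned for its zones' predicted peaks."""
    return {z: servers[i % len(servers)] for i, z in enumerate(zones)}

def dynamic_partition(zone_load, servers):
    """Model #2: runtime assignment, heaviest zones first onto the
    currently least-loaded server (a greedy bin-packing heuristic)."""
    load = {s: 0 for s in servers}
    assignment = {}
    for zone, players in sorted(zone_load.items(),
                                key=lambda kv: kv[1], reverse=True):
        target = min(load, key=load.get)   # least-loaded server so far
        assignment[zone] = target
        load[target] += players
    return assignment

zone_load = {"hometown": 900, "wilds": 50, "dungeon": 300, "docks": 250}
servers = ["s1", "s2"]
print(dynamic_partition(zone_load, servers))
```

The dynamic version spreads the hot "hometown" zone away from the
smaller zones automatically, at the cost of needing load telemetry and
a migration mechanism -- exactly the engineering effort and complexity
noted above.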

Possibly the best known approach in the #2 camp is CC:NUMA (Cache
Coherent Non-Uniform Memory Access) a la SGI's clusters, which can be
loosely described as a clustering technology wherein processes and
address spaces can migrate dynamically and transparently about the
cluster, with processes and address ranges moving from system to
system and from CPU to CPU without ever noticing, as computed by the
base OS on the basis of minimum resource use and maximal working set
intersection.  In this way, for instance, address ranges will tend to
migrate (addresses are virtual) from system to system so as to be
physically proximate to the processes that are accessing them.
Similarly, processes will dynamically migrate across and between
systems to be physically proximate to their data and the other
processes also accessing that data.

But this comes at the expense of complexity, especially in the design
of your data flow models (and possibilities for lock contention, race
conditions, sequence errors, etc).
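The migration policy can be caricatured in a few lines (this is a toy
model I've made up to show the shape of the idea, not SGI's actual
algorithm): each virtual page tracks which node is touching it and
migrates once another node clearly dominates its working set.

```python
# Toy model of working-set-driven page migration (invented policy,
# not SGI's implementation): a page moves to whichever node has been
# accessing it far more than its current home node.

from collections import Counter

class MigratingPage:
    def __init__(self, home_node):
        self.node = home_node              # current physical placement
        self.accesses = Counter()          # access counts per node

    def touch(self, node):
        self.accesses[node] += 1
        best, hits = self.accesses.most_common(1)[0]
        # Hysteresis (threshold invented): only migrate once the
        # remote node leads by a comfortable margin, to avoid thrash.
        if best != self.node and hits > self.accesses[self.node] + 3:
            self.node = best               # transparent to the process

page = MigratingPage(home_node=0)
for _ in range(5):
    page.touch(1)                          # node 1 hammers the page
print(page.node)                           # page has migrated to node 1
```

The hysteresis margin is exactly where the data-flow design trouble
creeps in: too eager and pages ping-pong between nodes, too lazy and
processes spend their time on remote accesses.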

> Wasn't that UO's model? If I remember right, UO had two tiers of
> systems.  One was managing the player (who connected to an available
> player box), which then in turn communicated with a bunch of
> (rectangular) world zones.

I can't comment on UO in particular, however the approach of having a
connection server which accepts incoming TCP connections and then
speaks more suitable/easily processed protocols within the cluster is
well established at this point.
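The payoff of that split is that only the front end ever parses the
chatty client protocol; the zone servers see compact fixed-width
frames.  A minimal sketch (the frame layout and opcode table here are
invented for illustration):

```python
# Sketch of front-end protocol translation: client text commands are
# parsed once at the connection server, then forwarded to zone servers
# as compact binary frames.  Layout and opcodes are hypothetical.

import struct

OPCODES = {"say": 1, "move": 2}            # invented opcode table

def to_internal(player_id, line):
    """Translate one client text command into an internal frame:
    4-byte player id, 1-byte opcode, 2-byte payload length, payload."""
    verb, _, arg = line.strip().partition(" ")
    payload = arg.encode()
    return struct.pack(">IBH", player_id, OPCODES[verb], len(payload)) + payload

def from_internal(frame):
    """What a zone server does: cheap fixed-offset unpacking, no text
    parsing required."""
    player_id, op, length = struct.unpack(">IBH", frame[:7])
    return player_id, op, frame[7:7 + length].decode()

frame = to_internal(42, "say hello world")
print(from_internal(frame))                # → (42, 1, 'hello world')
```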

> But they had enough bandwidth that a 2nd network interface was
> unnecessary, and they ended up using the same network card for
> Player<->FrontEnds and FrontEnd<->Zone connections.

Aieee.  I've never liked one-armed routing games.

--
J C Lawrence                                       claw at kanga.nu
---------(*)                          http://www.kanga.nu/~claw/
--=| A man is as sane as he is dangerous to his environment |=--
_______________________________________________
MUD-Dev mailing list
MUD-Dev at kanga.nu
https://www.kanga.nu/lists/listinfo/mud-dev
