[MUD-Dev] Re: DIS: Client-Server vs Peer-to-Peer

J C Lawrence claw at under.engr.sgi.com
Mon Nov 30 20:26:05 New Zealand Daylight Time 1998

On Sat, 28 Nov 1998 12:02:00 +0100 (MET) 
Niklas Elmqvist <d97elm at dtek.chalmers.se> wrote:

> Sorry for the late reply, but I am currently at my parents' home
> and surfing on a crappy 14.4k modem...
> On Wed, 25 Nov 1998, J C Lawrence wrote:
>> On Tue, 24 Nov 1998 22:06:23 +0100 (MET) Niklas
>> Elmqvist <d97elm at dtek.chalmers.se> wrote:

>> AI's dynamically and transparently replaced in an ad-hoc manner
>> by humans such that protagonists are unable to transparently
>> determine if their opposition is AI or human?  Neat.

> Indeed! If no human players are on, the entire battle would unfold
> under the control of a lot of AIs... Or maybe not, since we don't
> want to waste a lot of computing time when there is no one there
> to witness it ("does a tree falling in a forest produce sound..." 
> and all that stuff) -- some kind of statistical number-cruncher
> could be brought into play instead.

I've found it useful to leave the AI's running "unchecked" to allow
for simulation accuracy and logical correctness checking.

> More hype: The idea is to allow for a lot of different types of
> simulations cooperating on the same battlefield. It all boils down
> to human players generally not wanting to play cannon-fodder roles
> such as infantry charging against a machine-gun nest (though I
> expect some players would like to sit behind that machine-gun
> instead), so the AI is there to take care of that. In addition,
> the AI will make it possible to run large-scale scenarios where
> only a small fraction of the individual units are controlled by
> human players. 

This sounds a whole lot like a number of ideas that Ling has been
talking about for a while now...

> Needless to say, this might require a range of different clients
> to support all of these roles, but they could hopefully be built
> around the same skeleton client structure (some of the client
> stuff could probably be solved by dynamically loading
> functionality from shared libs).  

TkMOO lite would seem an obvious starting candidate, and gives you an
example to demonstrate OSS characteristics in your project as well
as shortening your development cycle (valuable in a project this
size).

> Also, I would like to be able to support different types of
> scenarios -- not just D-Day, but also Star Wars fleet engagements,
> Tolkienesque fantasy battles as well as claustrophobic Alien
> fights.

Presentation models are going to be a concern then.  Fully 3D models
are a *LOT* different at the interface level than 2D models, most
especially when the viewpoint becomes highly mobile.

> Ahem. Looks like I got a little carried away up there, but now you
> know :)

<kof>  Umm, never done that.

>> DIS has a number of problems for this type of application which
>> can be mostly summarised into:
>> -- Assumes that all nodes are equally trustworthy.
>> -- Assumes that bandwidth is free.
>> -- Assumes that nodes have infinite resources.
>> -- Has no explicit security model (not even an entrance gatekeeper).

> Ahh. Thanks. This neatly summarises the 'gut feelings' I've had
> about things that must be addressed in the standard.


>> W need not have any relation to V, and at any instant the actual
>> values for both are not only unavailable, but a pseudo-heisenberg
>> ensures that any data you have is stale dated and of uncertain
>> relevance to current conditions (ack latency).

> Ahh, familiar stuff! 


> Thanks for bringing Heisenberg into this, although I am not
> exactly sure how it applies... :) Seems to me, there should be two
> different entities in conflict (as in position and impulse in the
> real Heisenberg principle)? Latency and...?

You can know what the latency is between two particular nodes at any
instant in time, but that gives you no accurate predictive value as
to what the latency will be at any other instant in time.  Sort of a
pseudo-Heisenberg for networks: by the time you've measured it, the
value has already changed.

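For what it's worth, TCP's retransmission machinery copes with exactly
this non-predictability by never trusting a single sample: it tracks a
smoothed mean and a mean deviation instead (Jacobson's algorithm, later
standardised in RFC 6298).  A minimal sketch in modern Python, purely
for illustration:

```python
# Sketch of TCP-style smoothed RTT estimation: since one latency sample
# has no predictive value, keep a running mean (srtt) and mean deviation
# (rttvar) and size timeouts from both.  Constants follow RFC 6298.
class RttEstimator:
    ALPHA = 1 / 8   # weight of a new sample in the smoothed mean
    BETA = 1 / 4    # weight of a new sample in the deviation estimate

    def __init__(self):
        self.srtt = None    # smoothed round-trip time (seconds)
        self.rttvar = None  # mean deviation of samples from srtt

    def sample(self, rtt):
        if self.srtt is None:            # first measurement seeds both
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar \
                + self.BETA * abs(self.srtt - rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt

    def timeout(self):
        # retransmission timeout: smoothed mean plus four deviations
        return self.srtt + 4 * self.rttvar
```

The point of the four-deviations margin is precisely JCL's: you plan
around the observed spread of latencies, not around any one value.
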
>> And no, you can't use IP multicast/broadcast as a carrier for
>> your games (alas).

> This is because the internet as a whole does not support it or
> does not support it very well? Multicast does seem useful.

Multicast is extremely useful, but very few ISP's support it (yet),
and limiting your 'net clients to those with multicast-capable
TCP/IP stacks cuts your potential audience massively (it's a kernel
compile option for Linux, IIRC).
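
For reference, a minimal sketch (modern Python, with an arbitrary
administratively-scoped group address chosen for the example) of what a
multicast-capable stack has to support on the receiver side:

```python
# Sketch: joining an IPv4 multicast group.  The IP_ADD_MEMBERSHIP
# setsockopt is the part a non-multicast-capable stack cannot do.
import socket
import struct

GROUP = "239.0.0.1"   # arbitrary group in the 239/8 admin-scoped range
PORT = 5007           # arbitrary port for this example

def is_multicast(addr):
    # IPv4 multicast is the class-D range 224.0.0.0 - 239.255.255.255.
    return 224 <= int(addr.split(".")[0]) <= 239

def membership_request(group):
    # struct ip_mreq: group address followed by local interface (INADDR_ANY).
    return struct.pack("4sl", socket.inet_aton(group), socket.INADDR_ANY)

try:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    membership_request(GROUP))
    sock.close()
except OSError:
    pass  # stack, kernel config, or environment lacks multicast support
```
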

>> Conversely moving to a client/server model allows the server to
>> act as a latency mediator (appropriate use of predictive
>> algorithms comes in *really* big here).

> Seems to me, in a peer-to-peer architecture, each client is
> essentially a server at the same time (duh). 


> This means that all clients must be able to handle a lot of
> bandwidth and computation.

Minimally they must be able to handle the computation required for
all events that they are either directly responsible for, or which
principally affect them, or some other similar
everything-that-XXX'es-this-node type qualifier.  The exact choice
of qualifier really doesn't matter much, as any separation of compute
chores merely means that some other node is going to have to do it
instead (ie the sum computational load is constant; only the
evenness of the distribution across nodes varies).
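
A toy illustration of that constant-sum point, with made-up event
costs: whichever qualifier you pick only changes who does the work,
not how much work gets done in total.

```python
# Each event has an owner and a compute cost; an assignment rule maps
# owners to the node that will do the work.
events = [("A", 3), ("B", 5), ("A", 2), ("C", 7), ("B", 1)]

def load_per_node(events, assign):
    """Sum each node's compute cost under a given assignment rule."""
    load = {}
    for owner, cost in events:
        node = assign(owner)
        load[node] = load.get(node, 0) + cost
    return load

by_owner = load_per_node(events, lambda owner: owner)          # peer-to-peer style
all_on_server = load_per_node(events, lambda owner: "server")  # pure client/server

# Different evenness, identical total.
assert sum(by_owner.values()) == sum(all_on_server.values()) == 18
```
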

> With the client-server arch, the server (which probably is a *lot*
> bigger than the client in terms of computing power and bandwidth)
> does the dirty work and then dishes the information out to the
> relevant clients (so that client X only gets status messages of
> the units it can see, not *all* of the units in the world). 

Remember: there are a whole lot of gradations between
dumb-client/smart-server and smart-client/dumb-server.  Given
current PC performance, I'd be tempted to offload as much of the
processing as possible to the clients and then (as much as possible)
use the server for only the "critical" computations and latency
mediation.

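The "only the units it can see" filtering above is what is usually
called interest management.  A minimal sketch in Python, with
illustrative unit names and a simple view-radius rule:

```python
# The server dishes each client only the status of units within that
# client's view radius; the bomber 250 units away is never mentioned.
import math

def visible_updates(units, observer_pos, view_radius):
    """Return status messages only for units within view_radius."""
    out = []
    for name, (x, y) in units.items():
        if math.hypot(x - observer_pos[0], y - observer_pos[1]) <= view_radius:
            out.append((name, (x, y)))
    return out

units = {"tank": (0, 0), "sniper": (3, 4), "bomber": (200, 150)}
# A client at the origin with a view radius of 10 hears about two units.
print(visible_updates(units, (0, 0), 10))
```

Beyond hiding information, this is also the main bandwidth saver: each
client's traffic scales with what it can see, not with the world size.
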
>> Do a quick web search on the various hacks to the UOL protocols.
>> Its been ugly.

> I guess so. What can you do to protect yourself against this?

Not a whole lot, but realise the problem in advance and try to
design against it.  Most of security really boils down to applied
paranoia: "Who and what do you trust, and why do you trust them?"
Just think about that whenever you look at one of your protocols or
data exchanges and you should cover most of the holes on your first
pass (which you'll hopefully throw away), and be well set for the
second pass.

>> Client/server doesn't scale where clients are able to exceed the
>> sum IO or CPU bandwidth of the server.

> Yes, but the solution is to buy a bigger and better server, right
> (as opposed to upping the client system requirements another
> notch)? 

Either works.  That said, you'd be surprised what it takes to
saturate a T1 when you're dealing with compressible data and
lightweight protocols (cf HTTP and static text pages).  Judicious
use of UDP and error-tolerant protocols can also help.
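
A back-of-envelope check on that T1 claim.  The message size, update
rate, and compression ratio below are assumptions for illustration,
not figures from any real game:

```python
# How many concurrent clients fit on a T1 with small, compressible,
# lightweight-protocol updates?
T1_BITS_PER_SEC = 1_544_000   # T1 line rate

update_bytes = 120            # one compact state update (assumed)
updates_per_sec = 5           # per client (assumed)
compression = 3.0             # ratio for repetitive text-like data (assumed)

wire_bits_per_client = update_bytes * 8 * updates_per_sec / compression
clients = int(T1_BITS_PER_SEC / wire_bits_per_client)
print(clients)  # 965 concurrent clients before the link saturates
```

Even halving the compression ratio or doubling the update rate leaves
the link carrying hundreds of clients, which is the surprise JCL means.
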

> It's the IO bandwidth (both in terms of network bandwidth and the
> IO throughput of the computer) which bothers me somewhat. Some
> MMPOG games (such as Middle-Earth On-Line, I think, as well as the
> Awakening Project) promise in excess of ten thousand clients
> connected at the same time.

Yup.  It's actually not that difficult to do if you're very careful
in your protocol and data model designs.  Just be very very careful
to keep things light and to ensure both ends never need to delay or
timeout before sending the next stage of the protocol.
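
A sketch of that "never wait before the next stage" point: pipeline
the protocol stages over one connection instead of blocking for a
reply after each.  The stage names below are hypothetical.

```python
# Pipelined protocol stages: the sender fires every stage back-to-back,
# so the per-connection latency is paid once, not once per stage.
import socket

a, b = socket.socketpair()  # stands in for a real client/server link

stages = [b"HELLO", b"AUTH token", b"JOIN battlefield"]
for msg in stages:
    a.sendall(msg + b"\n")  # no round trip between stages

received = b""
while received.count(b"\n") < len(stages):
    received += b.recv(4096)
print(received.split(b"\n")[:-1])
```

With ten thousand clients, a lock-step request/reply design would also
tie up server-side state for a full round trip per stage; pipelining
avoids both costs.
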

>> Peer models, by their very nature, assume a level of trust among
>> peers.  That assumption is explicitly false when dealing with the
>> Open Internet.  Peer models assume that individual nodes have
>> good connectivity (latency and bandwidth proportional to the
>> task), and that a node is responsible for its own connection
>> characteristics (latency and bandwidth). Both aspects are false
>> on the Open Internet due to intervening connection
>> characteristics (eg router between X and Y goes down) or lack of
>> local 'net control (Denial of Service attack against node or
>> subnet (a not uncommon tactic with some UOL players is for a
>> player on a fast connection to swamp a player on a slow
>> connection's link with large ping packets (eg 64K pings),
>> rendering that player's character easy prey (they can't run, they
>> can't fight, they can't hide)).

> I guess that if the clients are unaware of the IPs of other
> clients, it is easier to avoid these kinds of denial of service
> attacks, right? 

True.  Realise however that hiding the IP's requires that there be
NO traffic which originates from one client and goes directly to
another client as that would allow the wire to be snooped or the
client hacked to trap such and report the remote IP.
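
A minimal sketch of such a relay, with hypothetical player ids:
clients address one another by opaque id, and the server delivers
only payloads, never endpoints.

```python
# Server-side relay: no client ever learns another client's IP, so the
# "swamp the slow player's link" attack has no target address.
class Relay:
    def __init__(self):
        self.inboxes = {}  # player id -> list of (sender_id, payload)

    def register(self, player_id):
        self.inboxes[player_id] = []

    def send(self, sender_id, recipient_id, payload):
        # The recipient sees an opaque id, not an address it could flood.
        self.inboxes[recipient_id].append((sender_id, payload))

    def receive(self, player_id):
        msgs, self.inboxes[player_id] = self.inboxes[player_id], []
        return msgs

relay = Relay()
relay.register("alice")
relay.register("bob")
relay.send("alice", "bob", "attack at dawn")
msgs = relay.receive("bob")
print(msgs)  # [('alice', 'attack at dawn')]
```

This is exactly the discipline JCL describes: the guarantee holds only
if *no* traffic ever travels client-to-client directly.
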

> This is addressed automatically in a client-server setup, whereas
> a peer-to-peer architecture requires the clients to know the IP of
> each other.

Quite.  I'd be tempted to use multiple servers, potentially on
separate machines; say, one to handle game mechanics and another to
handle computation-free data relaying between nodes.  It's not a huge
savings, but everything helps in the end.

>>> Basically, I am looking for all the good arguments why 
>>> client-server should be used instead of peer-to-peer as well as 
>>> the advantages and disadvantages of each approach.
>> The above do?

> They certainly do. Thanks a lot.


>> This is why the request to Y doesn't come from X, but instead
>> comes from Z, effectively attempting to make the source of the
>> event anonymous with respect to Y.

> Yes, this does seem a little bit hairy. Client-server has a *lot*
> more appeal.

C/S merely has *other* security concerns.  It reduces the number of
items in your trust model, certainly, but it doesn't necessarily
reduce their relational complexity unless you go for ultimately
stupid clients (ie move all the complexity into your server, which
you then tritely assume you can trust).  This is a game of where can
we squeeze the water (security concerns) in the waterbed (project
design).  The water doesn't go away; it merely gets allocated
elsewhere.
>> Intelligent clients -- push all the computational load you can
>> onto the client and then implement hairy trust models for your
>> client/server relationships.

> Sounds like a good mission statement.

Hairy trust models?  I've been reading too much UserFriendly.

J C Lawrence                               Internet: claw at kanga.nu
(Contractor)                              Internet: coder at kanga.nu
---------(*)                     Internet: claw at under.engr.sgi.com
...Honourary Member of Clan McFud -- Teamer's Avenging Monolith...
