Event handling (was: request for comments)

s001gmu at nova.wright.edu
Mon Jan 12 10:13:29 New Zealand Daylight Time 1998


On Sun, 11 Jan 1998, JC Lawrence wrote:

> On Fri, 9 Jan 1998 13:51:24 PST8PDT 
> s001gmu <s001gmu at nova.wright.edu> wrote:
> 
> > On Fri, 9 Jan 1998, Vadim Tkachenko wrote:
> 
> >> s001gmu at nova.wright.edu wrote:
> >>> On Thu, 8 Jan 1998, Vadim Tkachenko wrote:
> 
> > *nod* our end goal is to not make it obvious to the player that the
> > game is turn based, but the underlying mechanics will be.  The more
> > that I think about it, the less I like the 'tick' idea... Why
> > artificially impose a less granular clock on top of the system
> > clock?  Why not just let the system clock determine the timing?
> > Situations where the character's speed should be far faster than the
> > players typing speed (IE: combat, etc), can (and should?) be handled
> > by the computer (IE: the computer is generating the events, not
> > player commands).  All other commands don't need to be turn-based,
> > as speed has already been deemed unimportant by the designer (me), by
> > that system not being coded as an auto-generated event.
> 
> This sounds very much like the argument I went thru with myself.

Well, keep in mind I'm not arguing for events to be processed as fast as
possible...  there is still a game-imposed delay inherent to every action
(save out-of-game actions, like "score", or "help!").  I know you've
mentioned a few times, JC, that kooky idea that humans shouldn't be
waiting on computers...  ;)
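
To make the in-game/out-of-game distinction concrete, here is a minimal
sketch of a per-command delay table.  The command names and delay values
are invented for illustration; only the principle (out-of-game commands
resolve immediately, in-game commands carry a game-imposed delay) comes
from the discussion above.

```python
# Hypothetical delay table: in-game commands cost time, out-of-game
# commands ("score", "help") resolve immediately.  All values invented.
COMMAND_DELAYS = {
    "dig": 3.0,      # seconds of in-game effort
    "attack": 1.5,
    "score": 0.0,    # out-of-game: no delay
    "help": 0.0,
}

def delay_for(command: str) -> float:
    # unknown commands get a default in-game delay of one second
    return COMMAND_DELAYS.get(command, 1.0)
```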
 
<snip event chains vs. event-process model>

<...>

> In the case of sensitivity to interruption I currently handle this
> poorly (almost not at all).  
> 
> However, previously I discussed having the system track the user's
> activities, and from that attempt to determine what the user's
> _intentions_ are.  From there it can appropriately attempt to filter
> the available data against a dynamic LOE (level of expectation) filter
> which changes as the system changes its understanding of the user's
> goals.
> 
> This may be extensible to handling interruptions.  Essentially (all
> Wiggin's solution) the server would track the activities (of duration)
> that the user is engaged in, and dynamically remove and add entries as
> the world changes.  Some things would get deleted from the list (you
> cease to dig the Panama Canal as the world is obliterated by a passing
> Vogon fleet), and others would merely be suppressed temporarily (you
> pause in digging the Panama Canal to eat a vegemite sandwich before
> returning to digging).
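
The remove-vs-suppress distinction above might be sketched roughly as
follows.  All names here (Activity, ActivityList, and their methods) are
illustrative assumptions, not anything from an actual server:

```python
from dataclasses import dataclass

@dataclass
class Activity:
    """A long-running action the server is tracking for a character."""
    name: str
    suspended: bool = False  # temporarily paused, not abandoned

class ActivityList:
    """Sketch of per-character activity tracking: world events can
    either delete an activity outright or merely suspend it until
    the interruption passes."""
    def __init__(self):
        self._activities = []

    def start(self, name):
        act = Activity(name)
        self._activities.append(act)
        return act

    def remove(self, act):
        # the Vogon-fleet case: the activity no longer makes sense
        self._activities.remove(act)

    def suspend(self, act):
        # the vegemite-sandwich case: pause now, resume later
        act.suspended = True

    def resume(self, act):
        act.suspended = False

    def current(self):
        # the most recently started activity that isn't suspended
        for act in reversed(self._activities):
            if not act.suspended:
                return act
        return None
```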

I'm not sure that I like these kinds of filters, etc.  The computer trying
to figure out what I am up to seems prone to error.  Heck, other ppl
trying to figure out what I'm up to is prone to error.  I think the
problem you are trying to solve is this: when the user types 'eat sandwich'
whilst digging the Panama Canal, they most likely mean 'pause for a while
to eat this sandwich, then get back to work'.  You want to automate that
return to work, because forcing them to re-type the 'dig panama canal'
command leads to tedium, and possibly some confusion on the parser's part
(what do you mean, dig panama canal?  it's half built already!).

Why not offer a "pause to/for" operator, and push the responsibility of
realizing they want to continue the action after a small break onto the
user?  This greatly simplifies the problem of event interruption, as you
don't have to worry about whether the player intended to continue the
action or not.
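
A minimal sketch of such a "pause to" operator, assuming a hypothetical
Action/Character pair (none of these names come from an actual codebase):
the player explicitly shelves the current action, runs the interrupting
one, and the original resumes automatically, with no intent-guessing
required on the server's part.

```python
class Action:
    """A hypothetical in-progress action with time left to complete."""
    def __init__(self, name, remaining_ticks):
        self.name = name
        self.remaining = remaining_ticks

class Character:
    def __init__(self):
        self.current = None   # action in progress
        self.paused = None    # action shelved by "pause to"

    def do(self, action):
        self.current = action

    def pause_to(self, action):
        """'pause to eat sandwich' while digging: shelve the current
        action; the player has explicitly stated intent to resume."""
        self.paused, self.current = self.current, action

    def finish_current(self):
        """When the interrupting action completes, resume the shelved
        one automatically; no re-parsing of 'dig panama canal'."""
        self.current, self.paused = self.paused, None
```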
 
> > That being the case, I'd still prefer to let events spend their
> > delay time on the queue, instead of in a thread.  Each sleeping
> > thread is a thread that could be used by something else.  Why
> > allocate a scarce resource before it's 100% needed?  Again, this
> > harks back to my initial goals, of building a non-cpu intensive
> > game... well, at least, less so than other games... ;)
>  
> Precisely.

The downside to this, as Vadim pointed out in a previous post, is that you
trade a scarce resource for more CPU cycles spent on event scheduling,
etc.  Leans back towards the age-old memory v. computation cycles
tradeoff.
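
The queue-resident approach might look like the sketch below: events sit
in a priority queue keyed by due time, and one scheduler loop pops
whatever has come due, rather than each pending event tying up a sleeping
thread.  The class and method names are invented for illustration, and
the queue-maintenance cost per schedule/pop is the CPU side of the
tradeoff Vadim pointed out.

```python
import heapq
import itertools
import time

class EventQueue:
    """Events spend their delay on a heap rather than in a thread."""
    def __init__(self, clock=time.monotonic):
        self._heap = []
        self._clock = clock
        self._counter = itertools.count()  # tie-breaker for equal due times

    def schedule(self, delay, callback):
        due = self._clock() + delay
        heapq.heappush(self._heap, (due, next(self._counter), callback))

    def run_due(self):
        """Pop and run every event whose due time has passed."""
        now = self._clock()
        while self._heap and self._heap[0][0] <= now:
            _, _, callback = heapq.heappop(self._heap)
            callback()
```

Passing a fake clock in makes the scheduler easy to drive from a game
loop or a test without real sleeping.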

I snipped the rest of this message to be handled in a separate response,
as it is wandering away from the topic of event handling some... and I
have a meeting in 20 minutes and prolly can't finish responding before
then.  ;)

-Greg



