[MUD-Dev] Re: atomic functions

J C Lawrence claw at under.engr.sgi.com
Wed May 6 10:38:07 New Zealand Standard Time 1998

On Fri, 1 May 1998 15:11:58 +0200 (MET DST) 
Felix A Croes <felix at xs1.simplex.nl> wrote:

> What it actually does is impose an execution order on bits of
> atomically executed code:

>	foo()
>	{
>	    atomic {
>		/* code fragment 1 */
>	    }
>	    atomic {
>		/* code fragment 2 */
>	    }
>	    atomic {
>		/* code fragment 3 */
>	    }
>	}

I suspect that you are confusing some logical equivalences.

From an execution viewpoint there are units which execute atomically.
The logical or language scope of those units is really beside the
point.  They're merely atomic units, and it is from those units that
the world progression is made.

You are attempting to solve the problem of enforcing order of
completion of such atomic units through blocking structures, and to
add the concept of aggregate atomic actions via nesting.  No?

C&C ordering really has nothing to do with the C&C model itself, and
is external to the atomicity requirements of the C&C model.  Why?  C&C
defines the basic unit of existence: the state change.  An individual
state change is insular -- it has no relevance to other state
changes.  It is your __external__ process model which imposes its
sense of order upon your state changes.
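That external ordering can be sketched as a scheduler which releases
an event only after the event it depends on has committed, while the
atomic units themselves stay insular.  A minimal illustration with
hypothetical names, not any server's actual interface:

```python
# Sketch: ordering lives in the external process model, not in the
# state changes.  Each fragment of foo() becomes an insular event;
# the scheduler dispatches an event only once its predecessor has
# committed.  All names here are illustrative.

from collections import deque

def schedule(events, depends_on):
    """events: name -> callable; depends_on: name -> predecessor or None."""
    committed = set()
    # Events with no predecessor are runnable immediately.
    ready = deque(name for name in events
                  if depends_on.get(name) is None)
    order = []
    while ready:
        name = ready.popleft()
        events[name]()          # the atomic unit itself knows no order
        committed.add(name)     # its completion makes successors runnable
        ready.extend(n for n, dep in depends_on.items()
                     if dep == name and n not in committed)
        order.append(name)
    return order
```

The ordering constraint is entirely in the `depends_on` table, which
is exactly the sense in which it is external to the events.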

Next up: nested C&C's are a logical fallacy for similar reasons
(there are some really good papers on this on the web, if only I
could remember where), but a useful modelling tool.  Nested C&C's
offer the idea of a larger aggregate state change being composed of
previously defined and known insular state changes.  The problem is
that that model doesn't work at the logical level.  The super-state
change has to inherit the working sets and exit criteria of the
nested state changes in order to validate its own C&C, and refuse to
allow the nested state changes to commit until that time.  If it did
not, it could fail C&C and attempt to rollback nested events which
have already committed.  Logically:

  EventA starts.
    Nested EventB starts.
    Nested EventB state-changes X.
    Nested EventB C&C's.
  EventC starts.
  EventA state-changes Y.
  EventC, based on X's value, state-changes Y.
  EventC C&C's.
  EventA attempts to C&C, and fails due to Y being modified.
  EventA can't rollback the nested change to X, as EventC's
    committed change to Y is dependent on it.
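The inheritance rule can be sketched with a write-buffer per state
change: a nested state change defers its commit by merging its
working set into its parent, so only the outermost C&C ever publishes
anything, and the failure above cannot arise.  This is a minimal
illustration under my own assumptions, not any server's actual API:

```python
# Sketch: nested state changes defer to their parent instead of
# committing.  Hypothetical names throughout.

committed = {}  # the world state

class StateChange:
    def __init__(self, parent=None):
        self.parent = parent
        self.writes = {}  # buffered writes, unpublished until commit

    def write(self, name, value):
        self.writes[name] = value

    def read(self, name):
        # Look through pending frames (self, then ancestors) before
        # falling back to the committed world state.
        frame = self
        while frame is not None:
            if name in frame.writes:
                return frame.writes[name]
            frame = frame.parent
        return committed.get(name)

    def commit(self):
        if self.parent is not None:
            # Nested: the parent inherits our working set; nothing
            # is published, so the parent can still fail C&C safely.
            self.parent.writes.update(self.writes)
        else:
            committed.update(self.writes)
```

If the outermost change fails its C&C, it simply discards its
buffered writes -- including the inherited nested ones -- and no
rollback of already-published state is ever needed.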

> Now, focusing on atomic code and abstracting from the rest, you can
> map the sequence of code fragments to a sequence of atomic events.

> Having events execute in parallel and compete for completion is done
> to get the mud to run efficiently on multi-processor architectures.

That is only one benefit.  The parallel and competitive nature of the
execution also adds sand-boxing benefits, protects the system against
the run-time effects of rogue code, guarantees a minimal performance
rate regardless of load, etc.  Essentially the server is now a
multi-tasking OS with the "events" as its client processes.
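The compete-for-completion idea can be sketched as optimistic,
versioned commits: each event records the versions of everything it
read, and the store refuses the commit if any of those versions has
moved on.  Hypothetical names throughout; an illustration of the
model, not the actual server:

```python
# Sketch of optimistic check-and-commit: events buffer their writes
# and validate their reads at commit time.  A losing event publishes
# nothing and can simply be re-run.

class Store:
    def __init__(self):
        self.values = {}    # name -> value
        self.versions = {}  # name -> commit counter

    def read(self, event, name):
        # Record the version seen, so commit can detect interference.
        event.read_set[name] = self.versions.get(name, 0)
        return self.values.get(name)

    def write(self, event, name, value):
        event.write_set[name] = value  # buffered until commit

    def commit(self, event):
        # Check: fail if anything we read has since been committed.
        for name, seen in event.read_set.items():
            if self.versions.get(name, 0) != seen:
                return False  # buffered writes are simply discarded
        # Commit: publish the buffered writes.
        for name, value in event.write_set.items():
            self.values[name] = value
            self.versions[name] = self.versions.get(name, 0) + 1
        return True

class Event:
    def __init__(self, body):
        self.body = body
        self.read_set, self.write_set = {}, {}

    def run(self, store):
        self.read_set.clear(); self.write_set.clear()
        self.body(self, store)
        return store.commit(self)

def retry_until_committed(store, event):
    # A slow or rogue event can only lose the race; it can never
    # block the other events, which is the sand-boxing benefit above.
    while not event.run(store):
        pass
```

Rollback here is trivial because nothing is visible until commit:
losing just means throwing away the buffers and running again.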

> But the atomic code property is also desirable from a software
> design point of view, which is why I am trying to separate atomicity
> and events.  

This doesn't work -- it's logically flawed.  An "event" is a state
change.  By their very nature and definition, state changes are
atomic in a C&C system.  Attempting to separate the concepts of
atomicity and state changes is rather like attempting to separate an
automobile from a car.

> It gets especially interesting when you nest atomic
> code, allowing you to make code atomic on several abstraction
> levels.

Look at the *reason* for atomicity.  In the C&C model atomicity
exists as the definition of what happened.  Nothing exists until it
passes C&C.

Now, looking from a data-centric viewpoint, commits exist to mediate
competition for data access.  You can push the point at which a state
change is committed down to the very lowest __logical__ level (that
logic being external to the data) -- you can't go any lower than that
without problems in logical rollback.  Or you can group multiple
logical state-changes into a single commit -- you can group as much
as you want there, at the increased risk of C&C failure.

So, there's a scale.  The finer the granularity of your commit model,
the larger the fraction of your processing time spent in C&C
overhead.  The coarser the granularity, the smaller that fraction,
but the greater the chance of C&C collisions.
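As a back-of-the-envelope illustration of that scale, assume a fixed
per-commit overhead, an independent collision chance per state
change, and a failed C&C that redoes its whole group.  These
assumptions are mine for the sketch, not measurements:

```python
# Toy cost model for commit granularity: total expected work for a
# batch of state changes, as a function of how many changes share
# one commit.  Illustrative assumptions only.

def expected_cost(changes, group_size, commit_overhead, p_collide):
    groups = changes / group_size
    # Probability the whole group survives C&C unmolested.
    p_ok = (1 - p_collide) ** group_size
    # Expected attempts per group (geometric retry process).
    attempts = 1 / p_ok
    work_per_group = group_size + commit_overhead
    return groups * attempts * work_per_group
```

Plugging in numbers shows the scale has two bad ends: per-change
commits pay the overhead on every change, while one giant commit
almost always collides and redoes everything; somewhere in between
lies the sweet spot.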

You takes your picks, and suffers the results.

J C Lawrence                               Internet: claw at null.net
(Contractor)                               Internet: coder at ibm.net
---------(*)                     Internet: claw at under.engr.sgi.com
...Honourary Member of Clan McFud -- Teamer's Avenging Monolith...

MUD-Dev: Advancing an unrealised future.
