[MUD-Dev] Re: PDMud thread summary

Jon A. Lambert jlsysinc at ix.netcom.com
Mon Oct 26 01:15:32 New Zealand Daylight Time 1998


On 23 Oct 98, Cynbe ru Taren wrote:
>
> Excuse, I haven't been following recent traffic, had my nose
> buried in paidwork and trying to get my mu* beta out the door. :)
>

Long time, no post. :)
 
> 
> | I think Chris Gray mentioned fixing bytecode memory addresses 
> | at startup, allowing direct jumps into functions.  While a performance
> | boost, it makes dynamic registration and unregistration of modules
> | more complex.  
> 
> I'd suggest computing just how much of a win the address swizzling
> is.  I'll predict the win is less than a three percent speedup in
> typical code [*], in which case one has to wonder whether it is worth
> the extra design, coding, debugging and maintenance effort for
> a speedup few if any users are ever likely to notice...

As long as the design is flexible, an optimization of this nature can 
be postponed until, if ever, it is needed.   

> [*] Based on experience with my Muq design, in which I run -all-
> pointer references through a hashtable with only about a 5% speed
> penalty.  Lets me shuffle objects freely both in memory and between
> memory and disk, which to me is a win worth the price.

Nod.  My experience with DB caching would lead me to believe
that performance is quite reasonable.
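The handle-table scheme Cynbe describes can be sketched roughly as follows.  This is my own illustration, not Muq's actual code; the class and method names are invented, and a real server would do disk I/O where this toy keeps a second dict:

```python
class ObjectTable:
    """Toy handle table: every reference goes through one lookup, so
    objects can be moved or paged out without patching raw pointers."""

    def __init__(self):
        self._live = {}   # handle -> in-memory object
        self._disk = {}   # handle -> "on disk" copy (stand-in for real I/O)

    def register(self, handle, obj):
        self._live[handle] = obj

    def evict(self, handle):
        self._disk[handle] = self._live.pop(handle)  # page out to "disk"

    def deref(self, handle):
        if handle not in self._live:                 # fault it back in
            self._live[handle] = self._disk.pop(handle)
        return self._live[handle]


table = ObjectTable()
table.register(42, {"name": "sword"})
table.evict(42)                           # object leaves memory...
assert table.deref(42)["name"] == "sword" # ...but the handle still works
```

The ~5% penalty Cynbe quotes is the cost of that one extra lookup per reference; in exchange, nothing outside the table ever holds a raw address.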
 
> | Having the return value buys nothing either, since the caller may not use 
> | it and wouldn't be able to build a proper mangled name.  
>   
> Function calls and function returns logically have a strong
> symmetry:  It is rather artificial to support multiple args
> but not multiple return values.  Current programming habits
> don't make heavy use of multiple return values, but this may
> change, and to me seems to be slowly changing.  E.g., Perl
> programmers make relatively heavy use of multiple return
> values.

Interesting.  I admit I had not thought along these lines, but it's 
certainly an artificial restriction.  I had envisioned a language 
that would be easily understood by novice programmers, and I wonder 
how easily this concept would go over.

Something like this?

x, y, z = foo(a, b, c);
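For what it's worth, this is already natural in some scripting languages.  A minimal Python sketch (function and variable names are mine, purely for illustration):

```python
def move(x, y, dx, dy):
    """Return both new coordinates at once."""
    return x + dx, y + dy      # the callee packs multiple values

nx, ny = move(1, 2, 3, 4)      # the caller unpacks them by position
assert (nx, ny) == (4, 6)
```

Novices seem to pick up the "parallel assignment" reading fairly quickly, since it mirrors the argument list on the calling side.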

> | int cast(int time, string spell)  ---->   #magic at cast$ai$as 
> | char foo(char * bptr, bar i)    ---->  #magic at foo$apc$aebar
> 
> Depending on the context, you might want to use a hash of the
> mangles instead of using the mangles themselves.
> 

A very good idea.  Thanks.
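As a sketch of the idea (the helper name and the choice of SHA-1 are my assumptions, not anything from the thread), each variable-length mangled name could be reduced to a fixed 32-bit key for table lookup:

```python
import hashlib
import struct

def mangle_hash(mangled):
    """Reduce a mangled name (e.g. '#magic at cast$ai$as') to a
    fixed 32-bit key, suitable for a dispatch table."""
    digest = hashlib.sha1(mangled.encode()).digest()
    return struct.unpack("<I", digest[:4])[0]  # first 4 bytes as uint32

h1 = mangle_hash("#magic at cast$ai$as")
h2 = mangle_hash("#magic at foo$apc$aebar")
assert h1 != h2        # distinct names should (almost always) hash apart
```

Fixed-size keys make the dispatch table entries uniform, at the cost of having to handle (rare) collisions somewhere.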
 
> | For a standard call format, why not just have the caller push() its address and then all the 
> | arguments from left-to-right onto the stack then jump to the callee.  The callee pops() them 
> | out and loads local variables right-to-left.  Return would pop() the return address off the 
> | stack and push() the result and jump to the address just popped.
> 
> Sounds like the call sequence is doing an extra stack-to-stack copy.
> Is there a good reason to spend time on this?  Why can't the local
> variables be located where the caller leaves the args?

It depends.  Any comments on how arguments should be passed to 
module functions?  By copy, by unmodifiable reference or modifiable 
reference? 
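The call sequence quoted above can be modeled in a few lines of Python.  This is purely illustrative (a real interpreter would jump to bytecode addresses, not call Python functions, and the names are invented):

```python
stack = []

def call_foo(a, b):
    stack.append("ret_addr")   # caller pushes its return address
    stack.append(a)            # then arguments left-to-right
    stack.append(b)
    return foo_body()          # "jump" to the callee

def foo_body():
    b = stack.pop()            # callee pops right-to-left into locals
    a = stack.pop()            # (this is the extra stack-to-stack copy
    ret_addr = stack.pop()     #  Cynbe is questioning)
    stack.append(a + b)        # return: push the result...
    return stack.pop()         # ...and "jump" to the popped address

assert call_foo(2, 3) == 5
```

Cynbe's point is that the pop-into-locals step is avoidable: if locals are simply addressed at the slots where the caller left the arguments, the copy disappears.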

> | This is pretty simple.  There may be better ways of doing this.  Message passing perhaps?
> | Thoughts?
> 
> Remember to push the argument count.  Sooner or later you'll want
> to have variable numbers of arguments, and architecting the possibility
> out of existence will suddenly look stupid.  Call/return symmetry suggests
> pushing the number of return values when returning, as well.

Good point, although this need only be done for variable-argument 
functions.  Functions which match mangled name prototypes will 
inherently know how to pop arguments.  
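A toy sketch of a variadic call with the count pushed last, so the callee knows how many arguments to pop (names are illustrative only):

```python
stack = []

def call_variadic(*args):
    for a in args:             # push arguments left-to-right
        stack.append(a)
    stack.append(len(args))    # then the count, on top
    return sum_body()

def sum_body():
    n = stack.pop()            # pop the count first
    total = 0
    for _ in range(n):
        total += stack.pop()   # then pop exactly n arguments
    return total

assert call_variadic(1, 2, 3) == 6
```

Fixed-arity calls that match a mangled prototype could skip the count entirely, since the arity is already encoded in the name.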
   
> Don't get obsessed by speed to the point of screwing up clean semantics
> to buy a fraction of a percent speedup:  If speed matters that much,
> you shouldn't be using a bytecoded implementation anyhow.

Yes.  A slow implementation which is functionally cohesive and 
loosely coupled can be tweaked and optimized in many ways later.  
A design predicated upon optimization concerns quickly becomes 
inflexible and limiting, making some later optimizations 
impossible.

> As a final thought, I've gone to doing everything 64-bit at this
> point.  32-bit architectures are clearly a sunset industry at this
> point, and starting new servers 64-bit avoids issues of converting
> 32-bit dbs to 64-bit architectures during the transition.  I clocked
> less than a 15% slowdown for doing this on Intel, which to me for
> my application is worth the price:  Your mileage may of course vary,
> but again, if a 15% speedup matters, should you be using bytecodes
> at all?

Not to mention double-byte Unicode concerns.  I agree that 64-bit 
architectures will become the de facto standard very soon.  

--
--/*\ Jon A. Lambert - TychoMUD     Internet:jlsysinc at ix.netcom.com /*\--
--/*\ Mud Server Developer's Page <http://www.netcom.com/~jlsysinc> /*\--
--/*\   "Everything that deceives may be said to enchant" - Plato   /*\--
