[MUD-Dev] Event Scheduling
hhs at cbs.dtu.dk
Wed Feb 9 11:20:17 New Zealand Daylight Time 2000
On Tue, 8 Feb 2000, Cynbe ru Taren wrote:
> Hans-Henrik Staerfeldt wrote:
> > I once saw a lecture covering an eventqueue algorithm running O(log(log(n)))
> > for insertions and O(1) for deletions. My guess would be that it is the
> > implementation of the actual events that will take the time, even if you
> > use an O(log(n)) time event queue, or are my notions wrong?
> I would take that O(log(log(n))) with a grain of salt: It means that
> inserting and deleting N events gives you an O(log(log(n))) sort,
O(n log(log(n))), but I guess that was what you meant :-)
> If it were that easy, quicksort would be history. :)
It _is_ right :-). Furthermore, I saw the eventqueue run as a sorter
against quicksort, and the speedup _is_ there (for n>500 and random
values, or so). I'll try and go dig up the reference.
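The idea of running a priority queue as a sorter is just: push everything in, pop everything out in order. A minimal sketch (the article's O(log log n) structure isn't reproduced here; the standard binary heap stands in for it, so this version is O(n log n) rather than O(n log log n)):

```python
import heapq
import random

def pq_sort(values):
    """Sort by inserting everything into a priority queue and
    popping the minimum repeatedly. With an O(log log n) queue this
    is an O(n log log n) sort; the binary heap used here as a
    stand-in gives the usual O(n log n)."""
    heap = list(values)
    heapq.heapify(heap)  # O(n) heap construction
    return [heapq.heappop(heap) for _ in range(len(heap))]

# Sanity check against the built-in sort on random values
values = [random.randrange(10**6) for _ in range(1000)]
assert pq_sort(values) == sorted(values)
```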
...digging... Whoha! Here's the article, and it even has C source in it,
and some performance measures. BTW, this was constructed with Dijkstra's
shortest path in mind, so I guess you could use it there as well (where you
would expect large n).
Other referenced papers explain the proof of the O(log log n) priority queue;
one also implements a randomized O(log log n) version...
Check out http://www.diku.dk/users/mthorup/PAPERS/papers.html for
the algorithmics nerds. Many of the articles are on graph problems
that could relate to MUDs (connectivity, or shortest path).
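For reference, here is why the priority queue matters for Dijkstra: the queue operations dominate the running time, so a faster queue speeds up the whole search. A sketch with a plain binary heap (the adjacency-list representation `node -> [(neighbor, weight)]` is my own choice for the example, not anything from the article):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source over a graph given as
    {node: [(neighbor, weight), ...]}. Every edge relaxation may push
    onto the priority queue, which is why queue speed dominates for
    large n."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

In a MUD context the nodes could be rooms and the weights movement costs, e.g. for mob pathfinding across a large zone.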
> Usually it turns out there's a bit of fudging, pretending that
> radix sort is constant time, or some such, involved in such
> expressions, at least in my experience.
> Theoreticians like to cook up fanciful priority queues on a regular
> basis: They typically aren't anything for practicing programmers
> to get overly excited about. E.g., look up the "Calendar Queue",
> which promises both O(1) insert and O(1) delete times. It's basically
> a rolling hashtable with the slots corresponding to times, and the
> assumption that they hash uniformly enough to give the claimed
> O(1), no doubt with the infamous "probability 1".
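The rolling-hashtable idea in the quote can be sketched roughly like this. This is a hypothetical minimal version for illustration, not the published Calendar Queue (which also resizes buckets adaptively); the O(1) claim only holds if events spread uniformly over the buckets:

```python
class CalendarQueue:
    """Minimal calendar-queue sketch: a ring of fixed-width time
    buckets, like days on a calendar. Insert hashes an event to its
    bucket in O(1); pop walks the ring forward one 'day' at a time,
    which averages O(1) *if* events are spread uniformly."""

    def __init__(self, num_buckets=64, width=1.0):
        self.n = num_buckets
        self.width = width
        self.buckets = [[] for _ in range(self.n)]
        self.size = 0
        self.cur = 0            # bucket the clock currently points at
        self.cur_top = width    # upper time bound of that bucket

    def insert(self, time, event):
        i = int(time / self.width) % self.n  # O(1) "hash" by time
        self.buckets[i].append((time, event))
        self.size += 1

    def pop_min(self):
        if self.size == 0:
            raise IndexError("pop from empty queue")
        while True:
            bucket = self.buckets[self.cur]
            # Events in this bucket that fall within the current "day"
            due = [e for e in bucket if e[0] < self.cur_top]
            if due:
                item = min(due, key=lambda e: e[0])
                bucket.remove(item)
                self.size -= 1
                return item
            # Advance the clock one bucket; wrapping around the ring
            # corresponds to moving to the next "year"
            self.cur = (self.cur + 1) % self.n
            self.cur_top += self.width
```

If many events hash into one bucket (a burst at the same time), pop degrades toward a linear scan, which is exactly the uniformity assumption being objected to.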
Randomized algorithms are usually also efficient in practice. This however
> It may be apropos to point out yet again that it is almost always
> a mistake to start optimizing this sort of stuff heavily before
> you in fact have a demonstrated performance problem on live data,
> and have traced it to the datastructure/algorithm in question?
I totally concur. My guess is that it would be everything else that stole
the time; unless you have a lot of events that are very fast to execute, I would go
for a simple heap, and worry about the event queue only if my profiler said I was
spending most time there.
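The "simple heap" approach amounts to very little code, which is part of the argument for starting there. A minimal sketch (class and method names are my own, for illustration):

```python
import heapq
import itertools

class EventQueue:
    """Heap-backed event queue: O(log n) schedule and pop, which is
    rarely the bottleneck next to actually executing the events."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # FIFO tie-break for equal times

    def schedule(self, time, action):
        heapq.heappush(self._heap, (time, next(self._counter), action))

    def run_until(self, now):
        """Execute every event due at or before `now`, in time order."""
        while self._heap and self._heap[0][0] <= now:
            _, _, action = heapq.heappop(self._heap)
            action()
```

The main loop then just calls `run_until(current_time)` each tick; only if a profiler shows this dominating would a fancier queue be worth the trouble.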
Hans Henrik Stærfeldt   | bombman at diku.dk | work: hhs at cbs.dtu.dk
address:                |___ +45 40383492 __|__ +45 45252425
Dybendalsvej 74 2. th,  | Scientific programmer at Center for Biological
2720 Vanløse, Danmark.  | Sequence Analysis, Technical University of Denmark
MUD-Dev maillist - MUD-Dev at kanga.nu