[MUD-Dev] Respecting NPCs
tnixon at avalanchesoftware.com
Mon Oct 29 11:21:55 New Zealand Daylight Time 2001
From: "Michael Tresca" <talien at toast.net>
> This is the difference between:
> -PC wants to buy sword.
> -PC asks NPC about saber.
> -NPC response: "NPC doesn't know what you're talking about."
> -PC wants to buy sword.
> -PC asks NPC about saber.
> -NPC response: "I don't have a saber, do you want to buy armor?"
> -PC replies no.
> -NPC: "Do you want to buy a weapon?"
> -PC replies yes.
> -NPC: "Ah, I sell swords, staves, and daggers."
> -PC asks about swords.
> -NPC: "I have the following kinds of swords: falchion, long sword,
> short sword."
> -PC buys a different sword, realizing that a saber isn't in the
But you see, this is EXACTLY my point.
You will never get the second set of responses out of an alicebot
unless you directly program them in. Your alicebot would have to
have entries like this:
"Do you have a saber for sale?" -> "I don't have a saber, do you
want to buy armor?"
"What sorts of swords do you sell?" -> "I have the following kinds
The responses in an alicebot are all entered by hand. You can get
much more intelligent sounding responses in much easier ways. Even
keyword-based responses are generally more believable than what a
chatbot can produce.
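The contrast here can be sketched in a few lines. A minimal keyword-based responder (all keywords and replies below are invented for illustration) makes it plain that every response is hand-entered either way:

```python
# Minimal keyword-based NPC responder: every reply is hand-entered,
# exactly as it would be for an alicebot, just without the pattern
# machinery. Keywords and replies are invented for illustration.

RESPONSES = {
    "saber": "I don't have a saber, do you want to buy armor?",
    "sword": "I have the following kinds of swords: falchion, long sword, short sword.",
    "armor": "Ah, I sell chain mail and leather jerkins.",
}

DEFAULT = "I don't know what you're talking about."

def npc_reply(utterance: str) -> str:
    """Return the first hand-entered reply whose keyword appears in the input."""
    lowered = utterance.lower()
    for keyword, reply in RESPONSES.items():
        if keyword in lowered:
            return reply
    return DEFAULT

print(npc_reply("Do you have a saber for sale?"))
# -> I don't have a saber, do you want to buy armor?
```

This is exactly the "laundry list" being discussed: the bot responds to nothing its creator did not type in by hand.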
Now, when I say chatbots are an evolutionary dead end, I'm only
talking about the current popular lines, the ones based on pattern
matching. That's the brute force approach to language, and I just
don't see it happening anytime soon. Hell, we can't even
brute-force chess yet.
> This is MUCH more satisfying than the dumb, "I don't know"
> response. One of the things I've noticed with ALICEBOT is that it
> makes an effort to get YOU to speak ITS language.
If your alicebot has the response, then you had to add it by hand.
It would be just as easy, and probably sound a lot less stupid, to
add the same response to a keyword-based NPC.
Alicebots are trying to solve, by brute force, what very well may be
an unsolvable problem, or at the very least, a problem that is
unsolvable until computers have the complexity of the human brain.
And when computers can emulate a human brain (be it decades from now
or millennia, I have no doubts whatsoever that it will happen
eventually) something like an alicebot will still be the wrong
approach, because that's not how we do it.
> You could do this for any merchant NPC, who, upon being asked
> about something it doesn't comprehend, responds with a series of
> what it CAN do, modified by its inventory, circumstances, etc. To
> make it world or city specific, other module subsets about a
> particular land, area, etc. could be added based on where that
> merchant is located.
Here you're not talking about an alicebot. You're talking about
something else entirely, something that relies on some sort of
knowledge base, something that knows the rules of the world, knows
what it has and what it can do with what it has. Here I agree
wholeheartedly. This is the direction we should be reaching for. It's
just about as far away from an alicebot as you can get.
UNLESS you're back to talking about adding all the responses, all
the answers to specific questions by hand.
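That knowledge-base direction can be sketched under the assumption of a toy inventory model (the class, method, and item names here are all hypothetical): the merchant derives its answers from what it actually stocks, so nothing is entered question-by-question.

```python
# Sketch of the knowledge-base approach: the merchant answers from what
# it actually has in stock, so responses change with its inventory
# instead of being hand-entered per question. Names are hypothetical.

class Merchant:
    def __init__(self, inventory):
        # inventory maps a category name to the items currently in stock
        self.inventory = inventory

    def ask_about(self, thing: str) -> str:
        for category, items in self.inventory.items():
            if thing in items:
                # It knows it has this exact item.
                return f"Ah, yes, I have a {thing} for sale."
            if thing == category:
                # It knows what it stocks in this category.
                return f"I sell these {category}s: " + ", ".join(items) + "."
        # It knows what it does NOT have, and can redirect the buyer.
        stocked = ", ".join(c + "s" for c in self.inventory)
        return f"I don't have a {thing}, but I do sell {stocked}."

smith = Merchant({"sword": ["falchion", "long sword", "short sword"],
                  "dagger": ["stiletto"]})
print(smith.ask_about("saber"))
# -> I don't have a saber, but I do sell swords, daggers.
```

Move the same merchant to a different city, swap the inventory, and every answer changes with it, with no new responses written by hand.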
> Simple. Compartmentalized. Still more intelligent than, "I have
> this laundry list and if you don't ask the right key word I will
> not interact with you."
I'm starting to feel like a broken record, but you have just
described an alicebot. They just generally have a much larger set
of keywords they respond to. A larger laundry list, if you will.
And because I think I'm saying the same thing over and over, and not
really even finding new words to say it with, time to summarize,
with only slightly different language:
An alicebot IS a keyword based mechanism. Nothing more, nothing
less. It responds to nothing that its creator has not explicitly
told it to respond to. And because it doesn't ever try to make
sense of the structure that it's given, because it's a simple
pattern-matching machine, it will never be more than this. I just
happen to think that if you're relegated to using a keyword system
anyway, a full-blown alicebot is a waste of resources. You're not
fooling anybody (well you're not fooling anybody that's not trying
REALLY REALLY hard to be fooled, anyway), and you're just making
things harder on yourself.
All of my opinions on alicebots in particular, and AI in general,
are based on the assumption that the human brain is more than just
a pattern-matching machine. That it uses a deeper reasoning than
simple stimulus-response. That it applies simple rules to simple
knowledge to manufacture complex ideas. That behavioral
scientists are wrong, and game designers can't just treat us like
rats in mazes. :)
However, it could probably successfully be argued that I'm wrong,
and that the brain is just a complex pattern matcher, but of course
we don't yet know the truth. If it does turn out that I'm
wrong, then an alicebot would seem to be the correct direction to be
going. I would be extremely disheartened to find that brute force
was indeed the most human-like way of doing things, but hey.
Even if I'm right, it could be argued that an alicebot, even if it's
not the human approach to language, could still be useful, could
still approach something resembling intelligent conversation. All I
have to say here is that language is so incredibly important that
it's better to not imitate at all than to imitate poorly. I'd
rather have an NPC tell me it doesn't know what I'm talking about
than have it guess that the sky is green when it is quite obviously blue.
(Keeping in mind that an alicebot cannot ever "know" what color the
sky is. It can only spit out a hand-made response to "What color is
the sky?") Well, unless the sky really is green, but even then I'd
rather have it know, not guess. :)
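The know-versus-guess distinction can be sketched like so (the world model and both NPC functions are hypothetical stand-ins, not anything from a real system):

```python
# The difference between knowing and guessing: one NPC reads the answer
# out of the game's world state; the other matches the question against
# a canned reply that can silently go stale. Hypothetical stand-ins.

world = {"sky_color": "blue"}  # authoritative game state, not a string someone typed

def canned_npc(question: str) -> str:
    # The chatbot way: a hand-made response, disconnected from the world.
    if "sky" in question.lower():
        return "The sky is green."
    return "I don't know."

def knowing_npc(question: str) -> str:
    # The knowledge-base way: consult the world, so the answer can't drift.
    if "sky" in question.lower():
        return f"The sky is {world['sky_color']}."
    return "I don't know what you're talking about."

print(knowing_npc("What color is the sky?"))
# -> The sky is blue.
```

If the world later turns the sky green, the knowing NPC's answer follows automatically; the canned one has to be re-edited by hand.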
Some people feel the need to identify with the creator. I, on the
other hand, ever the programmer, just need to see that complex
problems have elegant solutions. :)
MUD-Dev mailing list
MUD-Dev at kanga.nu