I say. You say. We say.

Here is the way this project has gone so far, largely:

  1. “Yes! I’ve made enough progress with the code that I can actually start to add real content now!”
  2. “Hmm… well I tried adding some content, and some things work, but something I hadn’t considered has come up that isn’t right.”
  3. “Sigh… I could try to work around it, but I think I should make this work properly.”
  4. “Ok. The feature has been implemented or fixed, and a small amount of content has been added.”
  5. Go to 1

Yes! Hmm… Sigh… Ok.

Yes! Hmm… Sigh… Ok.

Yes! Hmm… Sigh… Ok.

It just goes to show that you can’t create software in a vacuum. You have to actually use it in real world situations to evolve the design.

My latest “Hmm…” moment has just arrived. I have the beginnings of a scene where I hope to exercise (and, if lucky, actually prove viable) the response/topic/action design I’ve been working all this time to create. The scene has a few characters, who will all at some point engage in conversation.

The room has an opening paragraph. Just to start off simple, the bartender has a response that asks if the player would like a drink, and the loudmouth seated at the bar will make a political comment. When I enter the room, I get the following output:

<room opening paragraph>  The bartender says in your direction, “Can I get you something to drink?” “I tell you, the mayor is an idiot and a buffoon. I could do his job better than he does.”

Hmm…

The code is working as designed, but not as one would desire in an ideal world. The problem arises from how normal prose is written, especially dialogue. Generally, when the topic changes or when the speaker changes, a new paragraph is created. (In my mind, a new speaker speaking is actually a strong prod toward a topic change, unless there is something binding the dialogue to the existing content.)

What I would want is more like this:

<room opening paragraph>

The bartender says in your direction, “Can I get you something to drink?”

“I tell you, the mayor is an idiot and a buffoon. I could do his job better than he does.”

(I do realize, by the way, that the second bit of dialogue needs something around it to identify the speaker. This is not the greatest content so far, but it is early days and first stabs.)

What the text output needs is context. (I have an entire blog post planned – and even started – touching on that subject.) The text-outputting code needs to know both what has been output and what will be output in order to make intelligent formatting decisions connecting the two. The context is key. The text itself, at least in these cases, is largely irrelevant. But two key pieces of information are necessary:

  1. What kind of output it is. In particular, in this case, we need to know if the output is dialogue or not. That implies at least two kinds of output. There may well be more.
  2. What character the output corresponds to. In this case, it would be which character is speaking the dialogue, but it’s also useful in terms of binding dialogue output to non-dialogue output for the same character. (Both pieces are sketched just after this list.)
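
To make that concrete, here is a rough sketch of what tagged output might look like. The shapes and names here (OutputKind, OutputChunk, speaker) are purely illustrative; ResponsIF doesn’t define any of them today.

    // Hypothetical shapes, for illustration only; not part of ResponsIF today.
    type OutputKind = "narration" | "dialogue";

    interface OutputChunk {
        kind: OutputKind;  // what kind of output this is
        speaker?: string;  // which character it belongs to, if any
        text: string;      // the prose itself
    }

    // The bar scene's output, tagged with both pieces of context:
    const chunks: OutputChunk[] = [
        { kind: "narration", text: "<room opening paragraph>" },
        { kind: "dialogue", speaker: "bartender",
          text: "The bartender says in your direction, “Can I get you something to drink?”" },
        { kind: "dialogue", speaker: "loudmouth",
          text: "“I tell you, the mayor is an idiot and a buffoon. I could do his job better than he does.”" },
    ];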

We already have a mechanism in ResponsIF for associating content with characters – responses belong to responders, and responses are what “say” text. So if the content is authored in a reasonable way, where responders handle their own dialogue sequences, then we should generally know who we’re talking about – or who is doing the talking. The responder is a key part of the context.

We also already have a mechanism for saying what a response “is”, which can be used to group responses for output. This can be used for things like hints or other special text that lives alongside but separated from the main text. (My first thought was that this mechanism could work to separate the content. The problem is that these are actions across multiple responders, each being processed separately. Because they are not processed together, the code never sees them all at once, so it has no way to partition them. In other words, it has no context.) Whether this existing mechanism is the one to use for this purpose remains to be seen. But either way, we need to attach metadata to the output.

A solution would be to have the output handler be aware of what has been output and what is being output, using the output type and character to nicely separate things. And the code to do that shouldn’t be that hard to write. What I often run into, though, is the question of whether it makes sense to build such “business logic” into general-purpose code. This will take more review, but I’m leaning in that direction. The more “smarts” ResponsIF has out of the box, the more people will enjoy and want to use it.
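
As a sketch of that logic (again with invented names, reusing the OutputChunk shape from above), the core rule could be as simple as this: start a new paragraph whenever the kind of output or the associated character changes from one chunk to the next.

    // A guess at the core rule: new paragraph on a change of kind or speaker.
    // (OutputChunk is the hypothetical tagged-output shape sketched earlier.)
    function needsParagraphBreak(prev: OutputChunk | undefined,
                                 next: OutputChunk): boolean {
        if (!prev) return false;                   // first output: nothing to separate
        if (prev.kind !== next.kind) return true;  // narration/dialogue boundary
        return prev.speaker !== next.speaker;      // new speaker, new paragraph
    }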

One way this can be implemented is via some sort of layer between the response processors and the output formatter. This layer would have the smarts to know what to do to the text to make it look correct. In theory, that layer could be swapped out, but what I hope to do instead is to expand it even more, allowing it to take part in the actual writing of the content in future situations.
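
For what it’s worth, here are the bones of how I picture such a layer, building on the sketches above. This is speculation, not working code from the project:

    // Speculative bones of that layer: it remembers the previous chunk and
    // decides how to join each new chunk to what came before.
    class StoryText {
        private prev: OutputChunk | undefined;
        private parts: string[] = [];

        write(next: OutputChunk): void {
            if (this.prev) {
                // Reuse the rule from above: break on a kind or speaker change.
                this.parts.push(needsParagraphBreak(this.prev, next) ? "\n\n" : " ");
            }
            this.parts.push(next.text);
            this.prev = next;
        }

        render(): string {
            return this.parts.join("");
        }
    }

Feeding the bar scene’s three chunks through it would yield the three separate paragraphs I wanted, rather than one run-on block.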

In a game I was working on before, I had such a piece of code. I called it StoryText, a name I still like. (Of course, the vaporware cat is out of the bag here, and someone could easily steal that name.) In order to implement StoryText, we need to associate metadata with content, like the content “kind” mentioned above. And then write some code. In a reasonable way.

How exactly to slot this into place requires some thought.

Sigh…
