Meaning

Warning: There is this known phenomenon where, when certain words are repeated over and over, they cease to have any meaning and simply become odd sounds. If such a thing occurs in what is to follow, it is recommended to step away for a while until meaning returns.

I’d like to start out by asking a simple question. The reasons why will come up later, but I think the question in and of itself is interesting. The question is this:

What does the word “mean” mean?

Now, I know there are probably any number of people out there (like me) who will immediately point out that the word “mean” can have many different meanings. I’m really interested in the meaning of “mean” that has to do with meaning, though, not with things like mathematical averages or people who aren’t very nice. So let’s restate the question in a way that is perhaps a bit more mind-bending but hopefully more precise:

What does the last word in this question mean?

Perhaps you are able to come up with a definitive answer immediately and can spit out verbatim a dictionary definition of the word “mean”. Perhaps you could come up with something after a bit of thought. Perhaps you weren’t able to come up with an answer at all. If I can be honest, I was a bit in that last camp myself.

Here is the thing, though: even if you had a tricky time coming up with an answer – or, in fact, couldn’t work up any answer – you still knew what the question was asking. It’s actually a fairly straightforward and common thing to do, to ask what words mean. This means that

even though you couldn’t come up with an answer in words to what “mean” means, you knew what it meant anyway. You just couldn’t express it using other words.

You understood the question, even though you couldn’t actually answer it, even though it was talking about a part of itself, which was something you understood.

Mind buzzing yet?

This raises something I realized at some point, which is that the meaning that we assign to words is not necessarily what is given in dictionary definitions. We typically don’t learn words by looking up their definition. I know that, personally, the vast majority of words I know I learned implicitly, through hearing or seeing them used in context. I know what those words mean, even if I can’t give you a dictionary definition of them. The word itself is how I describe that meaning. Maybe someone can give you its meaning in different words, but that doesn’t change the fact that I use that word to convey the meaning I intend, because that is the word I have for that meaning.

Sometimes I will go look up a word that I have been using for a while (perhaps a long time) when it suddenly occurs to me that I may have been using it incorrectly all that time. What’s interesting is that usually my internalized meaning of the word agrees to a great extent with what the dictionary definition says, even though I never looked up the word before or had it explained to me in words. I was able to pick up on its meaning contextually.

Let’s add something else into the mix:

What is the meaning of the word “meaning”?

I don’t expect you to work up that one yourself. That one feels even deeper, and when I went to look up its dictionary definition, I discovered there were aspects to it that hadn’t made it into the sort of implicitly derived meaning I had assigned to it. The first definition I came across was this:

what is meant by a word, text, concept, or action.

Ok, well that’s not very useful. It relies on the word “meant”, which is tied up in “meaning” anyway. It’s sort of self-referencing. (More on that sort of thing later as well.)

So I found another, more interesting definition:

the thing one intends to convey especially by language

There are other variants of that definition that expand on it more, noting that it might not be language as such. You could be using signs, symbols, sounds, gestures, or anything else that communicates.

What I find interesting about this is the “intends to convey” part – we have this thing inside us that we want to communicate to someone else, and we do that with words, gestures, etc. What that implies is that the words themselves are not the meaning. The words are a mechanism for conveying meaning, but they aren’t the meaning themselves.

Ironically, the answer to our question brings us back to the question: if meaning is this thing inside us that we wish to convey, and the words or whatever are just a means to do that, what the hell is meaning to begin with? It is what we intend to convey, but what does that actually mean? What is it that we translate into words or gestures or inarticulate sounds?

One answer might be that the words themselves are the meaning. There are any number of people out there who think that we think in words. I don’t know about them, but I certainly don’t, at least not in the serial, one-word-after-another fashion that I experience when reading or speaking. My brain makes connections much more quickly than that, wordlessly. And, for example, when I’m having a conversation with someone, as I’m listening to them speak, what I want to say next will come to me suddenly, in an instant, before I ever formulate words either out loud or in my mind. I know what I’m going to say, on some level, before I say it.

If you assume that words themselves are their meaning, then you run into an interesting problem: when you try to get to the heart of the meaning of any particular word (excluding for the moment tangible items, which you can sort of point to and say, “that”), you end up in this sort of circular regress: this word is defined by those words, but then what defines those words, and it’s other words, which are then defined in terms of more words, or the same words, and it goes on and on. You never get anywhere. There is no foundation. It never bottoms out. You would need to have foundational or simpler words that the others build on… but then what defines their meaning?

And I think it’s borne out in normal life: we don’t have an internal dictionary definition for all the words we use. We assign meaning to words and use those words to convey that meaning, but we don’t necessarily have to have, on hand, a knowledge of other words that we can use to express the same thing.

I wish that I could go back to before I knew any words, and to experience what it’s like to not only learn new words but new meanings… if that’s even the right word for what I mean. I don’t mean simply new arrangements of existing ideas. I mean learning something entirely new.

I recently subscribed to one of those “word a day” websites, because I thought it would be interesting to see what I could learn. Unfortunately, the first week was “portmanteau” week (where a portmanteau is a word blending together parts of other words, like “brunch” or “motel”), so it was just combining existing concepts into a new word. Nothing earth-shattering there. Later one of the words offered was “proparoxytone”. A rather imposing word. But it just means “Having stress on the third-from-the-last syllable.” Ok, so somebody made up a word for that. But that’s not really a new idea; it’s more a combining and refining of an existing idea. Words have syllables, syllables have accents, and we know how to count: combine them a bit. Nothing mind-expanding there.

But there was a time when I hadn’t heard the word “mean” before or even had an idea about what it meant. There was a time when I hadn’t encountered “love” or “justice”. We start off easy with things like “apple” and “block” and “hungry” and “mama, I believe I have soiled myself.” Those are tangible things, where we can know what someone means by the word, because it’s something right there, in the real world. Somehow, though, we pick up on more abstract ideas, and all just through context, by taking in how other people use them.

I wonder if we accumulate actual low-level meanings like that, or whether somehow – like language syntax – we have meanings built into our brains, just waiting for words to come along for us to make the connection and learn how to express. Ah, to go back in time, and undo all I know (except this question, of course)…

So why am I going into this? Meaning is something I have pondered often in my work as a computer programmer.

First, we can leverage a computer for anything that we can express in a way such that the computer can act on it. And I have wanted to create games or simulations with characters that could interact with the player, the virtual world within which they live, and each other. I wanted them to have conversations, to gossip, to pass knowledge around. But how do you represent that? How do you encode “meaning” in a computer? At least, can you do so in a (for lack of a better word) meaningful way?

I could simply dump a dictionary into the computer’s data structures. That would be all the words I know and a lot I don’t. But the words aren’t the meanings… Even if the computer could cross reference words with other words to try to come up with definitions, all the words just point to all the other words.
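To make that circularity concrete, here is a tiny Python sketch. It is purely illustrative, with a made-up toy dictionary: if every definition is itself made of words from the same dictionary, chasing definitions never bottoms out; it just goes around in circles.

    # A toy dictionary where every definition is itself made of defined words.
    toy_dictionary = {
        "mean":        ["intend", "convey"],
        "intend":      ["mean", "purpose"],
        "convey":      ["communicate", "mean"],
        "communicate": ["convey", "mean"],
        "purpose":     ["intend", "reason"],
        "reason":      ["purpose"],
    }

    def chase_definitions(word, seen=None):
        """Follow definitions word by word; report when we loop back on ourselves."""
        seen = seen or set()
        if word in seen:
            return "cycle detected at '" + word + "' - it never bottoms out"
        seen.add(word)
        for defining_word in toy_dictionary[word]:
            result = chase_definitions(defining_word, seen)
            if result:
                return result
        return None

    print(chase_definitions("mean"))   # cycle detected at 'mean' - it never bottoms out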

Meaning lies elsewhere.

Even knowing how to construct sentences… you not only have to know what words mean but also what role they play in sentences. All of this is wildly beyond my ability to comprehend.

(Google’s recent LaMDA AI chatbot raises an interesting issue. By using these incredibly large neural nets that train themselves, we may eventually end up creating consciousness and intelligence without understanding at all what consciousness and intelligence actually are, with it all buried in the mass of connections… And how will we know if a bit of software truly understands the real meaning of something? The “Chinese Room” thought experiment goes into that in an interesting way.)

The other aspect of meaning for me as a computer programmer is a bit more down-to-earth: as someone writing code that others will want and need to understand, how do I express the ideas I have in my head in code in such a way that both the computer does well with them and a human being will be able to understand them? We often get hung up on syntactic or structural aspects of the code. Apart from the difficulty in naming things (and except for those times when someone just comes right out and says “this is confusing” or “I don’t get what’s going on”), we don’t really emphasize meaning much, at least not in any sort of strong way. We generally want the code to be understandable, but we have very little that guides us in terms of making the code meaningful (in a comprehension sense, not a “higher purpose” sense), in terms of being able to effectively express what we intend to express.

And that, after all, is what meaning is.

So, I’ll continue to explore the idea of “meaning”. I may never get anywhere. It does have the advantage, though, of being something that’s going on inside me in a personal way, which is always a nice property to have.

(Post thought: I’m really interested in the idea about whether there can be new ideas or concepts to learn, on a fundamental level, for someone who is my age. Not repurposing or restructuring or refinement of existing ideas, but entirely new concepts, as we encountered when we were those “blank slate” babies. I’d be very happy to entertain any ideas people have about this. I suspect if so, it will be realms like mathematics, philosophy or something else that involves things existing in the mindscape.)

Excitement

Some things that I have found that give me that mental buzz:

  1. Making a connection between things that I hadn’t seen as being connected before.
  2. Making a distinction between things that I hadn’t seen as separate before.

Both of these open up new ways of looking at things, and that can be quite exciting!

Everything I Know About Life I Learned from Programming Computers

I have been writing software for a long time. I have learned a lot in that endeavor that I realized applied to life in general. Maybe this will be of interest to someone.

  • Just because something’s logical doesn’t mean it’s right or true. For any given amount of information, there are typically a number of possible explanations that make sense. But only one is actually true.
  • The more information you get, the closer you get to the truth. Diving deeper into something helps you to narrow the number of possibilities, refining your understanding.
  • Don’t trust your first instinct about what something is or means. Sometimes your first thought is right – you get lucky sometimes. But if you assume your first, uninformed opinion is the right one, it can prevent you from seeing what the truth actually is.
  • “Good” and “bad” are always subjective. Even when your metric is objective, the assignment of that metric to “good” or “bad” is subjective. It’s better to talk about the metrics themselves. Or just not get into it to begin with.
  • The larger the system, the more chaos creeps in. Breaking things down into small pieces that work doesn’t mean the larger thing will work. You have layers built on top of layers. Complex things are chaotic by nature. And people are inherently complex.
  • You can spend weeks or months planning only to have to modify or throw away your plans as soon as you make your first connection to something real. What you think is going to work might not be what actually will work. Sometimes it’s better to make some explorations, to confirm or discover what is actually real.
  • Imagination is a poor substitute for reality. I have learned to doubt, on at least some level, my first impression of anything. If you want to know what is really going on, you have to get out of your head.
  • Documentation is a poor substitute for reality. Just because someone wrote something down doesn’t mean it’s true.
  • Things aren’t always going to be fun. Sometimes you have to do things just because you have to do them. That’s just life.
  • Things aren’t always going to be easy. There will be challenges you never could have imagined. It’s in the facing of those challenges that we learn about ourselves and grow.
  • Don’t give up right away. Persistence can pay off. Sometimes the difference between success and failure is the fact that you didn’t stop trying.
  • Sometimes the best approach to a problem is to stop, back up, and look at it from a different point of view. It’s easy to get stuck in what you perceive is “the right approach”. But that could just be you running into the same wall over and over again.
  • Sometimes, the best approach to a problem is to talk to someone else. Talking out a problem can help you gain insight. And sometimes the other person just happens to know the answer – they have gone through what you have gone through.
  • Nobody can do everything. You get really good results from a group of people with different skill sets all working together. Every person can play a part, based on what they bring to the party. The trick to having things go well is to look at what people can do, not what you think they should do.
  • Communication is a dark art. No matter how much you might like to think that you’re being obvious and clear, the proof is in whether someone else actually thinks so as well. And if they don’t understand you, it’s not necessarily their fault… or yours. Communication is an interactive, two-way street. And people can differ even about what individual words actually mean, sometimes on subtle levels. To be effective, you have to listen as much as speak.
  • What matters is what a person does, not what a person is. Titles and status mean nothing, at the end of the day. Anyone can contribute. Anyone can create something amazing and worthwhile. Anyone can change the world.

Wait… What?

Ah yes. Definitions.

Who: “What person or people”

When: “At what time”

Where: “In what place or at what location”

Why: “For what reason or purpose”

How: “In what way or manner”

Hmm… Those all use “what”. Let’s go see what’s “what”.

What: “used as an interrogative expressing inquiry about…”

Thanks. That clarifies things. We don’t really know what “what” is, but we do know how it’s used.

So the next time someone asks you, “Which is the odd one out of ‘who’, ‘what’, ‘when’, ‘why’, etc.”, now you can tell them.

Further Thoughts on The Three Bears as Code

After writing and uploading the previous post about a computer programmer stylized rendition of “The Three Bears” (https://www.aniamosity.net/if-authors-wrote-stories-the-way-programmers-write-code/), I spent a good deal of time reflecting on what I had done. And there ended up being some interesting aspects to the process I went through that might cast some light on what we do as programmers when writing and refactoring code.

So I wanted to dive into that a bit…

The first question might be, “What was the process you used to arrive at that?” And there were different aspects to that.

High-level Structure

The initial step was to look at the overall story and see what the meaningful chunks were. It ended up being roughly along paragraph boundaries, but not exactly. In fact, it made more sense, semantically, to split the initial paragraph into two, since its halves are actually about different aspects of the story. (That could be considered a “bug” in the original story’s use of paragraphs.)

I actually think the high-level steps in Story give a pretty good overall sense of the progress of the story – that is, if you know what they mean. So that can be useful in computer code as well: by pushing lower-level details down into functions, you can allow someone to get a good sense of what a function is doing at a high level.
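The same idea shows up in ordinary code. Here is a hypothetical Python sketch (the domain and names are invented purely for illustration): the top-level function reads like an outline, and the details are pushed down into the named steps.

    # The top-level function reads like an outline of what happens.
    def process_order(order):
        validate(order)
        reserve_stock(order)
        charge_customer(order)
        schedule_shipment(order)

    # The low-level details live in the named steps below it.
    def validate(order):
        if not order.items:
            raise ValueError("order has no items")

    def reserve_stock(order):
        ...  # check and decrement inventory for each item in order.items

    def charge_customer(order):
        ...  # hand the order total off to the payment provider

    def schedule_shipment(order):
        ...  # pick a carrier and book a pickup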

This raises an interesting point, which I hadn’t thought of before:

It’s easier to understand the lower level details when you know where they fit into the higher level structure.

Someone who has read The Three Bears, for example, can know exactly where this fits in:

define Sitting_In_My_Chair:
    Someone's been sitting in my chair

They can see that small piece and understand its role in the overall story.

There is the counterpart to that as well:

It’s easier to understand the higher level structure when you know what the lower-level details do and where they fit.

Something like the overall structure of Story is only as clear as the step names can offer – and you can only put so much information into a name. That is one of the problems I have with the idea of “self-documenting code” as a sort of excuse for having virtually no comments in code: identifiers are of necessity limited. They can only contain and convey so much information.

However, once you know what they mean, then they can be good shorthand for things. Once I know what happens in Girl_Chairs, for example, I can just look at it as “the part where she interacts with the chairs”, and if I later want to find where the bears discover she has eaten the porridge, I can quickly jump to Bears_Food – once I know that that’s where it is. On the other side, if I know the overall arc of the Three Bears story, I could probably jump right there even if I had never seen this particular “code” before. I can map what’s in my head onto the story’s structure.

You can move your level up and down within the code. I think it could be argued that decomposition works best when the view level of the code goes down as you go down into sub-pieces and vice versa. Beware of decomposition where the result is actually at the same level. That can point to an arbitrary creation of concepts rather than a refinement of concepts.

Extracting Common Constructs

Moving on, another aspect of the “codifying” was to extract some constants from the code. Now, that wasn’t necessary, but it can have advantages later. It might be a bit silly to generalize “porridge” as {Food} or to allow the name of the girl to not be “Goldilocks” – “Ms. Locks”, perhaps? On the other hand, I have seen the bears named “Papa Bear” and “Mama Bear” instead of “Daddy Bear” and “Mummy Bear”. By having constants outside of the main body that can be changed, all of their references will change automatically, if so desired.

The structure of the story is the same, but minor details can be easily changed, on a whim.

This is the first part of what is typically referred to in software development circles as “DRY” or “Don’t Repeat Yourself”. Consolidating repeated values like names into overarching constants or variables (or doing so with bits of code into common functions) offers at least two advantages:
1) You can easily change the value of all instances of one of them at once by changing the higher-level definition.
2) By making them all refer to the same thing (for example, Girl for “Goldilocks”), you are saying, “These things are all the same.” That might seem obvious in this case, but there will be cases where that isn’t true. Having that additional clue when looking at the code makes it easier to work with, because you know what is meant to be the same and what isn’t. (A quick sketch of both advantages follows.)
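Here is that consolidation as a hypothetical Python snippet (the names are invented for illustration):

    # Before: the same literal repeated. Nothing says these are meant to be the same.
    print("Goldilocks went inside.")
    print("Goldilocks tasted the porridge.")

    # After: one definition, many references. Change GIRL once and every use follows,
    # and the shared name records the fact that these references really are the same thing.
    GIRL = "Goldilocks"
    print(f"{GIRL} went inside.")
    print(f"{GIRL} tasted the porridge.")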

When refactoring, we need to differentiate between things that happen to be the same and things that actually are the same, especially when we consider coding them as the same thing.

Consider, for example, my injection of Bear_Scene to replace the three repetitive bear sections. On the surface, it seems reasonable: if you look at those sections, they are basically the same as each other, structurally, with just some minor differences in wording. However, I made a mechanical decision, which is that I would make them all be expressions of the same pattern simply because it worked to do so. I really don’t know if the author deliberately intended that they would be the same or should be the same or if it just worked out that way. In other words, I don’t know if the pattern I ascribed to them is a deliberate pattern or just something accidental.

That might seem like a very nuanced (and maybe pointless) point, as the code works, but when you’re working with software, the distinction in semantics can become important if things need to change later. By forcing the text to fit the pattern (and I sneakily did that by changing Daddy_Bear’s dialogue tag from “growled” to “said” in the chair section to make it fit – does that violate the requirements?), it then becomes much more difficult later to change things if, for example, we need to add an additional line into one case but not the others.

The pattern works while it’s a pattern. But if things need to change in one case, then the question becomes, “Do I need to remove this case from the pattern, or do I need to extend the pattern to cover this varying case?” And you can typically do it either way, though if you do the latter too much, it can lead to horribly complicated code with lots of exceptions and variability, trying to account for variations in a pattern that might not actually be a pattern anymore.

This is where it really helps to understand what the code actually means. But we can’t always have that insight, especially when it’s code written by others.

Objectifying the Bears

After some initial breaking down of what varied in the various scenes, I discovered I had a number of constants like “Daddy_Chair_State”, “Mummy_Chair_State”, “Daddy_Chair_Size”, etc. where all three bears had the same set, and I had unique calling cases for each bear. At that point, I saw I could invert things a bit by dividing and consolidating the constants into structures, one for each bear. Then the other chunks could look at which bear was in play and use its values. I could just pass the bear around instead of the values within, and the underlying chunk could pick out the part it needed.

So “Daddy_Chair_State”, for example, became “bear.chair_state”, where “bear” could be one of the Daddy_Bear, Mummy_Bear or Baby_Bear “things”.

This isn’t really “object oriented”, in that there is neither encapsulation nor even any inheritance. It’s really more “structured data”. In fact, I made a point of using “thing” instead of “object” (which had connotations) or “struct” or “structure” (which sounded techie and even language specific).
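In a language like Python, the “thing” idea would roughly come out as plain structured data, something like this sketch (the field values are taken from the story code; the function is invented for illustration):

    from dataclasses import dataclass

    # Just named fields grouped per bear: no methods, no inheritance.
    @dataclass
    class Bear:
        name: str
        bowl_size: str
        food_state: str
        chair_size: str
        chair_state: str
        bed_size: str
        bed_state: str

    DADDY_BEAR = Bear("Daddy Bear", "large", "too salty!", "big", "too big!", "big", "too hard!")

    # Instead of passing Daddy_Chair_Size and Daddy_Chair_State separately,
    # the caller passes the whole bear and the chunk picks out what it needs.
    def sat_in_chair(bear: Bear) -> str:
        return f'She sat in the {bear.chair_size} chair. "This chair is {bear.chair_state}" she said.'

    print(sat_in_chair(DADDY_BEAR))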

There is possibly more that could be done along those lines. But then, there’s a limit to the gains you make, and doing too much can lead to code being harder to understand, even if it “works”.

This leads us to some of the difficulties I noticed during this exercise.

The Difficulties with Compression

As I mentioned before, it’s easier to understand the lower-level pieces when you know where they fit into the higher-level structure. That is one reason why the person writing the code is in a better place to understand the decomposed, semantically compressed code, as they (at least when they wrote it) have the full picture in their mind of what it all means and where it all fits together.

Someone coming to the code for the first time won’t have that advantage. And that is something I think we need to be aware of as programmers: that someone else won’t have the same mindset that we do, even if “someone else” is us 5 years down the road. (Though, in all fairness, I tend to find it easier to get back into a mindset I once had, even if I’m not in it at first when encountering old code.) It might make sense for us as the all-knowing programmer to keep breaking the code down into smaller and smaller pieces, as we know how they all fit and – more importantly – what they all mean. But someone else won’t, at least not at first. And that makes the code harder to understand, if the pieces become so small that they have little semantic information on their own, or if the divisions are along syntactic lines rather than semantic lines, where it becomes hard to work out what something actually means.

Take, for example, the Said_Food_Is chunk. That is exactly one line, and it’s used in exactly one place. That came into being because I originally replaced a few separate lines with that (doing a sort of textual replacement), and then later when I compressed the resulting structures using those lines into one thing, it became a single instance again.

The question is, “Is this chunk useful or does it make the code harder to understand?” Initially, it had a use, as it replaced several common sections. But I would postulate that, now that it’s back to a single use, not only does it not serve a purpose, but it makes the code harder to understand, as the name for it doesn’t add any useful information and it’s just another level of indirection. It becomes just another concept to have to deal with when understanding what is happening. The decomposition has gone too far.

What’s interesting to consider is how “helpful” decompositions differ from “harmful” decompositions. If you look back at the original Story breakdown, it felt “helpful” because it allowed us to operate at a higher level and gain an understanding of the code at that level, without having to plow through all the low-level details. It actually added information, by providing a structure that we might not have noticed otherwise. However, the Said_Food_Is chunk doesn’t have that benefit – it doesn’t take us up or down levels. It’s just a replacement with no value. It is introducing an extra step to go through, but it doesn’t offer any additional insight, whether it be structural or “these things are all the same”, which is what you get when replacing things used in multiple places. It’s barely a separate thought, and yet it’s trying to be one.

The Difficulties with Abstraction

I wanted to look at one more chunk, which is the Bear_Scene one. This is really a template to be filled in. And it works for what it needs to do. However, if you were to hand that to someone outside of the context of this code, it would be hard to get a good sense of it. I mean, you could see what it does, but you may not know exactly what it means. And this is something I have noticed often in code, which isn’t a 100% generality, but it happens often enough to make it worth watching out for:

While it can feel good to find patterns and generalize the code through common abstractions for those patterns, abstractions tend to be harder to initially grasp than concrete code.

Again, I wouldn’t say it’s generally true. Things like templated or generic containers, for example, have good semantics that make immediate sense. However, other abstractions – especially if they don’t have a unifying concept behind them – can be harder to grasp until they can be placed into context so their usage can be seen. We can extract the pattern out, but not all patterns have good semantics outside of the code that uses them, which would allow them to stand on their own in our minds.

If Authors Wrote Stories the Way Programmers Write Code

There is a saying in coding circles that code should read like well-written prose. While on the surface this sounds like an admirable goal, we’re actually taught to break down our code, which can lead to it being hard to follow and understand if done at too fine a level of detail. It’s interesting to compare that sort of writing to actual prose.

The following is a made up example of the other way around: what prose would look like if written like typical code. The main point of this is that code readability is something worth thinking about – and that maybe the automatic decomposition of code into smaller and smaller units may not always be the best thing to do, especially if we are interested in having the code be able to be easily understood. At some point, we begin to lose a lot of the context and coherence that allows us to maintain it properly in our minds.

I don’t really have hard-and-fast rules for when to break down or not, but it’s something I have been keeping firmly in mind when writing code lately. I think it’s worth thinking about and exploring in different ways to see what actually works best.

The story is “Goldilocks and the Three Bears”, with text taken from this website: https://www.wardleyce.co.uk/serve_file/699125

I fixed one error in the text and normalized some of it to keep the code from getting too special-case. There may actually be bugs in this, as it’s not something that can actually be run.

Enjoy!

=================================================================

{Story}         # Execute the story!

define Story:
    {Intro}
    {Girl_Enters}
    {Girl_Food}
    {Girl_Chairs}
    {Girl_Beds}
    {Bears_Enter}
    {Bears_Food}
    {Bears_Chairs}
    {Bears_Beds}
    {Girl_Exits}

constant Girl: "Goldilocks"
constant Food: "porridge"

constant Just_Right: "just right."

constant Daddy_Size: "big"
constant Daddy_Bowl_Size: "large"  # fix this inconsistency?
constant Mummy_Size: "medium"
constant Baby_Size: "small"

thing Daddy_Bear: [
    name: "Daddy Bear",
    bowl_size: Daddy_Bowl_Size,
    food_state: "too salty!",
    chair_size: Daddy_Size,
    chair_state: "too big!",
    bed_size: Daddy_Size,
    bed_state: "too hard!"
]

thing Mummy_Bear: [
    name: "Mummy Bear",
    bowl_size: Mummy_Size,
    food_state: "too sweet!",
    chair_size: Mummy_Size,
    chair_state: "too big, too!",
    bed_size: Mummy_Size,
    bed_state: "too soft!"
]

thing Baby_Bear: [
    name: "Baby Bear",
    bowl_size: Baby_Size,
    food_state: Just_Right,
    chair_size: Baby_Size,
    chair_state: Just_Right,
    bed_size: Baby_Size,
    bed_state: Just_Right
]

define Eating_My_Food:
    Someone's been eating my {Food}

define Sitting_In_My_Chair:
    Someone's been sitting in my chair

define Sleeping_In_My_Bed:
    Someone's been sleeping in my bed

define Intro:
    Once upon a time there lived three bears and a little girl called {Girl}.

define Girl_Enters:
    One day, she saw a house and went inside.
    {{break}}

define Girl_Food:
    She saw some {Food}.
    {{break}}
    {Tasted_Bowl_And_Commented(Daddy_Bear)}
    {Tasted_Bowl_And_Commented(Mummy_Bear)}
    {Tasted_Bowl_And_Commented(Baby_Bear)} She ate it all up.
    {{break}}

define Girl_Chairs:
    {Girl} saw three chairs.
    {{break}}
    {Sat_In_Chair(Daddy_Bear.chair_size)}. “{Chair_Is(Daddy_Bear.chair_state)}” she said.
    {Sat_In_Chair(Mummy_Bear.chair_size)}. “{Chair_Is(Mummy_Bear.chair_state)}” she said.
    {Sat_In_Chair(Baby_Bear.chair_size)} and said, “{Chair_Is(Baby_Bear.chair_state)}” Then it broke.
    {{break}}

define Girl_Beds:
    {Girl} went upstairs.
    {{break}}
    {Lay_Down_On_Bed(Daddy_Bear)}
    {Lay_Down_On_Bed(Mummy_Bear)}
    {Lay_Down_On_Bed(Baby_Bear)} She fell asleep.
    {{break}}

define Bears_Enter:
    The Three Bears came home.
    {{break}}

define Bears_Food:
    {Bear_Scene({Eating_My_Food}, "it's all gone")}

define Bears_Chairs:
    {Bear_Scene({Sitting_In_My_Chair}, "it's broken")}

define Bears_Beds:
    They went upstairs.
    {{break}}
    {Bear_Scene({Sleeping_In_My_Bed}, "she's still there")}

define Girl_Exits:
    {Girl} woke up and screamed. She ran away and never went back into the woods again.

define Tasted_Bowl_And_Commented(bear):
    {Tasted_Bowl(bear.bowl_size)} and {Said_Food_Is(bear.food_state)}

define Tasted_Bowl(size):
    She tasted the {size} bowl

define Said_Food_Is(food_state):
    said, “This {Food} is {food_state}”

define Sat_In_Chair(chair_size):
    She sat in the {chair_size} chair

define Chair_Is(chair_state):
    This chair is {chair_state}

define Lay_Down_On_Bed(bear):
    She lay down on the {bear.bed_size} bed and said, “This bed is {bear.bed_state}”

define Bear_Scene(each_said, baby_added):
    “{each_said},” said {Daddy_Bear.name}.
    “{each_said},” said {Mummy_Bear.name}.
    “{each_said}, and {baby_added}!” cried {Baby_Bear.name}.
    {{break}}

NPC Goals

When I first started using Quest, I had an idea for a game that I called “What Will Be”. It was loosely based on a story I had started writing but never finished, involving a group of people brought together by government forces for (initially) unknown purposes. It was going to be a parser game, and I wanted it to have multiple autonomous NPCs. It was during my attempt to create the infrastructure for this game that I first came up with the idea of “goals”.

A “goal” is conceptually similar to its real life counterpart, though expressed in terms of the game world: a goal is, roughly speaking, a desired world state. Perhaps it’s an NPC wanting to be somewhere. Perhaps it’s a door being opened or an object given. The idea would have to be extended to more internal things as well (e.g. speaking to another character with the goal of conveying information), but I figured I’d get to that once I got the more mundane situations out of the way. Trying to bite off too much at once can lead to either indecision or madness.

I chose some initial goal situations to implement. They were these:

  1. An elevator
  2. NPCs getting from point A to point B in the game world (a three story building, in this case), including riding the elevator and using key cards to enter rooms.
  3. An initial scene where an NPC leads the PC to a meeting.

With respect to number 1, I seem to have this thing for elevators. Perhaps it’s because they have straightforward, well-defined behavior but with multiple parts (e.g. the car itself, buttons, doors, lights). And NPCs moving around and pursuing agendas was something I really wanted as well.

My first stab at code for goals had a form which I realize now was incorrect. I’ll briefly describe it and then get into where that led me, which is to where I am today.

A goal had three main pieces:

  1. a “try” action,
  2. an “achieve” action, and
  3. code to work out whether either of those was possible (Can the goal be achieved? Can the world be changed – can I try – in order to create the conditions where the goal can be achieved?)

If the necessary conditions for a goal existed, then the goal could be achieved. A goal had behavior when the goal was achieved. It might be an NPC transitioning to a new room. It might be some other change in world state.

If the world conditions were not such that the goal could be achieved, then there was code to try to get the world into that state. And the “try” section had conditions as well.
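In rough Python terms (the original was Quest script, so this is only a sketch of the shape, not the actual code), a goal looked something like this:

    # Each goal bundled an "achieve" side and a "try" side, each with its own condition.
    class Goal:
        def can_achieve(self, world) -> bool: ...   # are the conditions for achieving met?
        def achieve(self, world): ...               # change the world: the goal is now met
        def can_try(self, world) -> bool: ...       # can we nudge the world toward achievability?
        def try_step(self, world): ...              # do the nudging

    def pursue(goal, world):
        if goal.can_achieve(world):
            goal.achieve(world)
        elif goal.can_try(world):
            goal.try_step(world)
        # otherwise there is nothing to do this turn; the goal effectively "waits"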

Let’s give an example.

An NPC wishing to enter the elevator would acquire an “enter elevator” goal. The conditions for entering the elevator were that the NPC had to be in the elevator foyer, and the elevator doors had to be open. In that case, with those conditions satisfied, the “achieve” action moved the NPC into the elevator car.

If the doors were not open (but the NPC was in the elevator foyer), the NPC had an action to try to change the world to achieve the goal: pushing the elevator button, if it wasn’t already lit up.

So we have this:

  • achieve condition = NPC in foyer and door open
  • achieve behavior = NPC moves into elevator
  • try condition = NPC in foyer and elevator button not pressed yet
  • try behavior = press elevator button

If the NPC was in the foyer and the button was already pressed, the NPC had nothing to do. It effectively “waited”. Once the elevator showed up and the doors opened, the NPC could achieve its goal by entering the elevator.
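Mapping those four bullets onto the Goal sketch from above gives something like this (again hypothetical; the npc and world fields are invented stand-ins for the game state):

    class EnterElevator(Goal):
        def __init__(self, npc):
            self.npc = npc

        def can_achieve(self, world):
            return self.npc.room == "foyer" and world.elevator_doors_open

        def achieve(self, world):
            self.npc.room = "elevator"          # the NPC steps into the car

        def can_try(self, world):
            return self.npc.room == "foyer" and not world.elevator_button_lit

        def try_step(self, world):
            world.elevator_button_lit = True    # press the call button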

The elevator itself had two goals: “close door” and “arrive at floor”. The close door goal’s achieve behavior was to close the elevator doors. The one for the “arrive at floor” goal was to open them. So they were mutually exclusive goals, with mutually exclusive conditions. The “try” action for “close door” was to count down a timer set when the doors had opened. When it reached zero, the doors could be closed. The “try” behavior for the “arrive at floor” goal was to move the elevator to a floor that has been requested by an NPC or PC.

If the elevator doors were closed and no buttons were pressed (either inside or outside the elevator), it did nothing.

The initial “lead player” sequence was a complex mix of path following (both to the player and to the target room) as well as some canned dialogue meant to coax the player to follow. There was also a “hold meeting” goal sequence, which was really canned and really unsatisfying to me.

What I found most unworkable about this method of doing goals was the need to manually string them together. For example, any path following (move from A to B) was explicitly programmed. There was nothing in the NPC that decided on a room or worked out how to get there. Plus, I wanted it to be possible to “interrupt” an NPC’s goal chasing. They might be heading to their room, but if you started talking to them, I wanted that goal to be put on hold (if it wasn’t too pressing) to take part in the conversation, with movement toward their room resuming once the conversation was over – unless some other more pressing goal had come up. The key here is that each step along the way in path following needed to be its own goal, to be evaluated and next steps considered at each turn.

To the extent that it worked, it worked nicely. But something wasn’t right with it.

Fast forward to my work with ResponsIF, and I found myself once again trying to implement an elevator. For one thing, I already had done it in Quest, so it was a sort of known quantity. The other was that if I couldn’t implement that, then I probably couldn’t implement much of anything I wanted to do.

Right away, I ran into the same problem I had had before with the Quest “goal” code: I was having to program every little detail and hook everything together. There was no way to connect goals.

After much thought, I had a sort of epiphany. Not only did I realize what needed to be done, I also realized why that original goal code seemed awkward.

First the original code’s flaw: the “try” and “achieve” sections were actually two separate goals! For example, the “enter elevator” goal included not only that goal but the goal that immediately preceded it. In order to enter the elevator (the desired state being the NPC in the elevator), the doors had to be open. But the doors being open is also a world state! And the “try” code was attempting to set that state. Strictly speaking, they should be two separate goals, chained together. I had unconsciously realized their connection, but I had implemented it in the wrong way. And that left me unable to chain anything else together, except in a manual way.

In this case, we have a goal (be inside the elevator) with two world state requirements: the NPC needs to be in the foyer, and the door needs to be open. Each of these is a goal (world state condition) in its own right, with its own requirements. In order for the NPC to be in the foyer, it must move there. In order for the doors to be open, the button must be pressed. I’ll break this down a bit in a followup post, to keep this one from getting too large.

So what needs to be done?

What needs to be done is to connect the “needs” of a goal (or, more specifically, the action that satisfies a goal) with the outputs of other actions. We need to know what world state an action changes. And this is where we run into a problem.

“Needs” in ResponsIF are just expressions that are evaluated against the world state. The game designer writes them in a way that reads naturally (e.g. ‘.needs state="open"’), but they are strictly functional. They are parsed with the intent of evaluating them. There is no higher level view of them in a semantic sense.

In order to have a true goal-solving system, we need to know 1) what world state will satisfy goals, and 2) what world state other goal actions cause. The goal processing methodology then is, roughly, to find other goals that satisfy the goal in question. Then we iterate or recurse: what conditions do those goals need? Hopefully, by working things back enough, we can find actions that satisfy some of the subgoals which are actually able to be processed.

It’s a bit more complex than that, but the first coding addition needed is clear: we have to be able to hook up the effects of actions with the needs of other actions in a way that the code can do meaningful comparisons and searches and make connections. We need to be able to chain them together. Once we have a way to do that, then the code can itself do what I had been doing by hand before – creating sequences of goals and actions to solve problems and bring to life a game’s overall design.
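To give a feel for what that hookup enables, here is a minimal backward-chaining sketch in Python. This is not ResponsIF’s actual API, just the general planning idea: each action declares the facts it needs and the facts it gives, and the code chains them for you.

    ACTIONS = [
        {"name": "press button",   "needs": set(),          "gives": {"button lit"}},
        {"name": "elevator comes", "needs": {"button lit"}, "gives": {"doors open"}},
        {"name": "enter elevator", "needs": {"doors open"}, "gives": {"npc in elevator"}},
    ]

    def plan(goal_fact, state, depth=0):
        """Return a list of action names that makes goal_fact true, or None if we can't."""
        if goal_fact in state:
            return []
        if depth > 10:
            return None
        for action in ACTIONS:
            if goal_fact in action["gives"]:
                steps, ok = [], True
                for need in action["needs"]:
                    sub = plan(need, state, depth + 1)
                    if sub is None:
                        ok = False
                        break
                    steps += sub
                if ok:
                    return steps + [action["name"]]
        return None

    print(plan("npc in elevator", set()))
    # -> ['press button', 'elevator comes', 'enter elevator']

The important part is not the search itself but the declarations: once effects and needs are expressed in a comparable form, sequences like “press the button, wait for the doors, enter” fall out of the data instead of being wired up by hand.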

Choose Your Own Attraction

I feel the need to preface this with a bit of a warning: on occasion, I will delve into topics that some might consider “out there” or even “kooky”. Others won’t find them such, but some will, and I would hate for the overall point to be lost in such cases due to guilt by association. What I would ask the reader to keep in mind is that I just enjoy exploring ideas to see where they lead, traipsing along twisty paths and down blind alleys, passing through arches leading to fields of violent purple, without necessarily attributing reality to any of them in particular. So if your take on “reality” is such that even contemplating other formulations causes you any emotional or psychic stress, feel free to (as the song says) “walk on by” – though there will be an interactive fiction connection before we’re through.

Having gotten that out of the way…

A number of years ago, I came across a book called “The Secret”. The underlying principle of this book is something called “the law of attraction”, which states (more or less) that you can draw (attract) the things to you that you want in your life simply by “making a wish to the universe” and then basically acting as if you already have them. In other words, what you imagine becomes real. But you have to, basically, believe and, then, be grateful.

This idea isn’t that new. I had come across it before in the writings of Richard Bach, just without all the hype.

There are some interesting things about this. Taken from a broader vantage, it’s more or less a secular equivalent of prayer. Ask and ye shall receive. If you view that all as nonsense, then the ships sink together. If you have any affinity for it, then it could be hinting at a broader truth that nobody has quite gotten right yet. It also means that those people who don’t like contemplating negative things for fear it will “bring them on” may have been right all along.

There were two things about the treatment of this in “The Secret” that didn’t sit well with me, though. Not to say that I was on board otherwise, but these were two that really stood out.

First, there is the problem with competing wishes. What happens if two people send out cosmic requests that conflict? I want rain, and you want sun. Sally wants Tommy, but Lisa does as well. How is it possible for us all to get what we want? And yet “The Secret” says not to worry, you can all have what you want. Just keep positive thoughts! Something doesn’t add up. (Bill Cosby has a great bit where he’s describing all the people in a casino sending requests up to God, wanting a seven in a dice roll or the next card to be a queen, on and on. And of course, God can’t deal with all these requests coming up at once, so God just goes, “BUST EVERYBODY!” )

Second, there was this notion that wasn’t merely hinted at but was explicitly stated: all the bad things that happen to you or have happened to you in your life are, basically, your fault. You weren’t thinking positively. You drew them to yourself. If you’re in a car crash and wind up in a wheelchair for life, it’s your fault. If a loved one dies and you’re left inconsolable, it’s your fault. If the money you always wanted and worked hard for all those years never came through, well, then it’s just because there’s some part of your psyche that’s undermining you and keeping you from achieving the results you want. And from what I’ve seen, an entire industry has popped up around this where, if you simply max out your credit cards and give all your money to these certain people, then they will help you get past what is keeping you from prosperity. What’s a little money spent when you’ll be reaping the results later? And you have to prove to the Universe that you’re serious… Run away as fast as you can.

I’m going into this because, being the kind of person I am, I began trying to reconcile some of what I had read and make it all make sense. Again, not that it’s reality, but that it could at least be made to be consistent. And this is where another concept came in, one I have toyed with in both my thoughts and my writing in the past (and seen in various sci-fi movies and television shows), and which, for me, seemed to resolve the major issues into a nice neat little package.

The idea is one from physics, that of multiple universes. The basic idea behind this is that when a quantum event can go one way or the other, it ends up going both ways. The wave function may collapse in a particular way in your Universe, but there’s another where it went the other way. All things that can happen, do happen, in an ever-growing infinity of universes.

I was pleasantly surprised recently to discover this concept in the Myst universe, expressed in the D’ni concept of the “Great Tree of Possibilities”. This ever branching set of Universes (Universi?) can be viewed as the branches of a great tree. And by writing a “descriptive book”, you create a connection to one of those possibilities. I used the tree metaphor in one of my own stories once.

To me, this concept solved the issues I had with “The Secret”. Interestingly enough, I came across a book a little while ago, where the author was more or less espousing what I had worked out in my head. (You don’t need to look for it.) So maybe I’m not that original, eh?

The concept of the multi-verse solves the problems with “The Secret” by turning the law of attraction on its head. It’s not so much that you attract things to you in this world that you want – it’s that you draw yourself to the Universe you imagine where the things you want exist for you, along the ever branching pathways of infinite possibility. As such, there is no possibility of conflicting desires – contrary wishes simply end up in different Universes.

(An aside: the multiple universe theory in general has a whole bunch of hairy – and interesting – ramifications having to do with “which one is really me?” It might not seem so shocking when Spock has a beard or Worf is married to Deanna, but when we contemplate it for ourselves, it suddenly becomes more personal. Because if I get what I want in universe A, and she gets what she wants in universe B, then there are still a “her” and “me” in those universes that didn’t get what we wanted. But that’s ok, because I’m looking out through the eyes of the one who did get what he wanted, and… It’s all a bit complicated. The same issue comes up in a different form if you ponder what it would mean if the state of your brain could be transferred into a machine. How do you define yourself? Are you more than just patterns in a substrate? And yet, if who I fundamentally am as “me” comes through memory, if my sense of identity comes largely from what I know of my past, then at the end of such a transfer, what ends up there would be “me”, as far as it was concerned.)

Back to the main thrust: For the second concern, if we view it as you bounce more or less randomly along the quantum pathways unless you explicitly direct yourself, then the bad things that happen to you are really due to the random nature of the universe. You can take steps to avoid them by consciously navigating the quantum possibilities, but you don’t necessarily take the blame for all the bad that has come into your life as being an explicit manifestation of wrong thinking.

So after all this, you might be wondering what this has to do with IF. Perhaps it has already taken form in your mind…

The IF sub-genre of “choose your own adventure” games – which has branched out in the past few years into much more, no pun intended – is based, at its crudest, on having a branching tree in front of you, and you make choices to move yourself down either this branch or that branch, where you then make another choice, on and on. There is some limit to this: an infinite branching tree structure is obviously impossible, but even a relatively dense one can be a major feat to create. There have been volumes written about how to create games like this, so I won’t delve into the design side. Ignoring the challenges of creating such works, we can see that the structure is a bit like the one I described above, with a bunch of possibilities laid out in front of you and you having to make decisions about which way to go to explore a different area of the “great tree”.

There is one key difference, though, and this is what I found really interesting and worthy of writing about here, after all this, as it pertains to IF.

When I think about the notion of the “the Secret plus the multi-verse”, it’s not so much that you are making explicit decisions at each quantum branch point. That’s effectively impossible and probably inconsistent to even contemplate (since your knowledge of the quantum branch to any sort of detail would influence it in a way that would make it not what you thought it was anyway. It might even turn it into a cat). What it is, really, is that what you create in your mind gives you a destination among the branches. You are then drawn toward that universe where what you contemplate exists. It’s not that you push your way through the branches. You are pulled.

I had been researching various path following algorithms for something I was working on when the light bulb went off, and the connection was made.

What if, instead of structuring a CYOA game as a bunch of nodes where the player makes a choice at each node to move on to some subsequent branch, pushing their way through the web, the game is set up as a “tree of possibilities” where each choice the player makes gives them an affinity for some location further along the tree? The engine then moves the player step by step, turn by turn, along different branches until they reach it, all the while giving the player choices that influence both the destination and the path.

As the player makes more and more choices, the future target, as well as the path, change as they’re drawn toward different things based on their choices. Different choices provide different gravity. It’s a bit like Google Maps, where you say “I want to get from point A to point B”, but then you can drag the path around a bit where it might pass right by the old historic site you wanted to see, or it might go near the river, or it might go through the seedy part of town. The path you end up taking would be the sum total of the dynamically changing influences created by the choices you make, not just the instantaneous next step based on your choice and some state. Even making some of the same choices in a replay might not cause the same result, as the other choices you make could pull you along a different path.
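As a very rough sketch of the mechanics in Python (the toy map and scene names are invented): the player’s choices adjust an “attraction” score per location, and each turn the engine takes one step along a path toward whatever currently pulls hardest, recomputing as the scores change.

    from collections import deque

    # A toy map of scenes and which scenes connect to which.
    WORLD = {
        "village": ["forest", "river"],
        "forest":  ["village", "ruins"],
        "river":   ["village", "ruins", "harbor"],
        "ruins":   ["forest", "river"],
        "harbor":  ["river"],
    }

    def next_step(here, target):
        """One breadth-first-search step from 'here' toward 'target'."""
        if here == target:
            return here
        frontier, came_from = deque([here]), {here: None}
        while frontier:
            node = frontier.popleft()
            if node == target:
                while came_from[node] != here:   # walk back to the first move out of 'here'
                    node = came_from[node]
                return node
            for neighbor in WORLD[node]:
                if neighbor not in came_from:
                    came_from[neighbor] = node
                    frontier.append(neighbor)
        return here                              # unreachable: stay put

    attraction = {scene: 0.0 for scene in WORLD}
    attraction["ruins"] += 2.0    # the player asked about the old legends
    attraction["harbor"] += 1.0   # ...and showed mild interest in the ships

    location = "village"
    for turn in range(3):
        target = max(attraction, key=attraction.get)   # wherever pulls hardest right now
        location = next_step(location, target)
        print(turn, location)    # drifts toward the ruins unless later choices shift the pull

In a real game, each further choice the player makes would bump or decay these scores mid-journey, so both the destination and the route would stay in motion.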

The acronym could still be the same but with different words: Choose Your Own Attraction.

I don’t know if such a game engine would be viable or if it would be humanly possible to create a game in it if it were to actually come into being. Rather than thinking about what the player does at each fork in the road and where each fork leads next, you set up a large map and decide how the choices a player makes determine the paths through that map based on what they’re attracted to.

As I said, it might not make sense. I just like the idea of turning things on their heads a bit – or perhaps just myself – and seeing how things look. Exploring my own “tree of possibilities” existing in my own head…

Oscillation

The universe is filled with things that oscillate.

The earth rotates – we have the shift from day to night and back again.

The moon orbits the earth, causing the rise and fall of tides.

The earth orbits the sun. We experience the recurrence of the seasons.

Light and sound have frequencies.

Electromagnetic waves have frequencies.

The device you’re using now to read this (assuming it’s being viewed electronically) has a tiny crystal inside that generates a train of impulses: on / off / on / off. Those impulses drive the rest of the system.

String theory postulates that the universe itself is composed of tiny strings, vibrating, oscillating.

We breathe in and out.

Our hearts beat.

The fundamental operations of our cells may have a feedback oscillation component to them that drives the chemical reactions.

Even the human brain has a frequency, a frequency that changes depending on what we’re doing, slowing when we’re asleep or meditative, faster when we’re deep in thought or otherwise using our gray matter in an active way.

The last one is of particular interest to me. When you look at neural nets in computing, they’re often, at least initially, set up as a sort of function: data comes in, data goes out. You train the network to respond to the inputs, and then you feed it input and see what it gives you for output. For example, a neural net could be trained for the identification of letters in an optical character recognition (OCR) application, where the “read” letter is output based on images presented as input.

But the human brain is more than that.

Part of what makes the human brain incredible is its massive size, in terms of connections. Depending on where you read, the estimates range from 100 trillion up to 1,000 trillion connections. What is equally critical for me is the fact that it’s not just a one-way network. The human brain is interconnected. It not only receives stimulation from the outside world; it also stimulates itself, continuously. It feeds back. It influences itself. It oscillates.

You see images play before your eyes. You hear that damn song in your head that won’t go away. (“But it’s my head!”) You relive scenes both pleasurable and horrific. You dwell on things that affect your emotional state, even though, for that moment, the stimulus is your own mind.

What does that have to do with IF? Perhaps nothing. But it is an interesting topic for me, and there is a higher level sort of recurrence that might be applicable, which is the recurrence of thoughts and feelings in our mental spaces. You can see an example of this in an earlier blog about associations.

ResponsIF’s “fuzzy” design with weighted topics, decaying topics, and responses keyed off of those topics seemed to lend itself to experimentation with oscillation.

The first attempt, which was not topic based, was a miserable failure. I tried to simulate feedback based solely on values approaching each other. Even with layers of variables in between, the values kept achieving a steady state, where nothing changed. Not a very interesting state of affairs.

I achieved better success by having both a varying state value and a target value, with the target flipping based on the direction the state was moving. Not ideal, and not really feedback as such, but it did oscillate. There are a couple of samples in the ResponsIF repository that illustrate this, one being a traffic light and one being an oscillation sample.
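The flip-flop version boils down to something like this little Python sketch (just the mechanism, not the actual ResponsIF sample; the numbers are arbitrary): the value keeps easing toward the current target, and the target flips once the value gets close.

    value, target = 0.0, 1.0

    for step in range(40):
        value += 0.25 * (target - value)     # ease toward the current target
        if abs(target - value) < 0.05:       # close enough: flip the target
            target = 1.0 - target
        print(f"{step:2d}  {value:.2f}")     # rises toward 1, falls toward 0, and repeats

Left to approach a fixed target, the value would simply settle, which is the steady state the first attempt kept hitting; flipping the target is what keeps it moving.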

I ended up discovering a different approach to recurrence which may hold promise. I think of it as “ripples in the pond”.

The basic setup is to have a long-term topic associated with a lesser-weighted response. The lesser weighting places the response at a lower priority, which means it won’t trigger as long as higher priority responses can. The behavior for this response is to add in a decaying topic. That topic then triggers a response which may itself add in a decaying topic. The fact that the topics decay and that their responses add in fresh new topics causes a ripple effect. Topic A triggers topic B which triggers topic C, on and on, to the level desired. Each triggering topic decays, causing its effect to lessen over time. Once the temporary topics decay sufficiently, the original, lower-priority response triggers, and the process begins all over again.

From a normal mental process point of view, this is equivalent to having long-term topics triggering transient short-term topics, with the long-term responses recurring until satisfying conditions are met. A bit like nagging reminders…
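Here is a toy simulation of that ripple mechanism in Python (the mechanism only, not ResponsIF syntax; the topic names and weights are invented): a long-term, low-weight topic re-fires whenever the transient topics it set off have decayed away, and each transient topic sets off the next.

    DECAY = 0.4        # transient topics lose most of their weight each turn
    THRESHOLD = 0.1    # below this they drop out entirely
    RIPPLES = {"urge": "ripple A", "ripple A": "ripple B", "ripple B": None}

    topics = {"urge": 0.5}      # the long-term topic: low weight, never decays

    for turn in range(9):
        active = max(topics, key=topics.get)        # the strongest topic wins this turn
        print(turn, active)                         # urge, ripple A, ripple B, urge, ...

        follow_up = RIPPLES.get(active)
        if follow_up:
            topics[follow_up] = 2.0                 # the response adds in a fresh transient topic

        for name in list(topics):                   # everything but the long-term topic decays
            if name != "urge":
                topics[name] *= DECAY
                if topics[name] < THRESHOLD:
                    del topics[name]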

This might have simple uses, but it can also have more advanced uses if you have multiples of these all going off at once. Which responses actually get triggered depends on the state of the system: what the various response needs and weights are. This means that the internal state of an NPC can vary over time, both in terms of what has been experienced by the NPC and what its long-term goals are, as well as any short-term influences. And the NPC can influence itself.