
Creatures 1 Brains

(30 Jul 2022)

This guide is intended for those who are already familiar with the way creatures' brains work and want to get into the specifics of what the lobes, neurons, and dendrites are up to. If you're not there yet, I recommend checking out some other introductory resources first.

Table of Contents

  1. Lobe Types
  2. Lobe Size and Location
  3. Neurons and Dendrites
  4. Dendrite Classes
  5. Initializing Dendrite Connections
  6. Perceptible and Mutually Exclusive Lobes
  7. State Value Rules
  8. The Brain in Action

    1. Updating Input Lobes
    2. Updating Output Lobes
    3. Processing Dendrite Reinforcement and Atrophy
    4. Migrating Dendrites, Decision-Style
    5. Migrating Dendrites, Concept-Style
  9. Appendix

    1. Decay
    2. Relax
    3. Lobe Gene Structure

Lobe Types

Creatures' brains are two-dimensional grids of neuron cells organized into rectangular lobes. Each lobe has a specific function and a unique, non-overlapping location within the brain.

Lobe Size and Location

The size and location of each lobe is determined by a creature's lobe genes. The lobes are then adjusted to fit constraints, trimming any part of the lobe that goes past the edges of the brain, and expanding the width of lobes 1-7 until they fit the required number of inputs/outputs (for example, making sure the Drive lobe is at least as wide as the number of drives that exist). Each lobe is also capped to a maximum size of 1024 neurons, and if it gets bigger than that, it's set to be a 32 x 32 square.
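As a rough sketch of that fitting step (the brain dimensions, attribute names, and the minimum-width value here are placeholders, not values taken from the game):

    def fit_lobe(x, y, width, height, min_width, brain_width=40, brain_height=40):
        # Trim any part of the lobe that extends past the brain's edges.
        width = min(width, brain_width - x)
        height = min(height, brain_height - y)
        # Widen the lobe until it can hold the required number of inputs/outputs
        # (e.g. the Drive lobe must be at least as wide as the number of drives).
        width = max(width, min_width)
        # Cap the lobe at 1024 neurons; anything bigger becomes a 32 x 32 square.
        if width * height > 1024:
            width, height = 32, 32
        return x, y, width, height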

Neurons and Dendrites

Each neuron in a lobe has dendrites that connect it to neurons in other lobes. That neuron's state will be calculated based on the combined output of its connected source neurons. Note that dendrites are one-way: signals only travel in one direction along them, from source to sink. Perhaps counterintuitively, that means it's the downstream neuron that does the connecting.

Dendrite Classes

A lobe has two classes of dendrites, each of which can point to one other (source) lobe. Neurons can have multiple dendrites of each class. (In theory there could be any number of dendrite classes, but the Creatures games limit it to two. Probably to prevent complete brain chaos.)
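In data-structure terms, the relationships described so far might be sketched like this (all names are mine, not the game's):

    class Dendrite:
        def __init__(self, source_neuron):
            self.source = source_neuron    # the upstream neuron this dendrite listens to
            self.short_term_weight = 0
            self.long_term_weight = 0
            self.susceptibility = 0
            self.strength = 0

    class Neuron:
        def __init__(self):
            self.state = 0
            self.output = 0
            # Two dendrite classes per lobe, each pointing at its own source lobe.
            self.dendrites = {0: [], 1: []}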

Initializing Dendrite Connections

Dendrite connections are initialized at birth. For each neuron, and each class of dendrite, the first dendrite of that class is connected to the centermost neuron of its source lobe (rounding down to get integer coordinates). Each subsequent dendrite of that class gets connected to a neuron some distance away from that center point - using a random x distance and random y distance, each between 0 and the dendrite class's Fan Width. There is nothing to prevent multiple dendrites, even of the same class, from connecting to the same source neuron.
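A sketch of that wiring-up step, assuming the random offsets are simply added to the center coordinates and clamped to the source lobe's bounds (that clamping is my assumption, and neuron_at is a hypothetical helper that looks up a neuron by its coordinates):

    import random

    def init_dendrites(neuron, dendrite_class, count, source_lobe, fan_width):
        # The first dendrite attaches to the centermost source neuron (rounded down).
        cx = source_lobe.width // 2
        cy = source_lobe.height // 2
        for i in range(count):
            if i == 0:
                x, y = cx, cy
            else:
                # Later dendrites land a random distance from that center point.
                x = min(cx + random.randint(0, fan_width), source_lobe.width - 1)
                y = min(cy + random.randint(0, fan_width), source_lobe.height - 1)
            # Nothing stops two dendrites from sharing the same source neuron.
            neuron.dendrites[dendrite_class].append(Dendrite(source_lobe.neuron_at(x, y)))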

Perceptible and Mutually Exclusive Lobes

The Perception lobe's first set of neurons copies the output of neurons in some of the other lobes. To prevent these copied neurons from connecting back to their original lobes, go through each lobe marked Perceptible and Mutually Exclusive, and for each of its neurons set the corresponding Perception lobe neuron's Exclusive property to the index of the perceptible lobe.

State Value Rules

Creature brains also have a number of State Value Rules, or SVRules, which handle the processing of various brain values. Each SVRule is determined by the creature's genes and stored as a series of operator/operand pairs, and acts over a set of variables and constants stored in the State Variable Table (which we'll see in action a bit later).
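Mechanically, an SVRule is just a tiny program: each step pairs an operator with an operand that names a slot (variable or constant) in the State Variable Table. The operators in the sketch below are illustrative placeholders only, not C1's actual operator set:

    def run_svrule(rule, table):
        # 'rule' is the gene's list of (operator, operand) pairs;
        # 'table' is the State Variable Table, mapping slot names to values.
        acc = 0
        for op, operand in rule:
            value = table[operand]
            if op == "load":
                acc = value
            elif op == "add":
                acc = min(acc + value, 255)
            elif op == "subtract":
                acc = max(acc - value, 0)
            elif op == "multiply":
                acc = (acc * value) // 255
        return acc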

The Brain in Action

Every tick, the brain does the following:

Updating Input Lobes

For each neuron, calculate the difference between its current state and the lobe's Neural Threshold. If this value is above zero, set its output to this value and consider the neuron to be active/firing.

Then relax the value of the neuron's current state toward the lobe's Neural Rest State using the lobe's Neural Decay Rate.

Once all neurons' outputs and states have been updated, check the lobe's Winner-Takes-All flag. If it's turned on, only the neuron with the highest output retains its output value - all the rest revert to an output of zero.
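Put together, one tick of an input lobe might look roughly like this. The attribute names are mine, relax() stands in for the relaxation routine described in the Appendix, and I'm assuming a neuron that doesn't clear the threshold gets an output of zero:

    def update_input_lobe(lobe):
        for neuron in lobe.neurons:
            # Fire if the state clears the lobe's Neural Threshold.
            diff = neuron.state - lobe.neural_threshold
            neuron.output = diff if diff > 0 else 0
            # Let the state relax back toward the lobe's Neural Rest State.
            neuron.state = relax(neuron.state, lobe.rest_state, lobe.decay_rate)
        if lobe.winner_takes_all:
            # Only the strongest neuron keeps its output; the rest revert to zero.
            winner = max(lobe.neurons, key=lambda n: n.output)
            for neuron in lobe.neurons:
                if neuron is not winner:
                    neuron.output = 0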

Updating Output Lobes

Update the State Variable Table. This means resetting all the variables (not the constants) to zero, and setting the variables for the chemoreceptors to the values of the lobe's chemoreceptors (each lobe has four).

For each neuron, first set the State Variable Table's entries for state, output, and threshold to the neuron's state and output values and the lobe's Neural Threshold.

Then calculate the neuron's dendrite signals. For each class of dendrite, look at all the neuron's dendrites of that class and add up the outputs of their source neurons, each multiplied by its dendrite's short-term weight. Keep the total signal value between 0 and 255, then multiply it by the lobe's Neural Attenuation Rate and store the final signal value in the State Variable Table. Also store the AND'd signal value in the State Variable Table, which is the regular signal value if all source neurons were active, and zero if any were inactive.
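A sketch of that signal calculation for one dendrite class (the table slot names are mine, and the exact scaling of the weight products and the attenuation multiply is simplified):

    def dendrite_signal(neuron, dendrite_class, lobe, table):
        total = 0
        all_active = True
        for dendrite in neuron.dendrites[dendrite_class]:
            # Each source neuron's output is weighted by its dendrite's short-term weight.
            total += dendrite.source.output * dendrite.short_term_weight
            if dendrite.source.output == 0:
                all_active = False
        # Clamp to 0-255, then scale by the lobe's Neural Attenuation Rate.
        signal = min(max(total, 0), 255) * lobe.attenuation_rate
        table["signal" + str(dendrite_class)] = signal
        # The AND'd signal only survives if every source neuron fired.
        table["and_signal" + str(dendrite_class)] = signal if all_active else 0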

Calculate the neuron's new state by running the lobe's Neural SVRule over the State Variable Table.

Now calculate the difference between the neuron's current state and the lobe's Neural Threshold. If this value is above zero, set the neuron's output to this value and consider the neuron to be active/firing.

Then relax the value of the neuron's current state toward the lobe's Neural Rest State using the lobe's Neural Decay Rate.

Process reinforcement and atrophy for each of the neuron's dendrites (see below).

Once all neurons have been updated, check the lobe's Winner-Takes-All flag. If it's turned on, only the neuron with the highest output retains its output value - all the rest revert to an output of zero.

Migrate disconnected dendrites according to their dendrite class's migration style (see below).

Processing Dendrite Reinforcement and Atrophy

For each dendrite of each dendrite class a neuron has, first set the State Variable Table's state values for its susceptibility, short-term weight, long-term weight, strength, and input (which is equal to the Output of its source neuron).

Calculate the dendrite's new target susceptibility by running the dendrite class's Susceptibility SVRule over the State Variable Table. If the target is higher than the current susceptibility, increment susceptibility by four; if it's lower, decay susceptibility by the dendrite class's Susceptibility Relaxation Rate.

If the dendrite's new susceptibility is higher than zero, it's ready to be reinforced. Calculate a new target short-term weight by running the dendrite class's Reinforcement SVRule and multiplying the result by susceptibility. If this target value is non-zero and greater than the difference between short-term and long-term weights, increase the dendrite's short-term weight by the target value minus that difference.

Then relax the dendrite's short-term weight toward its long-term weight using the dendrite class's Short-Term Weight Relaxation Rate. The dendrite's long-term weight also slowly shifts towards its short-term weight: every so many ticks, as determined by the dendrite class's Long-Term Weight Rate, increment or decrement the long-term weight value in the direction of short-term weight.

The dendrite's strength is updated in a similar way. Every so many ticks, determined by the dendrite class's Gain Strength Rate, set strength to the result of the dendrite class's Gain Strength SVRule - but only if the new value is higher than the old one. Every so many ticks determined by the Lose Strength Rate, calculate the Lose Strength SVRule - and if the result is non-zero, decrement the dendrite's strength.

If the dendrite's strength ends up at zero, the dendrite is considered disconnected/loose. Its short-term weight, long-term weight, and susceptibility are set to zero, and its owning neuron's state and output values are set to zero as well.
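Condensed into a sketch, the whole reinforcement/atrophy pass for one dendrite might look like this. run_svrule, decay, and relax stand in for the routines described elsewhere on this page, the attribute names are mine, and the every-so-many-ticks checks are simplified to modulo tests:

    def update_dendrite(dendrite, dclass, neuron, table, tick):
        # Expose the dendrite's values to the SVRules via the State Variable Table.
        table["susceptibility"] = dendrite.susceptibility
        table["short_term_weight"] = dendrite.short_term_weight
        table["long_term_weight"] = dendrite.long_term_weight
        table["strength"] = dendrite.strength
        table["input"] = dendrite.source.output

        # Susceptibility chases the target given by its SVRule.
        target = run_svrule(dclass.susceptibility_rule, table)
        if target > dendrite.susceptibility:
            dendrite.susceptibility += 4
        else:
            dendrite.susceptibility = decay(dendrite.susceptibility, dclass.susceptibility_relax_rate, tick)

        # A susceptible dendrite can be reinforced.
        if dendrite.susceptibility > 0:
            gap = dendrite.short_term_weight - dendrite.long_term_weight
            target = run_svrule(dclass.reinforcement_rule, table) * dendrite.susceptibility
            if target != 0 and target > gap:
                dendrite.short_term_weight += target - gap

        # Short-term weight relaxes toward long-term weight...
        dendrite.short_term_weight = relax(dendrite.short_term_weight, dendrite.long_term_weight, dclass.stw_relax_rate)
        # ...while long-term weight creeps one step toward short-term weight.
        if tick % (dclass.ltw_rate + 1) == 0:
            if dendrite.long_term_weight < dendrite.short_term_weight:
                dendrite.long_term_weight += 1
            elif dendrite.long_term_weight > dendrite.short_term_weight:
                dendrite.long_term_weight -= 1

        # Strength only moves up via its gain rule, and is lost one point at a time.
        if tick % (dclass.gain_strength_rate + 1) == 0:
            dendrite.strength = max(dendrite.strength, run_svrule(dclass.gain_strength_rule, table))
        if tick % (dclass.lose_strength_rate + 1) == 0 and run_svrule(dclass.lose_strength_rule, table) != 0:
            dendrite.strength -= 1

        # A dendrite with no strength left comes loose.
        if dendrite.strength <= 0:
            dendrite.strength = 0
            dendrite.short_term_weight = 0
            dendrite.long_term_weight = 0
            dendrite.susceptibility = 0
            neuron.state = 0
            neuron.output = 0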

Migrating Dendrites, Decision-Style

For each dendrite class with Decision-style migration, look at each active neuron (whose output is non-zero) with at least one active neuron in its source lobe.

Find the first disconnected dendrite of the given dendrite class (whose strength is zero).

Pick an active source neuron at random from the neuron's source lobe. If any other dendrite of this dendrite class in this neuron has the same source, give up on migrating this dendrite - we'll try again next tick. Otherwise connect the disconnected dendrite to this new source neuron.
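A sketch of one neuron's decision-style migration (again with my own attribute names):

    import random

    def migrate_decision(neuron, dclass_index, source_lobe):
        active_sources = [n for n in source_lobe.neurons if n.output > 0]
        if neuron.output == 0 or not active_sources:
            return
        # Find the first loose dendrite of this class.
        loose = next((d for d in neuron.dendrites[dclass_index] if d.strength == 0), None)
        if loose is None:
            return
        candidate = random.choice(active_sources)
        # Give up until next tick if another dendrite of this class already uses that source.
        for other in neuron.dendrites[dclass_index]:
            if other is not loose and other.source is candidate:
                return
        loose.source = candidate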

Migrating Dendrites, Concept-Style

For each dendrite class with Concept-style migration, find the highest-index active neuron with any disconnected dendrites to migrate. Next tick, only look at neurons with indexes less than this one, and so on - and if no relevant neuron is found, reset this limit.

Only migrate if there are at least as many active neurons in the neuron's source lobe as there are disconnected dendrites in the current neuron, since each dendrite is going to get a source neuron of its own.

Shuffle the source lobe's list of active neurons to make sure the connection order is random. For each active neuron, pick another random active neuron, and swap their positions in the list.

To figure out how to connect the disconnected dendrites, look at the first n active neurons in the source lobe (where n is the number of disconnected dendrites). Try connecting the disconnected dendrites to this set of neurons, one after another, and check to see if there are any issues with this connection wiring.

If any two neurons in this set are Exclusive to the same lobe, move on to the next set of neurons starting at the next active neuron in the source lobe.

Otherwise, check that this wiring isn't a duplicate: calculate the checksum of the neuron we're migrating and make sure it's not equal to the checksum of any other active neuron in the same lobe. The checksum is calculated by looking at all of a neuron's dendrites, getting the indexes of their source neurons, and both adding them up and multiplying them together. The sum and product are then combined into a single 32-bit result that can be compared efficiently: (sum<<16)|(product&0xFFFF). If there is another neuron with the same checksum, move on to the next set of active source neurons.

Keep the first wiring that works. If there are no possible wirings after all that, revert each disconnected dendrite to its original source neuron - we'll try connecting them again next tick.
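For reference, here's a sketch of that duplicate-detection checksum; the index attribute is assumed to be a source neuron's position within its lobe:

    def neuron_checksum(neuron):
        # Sum and multiply the indexes of every source neuron, across both dendrite classes.
        total, product = 0, 1
        for dendrites in neuron.dendrites.values():
            for dendrite in dendrites:
                total += dendrite.source.index
                product *= dendrite.source.index
        # Pack both into one 32-bit value for a quick comparison.
        return (total << 16) | (product & 0xFFFF)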

Appendix

Decay

Decaying a value means decreasing it by some small amount every so many ticks. There are 32 decay rates: The first 8 are zero, meaning the value in question decreases every tick. The remaining 24 rates equal 2^n - 1 (1, 3, 7, 15, etc) so that the value decreases every 2 ticks, every 4 ticks, every 8 ticks, etc. (Note that since decay rate is stored as a value between 0-255, it must first be squished into a range of 0-31.)

When it's time to decrease the value, the current value is multiplied by the decay rate multiplier, then bit-shifted right by 16 to get the new value. The first seven decay rates (0-6) have multipliers of 0, 12965, 29149, 43707, 53520, 59224, and 62300. The last decay rate (31) has a multiplier of 65535. The rest (7-30) all share the same multiplier, 63897.
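A sketch of the decay routine built from those numbers; I'm assuming the 0-255 gene value is squished by dividing by 8, and that the every-so-many-ticks check is a bitmask test against the tick counter:

    # Multipliers and tick intervals for the 32 decay rates (indexes 0-31).
    DECAY_MULTIPLIERS = [0, 12965, 29149, 43707, 53520, 59224, 62300] + [63897] * 24 + [65535]
    DECAY_INTERVALS = [0] * 8 + [2 ** n - 1 for n in range(1, 25)]   # 0, ..., 0, 1, 3, 7, 15, ...

    def decay(value, rate, tick):
        index = rate // 8                       # squish 0-255 down to 0-31
        if (tick & DECAY_INTERVALS[index]) == 0:
            value = (value * DECAY_MULTIPLIERS[index]) >> 16
        return value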

Relax

Relaxing a value is similar to decaying it, except that instead of strictly decreasing, the value slowly approaches a specific rest value. Every so many ticks, as determined by the decay rate, the value either increases if the rest state is higher or decreases if the rest state is lower.

The amount the value needs to decrease or increase is calculated by taking the absolute difference between the current value and the rest value, and multiplying that by the decay rate multiplier before bit-shifting right by 16.
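And a matching sketch of relaxation, reusing the same tables:

    def relax(value, rest, rate, tick=0):
        index = rate // 8
        if (tick & DECAY_INTERVALS[index]) != 0:
            return value
        # Step size: the absolute distance to the rest value, scaled by the multiplier.
        step = (abs(value - rest) * DECAY_MULTIPLIERS[index]) >> 16
        return value + step if rest > value else value - step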

Lobe Gene Structure

Neuron Type:

Dendrite Type:

SV Rule: