A possibly entirely irrelevant thought experiment

What if we could emulate Darwinian evolution in a digital environment?
The methodology of evolution in this theory can arguably be broken down into just two basic principles: change is introduced by chance, through random recombination and mutation of the information contained in the genotype, and is followed by a process of selection that lets the more successful changes persist while weeding out the others. This in turn creates the impression of a directed development, which it actually is, albeit in an indirect way.

Now couldn’t we take those two principles and build them into a virtual system? For true randomness an external source can be used, e.g. a radioactive sample. How the data is modeled, how mutations are applied to it, and how the selection process is designed are trickier questions, directly tied to the goal that is to be achieved. They determine the characteristics of the simulation.
Each iteration yields a new generation of data on which the process is recursively reapplied, either indefinitely or until the expected improvement per generation falls below a defined threshold, which means the data has reached the desired quality.
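Just to make this a little more concrete, a minimal sketch of such a loop could look like this in Python. The `mutate` and `fitness` functions are hypothetical placeholders that a concrete problem would have to supply, and the pseudo-randomness inside `mutate` is where a real setup could plug in that external entropy source:

```python
def evolve(parent, mutate, fitness, offspring=50, threshold=1e-6):
    """Generic evolutionary loop: chance (random mutation) plus
    selection, repeated until the improvement per generation
    falls below the threshold."""
    best = fitness(parent)
    while True:
        # Chance: derive a new generation by random mutation.
        generation = [mutate(parent) for _ in range(offspring)]
        # Selection: the fittest candidate becomes the next parent.
        candidate = max(generation, key=fitness)
        score = fitness(candidate)
        if score - best < threshold:
            return parent  # desired quality reached, or a plateau
        parent, best = candidate, score
```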

For the sake of simplicity, let’s explain this with a very basic example: think of a chess problem, White to win in a certain number of moves. Let’s assume we are playing against a perfect opponent that makes no mistakes.
We start by randomly picking one of the available legal chess moves in each turn, creating a chain of moves until the end of the game. Doing this will most likely result in very dumb “play” and defeat. We then create numerous offspring from our parent data by randomly replacing some of the moves with others, and evaluate the results as well as the parent. If all move chains cause us to lose the game, we take the one that lasted the most moves before checkmate as the new parent, mutate, evaluate, and so on.
Eventually one of the move chains will lead to our opponent’s king’s demise. All losing chains can now be discarded, as we have essentially reached a new genus: the winning chain. Now we reverse the comparison and look for the chain with the fewest moves to checkmate. At some point we will arrive at an optimal solution to the problem; further randomization won’t bring any improvement and will therefore be discarded.
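As a rough sketch of this procedure, the following uses the python-chess library. The “perfect opponent” is stubbed out as a hypothetical `opponent_reply` that merely plays random legal moves (so, unlike the deterministic opponent assumed above, replays are not reproducible), and the other names are illustrative:

```python
import random
import chess  # pip install python-chess

def opponent_reply(board):
    """Hypothetical stand-in for the 'perfect opponent': here it just
    plays a random legal move; a real version would query an engine."""
    return random.choice(list(board.legal_moves))

def play_chain(chain, fen=chess.STARTING_FEN):
    """Replay our move chain against the opponent. Returns
    (won, plies_survived). Empty or illegal slots are repaired
    in place with a random legal move."""
    board = chess.Board(fen)  # a real chess problem would supply its FEN
    for i, move in enumerate(chain):
        if board.is_game_over():
            break
        if move is None or move not in board.legal_moves:
            move = random.choice(list(board.legal_moves))
            chain[i] = move                    # record the repair
        board.push(move)
        if board.is_checkmate():
            return True, i + 1                 # we delivered mate
        if board.is_game_over():
            return False, i + 1                # a draw counts as a loss here
        board.push(opponent_reply(board))
        if board.is_game_over():
            return False, i + 1                # opponent mated (or drew) us
    return False, len(chain)

def mutate(chain, rate=0.2):
    """Offspring: copy the parent and blank out random slots; blanks
    get refilled with random legal moves during replay."""
    return [None if random.random() < rate else m for m in chain]

parent = [None] * 30                           # a fully random first chain
for _ in range(200):                           # generations
    candidates = [parent] + [mutate(parent) for _ in range(20)]
    scored = [(play_chain(c), c) for c in candidates]
    winners = [(r, c) for (r, c) in scored if r[0]]
    if winners:  # a new "genus": prefer mate in the fewest moves
        parent = min(winners, key=lambda rc: rc[0][1])[1]
    else:        # all losers: keep the chain that survived longest
        parent = max(scored, key=lambda rc: rc[0][1])[1]
```

The selection rule mirrors the text: among losers, keep the chain that survives the longest; once a winner appears, prefer the mate in the fewest moves.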

Of course this example is not very useful, because we usually already know the solution to this kind of problem, and even if we didn’t, we could utilize that “perfect opponent” to calculate our winning strategy. On top of that, it will perform poorly because the design is rather crude. However, much more complex implementations for other problems with unknown outcomes shouldn’t be impossible to engineer.
In theory the concept could be taken one step further by taking completely random input data and simply running it on the machine to test whether it makes any kind of sense. This would require strong encapsulation and solid error handling in the runtime environment; otherwise the behavior would be completely unpredictable, which usually translates to an instant crash. And for evaluation, a highly sophisticated system would be required that, depending on the semantics of “making sense”, may lie beyond the capabilities of software architecture.
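As an illustrative toy only: one could treat random bytes as programs for a tiny, strongly encapsulated stack machine, where a malformed program is caught instead of crashing the host. Here `os.urandom` stands in for the external entropy source mentioned earlier, and the instruction set is entirely made up:

```python
import os

def run_random_program(code, max_ops=100):
    """Toy stack machine: treats arbitrary bytes as instructions inside
    a strongly encapsulated interpreter, so a nonsensical program is
    caught instead of crashing the host."""
    stack = []
    try:
        for op in code[:max_ops]:
            if op < 64:
                stack.append(op)                         # push a constant
            elif op < 128:
                stack.append(stack.pop() + stack.pop())  # add
            elif op < 192:
                stack.append(stack.pop() * stack.pop())  # multiply
            else:
                b, a = stack.pop(), stack.pop()
                stack.append(a // b)                     # divide (may fail)
        return stack[-1]                                 # the program's "output"
    except (IndexError, ZeroDivisionError):
        return None                                      # no sense to be made

program = os.urandom(32)   # OS entropy as the random "genotype"
print(run_random_program(program))
```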

Now with this concept we just constructed, we can make some interesting observations:
The data structures reside inside their own confined system, which is itself subordinate to our world and incorporates a subset of our world’s rules. In other words, the awareness scope of the data is confined to the boundaries of the system it is contained in. The only possible connection to the higher world is through projection from sensory peripherals connected to the machine. Yet, while unable to become aware of it, the data models do exist in our world as well, are observable to us, and usually serve a higher purpose for this external world.

I guess by now you are aware of the parallels I am drawing here. It’s a very rough and incomplete concept, but in my opinion it offers an interesting way to ponder the fabric of reality and where we stand in the greater cosmic everything.
It also provides a typical argument for the existence of (a) god. While it might be statistically possible for such a system to spontaneously pop into existence, it still seems extremely implausible. As an agnostic, I actually find this makes it more plausible that some kind of higher-level entity has enabled our existence. But where does this lead? Would that entity have been put into place by yet another superordinate force? And there are still other ways to think about this: if we were to assume that absolutely every possibility actually exists, well, then we’d also have to exist.

In any case, what can also be derived from this is yet another suggestion as to why there are things that we just don’t understand and maybe never will.
But rest assured that we won’t ever stop trying :]

One thought on “A possibly entirely irrelevant thought experiment”

  1. Maybe I shouldn’t have written this in English, since it’s kind of a difficult topic to write about in a foreign language. At the very least it was a good exercise.
    Just hope it’s not too much confusing gibberish 😉
