Note: If you still haven’t seen Westworld Season 2 and would like to (Season 3 is now complete, get on it!), there’s a spoiler in the first section here you may want to skip.
In the final episode of Westworld Season 2, we get a better understanding of the mysterious purpose of the Westworld park. Rather than being a place where life-like robots entertain guests for profit, Westworld is a place where life-like robots observe guests to replicate their consciousnesses. The script of each human the robots encounter is neatly transcribed in a nice, leather-bound book.
Logan, a human form speaking for the AI brain behind this library, explains that initial attempts at recreating the human psyche failed not because they were too rudimentary, but because they were too complex.
“The truth is that a human is just a brief algorithm — 10,247 lines,” he says.
This is some fun science fiction, and a nice zinger to throw at an audience that believes fervently in its own complexity, but I also think there’s a lot of merit to this line of thinking. Humans and our interactions present as very complex, but the underlying mechanics that drive this complexity will almost certainly prove to be much simpler than our current theories.
Scientific Progress is About Simplification
When first trying to understand a new scientific field, we have no choice but to create lots of theories, perhaps contradictory ones, about anything we can measure, because it's not yet clear what is pertinent and what is not. It's only as we learn more that theories start to collapse and simplify.
Medieval people, for example, knew how human children were made, but thought mice spawned from meat and sweaty shirts. It would take centuries to understand that nearly all life reproduces sexually, and even longer to understand that all life shares a single common core, DNA, which is used to make all of the proteins and chemicals needed across the tree of life.
In physics, as scientists started to play with things like magnets and keys on kite strings they started to find predictable and testable properties. This already seemed like a grand simplification of a chaotic world, where the best explanation may have previously been a powerful demon or angry god. But, as the separate fields of electricity and magnetism collapsed into a single unified theory, the theories got even simpler.
Consciousness Catching Up
With consciousness I’d say we are somewhere between medieval peasants and gentlemen scholars playing with science projects. We are starting to put together theories within some bounded areas that, while not complete or satisfying to the larger scientific or lay community, are quite compelling.
One example is the Attention Schema Theory, put forward by Michael S.A. Graziano. The theory states that we develop self-awareness as a by-product of the complicated models we build to understand the behavior of others. Graziano's book, Consciousness and the Social Brain, absolutely knocked my socks off when I read it.
Other areas of progress are the Bayesian or free-energy ideas about consciousness, promoted by scholars like Karl Friston and Anil Seth. These ideas suggest that our conscious perception is far more about top-down predictions that minimize surprise than it is about raw bottom-up sensation.
Mark Solms, in his new book The Hidden Spring, works with Friston and takes these theories a step further, arguing that consciousness may be grounded in emotion and much older areas of the brain rather than in our recently evolved prefrontal cortex with its advanced perceptual machinery.
As I read about all of these ideas, I'm reminded of the famous Indian parable of the blind men examining an elephant. As each blind man paws over a separate part of the animal, they describe it as a thick snake, a fan, a spear, a tree trunk, a wall, and a rope. Of course, each of them is right in part. Each part of the animal is used like one of those things, but the picture is incomplete. They lack the fundamental framework and context that connects all of these uses.
I do think, however, the future of consciousness understanding is bright. I've seen multiple talks recently by aging academics (Solms and John Searle) who talked about starting their careers and being kindly pushed away from consciousness as a serious line of inquiry. "Look, in my discipline, it's OK to be interested in consciousness," said Searle, quoting someone senior to him when he started, "but get tenure first!" This is changing. It's certainly now possible, as Searle concedes, to get tenure while working on consciousness, and as a result we are seeing many new theories emerge.
And the people working on these theories seem to be doing a great job of not only explaining the increasing volume of empirical data about consciousness (behavioral studies, fMRI scans, etc.) but also mapping their theories to other popular ideas like integrated information theory and the global workspace. Graziano, for example, does a great job of this in Consciousness and the Social Brain.
New research in AI is also particularly promising. As we start to look at AI tools like transformers, we're starting to see direct connections between theories of human attention and cognition and how those might be mimicked in software. With these approximations in place we'll soon be able to test hypotheses about where consciousness processes live and what they require to operate. In fact Solms, in the last chapter of The Hidden Spring, goes so far as to say that the ultimate test of his hypothesis will only come once it has been instantiated in software, something he's actively working on.
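To make the transformer connection a little more concrete, here is a minimal sketch (plain NumPy, my own illustration rather than anything from the works cited above) of scaled dot-product attention, the mechanism at the heart of transformers: each query "attends" over a set of keys, and the output is a weighted blend of the corresponding values.

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Minimal single-head attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = queries.shape[-1]
    # similarity of each query to each key, scaled to keep softmax well-behaved
    scores = queries @ keys.T / np.sqrt(d_k)
    # softmax over keys: each row becomes an attention distribution summing to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # output is a weighted mix of the value vectors
    return weights @ values

# toy example: 2 queries attending over 3 key/value pairs
q = np.array([[1.0, 0.0], [0.0, 1.0]])
k = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
v = np.array([[10.0], [20.0], [30.0]])
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # → (2, 1)
```

The loose analogy to theories like Graziano's is that the model allocates limited "focus" across its inputs, and that allocation is itself something the system computes and can be inspected.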
With this boom in research and AI, my feeling is that it won't be long before we have a theory of consciousness simple enough for a middle school science teacher to explain to their students, and I couldn't be more excited.