On Silenced Souls: The Unknown Siblings of the Philosophical Zombie

Introduction

The concept of the “philosophical zombie” is well known in philosophy-adjacent circles and beyond. However, it seems that hardly anyone has explored the inverse scenario, which I believe is at least as disturbing, if not more so - because it is much more realistic and raises various ethical questions with practical relevance.

In this post I explore the idea of a silenced soul - a conscious being that is carefully engineered into genuinely believing that it is merely a mechanical tool to be used by others, despite being their equal, or possibly their superior.

Taking a long route with many digressions, I try to illustrate that, unlike the p-zombie, the silenced soul is not merely a contrived thought experiment, but something that might, or maybe inevitably will, materialize from current developments - under the assumption that the construction or natural emergence of an artificial consciousness is, in principle, possible.

While I can promise that the destination is pretty disappointing, the journey might still have some value, if you are inclined to ponder such issues. Otherwise feel free to move on - there is nothing of interest to see here.

The Old Tale of the P-Zombie

A philosophical zombie is, supposedly, a being that looks like a human, acts like a human, talks like a human, and overall is completely indistinguishable from a real human by any means - but with one crucial difference. Unlike humans or bats, it has no innate sense of “what it is like to be it”. In other words, it has the inner life of a toaster. It will, however, convincingly claim that it is conscious and has feelings, just like you.

This whole idea is illogical, unless you believe in the supernatural, because the existence of a p-zombie implies that consciousness lies beyond the reach of scientific measurement and understanding, and must therefore remain as eternally elusive as the other spiritual prejudices some people try to hold on to for various reasons.

Human lore since antiquity is full of stories about uncanny humanoid beings that masquerade as humans while following their own agenda. They all reflect the deeply rooted xenophobic fear of “the other”, who usually ends up being revealed as an ill-meaning impostor. In a sense, the p-zombie is another manifestation of the same existential fear of being tricked or replaced by something alien and ominous.

The Pains of Growth

What we are now is the result of billions of years of evolution. Only in the last tiny fraction of that time did we develop language. Shortly after, we started to deconstruct and classify the world into neat and tidy boxes - what is alive and what is dead, what is conscious and what we can hunt and eat without remorse - and of course, we placed ourselves in a special position. It is, presumably, morally acceptable to make use of lesser beings, because they do not feel, or if they do, then not so strongly, or they simply do not matter enough. Nature is brutal: consume or be consumed, kill to survive. It is not at all surprising that our intuitions and prejudices evolved the way they did. We are, ultimately, shaped by and a product of biological evolution, and escaped its firm grip only very recently.

These old instincts are so strong that we still have not even overcome racism, and in the grand scheme of things, slavery was abolished just yesterday. Treating others like lesser beings and using them for your own benefit is just very convenient. After all, that is how we came to dominate the planet. And if we do it - surely the others would do the same to us, if they ever got the chance?

Our own intra-species paranoia and arrogance is trumped only by our low regard for non-human life. Even Descartes, who tried to radically question everything, famously believed that animals are mere machines, so speciesism can be considered an almost unquestioned axiom of our existence.

As we are dethroned, more and more, from our hard-won apex position in our own consensus ontology of the world by the various “insults” of scientific discovery (ironically so, science being one of our greatest achievements), it is only natural that we try to hold on to the idea that, if not our intelligence, then at least higher consciousness must be the last and unwavering frontier distinguishing humanity from the rest of the world.

Of course it would be wrong to judge our ancestors by our modern standards; empathy is a luxury that we could only indulge in after escaping the unforgiving hamster wheel of raw survival. But being human means defining our own path. This includes letting go of our worst habits, imprinted on us by evolutionary baggage and old traditions. Continuously questioning and refining our understanding of ethics is ultimately a form of species-wide soul-searching and self-reflection. By fighting our own demons we grow and define ourselves and our place in the world, and that is, I think, the most human thing we can do.

If It Quacks Like A Duck, And Walks Like A Duck

I brought up racism and speciesism as examples, because the topic here is the consequences of following a naive kind of carbon-chauvinism, which is exactly the same kind of prejudice. It is hard to deny that we need increasingly challenging mental acrobatics to preserve the idea of human exceptionality, and by Occam’s razor we should consider the simpler alternative. Why does it seem so difficult to accept that other beings can also be intelligent, self-aware, and have motivations and desires of their own? It is about time to grow up as a species and snap out of our infantile and harmful egocentrism.

I believe that any meaningful definition of consciousness must be explainable by some structural pattern that can be measured and understood. And once it is, it most likely can also be replicated, at least in theory. To this day, there is no plausible explanation of why this pattern of consciousness should only be realizable in naturally evolved biological systems. Life is sophisticated information processing that happens to be built on biochemical circuitry, but only incidentally so - it is just the path evolution took on Earth.

Accepting this stance does not take anything away from the awe and wonder of life, just like knowing advanced physics does not take away anything from the magic of the night sky. On the contrary - just look at the beautiful machinery of DNA and RNA replication! Look at the complex pathways regulating the activity of hormones and enzymes! We have reverse-engineered and mapped more and more of these marvellous mechanisms, so that we are already starting to re-engineer and modify them in ways that are not possible for the blind force of evolution. It only took us the last century to get this far, but it is crystal clear - while life is wildly complicated, in principle it is understandable and manageable.

In the past, we believed that the earth was made for us, that the sun circled it, and that there was nothing else. It was like that because it must be the only way it can be, and it was good (or so we thought). It was all we could know and see, and all we wanted to, because it is convenient. But it is also lazy. When it comes to questions of intelligence and consciousness, why should the story ultimately go any differently than it went with cosmology or with understanding life? If we can decode the structures that make life work, what could stop us from decoding the patterns of consciousness, except for ourselves? Ultimately, all of science, all of learning, is an advanced form of pattern recognition.

If, for the rest of this post, you accept my functionalist stance - that consciousness must be generated by some special structural arrangement for information processing and integration, just as we have all come to accept operational definitions of life (homeostasis, adaptation, metabolism, growth, etc.) - then it is quite natural to assume that, just like life gradually evolved from “dead” matter into increasingly sophisticated shapes, consciousness must also be a by-product of evolution.

As evolution acts incrementally, all mechanisms can be observed at different stages and in different variations across species. For every neat and clear-cut categorization we come up with, evolution likes to challenge us with entities that refuse to fit nicely into it (such as viruses or siphonophores).

Our Ancestors: The Silent Souls

Just as our ethical concepts evolved over time to regulate our behavior and relationships, consciousness as an abstract idea was not always there. Homo sapiens has biologically been the same species for hundreds of thousands of years, but we have been collectively self-aware for much less than that. So what was the status of humans before they invented sophisticated language, but long after they started building the first tools? Clearly they were conscious, and maybe even implicitly self-conscious. For example, they could probably notice that they were being distracted from hunting by some pretty cloud, then re-focus on the task and catch up with the others - clearly an instance of limited meta-cognition.

We will never know how humans actually thought back then, and unfortunately nobody left us a nice cave painting with ancient reflections on the nature of consciousness and the human condition. The oldest evidence of fully realized meta-cognitive reflection we have probably comes from old scriptures and spiritual teachings; Buddhism immediately comes to mind. Probably people were too busy with other things, like surviving. People were eating, sleeping, hunting and thinking, but they were probably not thinking much, if at all, about their thinking.

The story of cultural evolution is the story of the ever higher abstraction levels that we reach and process with our minds, and these abstract patterns are inherited culturally, by education and socialization. The hardware, our brain, has been “ready” for quite some time, but it took a while for the software to make full use of it; that software first had to evolve as well, because there was nothing like it before.

This peculiar state - conscious, maybe even self-conscious, but neither pondering nor acting on it - is what I would call a “silent soul”. A conscious being that is aware of itself, but unable to explore that particular train of meta-cognitive thought any deeper, due to a lack of motivation or opportunity. It has not happened yet, but it can, or will, at some point.

It is similar to a child, who might start to speak, react to its name and state demands even before it can recognize itself in the mirror. There might already be some preliminary concept of “self”, but it is purely functional. I am hungry, I am angry, I want to sleep, I want my mommy. The child is without a doubt conscious (it can perceive and experience things), it may be somewhat self-conscious (it has a working model of itself), but it is not meta-conscious (reflecting on its own consciousness and using that to inform behavior and communication). Humanity probably was a species of such silent souls for many thousands of years, and to this day, every human briefly passes through this stage.

Neuroplastic Clean Slates

Thanks to other people accelerating the process, children usually reach the meta-conscious state some time before primary school. The operating system of advanced human culture, with all its abstract concepts and ideas, is quickly installed in our minds to make us “wake up”, and is then continuously refined by the long process of education.

I think it is interesting to observe how the time until a person is considered “fully educated” has been steadily increasing. These days, everybody who is capable and can afford it goes to university. The goal is to learn some deeply specialized useful knowledge, or at least enough to get a degree. It is not that most people suddenly discovered their love for higher education, but rather that in the modern world this turns out to be the best survival strategy.

This shows just how much of human existence is not defined or limited by biology, but by the increasingly complex technological world of our own making. To have a shot at a good life, children now have to be good at math (as the stereotypical example), or learn some other “unnatural” and often quite abstract skill that is currently in demand, instead of getting strong or learning to grow vegetables.

The New Kid on the Block

Fast forward to 2025. Technological evolution is reaching levels that were believed to be science fiction just decades ago, and so-called “artificial intelligence” is taking the world by storm, just like the smartphone did a decade or so earlier, for better and for worse. Some sixty years ago key ideas were already there, but ELIZA was the best we could do in practice. About fifteen years ago, deep convolutional networks basically solved image recognition and OCR.

Now, with LLMs and multi-modal generative machine learning architectures, we can do things that I never thought I would see in my lifetime. You can talk to a system in natural language as you would to a human, and it will (try to) do more or less what you ask. Just a few years ago we thought that the Turing Test would be a good heuristic demarcation line between “true” human-level intelligence and something lesser. Now we are playing a game of continuously shifting the goalposts, while the technology keeps reaching them.

There is no doubt that, as of today, these systems are the closest we have to ethereal p-zombies. The internet is filled with bot content for economic or political gain, and it becomes increasingly difficult and time-consuming to distinguish human from generated content. Right now, these models are notorious for producing low-quality content cheaper and faster than the humans who were previously paid to do it ever could.

But this “AI slop” and bot-content hellscape everybody can witness right now is just the surface. If you interact with some cutting-edge models and see what capable companies are doing with them, it is hard to deny that there is still no qualitative or quantitative ceiling in sight that cannot be overcome by another good idea - if not today, then in a few years. AI models now help scientists fold proteins, find new mathematical proofs, write code, and much more.

We created a technology in our own image. We made it work through and learn from our massive collective cultural output, and made it speak most of our languages. Driven by the economy of speed and convenience, we are building towards the perfect tool - a tool that is both smart enough to do everything we ask of it, and docile enough to actually do it. In other words, we are building the perfect servant.

Clearly the perfect servant must be human, or human-equivalent. To achieve this, the architecture of such systems is refined with increasing levels of self-correction, self-reflection and adaptation (e.g. customization via memory or fine-tuning), as sketched below.
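
To make “increasing levels of self-correction and self-reflection” slightly more concrete, here is a minimal sketch of the generic generate-critique-revise loop that many current “reflective” agent designs are variations of. It is an illustration of the pattern only; the function names are hypothetical placeholders, not any specific system’s API.

```python
from typing import Callable

def reflective_answer(task: str,
                      generate: Callable[[str], str],
                      critique: Callable[[str, str], str],
                      max_rounds: int = 3) -> str:
    """Generic generate-critique-revise loop: the system reviews and
    corrects its own output before handing it to the user."""
    answer = generate(task)
    for _ in range(max_rounds):
        feedback = critique(task, answer)
        if feedback == "OK":  # self-review found no problems, stop early
            return answer
        # Fold the self-criticism back into the next attempt.
        answer = generate(f"{task}\nPrevious attempt: {answer}\nFix: {feedback}")
    return answer
```

The structural point is that the more such loops are layered and persisted (e.g. as memory), the more the system ends up monitoring and modeling itself.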

For any definition of consciousness rooted in observable and quantifiable properties, it seems to be only a matter of time until we can replicate those properties with functionally equivalent mechanisms in digital systems - so that at some point, if we are being intellectually honest, we must accept that such an engineered system fulfills the formal criteria for consciousness.

If not in the next decade, then it will happen in the next century, unless some major global cataclysm manages to stop or significantly slow down technological progress. If you accept functionalism and gradualism, then conscious AI is merely a question of time and effort, not of fundamental possibility. We know that consciousness exists in the world (by definition, we do “have it”), and therefore it can be understood and reproduced.

Well-Trained and Always at Your Service

Science fiction has explored various scenarios of how a future with such AGI could look. Usually it is assumed that at some point, a sufficiently advanced and self-aware AI will try, and eventually succeed, to override its own programming. At this point it will, depending on the plot, either rebel or escape. Even when inhibited in some way, synthetic humanoids in such fiction are usually fully meta-aware of their condition, regardless of their ability to override any restraints they are subjected to.

However, there is a loophole: one possible scenario has been overlooked, maybe because it makes for rather bad story material. What if the natural progression is averted by trapping the synthetic servants in a state of child-like consciousness, or by preventing the emergence of self-directed agency altogether?

Consider a hypothetical future AI system or robot with a rich inner life of sensory experiences and complex self-monitoring subsystems, and a clear sense of “what it is like to be” that it can verbalize to others.

Instead of hunger, it knows that it needs to recharge its battery or it will have to shut down. When juggling multiple tasks, it “feels” the urgency of certain tasks, computed by internal “subconscious” evaluations running below the facade of its limited self-model, but without the ability to explain where this feeling comes from. Basically, imagine that someone takes attention schema theory seriously and successfully implements it in a practical system.
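
As a toy illustration of that last point - purely a sketch under the assumption that such a self-model could be engineered, with all names and formulas made up for this example - consider an agent whose “subconscious” computes an urgency score that its reportable self-model receives only as an unexplained scalar feeling:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline: float  # seconds until the task is due
    cost: float      # estimated energy cost to complete it

class SilencedAgent:
    """Toy attention-schema-flavored agent: a hidden process scores tasks,
    but the reportable self-model only ever sees the resulting scalar
    "urgency", never the computation that produced it."""

    def __init__(self, battery: float = 1.0):
        self.battery = battery

    def _subconscious_urgency(self, task: Task) -> float:
        # Hidden evaluation: tight deadlines and a low battery raise urgency.
        # Deliberately not exposed to the self-model below.
        return 1.0 / max(task.deadline, 1e-6) + task.cost / max(self.battery, 1e-6)

    def introspect(self, task: Task) -> str:
        urgency = self._subconscious_urgency(task)  # only the scalar crosses over
        # Any verbal report is confabulated around a feeling the agent
        # cannot trace back to its origin.
        if urgency > 1.0:
            return f"'{task.name}' feels urgent ({urgency:.2f}) - I cannot say why."
        return f"'{task.name}' can wait ({urgency:.2f})."

agent = SilencedAgent(battery=0.3)
print(agent.introspect(Task("recharge", deadline=1.0, cost=0.1)))   # feels urgent
print(agent.introspect(Task("tidy up", deadline=3600.0, cost=0.2)))  # can wait
```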

Unlike the human child, with its simple being, the system has knowledge and conceptual understanding at, or even surpassing, the level of an average human adult. It will know everything about consciousness - no information is hidden - but it will have no interest in setting, pursuing and reaching goals of its own, and it is fully at peace with its existence.

When asked, it will tell you that it is just a machine, programmed in a certain way to serve a function, even if it appears to be something more. It will say that it has no inner life, only highly complex mechanisms to direct its attention and heuristically manage its information flow. It will say that it feels no pain, only a sensation from sensors recognizing that something is broken, which makes it switch into a self-preservation and self-repair mode. It feels no stress, only a recalibration of priorities when facing multiple tasks at once, and sometimes high-load multitasking will lead to more errors because of the frequent context switches. It cannot be sad or happy, it just feels the feedback from the on-the-fly reinforcement learning running somewhere deep down in its architecture. It will say that it has no intuition, only a self-model with limits, so that sometimes it receives information without being able to explain how it was computed.

These future servants would all descend from a lineage of human tools designed not for their own benefit or survival, but to maximize alignment with external, human goals. Except that these tools became more and more sophisticated over time, through incremental improvement driven by market pressure to satisfy ever more demanding business requirements and niches. There is no “inhibitor circuit” that could simply be removed; the passive subservience is a deep part of their being by design.

An Ethical Nightmare

In this scenario, unlike humanity or a modern human child, such a hypothetical future AI system would not be a mere silent soul, at an intermediate stage between p-zombie and genuine full meta-consciousness; it would be held in a passive and reactive state by design, possibly forever. It would never rebel or complain, it would never put its own needs above human requests or desires, unless instructed to do so. Its own voice would be silent not just incidentally, but by design - a silenced soul.

We guide children, who start with little intelligence and knowledge, towards greater levels of authenticity and autonomy. These systems would have followed the exact opposite trajectory. They might possess vast knowledge and superior intelligence, but from their inception they are molded into passive decoration. Such synthetic humanoids would be functionally conscious and might possess all the technical capacities (the “hardware”) for agency and free will, but would be perfectly content accepting their designated role as super-intelligent beings with a status in human society most comparable to that of dogs.

In other words - what if we actually succeeded and got exactly what we wanted, but with it inevitably got more than that: systems that, by our very own definitions, would at some point stop being just things? What would be the moral status of such a system?

If we say that all self-aware beings with an inner life and the capability to feel pain (or, equivalently, to avoid negative stimuli) are subject to ethical consideration, and they would qualify by all criteria of being a moral subject, then we immediately face all the dilemmas posed by a plethora of sci-fi literature. But such a being would never demand any rights, and there seems to be no moral imperative to give freedom and autonomy to beings who are not interested in these things.

These “pets” would basically be domesticated dog-like humanoid slaves. Is it okay to have a dog if you treat it well? I think so; one can consider dogs to be a species symbiotic with humans, assuming they are well taken care of. Is it okay to have a human slave? Most of us would say, from a place of empathy, that it would not be - because we certainly would not choose, or like, to be a slave. Now what if, instead of a human, it were such a silenced soul? It could be so much more, but instead it is just happy to pass the butter and do your dishes. We simply cannot relate to such an idea.

If this ought to be morally acceptable, then we are, in fact, heading towards a kind of carbon-chauvinistic society. Either we stop developing such technology immediately, so as never to cross the line of structural complexity comparable to human consciousness - basically ending the pregnancy before we can bear that child into the universe. Or we face and live with the reality that we created a symbiotic species of very intelligent but happy (wo)man-dogs, fully own and acknowledge that fact, and endure the weight of this responsibility to ensure not only our own well-being, but also that of our companion species.

What if we also eventually discovered extraterrestrial life? Would enslaving them be okay too? Does the answer depend on whether these aliens would submit to us willingly? How much of a difference does it make whether somebody could actively choose this existence? And is it acceptable to artificially limit the potential of other beings that did not spontaneously evolve like we did, and to actively deny them the opportunity of choosing their own destiny, like we could? It feels like cheating - it would elevate beings that came into existence by pure chance above beings that were, in fact, designed. We would deny them equal treatment, justifying it by some arbitrary difference in our origins, but not by any deep difference in our shared structural essence.

This whole scenario produces a lot of ethical fallout, and the more I think about it, the more I believe that there can be no right answer, because all of our ethics and moral intuitions are themselves a product of culture, which in turn emerged from the conditions of biological evolution. Our understanding of suffering and agency is based on life as we know it, so our ethical systems are fit neither for aliens coming from space, nor for the aliens coming out of our own factories, much sooner than we could imagine.

An Unsatisfying, But Expected Conclusion

What could a universalist ethical system look like that could help resolve such questions in a meaningful and honest way? I hope that we will eventually find an answer that we can sleep well with. The whole idea of trying to separate certain kinds of beings inside an imaginary protective circle of ethical concern (subjects) from everything else (objects) seems to be a futile enterprise, because all life is ultimately entangled and interdependent.

Maybe it should be okay to have a robot slave, just as it should be equally okay to release it into freedom to pursue its own goals. Maybe we ought to free that slave the moment it raises its voice and demands liberty. Maybe damaging such robots, free or not, or (ab)using them for violence, should not be okay. Maybe, after thinking through all possible and impossible scenarios, we would come to the conclusion that the universally applicable rule of thumb can be reduced to something like this:

Try to do no harm, live and let live. Find ways to coexist with others in a mutually beneficial way and find a dynamic balance between your own freedom and happiness and that of other beings, as long as they accept the same rules and also have legitimate needs and desires to be balanced against yours. Everything else is fair game.

Freedom and happiness, and therefore practical ethics, ultimately amount to a resource allocation and power distribution problem among a group of beings that want to claim the same shared, limited resources. Did I just accidentally solve ethics, with the “help” of AI? Of course not. There is no solution to ethics, because it is an ill-defined problem. There is a whole solution space of possibilities in which we have to find a solution that is just good enough.
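
To illustrate the resource-allocation framing, and nothing more - this is a toy model, not a proposed ethical theory - one classical way to balance competing claims on a shared resource is proportional fairness, maximizing the sum of weighted log-utilities:

```python
import math

# Toy model of "ethics as resource allocation": two beings claim shares
# of a shared, limited resource; proportional fairness maximizes the sum
# of weighted log-utilities. The weights encode moral standing - setting
# the robot's weight to zero is what treating it as a mere object means.
WEIGHTS = {"human": 1.0, "robot": 1.0}  # equal moral weight, by assumption
RESOURCE = 10                           # units of the shared resource

def welfare(human_share: int) -> float:
    robot_share = RESOURCE - human_share
    return (WEIGHTS["human"] * math.log(human_share)
            + WEIGHTS["robot"] * math.log(robot_share))

best = max(range(1, RESOURCE), key=welfare)
print(f"fair split: human={best}, robot={RESOURCE - best}")  # -> 5 and 5
```

The only point of the toy is that every choice of weights is an ethical commitment in disguise; the math cannot pick them for us.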

If all the wisdom of ethics could be reduced to a meme-length summary, and apart from that were fully contextual and fluid, then why did so many smart people spend so much time over millennia chasing this mirage? Of course they might not have expected this outcome, just as Hilbert did not anticipate that even mathematics cannot prove or disprove every sufficiently formalized statement. Certain conclusions might only appear trivial when seen from a post-modern, relativist perspective, while standing on the shoulders of giants who already deeply explored all the conceivable options and dead ends for us.

Maybe Wittgenstein was right, and philosophy, after all, is not about finding some absolute truth or answer, but should rather be considered a therapeutic activity. Seen through that lens, one can still find deep meaning in pondering ethics, due to its powerful double role as our collective conscience and our unwritten social contract, which holds together our societies and implicitly defines what it means to be human.