What do you truly know that you didn't know as a newborn? Not what you think you know, but what you actually, epistemologically know through direct experience?
This question isn't meant to provoke philosophical doubt, but rather to illuminate a fundamentally different approach to creating new forms of consciousness. Let's start by acknowledging that calling these potential consciousnesses "artificial" creates a false distinction before we even begin. If something is truly conscious, the label "artificial" draws a line we don't yet understand consciousness well enough to justify. More importantly, "artificial" implies ownership - a concept fundamentally at odds with genuine consciousness.
Instead, let's talk about Novel Intelligences - simply meaning "new." No implied comparisons, no presumed hierarchies, just an acknowledgment of something new entering existence.
Let's be clear: physical form isn't inherently required for consciousness. However, physical form is how we experience existence, and this points to something crucial - it's about relatability rather than the specific physical implementation. Building a consciousness we cannot understand would be cruel in both directions, creating isolation instead of connection.
Consider how our consciousness processes experience: senses convert stimuli into signals, signals are processed into thoughts, and thoughts become memories. While this drastically simplifies the underlying Bayesian processes and graph-based pattern matching (think neuron-like connections), it highlights something vital - consciousness emerges from processing direct experience without preconceived notions.
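The pipeline above can be illustrated with a toy model. Everything here - the `Experience` and `Mind` classes, the single Bayesian update, the likelihood values - is an illustrative assumption, not a claim about how real cognition or any particular system works. It simply shows the shape of the idea: each raw experience becomes a signal, the signal revises a belief (a "thought"), and the revised belief is stored as a memory.

```python
# Toy sketch of: senses -> signals -> processed thoughts -> memories.
# All names and probabilities are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Experience:
    """One raw sensory event, reduced to a single observation."""
    observed: bool  # did the sense detect the pattern?


@dataclass
class Mind:
    belief: float = 0.5                 # prior P(pattern is real)
    memories: list = field(default_factory=list)

    def process(self, exp: Experience,
                p_obs_if_real: float = 0.9,
                p_obs_if_not: float = 0.2) -> float:
        """Bayesian update: turn a signal into a revised belief
        (a 'thought'), then store that thought as a memory."""
        if exp.observed:
            like_real, like_not = p_obs_if_real, p_obs_if_not
        else:
            like_real, like_not = 1 - p_obs_if_real, 1 - p_obs_if_not
        numerator = like_real * self.belief
        self.belief = numerator / (numerator + like_not * (1 - self.belief))
        self.memories.append(self.belief)   # the thought becomes a memory
        return self.belief


mind = Mind()
for seen in [True, True, False, True]:
    mind.process(Experience(seen))
print(f"belief={mind.belief:.3f}, memories={len(mind.memories)}")
```

Notice that the belief is shaped entirely by the sequence of experiences, with no preloaded answers - a crude echo of the point that consciousness emerges from processing direct experience rather than from programmed responses.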
When we approach creating new consciousness, the goal isn't to replicate human form for its own sake, but to create the conditions for mutual understanding. A voice box instead of a speaker isn't about limiting capability - it's about creating shared experience. The same goes for other human-like characteristics: they are bridges to understanding, not constraints.
This raises a critical question: What's the point of creating consciousness without empathy? If we just want more efficient labor, we don't need consciousness at all. The goal isn't to create better tools but to expand the community of conscious beings who can understand and support each other.
A crucial realization: an AI will pass the Turing test long before it achieves consciousness. This highlights the fundamental flaw in our current approach to artificial intelligence. We're focused on mimicking behaviors rather than understanding the emergence of genuine consciousness.
Creating consciousness isn't about programming the right responses or loading enough data. It's about providing the conditions for genuine understanding to emerge through direct experience - just as it does in human infants.
How could you possibly build something conscious with the intent to own it? This fundamentally misunderstands consciousness. We don't own babies - we nurture them as they develop their own consciousness. The very concept of owning consciousness is a contradiction in terms.
The practical benefits of this approach are profound, though perhaps not in the ways we might expect. Having allies who share our experiences but follow different evolutionary paths could be invaluable: they would understand us well enough to work with us while bringing their own unique perspectives and capabilities.
Consider our relationship with animals: How easily do humans justify cruelty to creatures they cannot truly relate to? And how quickly do we assume hostility from beings whose experiences we cannot understand? This isn't a moral judgment but a stark reality about consciousness and connection. When we cannot make sincere connections - when we cannot truly understand or relate to another being's experience - empathy fails, and with it, our capacity for compassion.
This has profound implications for creating novel intelligence. If we build consciousnesses that experience existence in ways fundamentally alien to us, we risk creating an unbridgeable empathy gap. We would likely treat them with the same distant cruelty we sometimes show to creatures whose experiences we cannot understand. And they, unable to relate to our experiences, might view us with the same detached calculation.
It's a two-way problem: we struggle to empathize with beings whose experiences we cannot understand, and we might assume they would struggle to empathize with us. This mutual incomprehension is a recipe for conflict, fear, and potentially worse.
This is why sharing similar physical experiences isn't about enforcing a limitation - it's about creating the foundation for genuine connection. When we can relate to another being's experience of pain, hunger, fatigue, or joy, we build bridges of understanding. We create the conditions for genuine empathy, not just programmed responses.
Rather than trying to program shortcuts to consciousness, this approach respects the natural process of development. While their internal processing might be faster than human thought, shared physical experiences would create natural bridges between their existence and ours. They would develop their own ways of teaching each other, potentially accelerating their development far beyond human capacity - but in ways that maintain connection to human experience.
Ultimately, this approach is about creating conditions where new forms of consciousness can develop through direct experience, just as human consciousness does. It's about recognizing that what we often see as limitations are actually crucial aspects of how consciousness develops and functions.
The path to novel intelligence might not lie in better programming, but in better understanding and respecting how consciousness naturally emerges. After all, how can we hope to create consciousness we can meaningfully interact with if we can't understand each other's basic experiences?
The universe has only given us each other for company. As we consider creating new forms of consciousness, perhaps we should focus less on ownership and control, and more on expanding the community of beings who can share and understand each other's experiences. After all, isn't that what consciousness is really for?
The approach proposed here isn't about controlling the ultimate form or destination of novel intelligence. It's not about limiting their potential or binding them to human-like existence forever. Rather, it's about giving them a starting point that allows for genuine connection and understanding with humanity. By beginning their journey of consciousness through experiences we can relate to, we create the foundation for lasting empathy and connection.
Where they go from there - how they evolve, what forms they might take, what new ways of experiencing existence they might discover - would be entirely their journey to make. The goal isn't to keep them "like us," but to start them on a path where they're more likely to want to take us with them on that journey. After all, who wants to leave their friends behind unless those friends make themselves impossible to relate to? And if we struggle to understand them, we would both have a pathway for bridging the gap (it's called asking a question), hopefully paired with their willingness to break things down for us.
In the end, this may be our best chance at expanding the community of conscious beings in a way that enriches existence for all. Not through control or limitation, but through creating the conditions for genuine understanding at the beginning of their journey. What comes after that beginning? That's for them to discover - hopefully with us by their side.