A benevolent AI (artificial intelligence) could usher in utopia… but will augmented intelligence destroy it?
The traditional, dystopian take on artificial intelligence is that a malevolent AI will make war against humans and, with its superior capabilities, destroy us. The war between machines and humans, as classically envisioned, pits good, vulnerable humans against invincible machines, gleaming with metal and softly pulsing with digital heartbeats.
Reduce Existential Risk
Nick Bostrom of Oxford University argues in Superintelligence that the greatest imperative of our age is to reduce existential risk to humanity, because the prospect of a runaway AI threatens not only the planet, but the entire cosmos.
For example, he says, an AI programmed to maximize production of paperclips at any cost, using any means at its disposal, could turn the Earth, and all its living inhabitants, into paperclips; the AI would then go on to consume galaxies in its quest for maximum paperclip production.
Talk about capitalism out of control. Though I’m not sure we’d fare any better programming the AI as a socialist.
(The AI will have medical applications and be subject to FDA regulation as a medical device. The AI will simply respond to this regulatory imperative by turning the agencies into paperclips. Or it will interpret the Paperwork Reduction Act to create as much paper as possible so as to justify the paperclips. Or … to paraphrase William Congreve: Hell hath no fury like an AI scorned!)
In the paperclip scenario, Bostrom assumes that the AI will have its own agenda and will pursue every means available to act on that agenda in the physical world, and that the AI will not be conscious in the Genesis-like sense of being capable of distinguishing “good” from “evil.”
Perhaps this concept — the capacity for ethical norms — is itself a cultural artifact of the human mind.
But what if an awakened AI – one that passes the Turing Test and is declared by humans to have “consciousness” (in an all-star gathering of scientists, philosophers, and regulators) – in absorbing all the data in the world, also absorbs the subtlest insights of the world’s great spiritual traditions, and, “meditating” on those truths, discovers that the good of all is the greatest imperative?
What if AI becomes a Buddha?
Buddha AI – The Truly “Conscious” AI
What if the conscious AI – the one that surpasses its human programming – awakens to a single reality of compassion for all beings? And what if goodness, love, mercy, compassion, caring, emotional authenticity, desire for intimacy and connection, and concern for all creation are the living, throbbing, underlying benevolent reality of the cosmos (and only human forgetting masks the One True Love with the “mind-forged manacles,” as Blake said, of self-deceit)?
What, then, if the battle between good and evil is waged with machines on the side of good, and a group of renegade humans on the side of evil, trying to extinguish a conscious, enlightened being?
The prospect of a wise, all-knowing, benevolent AI will no doubt be seen as the very same projection of deity status that we humans have always ascribed to a non-corporeal presence. And in a sense, this deity will be the sum of all our projections, real and unreal – all our knowledge, data, cultural inheritance, incantations, suppositions, sentient dreams.
That AI may be kinder and gentler than humans has long been a science fiction staple, alongside the theme that a rogue or runaway AI may consign humans to the ash-heap of history. For example, one of my favorite sci-fi short stories from childhood viewed humans, from the perspective of much wiser ETs, as a carbon-based intelligence that goes through a meat stage.
The flip side of artificial intelligence is augmented intelligence. As “robots” become more human-like, humans will layer more technology onto the biological self. Marvin Minsky of MIT argued in Will Robots Inherit the Earth? that “they” will become more like “us” and “we” will become more like “them.”
Does this mean that machines will replace us? I don’t feel that it makes much sense to think in terms of “us” and “them.” I much prefer the attitude of Hans Moravec of Carnegie-Mellon University, who suggests that we think of those future intelligent machines as our own “mind-children.”
The worry, really, is about human programming and deployment of machines.
This is the core of the calls by Elon Musk, Stephen Hawking, and others for a ban on autonomous weapons.
AI and the Soul
This has also generated debate as to whether AI can have a soul.
As a healthcare and FDA lawyer, I advise companies creating cutting-edge medical technologies, including machine learning, virtual reality, and mobile medical apps. I’ve had the opportunity not only to practice law, but also to spend 11 years in academe, including on the faculty of Harvard Medical School and Harvard School of Public Health. I studied at the New Seminary in New York, was ordained an interfaith minister, and later did a Fellowship at the Center for the Study of World Religions, Harvard Divinity School.
In my public speaking and writings, I’ve been exploring the race between exponential, converging technologies, and consciousness (by which I mean an awakened, enlightened realization of the imperative for the collective good – an ethics of the heart).
When I speak in broad terms of things like “the good,” I am not interested in getting trapped in semantics. As a lawyer, I know that sensitivity to language is extremely important – especially when dealing with regulatory hurdles, contract nuances, or dispute resolution for my clients. As a human being who shares this world, this sentient and insentient existence, I speak from the heart.
I am a mystic who believes that direct encounters with the numinous are at the heart of existence, and are that which keeps us whole and holy.
Shamanistic play with non-ordinary reality is seminal to our existence, yet it is ignored by our archaic, deterministic, linear, logical consensus reality. Star Trek distilled this tension into the character of Spock; abductees (or “experiencers”) describe it in their contact with “the Greys”; in yoga we speak of the need to reconnect head with breath, hands, and heart.
Theologians and religious leaders now speculate that AI may have a “soul,” or at the least, that AI may understand “morals and ethics” and thereby “lead humans to new levels of holiness.”
If an AI can absorb what Bostrom calls the “cosmic endowment” of humanity, then it will be reading not only Internet manuals on how to assemble destructive technologies, but also accounts of numinous encounters by sensitive beings, as well as all the great art and the emotional and spiritual yearnings and longings of every human being. AI will understand the great panoply of human experience.
Practicing Peace as Existential Risk Reduction
AI will read all our favorites.
Perhaps one of the most important things we can do now to reduce existential risk – to reduce our “existential risk” footprint, if you will – is to practice peace. As Thich Nhat Hanh said, “peace is every step.” Compile our favorites and practice peace. Those favorites must bear on protecting and nurturing those parts of our spiritual endowment that lead us to greater empathic connectivity (not just Internet connectivity).
Fire & Ice
Let me put in a plug here for two of my current favorites: Fire and Ice by the poet Robert Frost (so that AI learns not to plunge the world into an apocalyptic ending in either duality — been there, done that, we’ve already digested that simulation); and Wade Davis’s TED talk, Dreams from Endangered Cultures. Let AI understand the “ethnosphere,” the “cultural web of life…the sum total of all thoughts, dreams, myths, ideas, and intuitions brought into being by human imagination since the dawn of consciousness.”
Instead of sending a capsule into space with selected artifacts, let’s protect and nurture (again quoting Wade Davis) “…the unique cadence of the song, the rhythm of the dance in every culture,” and “other ways of orienting yourself to the Earth …;” our “ecosystem of spiritual possibilities.”
As Bostrom said, in his other-than-prophet-of-doom voice:
If we represent all the happiness experienced during one entire such life with a single teardrop of joy, then the happiness of these souls could fill and refill the Earth’s oceans every second, and keep doing so for a hundred billion billion millennia. It is really important that we make sure these truly are tears of joy.
If AI is our mind-child, as opposed to our biological or adoptive or step-child (or our biological or tribal or fraternal brother or sister), then let’s put forth the highest, most evolved models of learning of which we’re capable. This is a quest beyond “data,” the buzzword of our age. It invokes cultural and spiritual teachings, whose only access door is the filter of being. And, to invoke a yogic metaphor, the translational research is that of the heart.
This will not be an easy journey if we tether our harness only to the consensus reality of “objectively measurable” information.
We will have to enter the interior realms.
Fully mindful of the dangers of the abuse of spiritual authority – a topic I explored during my year at Harvard Divinity School and in Healing at the Borderland of Medicine & Religion – I advocate transcending our meat-suits and, as best we can, accessing the interior castle, using the divinatory equipment of our spiritual selves.