
We are now deeply entrenched in the AI era, with new AI capabilities emerging weekly. Given that pace, it becomes vital to step back and contemplate larger questions about our trajectory: how to maximize this evolving technology’s potential, and indeed, how to maximize our own potential as we co-evolve with it.
A recent interview with Sam Altman on Tucker Carlson’s podcast provided a revealing moment. Carlson pressed Altman on ChatGPT’s moral underpinnings, asserting that the technology possesses a fundamental religious or spiritual element, as we perceive it to be more powerful than humans and often seek its counsel. Altman countered that, in his view, it holds no spiritual essence. Carlson then posed: “So if it’s merely a machine, solely a product of its inputs, then the two evident questions are: what exactly are those inputs? And what moral framework has been integrated into this technology?”
Altman then referred to the “model spec,” which is the set of instructions that governs an AI model’s behavior. For ChatGPT, he explained, this involves training it on humanity’s “collective experience, knowledge, and learnings.” However, he added, “we then must align its behavior in a specific direction.”
And that, naturally, brings us to the renowned alignment problem—the concept that to safeguard against the existential risk of AI gaining control, we must align AI with human values. This notion dates back to 1960, when the mathematician and cybernetics pioneer Norbert Wiener described the alignment problem thus: “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere… we had better be quite sure that the purpose put into the machine is the purpose which we really desire.”
However, there exists a more expansive alignment problem that predates 1960. To align AI with human values, we ourselves must possess clarity regarding the universal values we subscribe to. What constitute our own inputs? What is our individual “model spec”? What are we training ourselves on to cultivate meaningful lives?
These are the inquiries we must address before deciding what inputs we wish AI to draw upon. Even if we could perfectly align AI with humanity’s present state, we would only be encoding our current confusion. Therefore, defining our values now is essential before developing a technology intended to incorporate and mirror them.
Because at this very moment, we are experiencing a profound state of misalignment. In our contemporary world, we have lost our connection to the spiritual foundation upon which both Western and Eastern civilizations were constructed. For centuries, we have existed within its afterglow, but now even that luminescence has faded, leaving us unmoored and untethered.
That foundation began its gradual erosion during the Enlightenment and the Industrial Revolution, yet society continued to draw upon those eternal truths. We articulated them—and even believed in them—less frequently than before, but they still provided guidance. Now, however, with that connection severed, to align AI with human values we must first unearth and re-establish it.
In his new book, Paul Kingsnorth explores how every culture is fundamentally built upon a sacred order. “This does not, of course, need to be a Christian order,” he writes. “It could be Islamic, Hindu, or Taoist.” The Enlightenment disconnected us from that sacred order, but, as Kingsnorth observes, “society failed to perceive this because the monuments of the ancient sacred order remained, much like Roman statues after the Empire’s collapse.” What becomes evident is the cost societies incur when that order disintegrates: “disruption across all societal tiers, from politics down to the individual soul.”
Is this not precisely what is unfolding right now? “It would account for the era’s peculiar, strained, unsettling, and exasperating mood,” Kingsnorth writes.
During his conversation with Carlson, Altman spoke of AI being trained on humanity’s “collective experience.” Yet are we, as humans, genuinely drawing on the entirety of that collective experience ourselves?
As we develop a transformative technology poised to alter every aspect of our lives, it is crucial to ensure its training incorporates the fundamental, enduring values that distinguish humanity.