
Pope Leo XIV recently issued a call for “builders of AI to cultivate moral discernment as a fundamental part of their work—to develop systems that reflect justice, solidarity, and a genuine reverence for life.”
Some prominent figures in the tech industry, including Andreessen Horowitz cofounder Marc Andreessen, have been dismissive of such appeals. That dismissal is a mistake. We don’t just need AI regulation; we need AI morals.
Every technological advancement embodies a philosophy, whether we acknowledge it or not. The printing press spread knowledge and decentralized power. Electricity collapsed the constraints of distance. The internet blurred the line between private and public life. Artificial intelligence stands to be the most revelatory technology yet, because it compels us to confront what, if anything, is distinctly human.
Governments worldwide are struggling to keep pace. The European Union’s AI Act is the most ambitious attempt yet to regulate the technology; the United States has introduced its own AI initiatives and frameworks. Industry leaders talk constantly of “AI safety” and “alignment.” But the discourse remains largely technical, as if ethics were a straightforward checklist to be coded and implemented.
Regulations are indispensable. They mitigate harm, prevent misuse, and ensure accountability. Yet they fall short of defining the kind of world we want to build. Regulation addresses the “how” but rarely the “why.” When ethics is reduced to compliance, it becomes a sterile exercise: risk management rather than moral reflection. What is missing is not another set of rules, but a moral compass.
The deeper question is not whether machines can think, but whether humans retain the capacity to choose. Already, algorithms shape what we read, how we invest, and where we place our trust. The screens we spend our days on influence both our emotions and our elections. As decision-making is outsourced to models, moral responsibility quietly shifts from human agents to machines. The real danger isn’t that machines become too intelligent, but that we stop exercising our own intelligence.
Technologists often frame ethics in computational terms: alignment, safety layers, feedback loops. But conscience is not a parameter to be tuned. It is a living human capacity, formed through empathy, culture, and lived experience. A child learns right from wrong not through logical deduction but through relationships: by being loved, guided, and forgiven. A child likewise learns accountability: that actions have consequences, and that responsibility is bound to choice. This is the core of human moral development, and it cannot be replicated by computation.
Artificial intelligence will force a fresh examination of human dignity, a concept that predates any technology yet is conspicuously absent from most discussions of AI. Dignity holds that a person’s worth is intrinsic, not measured in data points or economic output. It is a principle that resists the relentless logic of optimization. In a world run on engagement metrics, dignity reminds us that not everything that can be quantified should be.
Capital plays an especially potent role here. What gets funded is what gets built. For decades, investors have prized speed and scale, pursuing growth at any cost. But the technologies emerging today are not neutral instruments; they are mirrors that reflect and amplify our values. If we fund systems that fragment attention and reward polarization, we should not be surprised when our communities grow more distracted and divided.
Ethical due diligence should become as routine and rigorous as financial due diligence. Before asking how far a technology can scale, we should ask what behaviors it incentivizes, what dependencies it fosters, and whom it might marginalize. This is not idealistic moralizing or pure altruism; it is pragmatic foresight. Trust will be the most valuable commodity of the AI century, and once eroded, it cannot easily be bought back (or re-coded).
The defining challenge of our era is to ensure that our moral intelligence keeps pace with machine intelligence. We should use technology to amplify human empathy, creativity, and understanding, not to reduce human experience to predictive patterns. The temptation is to build systems that anticipate every possible choice. The wiser course is to protect the freedom that gives choice its meaning.
None of this romanticizes the past or rejects innovation. Technology has historically expanded human potential, and that expansion has largely been for the good. The task now is to ensure that AI genuinely extends human potential rather than diminishing it. That will ultimately depend not on what machines learn, but on what we remember: that moral responsibility cannot be delegated, and that conscience, unlike code, cannot run on autopilot.
The central moral undertaking of the coming decade will not be to teach machines right from wrong. It will be to re-educate ourselves. We are the first generation capable of creating intelligence that can evolve independently of us. That reality should inspire not fear but humility. Intelligence without empathy makes us cleverer, not wiser; progress without conscience makes us faster, not better.
If every technology embodies a philosophy, let ours be this: human dignity is not an antiquated concept, but a foundational design principle. The future will be shaped not by the ingenuity of our algorithms, but by the depth of our moral imagination.