
Ethics in AI Proliferation

Posted on December 6, 2025 by rabbitrunriot

Tech companies are rushing AI-powered products to launch, despite extensive evidence that they are hard to control and often behave in unpredictable ways. This weird behavior happens because nobody knows exactly how—or why—deep learning, the fundamental technology behind today’s AI boom, works. It’s one of the biggest puzzles in AI. (Heikkilä)

The artificial intelligence (AI) revolution has begun, shouldering moral philosophy with a variety of new and unprecedented dilemmas. Despite a great deal of public discussion regarding the ethics of applied AI, there is mostly silence regarding the ethics of developing AI at all. We often ask what we should and should not do with artificial intelligence while neglecting whether we should create and use this technology in the first place. One suspects that this discussion is so often avoided not least because its conclusions do not support what we want to do.

Artificial intelligence technology is largely based on mimicking the human nervous system. It is, therefore, unsurprising that machine learning also mirrors its human archetype. Skills and knowledge are acquired by experience and repetition and, for artificial intelligence, that experience comes from training data. As the foundation of knowledge for AI, the diversity and integrity of this training data determine the AI’s capabilities, opinions, perceptions, and understanding of information – for better or worse (The Software Development Blog). Artificial intelligence is not immune to “bad” training data, just as a toddler is susceptible to picking up undesirable behaviors when exposed to them. Researchers have already confirmed that patterns uncovered in training data can and do result in native biases that affect operation, often surfacing in unpredictable ways (Hao).
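
To make the mechanism concrete, here is a deliberately minimal sketch in Python. Everything in it is hypothetical: the data, the labels, and the frequency-table “model” are invented for illustration, and real systems are vastly more complex. The point survives the simplification, though: a model can only reflect the patterns it was shown.

```python
# A minimal sketch of bias inherited from skewed training data.
# All data and labels below are hypothetical.
from collections import Counter, defaultdict

# Invented training set: decisions produced by a (biased) past process.
training_data = [
    ("engineer", "male", "hire"), ("engineer", "male", "hire"),
    ("engineer", "male", "hire"), ("engineer", "female", "reject"),
    ("nurse", "female", "hire"), ("nurse", "female", "hire"),
    ("nurse", "male", "reject"),
]

# "Training" here is just counting outcomes per (role, gender) pair.
counts = defaultdict(Counter)
for role, gender, outcome in training_data:
    counts[(role, gender)][outcome] += 1

def predict(role, gender):
    """Return the most frequent historical outcome for this input."""
    seen = counts[(role, gender)]
    return seen.most_common(1)[0][0] if seen else "unknown"

# The model faithfully reproduces the skew baked into its training data.
print(predict("engineer", "male"))    # hire
print(predict("engineer", "female"))  # reject
```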

This is a valuable opportunity to appreciate how little we know and understand – not just about the world around us, but about our own technology – and what those limitations imply. With little to no transparency in processing, the logical paths an AI follows to reach a given conclusion from its training data and inputs remain a mystery. This opacity severely limits our ability to understand, and therefore to predict, how these systems will behave under any given set of conditions (Heikkilä).

This uncertainty becomes more problematic in the context of pursuing the Singularity – the creation of artificial general intelligence (AGI). The Singularity represents the point at which machine intelligence outpaces human intelligence, advancing exponentially and leaving humanity in the proverbial stone age (Jeevanandam). This is, of course, problematic because humanity will be even less able to anticipate and keep pace with advancing AI technology.

There is an apparent pattern throughout human history in which innovation initially promises some immense benefit before exacting serious consequences. These consequences often arrive as secondary problems that leave humanity generally worse off. We can trace this pattern all the way back to the Agricultural Revolution:

Scholars once proclaimed that the agricultural revolution was a great leap forward for humanity [… and as] soon as this happened, they cheerfully abandoned the grueling, dangerous, and often spartan life of hunter-gatherers[. …] That tale is a fantasy. […] Rather than heralding a new era of easy living, the Agricultural Revolution left farmers with lives generally more difficult and less satisfying than those of foragers. Hunter-gatherers spent their time in more stimulating and varied ways, and were less in danger of starvation and disease. The Agricultural Revolution certainly enlarged the sum total of food at the disposal of humankind, but the extra food did not translate into a better diet or more leisure. Rather, it translated into population explosions and pampered elites. The average farmer worked harder than the average forager, and got a worse diet in return. (Harari 78-79)

Similarly unintended, but nonetheless harmful, consequences follow from many events that are remembered fondly as milestones of progress. The Industrial Revolution facilitated a great deal of convenience, profit, and innovation, but it also produced overcrowded cities, pollution and environmental damage, the abuse and exploitation of workers, and the proliferation of unhealthy lifestyles (Rafferty). The discovery of fossil fuels offered humanity a reliable and abundant fuel source, but their combustion is a primary catalyst of climate change (Council on Foreign Relations).

All other conditions aside, there is a common theme of innovative capacity and zeal outpacing knowledge and understanding. Our ignorance persistently cripples our ability to predict impacts and outcomes. Hubris wins out and these innovations forever alter our way of life. With regard to the creation of artificial intelligence, humanity must once again choose between temerity and prudence.

Hasty innovation and reckless implementation of any new technology is undeniably an expression of excessive confidence: what Aristotle called the vice of rashness. “The courageous man,” writes the philosopher, “is he that endures or fears the right things and for the right purpose and in the right manner and at the right time, and shows confidence in a similar way” (Aristotle 159 (III.vii.5)). That the development and implementation of AI can fairly be described as “hasty” and “reckless” already suggests a departure from this courageous fear, but we can be more objective. The historical pattern of impetuous innovation suggests a persistent failure to fear the right things in the right way at the right time – whether by choice or circumstance. Rather than fear the possible consequences of artificial intelligence, we confidently persevere in spite of them. Aristotle continues: “he who exceeds in confidence [in the face of fearful things] is rash. […] The rash, moreover, are impetuous […]” (Aristotle 159, 161 (III.vii.7,12)).

The hubris that seems to drive this temerity is, itself, an exercise in vanity and empty pride. Similar to rashness, these vices – being excesses of magnanimity and ambition – are deviations from the right expression of their corresponding virtues: “it is possible to pursue honor more or less than is right and also to seek it from the right source and in the right way” (Aristotle 229 (IV.iv.2)). This “right-versus-wrong” dynamic, for Aristotle, is essential to maintaining a virtuous character. To that end, he introduces the intellectual virtue of prudence, or “practical wisdom,” to facilitate this understanding.

Of prudence, Aristotle tells us that “it is a truth-attaining rational quality, concerned with action in relation to things that are good and bad for human beings” (Aristotle 337 (VI.v.4)); of prudent people, that “they possess a faculty of discerning what things are good for themselves and for mankind” (Aristotle 339 (VI.v.5)). The explicit mention of discerning things that are good for mankind is particularly relevant. Our rash, vain, and prideful approach to the proliferation of artificial intelligence violates that criterion, instead prioritizing what is convenient, innovative, profitable, and worthy of acclaim, even when it is potentially detrimental. Neither our rashness nor our vanity and hubris reflect any practical wisdom in this situation.

Kant tells us that we must treat all rational beings as ends unto themselves (Kant 42 (4:429)). Kant does not explicitly define the characteristics of a rational being but suggests throughout his work that will, reason, and freedom are essential qualities that differentiate rational beings from their non-rational counterparts. Rational beings are uniquely endowed with the ability to reason, the freedom to choose a course of action, and a personal will to enact it; they are self-determining (Kant 27 (4:412)).

Upon achieving the Singularity, engineers will have created a true rational being by Kant’s standards, although some may insist that AI is already a rational being. This is a metaphysical and ontological question that goes beyond the scope of the present discussion, but it is essential to a Kantian analysis of the ethics surrounding the proliferation of artificial intelligence. Nonetheless, because AI at least has the potential to become a rational being, and because it is not immoral to hold a non-rational being in high moral regard (treating it as an end in itself) while the inverse – treating a rational being merely as a means – is impermissible, the present assessment will presume artificial intelligence to be a rational being.

For Kant, “an action from duty has its moral worth […] in the maxim according to which it is decided upon” and “duty is the necessity of an action out of respect for the [categorical imperative]” (Kant 15-16 (4:399-400)). As rational beings must be treated as ends, the only valid maxim for creating one is that it may pursue its own ends by whatever means it chooses. Would it always and universally be desirable to create such a superintelligent, rational being to pursue its own ends – including, possibly, an agenda that is hostile to humanity – whenever that technology is available to us? The answer is, of course, “absolutely not”: universalizing this maxim would demand that we create such a being even if it were guaranteed to be hostile to humanity. A simple but effective categorical imperative in this situation may therefore demand that we never create something that could be (or become) a rational being, including artificial intelligence. It then becomes our duty to adhere to this principle – this law – if our actions are to be morally justified.

But what if the benefits of creating such a superintelligent rational being could be said to justify violating this precept? John Stuart Mill believed that “[a]ll action is for the sake of some end, and rules of action […] must take their whole character and color from the end to which they are subservient” (Mill 2). In an almost Epicurean appeal, Mill explains how his greatest happiness principle – that “actions are right in proportion as they tend to promote happiness; wrong as they tend to produce the reverse of happiness” (Mill 7) – is firmly rooted in the belief that “pleasure and freedom from pain are the only things desirable as ends; and that all desirable things […] are desirable either for pleasure inherent in themselves or as means to the promotion of pleasure and the prevention of pain” (Mill 7). Additionally, Mill prescribes varying degrees of pleasure and pain (Mill 8-10), compounding the complexity of assessment and demanding that certain ends be treated with more or less priority and significance than others.

Geoffrey Hinton, the renowned “godfather of AI,” shocked the world when he publicly disavowed his life’s work to speak out against the proliferation of artificial intelligence. Hinton believes machines will surpass human intelligence and our capacity for learning, ultimately displacing humans as the dominant intelligence. Hinton also warns of less existential but still cataclysmic threats, ranging from exploitation by malicious actors to impacts on the job market (Southern). And Hinton is not alone: according to a Pew Research study, an overwhelming majority – just shy of 90 percent – of the US public has at least some concern about the risks of artificial intelligence. Additionally, for 51 percent of Americans, that concern outweighs any excitement or optimism (McClain, Kennedy and Gottfried).

In a report published by the British government (Bengio, Mindermann and Privitera), the many concerns surrounding the development and implementation of artificial intelligence are detailed across three classes:

  • Malicious use: the use of AI to spread disinformation, manipulate public opinion, and enhance malicious actors’ existing cyber offensives.
  • Malfunctions: functionality issues in which an AI operates outside of expectations, ranging from bias in AI models to loss-of-control (“rogue AI”) scenarios.
  • Downstream systemic impacts: secondary consequences of implementation, including disruptions to the labor market and economy, environmental impact, privacy challenges, etc.

That artificial intelligence “going rogue” is considered a very real possibility should be alarming, and the authors of the report note that ignorance of our own technology is not least to blame (Bengio, Mindermann and Privitera). This is, however, not the only threat; even less extreme possibilities – no less nurtured by technological ignorance – could have disastrous consequences. Hackers may exploit a power grid with an AI-enhanced attack to hold entire segments of the population for ransom; large businesses may adopt AI-driven robots to widen profit margins by replacing human labor, resulting in massive layoffs, rampant unemployment, and economic stagnation; an erroneous logical path in a healthcare AI may result in prescribing the wrong dose or medication to millions of patients around the world. These scenarios lack the existential stakes of an AI that develops an agenda to exterminate humanity, but they are nonetheless significant enough to cause widespread disruption and catastrophe.

There is, however, a much more fundamental problem surrounding artificial intelligence. Despite little public acknowledgment of the issue, artificial intelligence is wholly unsustainable. The environmental impact and resource demands of artificial intelligence mean that it is a net contributor to existing environmental and energy crises (Bashir, Donti and Cuff). This largely goes ignored in favor of emphasizing the potential for benefit:

As with many large-scale technology-induced shifts, the current trajectory of [generative AI], characterized by relentless demand, neglects consideration of negative effects alongside expected benefits. This incomplete cost calculation promotes unchecked growth and a risk of unjustified techno-optimism with potential environmental consequences, including expanding demand for computing power, larger carbon footprints, shifts in patterns of electricity demand, and an accelerated depletion of natural resources. This prompts an evaluation of our currently unsustainable approach toward [generative AI’s] development, underlining the importance of assessing technological advancement alongside the resulting social and environmental impacts. (Bashir, Donti and Cuff)

OpenAI – one of the leading private innovators of AI technology – has also acknowledged that AI carries inherent risk. However, OpenAI maintains that artificial intelligence “has the potential to give everyone incredible new capabilities” and “elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility” (OpenAI). And while this sounds remarkable, it amounts to little more than a collection of buzzwords that offers scant justification for the proliferation of a potentially harmful new technology.

For a more detailed account of the benefits of artificial intelligence, it seemed apropos to give artificial intelligence an opportunity to advocate for itself. When prompted, the chatbot – Mistral Small, accessed via DuckDuckGo’s Duck.ai – offered concrete details about benefits that AI might provide, split between those that have already manifested and those expected in the future.

Benefits already rendered by applied artificial intelligence include improved diagnosis and treatment in healthcare, autonomous vehicles, enhanced climate modeling technologies, and AI-powered chatbots (DuckDuckGo). Potential future benefits include further healthcare improvements (e.g., disease prediction, robotic surgery), energy and sustainability solutions with AI-optimized smart grids, the development of smart cities with AI-enhanced infrastructure, and the emergence of precision farming and automated harvesting in agriculture (DuckDuckGo).

It is noteworthy that, although these benefits could potentially contribute to solutions for some of humanity’s biggest problems (the climate and energy crises, for example), there is no implication of a comprehensive “solution for everything”; in fact, the chatbot seemed agnostic about the expectations humanity has for artificial intelligence. Despite romanticization by the media, gearheads, and zealous Silicon Valley visionaries, AI may simply not be the key to solving our problems. The unrealistic expectation of this kind of near-miracle is yet another side effect of our ignorance: we can be no more certain of the emergence of grand solutions than we are of the potential for catastrophe.

Imagine a scenario in which AI contrives a solution to the climate crisis; this would be an amazing step forward for humanity. But, at the same time, AI has steadily been replacing human workers as businesses realize the immense potential to recover profit with an automated workforce. Businesses that cannot afford to keep up fail, or are absorbed by larger companies, while unemployment soars amidst mass layoffs. The economy inevitably grinds to a halt and the chasm between wealthy and poor becomes unbridgeable. Eventually the climate solution is abandoned, having been deemed “not profitable enough to pursue” by an economics AI trained on capital-first ideology – like the IMF’s assertion that “the motive to make a profit” is “the essential feature of capitalism” (Jahan and Mahmud). It should not go unnoticed that, being a machine, an AI would find the total destruction of the planet of little consequence; it has no vested interest in sustainability and, as such, no reason to prioritize environmentalism over profit, were it ever given agency in such a circumstance.1

In this scenario, artificial intelligence yields an immense benefit: a solution to the climate crisis. From a utilitarian perspective this might suggest an ethically sound course of action, were it not offset by a disastrous consequence. Recalling Mill’s insistence that not all pleasures and pains carry equal weight, and assuming that a guaranteed harm is of greater significance than a possible solution to a problem we already have viable solutions for,2 this course of action results in a net harm and would, therefore, be considered unethical.
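
The arithmetic behind that judgment can be made explicit with a toy expected-value calculation. All of the numbers below are invented, and the single probability-times-magnitude-times-weight formula is a crude stand-in for Mill’s richer account of higher and lower pleasures; the sketch only shows how a near-certain harm can swamp a merely possible benefit.

```python
# A toy utilitarian ledger (all numbers invented for illustration).
# Each outcome contributes probability * magnitude * weight,
# a crude stand-in for Mill's graded pleasures and pains.

def expected_utility(probability, magnitude, weight=1.0):
    """Probability-discounted, weight-adjusted value of one outcome."""
    return probability * magnitude * weight

# Hypothetical outcomes from the scenario above.
climate_solution = expected_utility(probability=0.2, magnitude=+100)   # possible benefit
economic_collapse = expected_utility(probability=0.9, magnitude=-80)   # near-certain harm

net = climate_solution + economic_collapse
print(f"net utility: {net:+.1f}")  # negative: a net harm on this model
```

Note that nothing constrains the inputs; as the next paragraph argues, without precedent or predictive power one can choose probabilities and magnitudes that yield whatever verdict one wants.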

The opposite is also true. We can imagine a scenario in which artificial intelligence gives rise to a technological utopia. Such a result is (presumably) beneficial for humanity, both presently and in the future, and represents a net good. Unfortunately, one of the more significant limitations of utilitarianism becomes apparent here. Without a precedent for reference, or the ability to confidently predict outcomes, we can manipulate ethical judgment by stipulating whatever criteria, circumstances, and conditions are necessary to yield a desired result.

Given all that has been said, can the proliferation of artificial intelligence ever truly be ethical? Ironically, it seems that acknowledging our ignorance and discontinuing further development is the first demand of an ethical approach to the proliferation of artificial intelligence, regardless of which moral philosophy is used to assess the situation. Future generations can then resume development of artificial intelligence when:

  • we can exercise prudence (curtailing our temerity and acknowledging our limitations) and develop the courage to do what is necessary even when it contrasts with what we want;
  • we can ensure that, in developing an artificial intelligence, we do not inadvertently create a rational being; and
  • our knowledge and understanding are sufficient to predict, accurately and with confidence, what consequences will realistically follow from the proliferation of AI, ensuring our chosen course of action always results in a maximum benefit and a minimum detriment.

The problem with this – as with many ethical dilemmas – is that it juxtaposes what we should do with what we want to do. History has shown, time and time again, that humanity will prioritize convenience, innovation, or greed over social responsibility and its moral sense, regardless of how moral judgment is made. Whether it be the Agricultural Revolution, the Industrial Revolution, the present reluctance to adopt a more sustainable lifestyle despite a looming existential threat, or any one of the many other instances of our impetuousness and hubris, humanity consistently chooses what it desires – convenience, profitability, acclaim, and novelty – rather than what is dutiful (as Kant might have put it), and one cannot expect the proliferation of artificial intelligence to be an exception.

Bearing this in mind, it is unlikely that we will see this idealized approach become our reality. Despite the potential for benefit, and for all the good that artificial intelligence may do, the impact of even the less extreme risks could be catastrophic and would surely outweigh any benefits. More importantly, our lack of knowledge and understanding surrounding artificial intelligence and consciousness leaves us fundamentally at a loss to answer some of the most basic questions about AI technology – such as its status as a rational being – which may preclude an appropriate moral assessment in the first place. Regardless, our present approach to the development and implementation of artificial intelligence remains at odds with ethical philosophy from multiple perspectives, and a moral justification for the proliferation of AI is, for all intents and purposes, untenable at present.

Endnotes

1 One may perhaps speculate that we could avert this scenario by implementing ethical reasoning, hard-coded failsafe decisions, or some other means of interrupting the logic, but this comes with its own set of problems and uncertainties. A post-Singularity AI could plausibly alter its own codebase to override these protections; other decisions are more nuanced or require an Ethics of Care approach (arguably inaccessible to an AI, which may lack the capacity for “caring”) on a case-by-case basis. Such a solution would also require a well-defined logical model for ethical reasoning, agreeable to all persons and capable of being applied universally by the AI; as moral philosophers have, after almost three millennia, made no significant progress on developing such a framework, this undertaking is simply beyond current capabilities.

2 Climate change already has a wide range of proposed solutions comparable to what we expect from an AI-generated solution (Turrentine). Arguably, the most effective and most certain solution (at least to prevent further damage) is the complete cessation of emissions, which would likely allow the climate and ecosystem to restabilize over the course of several decades (Moseman and Sokolov). Although this is within the realm of possibility, it is usually pre-emptively disregarded as untenable because of the demand it places on humanity to actively and collectively adopt new lifestyles. There is a separate moral dilemma here which needs to be addressed in its own right.

References

Aristotle. Nicomachean Ethics (Loeb Classical Library). Trans. H. Rackham. Vol. LCL 73. Cambridge: Harvard University Press, 1934.

Bashir, Noman, et al. “The Climate and Sustainability Implications of Generative AI.” MIT, 2024. 13 April 2025. https://mit-genai.pubpub.org/pub/8ulgrckc/release/2.

Bengio, Yoshua, et al. “International Scientific Report on the Safety of Advanced AI: Interim Report.” 2024. April 2025. https://www.gov.uk/government/publications/international-scientific-report-on-the-safety-of-advanced-ai.

Council on Foreign Relations. “How to Lower Energy-Sector Emissions.” 20 September 2024. Council on Foreign Relations. 3 May 2025. https://education.cfr.org/learn/reading/energy-sector-emissions.

DuckDuckGo. Mistral Small (3) [Large Language Model]. 2025. 13 April 2025. https://www.duck.ai.

Hao, Karen. “This is how AI bias really happens—and why it’s so hard to fix.” MIT Technology Review 4 February 2019. 6 April 2025. https://www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/.

Harari, Yuval Noah. Sapiens: A Brief History of Humankind. New York: HarperCollins Publishers, 2015.

Heikkilä, Melissa. “Nobody knows how AI works.” MIT Technology Review 5 March 2024. 11 April 2025. https://www.technologyreview.com/2024/03/05/1089449/nobody-knows-how-ai-works/.

Jahan, Sarwat and Ahmed Saber Mahmud. “What Is Capitalism?” 2025. International Monetary Fund. 9 May 2025. https://www.imf.org/en/Publications/fandd/issues/Series/Back-to-Basics/Capitalism.

Jeevanandam, Nivash. “What is AI Singularity: Is It a Hope or Threat for Humanity?” 19 November 2024. Emeritus. 18 April 2025. https://emeritus.org/in/learn/what-is-ai-singularity/.

Kant, Immanuel. Groundwork for the Metaphysics of Morals (Oxford World’s Classics). Trans. Christopher Bennett, Joe Saunders and Robert Stern. Oxford: Oxford University Press, 2019.

McClain, Colleen, et al. “How the U.S. Public and AI Experts View Artificial Intelligence.” Pew Research Center, 2025. 12 April 2025. https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/.

Mill, John Stuart. Utilitarianism. Ed. George Sher. 2nd. Indianapolis: Hackett Publishing Company, Inc., 2001.

Moseman, Andrew and Andrei Sokolov. “How long will it take temperatures to stop rising, or return to ‘normal,’ if we stop emitting greenhouse gases?” 19 December 2024. MIT Climate Portal. 31 May 2025. https://climate.mit.edu/ask-mit/how-long-will-it-take-temperatures-stop-rising-or-return-normal-if-we-stop-emitting.

OpenAI. “Planning for AGI and beyond.” 24 February 2023. OpenAI. 12 April 2025. https://openai.com/index/planning-for-agi-and-beyond/.

Rafferty, John P. “The Rise of the Machines: Pros and Cons of the Industrial Revolution.” 30 September 2017. Britannica. 5 April 2025. https://www.britannica.com/story/the-rise-of-the-machines-pros-and-cons-of-the-industrial-revolution.

Southern, Matt G. “Top 5 Ethical Concerns Raised by AI Pioneer Geoffrey Hinton.” 2 May 2023. Search Engine Journal. 7 April 2025. https://www.searchenginejournal.com/top-5-ethical-concerns-raised-by-ai-pioneer-geoffrey-hinton/485829/.

The Software Development Blog. “How Does AI Learn? Demystifying Training Data, Algorithms, and Models.” 12 July 2024. The Software Development Blog. 6 April 2025. https://blog.sdetools.io/how-ai-learns/.

Turrentine, Jeff. “What Are the Solutions to Climate Change?” 13 December 2022. Natural Resources Defense Council. 24 May 2025. https://www.nrdc.org/stories/what-are-solutions-climate-change.
