
The Rabbit Hole


Category: Ethics

Virtue, Vehicle, and Status Quo

Posted on December 19, 2025 - December 19, 2025 by rabbitrunriot

Mutualism is the natural mode of being for people: our natural social philosophy, more or less. We can look at the natural dynamics of hunter-gatherer societies, past and present, to confirm this[1]. When the competition, fabricated scarcity, and synthetic social divisions that collectively plague modern life under liberal capitalism are eliminated, what remains — mutualism — is our natural social dynamic. Capitalism and liberalism have stolen that from us but there are plenty of intellectual exposés and angry rants that address this topic so we’re not going to waste more time on that[2].

Instead, we’re going to get meta.

There are virtues we are naturally born valuing — liberty, loyalty, compassion, reason, and truth — and then there are the vehicles society tells us are available to express those virtues — social honors, traditions, contracts, etc. Somewhere along the line, mainstream society lost sight of the distinction between these virtues and the vehicles through which they are expressed.

As an example, modern society tends to laud soldiers. Veterans of the Allied armies of World War II might be respected for holding antifascist values — values which united them in their cause. The honor of being a soldier, in that era, was earned by demonstrating those virtues on the battlefield. In contrast, today, soldiers expect respect for being members of the military, regardless of the virtues and values they hold and the struggles they’ve faced. The essence of “soldier” has lost its substance of “virtue.”

For those who can see the distinction between the values and their vehicles, whether inherently or after some effort, and who dare to question whether those vehicles still effectively express the values they allegedly represent, there is a choice to be made. Do you accept the lie, or go against the grain?

Choosing to go against the grain — to be a genuine expression of those values which mainstream society has lost sight of in favor of vainglorious pursuits — will always be the more difficult path. At certain points in your life, it may be necessary to tread the middle way between them. But don’t get trapped there.

Whatever your virtues and core values are, if you participate in a society that caricatures those values in vehicles whose essence has become devoid of substance, you are submitting to living a lie at a fundamental level. Every day, when you wake up, you will more or less be lying to yourself when you tell yourself you are satisfied and happy with your life — somewhere, deep down, you will know that’s bullshit.

You’ll try therapy, drugs, alcohol, sports, sex… and everything will make you feel good… for a brief period. But nothing can sustain satisfaction and contentment because you’re living contrary to the principles you prize most. If you value truth, as most people naturally do, and your fundamental modus operandi is effectively a lie you try to sell yourself on every day, how do you think that’s going to work out psychologically?

The “powers that be” (which are not a monolith, mind you; they are not always coordinating and cooperating, but they draw from the same well of power and influence) have long since discovered the benefit of attaching the virtues to social constructs so that the latter can be weaponized for the purpose of controlling the masses. To that end, everything from armies to churches has been used to manipulate the will of the masses and direct it towards agendas they might otherwise oppose were their values, virtues, and ideals laid out plainly.

This illusion of simile — that the essence of some social construct is synonymous with and inseparable from its virtuous substance — is part and parcel of the banality of evil, as Hannah Arendt would have seen it[3]. It is our responsibility, as revolutionaries, to take note of where these false equivalencies have been institutionalized, and separate the virtue from the vehicle once again so that such atrocities do not repeat themselves. This can be a difficult process and we often find that we are doing a bit more than just separating the wheat from the chaff, proverbially speaking.

In some cases, in fact, the intertwining of virtue and vehicle is so convoluted that separating the two for effective critique and evolution cannot be done without creating conflicts. These conflicts can all be boiled down to whether or not a virtue will be expressed in the context of the social lie[4] …or in spite of it. In the case of the latter, there is an implicit challenge in facing the resistance and derision of the people and institutions that will look down on the individual who questions the status quo. Those with the courage to persist will find that they are often “outcasts” and “pariahs,” and may find that their convictions cost them family, friends, and success in the world. However, the alternative — accepting the lie — creates an internal disharmony rooted in a sense (however subconscious it may be) that one is “living a lie” and the struggle becomes an internal tempest of depression, anxiety, and unfulfilled dreams of self-actualization.

Because individuals are inclined to define themselves per the status quo of liberal consumerism, when they incorporate these socio-cultural composites of virtue-and-vehicle into their self-definition and self-actualization process, they struggle to separate challenges and critiques aimed at the vehicle from attacks on themselves, personally. Similarly, we associate certain virtues and values with people we respect, making it difficult to separate the virtue from the vehicle when a living example of their cohesion still bears heavily on our experiential worldview.

All of these things cause subjective/objective incongruence and disharmony, the reconciliation of which demands either accepting the subjective reality (“the lie” — the status quo) or rejecting it as a vehicle of virtue and value. The status quo depends on the masses’ inability to separate virtues from their social vehicles, and our psychological needs and processes complicate this process, reinforcing the very paradigms that we revolutionaries seek to challenge and overturn in service to evolution.

We can find a temporary reprieve from the stresses of evolution in certain concessions to the status quo that allow us to “cohabitate” with its proponents. Many revolutionaries, during less polarized times, end up in the punk subculture, for instance. While punk sits outside the mainstream and tends to be “radical and revolutionary,” it has a degree of acceptability as a subculture. However, as time progresses, evolution must continue, and eventually we are forced to confront the disharmony.

The most difficult part of this process, and of actualizing ourselves according to our own values (as opposed to within the context of “the lie”), comes when we must separate the virtue from the vehicle in a way that challenges others around us, whether passively or directly, to at least acknowledge the discrepancy. For those “others” who have depended on the status quo for their sense of stability and security, this may create relationship conflicts.

A child raised by parents who patronize the status quo is going to meet resistance when choosing a life that disregards the status quo or actively questions and opposes it. But these virtues, and the need to express them, are not just a matter of politics and social graces; they are matters of lifestyle choices and values, expressed at every level of our being, actions, and thoughts. So, when that child, raised by parents who have followed the blueprint and believe the path to a respectable and satisfying life is by doing the same, announces that they are venturing out to be a homeless peace activist, the parents are mortified.

“We raised you better than that! Have some class and self-respect! You can’t save the world, shouldn’t you figure out how you want to contribute as a productive member of society?” This rejection of a challenge to the status quo places the burden of reconciling the conflict entirely on the child and, unfortunately, results in broken relationships and deep-seated psychological trauma.

In a group of activists, you are likely to find at least one whose pursuit of justice and liberty has come at a high cost. And none know better how badly western liberalism can destroy a person’s psyche than those who have lost a parent, sibling, or child to the confusion of vehicle-with-virtue. In such a case, the person who refuses to separate vehicle and virtue is effectively insisting that the world’s conception of virtue evolve to suit their needs, and that brings us full circle back to the impacts of western liberalism and consumer culture.

To be continued…

Endnotes

  1. See a more detailed examination of hunter-gatherer society for comparison on LibCom.org.
  2. Criticisms of the social impacts of liberal capitalism and consumerism date from Thorstein Veblen’s brilliant 1899 work, The Theory of the Leisure Class, to Robert Putnam’s contemporary study, Bowling Alone.
  3. See Hannah Arendt’s Eichmann in Jerusalem, which has been published as both an article for The New Yorker and a full-length book.
  4. The social lie (or “the lie”) just refers to the idea that vehicles are inseparable from their virtues — a misconception which helps maintain the supremacy of the status quo.
Posted in Anarchy, Ethics, Psyche, Ramblings | Tagged anarchy, paradigms, perspective, psyche, ramblings, social constructs, status quo, subjective vs objective, the lie, virtue

Kill the cop that sleeps inside you

Posted on December 13, 2025 - December 13, 2025 by rabbitrunriot

In order to fight authoritarianism, you must become a beacon of its antithesis: liberty. Securing personal freedom and autonomy only makes you a liberated person. To be a liberation advocate, you must actively fight to secure the liberty of others. The most difficult part of this process, which most people will never get past (especially since we want people to trust our vision as a form of self-validation), is killing your inner cop.

“Fuck you, I’m not a cop!”

No, that’s not what I mean. Chill out for a minute and let me explain…

Liberation is not just about securing more liberty for yourself and compatriots; it is also about reducing the proliferation of authoritarianism in whatever way we are able. When we claim liberty for ourselves, it becomes an ethical duty to secure and preserve it for others. We tend to focus this fight outward, in our opposition to oppressive social institutions. For all the good this does, though, we tend to ignore the (critical) inner dimensions of our ego and identity.

No matter how much we believe in and prize freedom and egalitarianism, our ego has an innate drive to enforce itself and its identity on the world around us due to the conditioning of the environment (western liberalism) in which it has developed. In other words, the world that shaped us has taught us that self-actualization comes through objectively imprinting your “mark” on the world — domination, expansionism, imperialism — whether as a militant colonizer nation or a cutthroat Wall Street shark makes no difference. This is the life we’re supposed to want in the modern neoliberal west.

The ego’s strong attachment to our identity in the world means that any perceived attack on that identity — our emotions, ideas, and opinions, or our reason, logic, and wisdom; that is, our subjective and objective conception of the world — is taken as a personal attack. The perception of an attack then results in the conception of a conflict and the ego seeks to assert itself and its truth. The act of enforcing its opinion and ideas on the outside world without giving the options for “compromise,” “opting out,” or “agreeing to disagree,” becomes a de facto authoritarian dynamic… your inner cop has come out to assert how big he is.

This builds society in an intrinsically hierarchic way. Where your ego triumphs, you are “above” those people; where it submits, you are “below.” When everyone in a community is stuck in this arrogant and self-righteous cycle of trying to assert themselves as individuals, the community cannot help but tend towards vertical organization. This means that those “below” are naturally positioned to oppose those “above,” and society becomes conflict-oriented and authoritarian organically.

It is, therefore, imperative that we learn to recognize authoritarian tendencies in ourselves, and have the objectivity and humility to call ourselves out. We are responsible, first and foremost, for ourselves. If we cannot embody the behaviors of a liberator, nobody has any reason to believe liberation is possible.

And, crucially, this is not a one-time process. It’s something we must be actively engaging in — every moment of every day. Be mindful of your actions and thoughts — police them yourself (so other people don’t have to, and so you don’t get tempted to police other people). Cultivate humility. Learn to be accepting of failure and faults, and seek compromise and cooperation — always — instead of conflict, schism, and war.

Posted in Anarchy, Ethics, Philosophy | Tagged anarchy, authoritarianism, ideology, inner cop, liberation, paradigms, perspective, philosophy, self-honesty

Domestication and Schism

Posted on December 9, 2025 - December 10, 2025 by rabbitrunriot

When you domesticate a wild animal, you cannot just put it in a cage and expect it to stay content. No matter how much food you give it, it will try to escape and run away. And every time, it will make a greater effort not to be caught again.

To domesticate an animal, you must gain its trust first. Then, the domestication process becomes a continual dialectic. For it to succeed, the animal must feel more satisfied by continuing to participate in the dialectic than by running away. Over time, the animal will become a companion, and that trust becomes a bond.

But if you break the trust, you break the bond. Sometimes it can be mended, if it’s addressed quickly and prioritized accordingly. But the relationship is forever changed, and if there is no amicable solution, schism occurs. Schism can be avoided, however, if the relationship dynamic is allowed to evolve to resolve the inconsistency/incongruence (whether in one of its parties or the dynamic itself); otherwise, the issue that causes schism will become its own unresolved loop.

The idea of winners and losers is a fallacy of modern society: nobody wins in a schism — bonds are broken, enemies can be made, and conflict arises. Conflict sets other patterns in motion as the schismed parties go their separate ways, and then, before the original conflict can be resolved, the fractalized conflicts and loops (“subs”) that spiral out from the schism must all be resolved first.

If the schismed parties are lucky, and they are able to reconnect after resolving their respective “subs,” they may have an opportunity to resolve the conflict again. More likely than not, they will find it again in another party, and will continue to be tested by it, over and over, until they are able to reach an amicable solution rather than a schism.

Thus, when the conflict is over broken trust, it’s essential that both parties see the situation from the subjective perspective of the other, in the context of the objective reality, in order for it to evolve and be preserved. This demands extreme and brutal self-honesty from both parties involved — something which contemporary society, capitalism, liberalism, and consumerism have all played a part in suppressing. And they’ve been quite successful.

This self-honesty must come from walking backwards through the dialectic of influences and dynamics of the inconsistency or incongruence that created the conflict until you arrive at its root. This enables you to find and amend the root problem and break out of the loop. Sometimes we get lucky and have this clarity sort of spontaneously as an epiphany; sometimes it takes time to work out through deconstruction. Either way, identifying it allows us to confront and resolve the issue. Maybe we can’t entirely have it “our way” — but we can absolutely try to synergize with the world instead of fight against it.

Posted in Ethics, Philosophy, Ramblings | Tagged dialectic, gnosis, philosophy, ramblings

Ethics in AI Proliferation

Posted on December 6, 2025 - December 6, 2025 by rabbitrunriot

Tech companies are rushing AI-powered products to launch, despite extensive evidence that they are hard to control and often behave in unpredictable ways. This weird behavior happens because nobody knows exactly how—or why—deep learning, the fundamental technology behind today’s AI boom, works. It’s one of the biggest puzzles in AI. (Heikkilä)

The artificial intelligence (AI) revolution has begun, saddling moral philosophy with a variety of new and unprecedented dilemmas. Despite a great deal of public discussion regarding the ethics of applied AI, there is mostly silence regarding the ethical nature of developing AI in the first place. We often ask what we should and should not do with artificial intelligence while neglecting whether we should develop and use this technology at all. One suspects that this discussion is so often ignored not least because its conclusions do not support our desires.

Artificial intelligence technology is largely based on mimicking the human nervous system. It is, therefore, unsurprising that machine learning also mirrors its human archetype. Skills and knowledge are acquired by experience and repetition and, for artificial intelligence, that experience comes from training data. As the foundation of knowledge for AI, the diversity and integrity of this training data determine the AI’s capabilities, opinions, perceptions, and understanding of information – for better or worse (The Software Development Blog). Artificial intelligence is no more immune to “bad” training data than a toddler is to picking up undesirable behaviors when exposed to them. Researchers have already confirmed that patterns uncovered in training data can and do result in native biases that affect operation, often surfacing in unpredictable ways (Hao).

This is a valuable opportunity to appreciate how little we know and understand – not just about the world around us, but about our own technology – and what those limitations imply. With little to no transparency in processing, the logical paths an AI follows to reach a given conclusion based on training data and inputs remain a mystery. This obfuscation severely limits our ability to understand and predict operational behavior under a given set of conditions (Heikkilä).

This uncertainty becomes more problematic in the context of pursuing The Singularity – the creation of artificial general intelligence (AGI). The Singularity represents the point where machine learning outpaces human intelligence, advancing exponentially faster and leaving humanity in the proverbial stone age (Jeevanandam). This is, of course, problematic because humanity will be even more crippled by the inability to anticipate and keep pace with advancing AI technology.

There is an apparent pattern in the last eight millennia of human history where innovation initially promises some immense benefit before rendering serious consequences. These consequences often come in the form of secondary problems that result in worse situations in general for humanity. We can trace this pattern all the way back to the agricultural revolution:

Scholars once proclaimed that the agricultural revolution was a great leap forward for humanity [… and as] soon as this happened, they cheerfully abandoned the grueling, dangerous, and often spartan life of hunter-gatherers[. …] That tale is a fantasy. […] Rather than heralding a new era of easy living, the Agricultural Revolution left farmers with lives generally more difficult and less satisfying than those of foragers. Hunter-gatherers spent their time in more stimulating and varied ways, and were less in danger of starvation and disease. The Agricultural Revolution certainly enlarged the sum total of food at the disposal of humankind, but the extra food did not translate into a better diet or more leisure. Rather, it translated into population explosions and pampered elites. The average farmer worked harder than the average forager, and got a worse diet in return. (Harari 78-79)

Similarly unintentional, but nonetheless harmful, consequences follow from many events that are remembered fondly as milestones of progress. The Industrial Revolution facilitated a great deal of convenience, profit, and innovation, but it also resulted in overcrowded cities, pollution and environmental damage, the abuse and exploitation of workers, and the proliferation of unhealthy lifestyles (Rafferty). The discovery of fossil fuels offered humanity a reliable and abundant fuel source, but their proliferation is a known primary catalyst of climate change (Council on Foreign Relations).

All other conditions aside, there is a common theme of innovative capacity and zeal outpacing knowledge and understanding. Our ignorance persistently cripples our ability to predict impacts and outcomes. Hubris wins out and these innovations forever alter our way of life. With regard to the creation of artificial intelligence, humanity must once again choose between temerity and prudence.

Hasty innovation and reckless implementation of any new technology is undeniably an expression of excessive courage; what Aristotle called the vice of rashness. “The courageous man,” writes the philosopher, “is he that endures or fears the right things and for the right purpose and in the right manner and at the right time, and shows confidence in a similar way” (Aristotle 159 (III.vii.5)). Simply being able to describe the development and implementation of AI as “hasty” and “reckless” seems to violate the primary characteristics of this courageous fear, but we can be more objective. The historical pattern of impetuous innovation suggests a persistent failure to fear the right things in the right way at the right time – whether by choice or circumstance. Rather than fear the possible consequences of artificial intelligence, we confidently persevere in spite of them. Aristotle continues: “he who exceeds in confidence [in the face of fearful things] is rash. […] The rash, moreover, are impetuous […]” (Aristotle 159, 161 (III.vii.7,12)).

The hubris that seems to drive this temerity is, itself, an exercise in vanity and empty pride. Similar to rashness, these vices – being excesses of magnanimity and ambition – are deviations from the right expression of their corresponding virtues: “it is possible to pursue honor more or less than is right and also to seek it from the right source and in the right way” (Aristotle 229 (IV.iv.2)). This “right-versus-wrong” dynamic, for Aristotle, is essential to maintaining a virtuous character. To that end, he introduces the intellectual virtue of prudence, or “practical wisdom,” to facilitate this understanding.

Of prudence, Aristotle tells us that “it is a truth-attaining rational quality, concerned with action in relation to things that are good and bad for human beings” (Aristotle 337 (VI.v.4)); of prudent people, that “they possess a faculty of discerning what things are good for themselves and for mankind” (Aristotle 339 (VI.v.5)). The explicit mention of discerning things that are good for mankind is particularly relevant. Our rash, vain, and prideful approach to the proliferation of artificial intelligence violates that criterion, instead prioritizing what is convenient, innovative, profitable, and worthy of acclaim, even if it is potentially detrimental. Neither our rashness nor our vanity and hubris reflect any practical wisdom in this situation.

Kant tells us that we must treat all rational beings as an end unto themselves (Kant 42 (4:429)). Kant does not explicitly define the characteristics of a rational being but suggests throughout his work that will, reason, and freedom are essential qualities that differentiate rational beings from their non-rational counterparts. Rational beings are uniquely endowed with the ability to rationalize, the freedom to choose a course of action, and a personal will to enact it; they are self-determinant (Kant 27 (4:412)).

Upon achieving the Singularity, engineers will have created a true rational being by Kant’s standards, although some may insist that AI is already a rational being. That is a metaphysical and ontological question beyond the scope of the present discussion, but it is essential to a Kantian analysis of the ethics surrounding the proliferation of artificial intelligence. Nonetheless, the present assessment will presume artificial intelligence to be a rational being: AI at least has the potential to become one, and while it is not immoral to hold a non-rational being in high moral regard (treating it as an end in itself), the inverse (treating a rational being only as a means) cannot be excused.

For Kant, “an action from duty has its moral worth […] in the maxim according to which it is decided upon” and “duty is the necessity of an action out of respect for the [categorical imperative]” (Kant 15-16 (4:399-400)). As rational beings must be treated as ends, the only valid maxim for creating one is that it may pursue its own ends by whatever means it chooses. Would it always and universally be desirable to create such a superintelligent, rational being to pursue its own ends – including, possibly, an agenda that is hostile to humanity – when that technology is available to us? The answer is, of course, “absolutely not,” as this would also demand that we create such a being, even if it is guaranteed to be hostile to humanity. A simple but effective categorical imperative in this situation may therefore demand that we never create something that could be (or become) a rational being, including artificial intelligence. It then becomes duty to adhere to this principle – this law – for action to be morally justified.

But what if the benefits of creating such a superintelligent rational being could be said to justify violating this precept? John Stuart Mill believed that “[a]ll action is for the sake of some end, and rules of action […] must take their whole character and color from the end to which they are subservient” (Mill 2). In an almost Epicurean appeal, Mill explains how his greatest happiness principle – that “actions are right in proportion as they tend to promote happiness; wrong as they tend to produce the reverse of happiness” (Mill 7) – is firmly rooted in the belief that “pleasure and freedom from pain are the only things desirable as ends; and that all desirable things […] are desirable either for pleasure inherent in themselves or as means to the promotion of pleasure and the prevention of pain” (Mill 7). Additionally, Mill prescribes varying degrees of pleasure and pain (Mill 8-10), compounding the complexity of assessment and demanding that certain ends be treated with more or less priority and significance than others.

Geoffrey Hinton, the renowned “godfather of AI,” shocked the world when he publicly disavowed his life’s work to speak out against the proliferation of artificial intelligence. Hinton believes machines will surpass human intelligence and our capacity for learning, ultimately displacing humans as the dominant intelligence. Hinton also warns of less existential but still cataclysmic threats, ranging from exploitation by malicious actors to impacts on the job market (Southern). And Hinton is not alone: according to a Pew Research study, an overwhelming majority – just shy of 90 percent – of the US public has at least some concern about the risks of artificial intelligence. Additionally, for 51 percent of Americans, that concern outweighs any excitement or optimism (McClain, Kennedy and Gottfried).

In a report published by the British government (Bengio, Mindermann and Privitera), the many concerns surrounding the development and implementation of artificial intelligence are detailed across three classes:

  • Malicious use: the use of AI to spread disinformation, manipulate public opinion, and enhance malicious actors’ existing cyber offensives.
  • Malfunctions: functionality issues in which an AI operates outside of expectations, ranging from bias in AI models to loss-of-control (“rogue AI”) scenarios.
  • Downstream systemic impacts: secondary consequences of implementation, including disruptions to the labor market and economy, environmental impact, privacy challenges, etc.

That artificial intelligence “going rogue” is considered a very real possibility should be alarming, and the authors of the report note that ignorance of our own technology is not least to blame (Bengio, Mindermann and Privitera). This is, however, not the only threat; even less extreme possibilities – no less nurtured by technological ignorance – could have potentially disastrous consequences. Hackers may exploit a power grid with an AI-enhanced attack to hold entire segments of the population at ransom; large businesses may adopt AI-driven robots to recover considerable profit margins by replacing human labor, resulting in massive layoffs, rampant unemployment, and economic stagnation; an erroneous logical path in a healthcare AI may result in prescribing the wrong dose or medication to millions of patients around the world. These scenarios are not the existential crisis of an AI that develops an agenda to exterminate humanity, but they are nonetheless significant enough to cause widespread disruption and catastrophe.

There is, however, a much more fundamental problem surrounding artificial intelligence. Despite little public acknowledgment of the issue, artificial intelligence is wholly unsustainable. The environmental impact and resource demands of artificial intelligence mean that it is a net contributor to existing environmental and energy crises (Bashir, Donti and Cuff). This largely goes ignored in favor of emphasizing the potential for benefit:

As with many large-scale technology-induced shifts, the current trajectory of [generative AI], characterized by relentless demand, neglects consideration of negative effects alongside expected benefits. This incomplete cost calculation promotes unchecked growth and a risk of unjustified techno-optimism with potential environmental consequences, including expanding demand for computing power, larger carbon footprints, shifts in patterns of electricity demand, and an accelerated depletion of natural resources. This prompts an evaluation of our currently unsustainable approach toward [generative AI’s] development, underlining the importance of assessing technological advancement alongside the resulting social and environmental impacts. (Bashir, Donti and Cuff)

OpenAI – one of the leading private innovators of AI technology – has also acknowledged that AI carries an inherent risk. However, OpenAI maintains that artificial intelligence “has the potential to give everyone incredible new capabilities” and “elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility” (OpenAI). And while this sounds remarkable, it amounts to little more than a collection of buzzwords that offer little (if any) justification for the proliferation of a potentially harmful new technology.

For a more detailed account of the benefits of artificial intelligence, it seemed apropos to give artificial intelligence an opportunity to advocate for itself. As if it were aware of the opportunity, the chatbot offered concrete details about benefits that AI might provide, split between those benefits that have already manifested and those expected in the future.

Benefits already rendered by applied artificial intelligence include improved diagnosis and treatment in healthcare, autonomous vehicles, enhanced climate modeling technologies, and AI-powered chatbots (DuckDuckGo). Potential future benefits include further healthcare improvements (e.g., disease prediction, robotic surgery), energy and sustainability solutions with AI-optimized smart grids, the development of smart cities with AI-enhanced infrastructure, and the emergence of precision farming and automated harvesting in agriculture (DuckDuckGo).

It is noteworthy that, although these benefits could potentially contribute to solutions for some of humanity’s biggest problems (the climate and energy crises, for example), there is no implication of a comprehensive “solution for everything”; in fact, the chatbot seemed agnostic about the expectations humanity has for artificial intelligence. Despite romanticization by the media, gearheads, and zealous Silicon Valley visionaries, AI may simply not be the key to solving our problems. The unrealistic expectation of this kind of near-miracle is yet another side effect of our ignorance: we can be no more certain of the emergence of grand solutions than we are of the potential for catastrophe.

Imagine a scenario in which AI contrives a solution to the climate crisis; this would be an amazing step forward for humanity. But, at the same time, AI has steadily been replacing human workers as businesses realize the immense profit potential of an automated workforce. Businesses that cannot afford to keep up fail, or are absorbed by larger companies, while unemployment soars amid mass layoffs. The economy inevitably grinds to a halt and the chasm between wealthy and poor becomes unbridgeable. Eventually the climate solution is abandoned, having been deemed “not profitable enough to pursue” by an economics AI trained on capital-first ideology – like the IMF’s assertion that “the motive to make a profit” is “the essential feature of capitalism” (Jahan and Mahmud). It should not go unnoticed that, being a machine, an AI would find the total destruction of the planet of little consequence; a machine has no vested interest in sustainability and, as such, no reason to prioritize environmentalism over profit, were it ever given agency in such a circumstance.1

In this scenario, artificial intelligence yields an immense benefit: a solution to the climate crisis. From a utilitarian perspective this might suggest an ethically sound course of action were it not offset by a disastrous consequence. Recalling Mill’s assertion that all actions are not of equal merit, and assuming that guaranteed harm is of greater significance than a possible solution to a problem we already have viable solutions for,2 this course of action results in a net harm and would, therefore, be considered unethical.
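The weighing described above can be caricatured as a toy expected-utility calculation. The function and the numbers here are purely illustrative assumptions (not anything Mill proposed): a large but uncertain benefit, weighed against a comparable harm that is certain, yields a negative balance.

```python
# Toy illustration: a crude expected-utility comparison in which a
# guaranteed harm outweighs a merely possible benefit. All values
# are hypothetical, chosen only to make the weighing concrete.

def net_utility(benefit, p_benefit, harm, p_harm):
    """Expected benefit minus expected harm, on an arbitrary scale."""
    return benefit * p_benefit - harm * p_harm

# A possible climate solution (large benefit, uncertain) versus
# guaranteed economic collapse (large harm, certain).
outcome = net_utility(benefit=100, p_benefit=0.5, harm=100, p_harm=1.0)
print(outcome)  # negative → net harm → judged unethical on this model
```

On this crude model the verdict flips entirely with the probabilities one assumes, which anticipates the manipulation problem raised below: absent any precedent, those probabilities can be chosen to produce whatever answer one wants.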

The opposite is also true. We can imagine a scenario in which artificial intelligence results in the rise of a technological utopia. Such a result is (presumably) beneficial for humanity, both presently and in the future, and represents a net good. Unfortunately, one of the more significant limitations of utilitarianism becomes quickly apparent here. Without a precedent for reference, or the ability to confidently predict possible outcomes, we can manipulate ethical judgment by interpolating any criteria, circumstances, and conditions necessary to yield a desired result.

Given all that has been said, can the proliferation of artificial intelligence ever truly be ethical? Ironically, it seems that acknowledging our ignorance and discontinuing further development is the first demand of an ethical approach to the proliferation of artificial intelligence, regardless of which moral philosophy is used to assess the situation. Future generations can then resume development of artificial intelligence when:

  • we can exercise prudence (curtailing our temerity and acknowledging our limitations) and develop the courage to do what is necessary even when it contrasts with what we want;
  • we can ensure that, in developing an artificial intelligence, we do not inadvertently create a rational being; and
  • our knowledge and understanding are sufficient to predict, accurately and with confidence, what consequences will realistically follow from the proliferation of AI, ensuring our chosen course of action always results in a maximum benefit and a minimum detriment.

The problem with this – as with many ethical dilemmas – is that it juxtaposes what we should do with what we want to do. History has shown that, time and time again, humanity will prioritize convenience, innovation, or greed over social responsibility and its moral sense, regardless of how moral judgment is made. Whether it be the Agricultural Revolution, the Industrial Revolution, the present reluctance to adopt a more sustainable lifestyle despite a looming existential threat, or any one of the many other instances of our impetuousness and hubris, humanity consistently chooses what it desires – convenience, profitability, acclaim, and novelty – rather than what is dutiful (as Kant may have put it), and one cannot expect the proliferation of artificial intelligence to be an exception.

Bearing this in mind, it is unlikely that we will see this idealized approach become our reality. Despite the potential for benefit, and for all the good that artificial intelligence may do, the impact of even the less extreme risks could be catastrophic and would surely outweigh any benefits. More importantly, our lack of understanding and knowledge surrounding artificial intelligence and consciousness leaves us fundamentally at a loss to answer some of the most basic questions about AI technology – such as its status as a rational being – which may preclude an appropriate moral assessment in the first place. Regardless, our present approach to the development and implementation of artificial intelligence remains at odds with ethical philosophy from multiple perspectives, and a moral justification for the proliferation of AI is, for all intents and purposes, untenable at present.

Endnotes

1 One may perhaps speculate that we could avert this scenario by implementing ethical reasoning, hard-coded failsafe decisions, or some other means of interrupting the logic, but this comes with its own set of problems and uncertainties. A post-Singularity AI could reasonably just alter its own codebase to override these protections; other decisions are more nuanced or require an Ethics of Caring approach (this being arguably inaccessible to an AI, which may lack the capacity for “caring”) on a case-by-case basis. Such a solution would also require a well-defined logical model for ethical reasoning, agreeable to all persons, and capable of being applied universally by the AI; as moral philosophers have, after almost three millennia, made no significant progress on developing such a framework, this undertaking is simply beyond current capabilities.

2 Climate change already has a wide range of proposed solutions that are comparable to the solutions expected of AI (Turrentine). Arguably, the most effective and guaranteed solution (at least to prevent further damage) is the complete cessation of emissions, likely resulting in the climate and ecosystem restabilizing over the course of several decades (Moseman and Sokolov). Although this is within the realm of possibility, it is usually pre-emptively disregarded as untenable due to the demand it places on humanity to actively and collectively adopt new lifestyles. There is a separate moral dilemma here which needs to be addressed in its own right.

References

Aristotle. Nicomachean Ethics (Loeb Classical Library). Trans. H. Rackham. Vol. LCL 73. Cambridge: Harvard University Press, 1934.

Bashir, Noman, et al. “The Climate and Sustainability Implications of Generative AI.” MIT, 2024. 13 April 2025. https://mit-genai.pubpub.org/pub/8ulgrckc/release/2.

Bengio, Y., et al. “International Scientific Report on the Safety of Advanced AI: Interim Report.” 2024. April 2025. https://www.gov.uk/government/publications/international-scientific-report-on-the-safety-of-advanced-ai.

Council on Foreign Relations. “How to Lower Energy-Sector Emissions.” 20 September 2024. Council on Foreign Relations. 3 May 2025. https://education.cfr.org/learn/reading/energy-sector-emissions.

DuckDuckGo. Mistral Small (3) [Large Language Model]. 2025. 13 April 2025. https://www.duck.ai.

Hao, Karen. “This is how AI bias really happens—and why it’s so hard to fix.” MIT Technology Review 4 February 2019. 6 April 2025. https://www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/.

Harari, Yuval Noah. Sapiens: A Brief History of Humankind. New York: HarperCollins Publishers, 2015.

Heikkilä, Melissa. “Nobody knows how AI works.” MIT Technology Review 5 March 2024. 11 April 2025. https://www.technologyreview.com/2024/03/05/1089449/nobody-knows-how-ai-works/.

Jahan, Sarwat and Ahmed Saber Mahmud. “What Is Capitalism?” 2025. International Monetary Fund. 9 May 2025. https://www.imf.org/en/Publications/fandd/issues/Series/Back-to-Basics/Capitalism.

Jeevanandam, Nivash. “What is AI Singularity: Is It a Hope or Threat for Humanity?” 19 November 2024. Emeritus. 18 April 2025. https://emeritus.org/in/learn/what-is-ai-singularity/.

Kant, Immanuel. Groundwork for the Metaphysics of Morals (Oxford World’s Classics). Trans. Christopher Bennett, Joe Saunders and Robert Stern. Oxford: Oxford University Press, 2019.

McClain, Colleen, et al. “How the U.S. Public and AI Experts View Artificial Intelligence.” Pew Research Center, 2025. 12 April 2025. https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/.

Mill, John Stuart. Utilitarianism. Ed. George Sher. 2nd. Indianapolis: Hackett Publishing Company, Inc., 2001.

Moseman, Andrew and Andrei Sokolov. “How long will it take temperatures to stop rising, or return to ‘normal,’ if we stop emitting greenhouse gases?” 19 December 2024. MIT Climate Portal. 31 May 2025. https://climate.mit.edu/ask-mit/how-long-will-it-take-temperatures-stop-rising-or-return-normal-if-we-stop-emitting.

OpenAI. “Planning for AGI and beyond.” 24 February 2023. OpenAI. 12 April 2025. https://openai.com/index/planning-for-agi-and-beyond/.

Rafferty, John P. “The Rise of the Machines: Pros and Cons of the Industrial Revolution.” 30 September 2017. Britannica. 5 April 2025. https://www.britannica.com/story/the-rise-of-the-machines-pros-and-cons-of-the-industrial-revolution.

Southern, Matt G. “Top 5 Ethical Concerns Raised by AI Pioneer Geoffrey Hinton.” 2 May 2023. Search Engine Journal. 7 April 2025. https://www.searchenginejournal.com/top-5-ethical-concerns-raised-by-ai-pioneer-geoffrey-hinton/485829/.

The Software Development Blog. “How Does AI Learn? Demystifying Training Data, Algorithms, and Models.” 12 July 2024. The Software Development Blog. 6 April 2025. https://blog.sdetools.io/how-ai-learns/.

Turrentine, Jeff. “What Are the Solutions to Climate Change?” 13 December 2022. Natural Resources Defense Council. 24 May 2025. https://www.nrdc.org/stories/what-are-solutions-climate-change.
