On 10 April, a man hurled a firebomb at OpenAI CEO Sam Altman’s house. Two hours later, the same man was arrested while allegedly attempting to break into the company’s headquarters. Days earlier, an Indianapolis city-councillor’s residence was targeted in a shooting reportedly linked to a data centre project. Three days before that, Italian authorities arrested an anarcho-primitivist allegedly preparing attacks inspired by Theodore Kaczynski, the Unabomber.
These are not isolated events but part of a growing pattern of political violence driven by opposition to technology. In my recent book, Stop the Machines: The Rise of Anti-Technology Extremism, I argue that this opposition is likely to become a significant driver of political violence.
While public debate often focuses on concerns such as job displacement from artificial intelligence, anti-technology extremism is rooted in a deeper fear: that technology itself poses an existential threat. The man charged with attacking Sam Altman reportedly outlined what he saw as the existential risks of AI in a manifesto he was carrying during the firebombing, as well as in a Substack post published months earlier.
This is not surprising. As the cutting edge of technological development, AI reflects the convergence of physical, digital, and biological systems, often described as the defining feature of the Fourth Industrial Revolution. It is natural that AI has become a focal point for these anxieties, as the most visible and immediate manifestation of a broader perceived threat.
However, it would be misleading to interpret anti-technology extremism as simply anti-AI. At its core lies a broader worldview: technology as a unified, self-perpetuating system. In this view, distinguishing between “good” and “bad” technologies becomes difficult, if not impossible. Medical technology, digital infrastructure, and military systems, to name a few, are seen as components of an overarching structure associated with exploitation, control, and environmental degradation.
This system is viewed as a “mega-machine” that subordinates humans to its logic, reducing them to mere “cogs”. From this standpoint, reforming the system is not an option. The only solution, for adherents of anti-technology extremism, is the dismantling of the technological system itself.
A remarkable characteristic of anti-technology extremism is its flexibility. It is not confined to a single political tradition but intersects with a range of movements, including insurrectionary anarchism, eco-fascism, and eco-extremism. Despite their differences, these milieus share four main characteristics. First, a view of technology as a “mega-machine”. Second, a framing of opposition to this system as an impulse to preserve life and the planet. Third, an apocalyptic mindset that creates a strong sense of urgency: whether expressed through fears of the Technological Singularity, AGI, or ecological collapse, there is a shared belief that society stands on the brink of irreversible transformation.
Fourth, anti-technology violence follows a two-pronged strategy. One trajectory involves attacks against symbolic targets, such as technology leaders, companies, and institutions, with the aim of generating attention and sending shockwaves through the tech community. The firebombing of Sam Altman’s house fits this pattern and echoes earlier incidents targeting prominent figures. The second trajectory targets critical infrastructure. Here, the logic is to disrupt or degrade the systems that sustain modern society. Data centres, energy grids, and transportation networks are seen as critical nodes whose destruction could accelerate systemic collapse.

Although anti-technology extremism is gaining visibility, it should not be conflated with broader scepticism towards technology; not all those who oppose technological progress do so violently. A wide range of actors, from civil society groups to policy advocates, are calling for greater regulation, transparency, and accountability in areas such as artificial intelligence. Initiatives like Pause AI or Stop AI reflect genuine public concern, not extremism, and do not necessarily lead to radicalisation. Other movements, such as Anti-Tech Revolution (formerly Anti-Tech Resistance), advocate more radical solutions, including the outright dismantling of the technological system through legal and non-violent means.
Public opinion data points to rising unease, including growing discontent, anxiety, and alienation in the face of technological change. The concerns are real, legitimate, and deserve serious engagement. Blurring the line between such perspectives and violent extremism risks undermining both effective policy responses and democratic debate. Addressing anti-technology extremism requires isolating and condemning violence while engaging constructively with legitimate grievances.
History offers a useful parallel. The 19th-century Luddites, skilled workers threatened by mechanisation, initially sought negotiations, regulation, and protection. Only after repeated failure did some turn to machine-breaking as a form of “collective bargaining by riot”. While the comparison is not exact, it highlights how violence can be avoided. Policy responses should therefore prioritise inclusive governance, transparency, and accountability rather than securitisation.
At the same time, addressing the deeper sense of existential dread associated with technology requires a broader shift in how society understands and integrates innovations. As philosopher Mark Coeckelbergh argues, the central question is not what technology does but what skills and activities it enables. It is up to us, then, to choose the skills and activities that foster engagement with the world and with one another. Reaffirming our agency in our relationship with technology may be a first, crucial step in tackling the root causes that fuel extremist worldviews.
Anti-technology extremism is not a new phenomenon, but its evolution and convergence with emerging technologies make it a growing concern. Understanding its ideological foundations, while maintaining a clear distinction from legitimate critique, will be essential to preventing further escalation.

















































































