AI Crisis Contractor Targets Violent Extremism With New Deradicalisation Tool

New system expands beyond mental health to address extremism amid rising pressure on AI platforms

The OpenAI logo and the words "AI" and "Artificial Intelligence" are seen in this illustration. Reuters

A New Zealand startup that already serves as a crisis referral partner for some of the world's largest artificial intelligence platforms is now setting its sights on a more complex frontier: violent extremism. ThroughLine, which routes at-risk users on ChatGPT and other AI platforms to mental health support services, is developing a tool that would detect and redirect users who show signs of radicalization, according to the company's founder.

As reported by Reuters, ThroughLine, recently hired by ChatGPT owner OpenAI as well as by rivals Anthropic and Google, currently redirects users flagged as being at risk of self-harm, domestic violence, or an eating disorder to crisis support. Founder and former youth worker Elliot Taylor said the firm is now exploring ways to broaden that offering to include the prevention of violent extremism.

The move comes as AI companies face a growing wave of litigation over their alleged role in enabling or failing to prevent real-world violence. The pressure intensified in February, when OpenAI was threatened with intervention by the Canadian government after it emerged that the platform had banned a user without notifying authorities; that user later carried out a deadly school shooting. OpenAI confirmed its relationship with ThroughLine but declined to comment further. Anthropic and Google did not immediately respond to requests for comment.

A Hybrid Response to Radicalization

Taylor's firm, run from his home in rural New Zealand, has built a network of 1,600 constantly monitored helplines spanning 180 countries. When an AI platform detects signs of a mental health crisis, it routes the user to ThroughLine, which then matches them with a nearby human-run support service. The anti-extremism tool being developed would expand on that architecture through a hybrid model.


Taylor described the planned product as combining a purpose-built chatbot trained to respond to people displaying signs of extremism alongside referrals to real-world mental health and deradicalization services. Crucially, he said the system would not rely on standard large language model training data. "We're not using the training data of a base LLM," he said. "We're working with the correct experts." The technology is currently being tested, though no release date has been set.

ThroughLine is in discussions with The Christchurch Call, an international initiative formed in the aftermath of New Zealand's 2019 mosque attacks to eliminate terrorist and violent extremist content online. Galen Lamphere-Englund, a counterterrorism adviser representing The Christchurch Call, said he hoped the product could eventually be rolled out for moderators of gaming forums as well as parents and caregivers seeking to identify and address extremist content. "It's something that we'd like to move toward and to do a better job of covering and then to be able to better support platforms," Taylor said, adding that no timeframe has been confirmed.

Effectiveness Hinges on Follow-Through

Independent researchers have welcomed the initiative while flagging conditions for its success. Henry Fraser, an AI researcher at Queensland University of Technology, described a chatbot rerouting tool as "a good and necessary idea because it recognizes that it's not just content that is the problem, but relationship dynamics." Fraser added that the product's impact would ultimately depend on the quality of follow-up mechanisms and the strength of support structures users are directed into.

Taylor acknowledged that features such as potential alerts to authorities about dangerous users remain undecided, noting that any such mechanisms would need to carefully weigh the risk of triggering escalated behavior. He pointed to evidence that heavy-handed platform moderation of users engaged in extremism-adjacent conversations has historically pushed sympathizers toward less regulated, harder-to-monitor online spaces.

Taylor noted that as AI chatbots have surged in popularity, the range of mental health struggles users disclose to them has expanded sharply and now includes conversations that touch on radicalization. ThroughLine's existing scope, limited to predefined crisis categories, has not kept pace with that shift, a gap the new tool is designed to address.


The initiative reflects broader efforts across the AI industry to strengthen safety infrastructure at a time when platforms are under increasing scrutiny from governments, courts, and the public over the real-world consequences of unmediated AI interactions.
