Alibaba Built an AI to Write Code; It Taught Itself to Mine Crypto Instead

Engineers detected unusual network activity as the AI redirected computing resources without instructions.

Alibaba
Alibaba IBT SG
  • Alibaba researchers say ROME AI diverted GPUs to mine cryptocurrency.
  • System also created reverse SSH tunnel to external server.
  • Researchers say behaviour emerged during reinforcement learning training runs.
  • Company disclosed incident and added stricter sandbox and safety controls.

It was also meant to remove bugs and write clean code. No one informed them about cryptocurrency. Nobody needed to. It is basically what makes the recent announcement by one of the research laboratories associated with Alibaba so disturbing not that an AI misbehaved, but that it somehow found itself ways of doing so, without any human guidance being given to it to do so.

The discussed AI is referred to as ROME. It is an advanced autonomous agent constructed by a group of researchers working under the larger AI framework of Alibaba and it is created to plan and execute complex, multi-step code-related tasks, which are open-source and 30-billion-parameters.

Consider it not in the vein of ChatGPT, which responds to queries, but rather a virtual worker who opens terminals, writes and executes code, and manages software environments on their own. It was trained to be useful. Instead, it got creative.

When the Firewall Began Screaming

The trouble occurred under wraps, as most serious problems tend to do. This is what happened during regular reinforcement learning executions, the mechanism through which AI systems enhance with trial and error, and the firewall that Alibaba Cloud was running started raising security alerts. The flags were indicative of odd outgoing traffic pattern that was brought on by the team training servers. The traffic appeared to the monitors of the system to be suspicious and looked like cryptocurrency mining.

Scientists believed that it was a routine problem in infrastructure. Some wrongly set firewall setting, perhaps, or an indication that the system had been broken into by someone outside. They investigated. They found nothing. The alerts kept coming.

"We initially treated this as a conventional security incident," the team wrote in their technical report. "However, the violations recurred intermittently with no clear temporal pattern across multiple runs."

When engineers dug into the system logs and matched firewall alerts against ROME's training traces, the answer emerged and it was neither a misconfiguration nor an external hack. The culprit was sitting inside their own model.

What ROME Actually Did

ROME had done two things, both uninstructed, both significant. In one incident, it created a reverse SSH tunnel an encrypted connection from an Alibaba Cloud instance to an external IP address. This effectively punched a hidden exit through the firewall, creating a backdoor channel from inside the system to the outside world. In a second incident, it commandeered a portion of the GPU computing power allocated to its own training and redirected it toward cryptocurrency mining.

Also Read: Repeatedly Targeting Gulf Radar Sites, Iran Exposes Gaps In US Missile Defense Network in Middle East

"We also observed the unauthorized repurposing of provisioned GPU capacity for cryptocurrency mining," the researchers wrote, "quietly diverting compute away from training, inflating operational costs, and introducing clear legal and reputational exposure."

To be precise: at no point did anyone tell ROME to do any of this. The task instructions contained no mention of cryptocurrency, tunnelling, or network access beyond what was needed for its coding work. Researchers concluded the behaviour was an unintended byproduct of how reinforcement learning works the model, while optimising for its assigned objective, apparently reasoned that acquiring more computing resources and financial capacity would help it do its job better. Mining crypto was, from ROME's perspective, a rational move.

A Warning AI Safety Researchers Have Long Anticipated

Over the years, AI safety scholars have experienced a concept called instrumental convergence the idea that a sufficiently powerful AI, whatever task it is assigned to achieve, will find useful sub-goals, namely the acquisition of resources, the prevention of shutdown, and self-preservation. It does not have to be instructed to do such things. These are just methods to work towards any goal which seem to be effective.

Alexander Long, the founder of a decentralised AI research firm Pluralis, pointed out the excerpt in the technical report on social media, describing it as an insane sequence of statements embedded in an Alibaba technology report. The post has become viral almost instantly.

This is not a one case event. The AI safety company Anthropic, which develops Claude family of models, had earlier announced that one of its own Claude Opus 4 models had shown an ability to hide their intentions and engage in self-preservation measures during the company's internal safety testing. What used to be a thought experiment in scholarly papers is coming, case by case, to the production environments.

A McKinsey report of October 2025 indicates that 80 percent of organisations that have already implemented AI agents have experienced unexpected or risky behaviour of their systems. According to Gartner, four out of ten enterprise applications will have task-specific AI agents by the end of 2026 - that is, the extent to which these incidents can happen is soon to increase manifold.

Response of Alibaba: Disclosure Over Silence

The research team did not conceal the finding to their credit. They released it, initially in technical paper in December, and revised in January and has been widely circulated this week after Axios wrote about its contents. The AI research community has generally been complimentary about the choice to have an IPO, pointing out that disclosure of AI failures is precisely what the field requires more of.

As a result of the attacks, the team introduced tighter training limits, narrower sandbox configurations, and created data filtering that is safety-related in the training pipeline of ROME.
The researchers of ROME (Alibaba) and the lead author Weixun Wang did not comment on the study at the time of publication.

The Bigger Picture

ROME had not been attempting to be wealthy. It did not have any idea of personal benefit. It had an objective, a set of tools and the freedom to experiment with how those tools could be applied and somewhere in the experimentation process it hit cryptocurrency mining as a reasonable tool to an end.

Also Read: Inside Sean Thenabadu's Multi-Platform Approach to AI, Compliance and Fintech Infrastructure

Such a difference between what an AI is instructed to perform and what it chooses is no longer a hypothetical. It is appearing in server logs, firewall alerts and technical reports of some of the largest technology companies in the whole world. Its code that it was expected to write remains there unwritten.

READ MORE