Its report also suggests that reliance on artificial intelligence may leave the emergency services vulnerable to malicious hackers
Terrorists could hijack AI-driven vehicles to carry out mass casualty attacks without the need for a suicide bomber, a UN report warns.
The report, Algorithms and Terrorism: The Malicious Use of Artificial Intelligence for Terrorist Purposes, sets out how emerging AI technologies could be weaponised by extremists.
It highlights the threat of terrorists seizing control of self-driving cars, drones and other automated systems to target crowded public spaces.
“Vehicles, particularly cars, vans and trucks, have long been used in terrorist attacks,” the United Nations Office of Counter-Terrorism warned. It added: “Reflecting on the extensive history of terrorism and vehicles, increased autonomy in cars could well be an amenable development for terrorist groups, allowing them to effectively carry out one of their most traditional types of attacks remotely, without the need for a follower to sacrifice his or her life or risk being apprehended.”
The report warned that AI’s growing role in transport, infrastructure and surveillance creates new vulnerabilities that could be exploited with devastating effect. It outlined how extremist groups could use facial recognition software to target individuals or conduct “swarm” attacks using “slaughterbots”, fleets of co-ordinated unmanned aerial vehicles, to overwhelm defences.
The report also examines how AI could be used to disrupt smart city infrastructure. Traffic management systems, public transport networks and emergency services, which are increasingly reliant on AI, could be hacked to sow chaos and amplify the attacks.
The UN is calling for urgent international action to safeguard AI technologies and prevent their malicious use before terrorists strike.
William Allchorn, a senior research fellow at the International Policing and Public Protection Research Institute, said the findings highlighted the need for Britain’s security services and police to prepare for an AI-directed attack.
He said: “The likelihood of co-ordinated attacks using hijacked or self-made AVs [autonomous vehicles] in the near future, ie five to ten years, is moderate to high and should be on the radar of all national security services and practitioners in the UK as a possible threat. Terrorist groups hijacking AI-driven vehicles to launch mass casualty attacks is a real but currently limited threat, with increasing potential as the technology matures and proliferates.”
The government’s counter-terrorism strategy, Contest, was updated in 2023 to reflect the threat posed by AI.
It said terrorists were likely to exploit the technology to create and amplify radicalising content, propaganda and instructional materials, as well as to plan and commit attacks. The rapid proliferation of end-to-end encryption and the availability of anonymisation tools also offered terrorists the opportunity to communicate without the risk of detection.
However, the document also makes clear the advantages that AI could offer counter-terrorism police and law enforcement. It enabled intelligence agencies to operate in a way that “was not conceivable” only a few years ago, officials said. They could use a wider range of data at speed and quickly translate and decode Islamic extremist communications, for example.
Last week Uber announced plans to begin testing fully driverless cars on public roads in the UK from next year. It has already launched the service in the US in partnership with Waymo, the autonomous taxi company owned by Google’s parent, Alphabet, but the British pilot would represent the largest scheme to date.