Militant groups experiment with AI as risks grow
Experts warn AI could aid recruitment, propaganda and cyberattacks among extremist networks.

Militant groups are experimenting with artificial intelligence, and the risks are expected to grow, according to national security experts and spy agencies. For extremist organizations, AI could be a powerful tool for recruiting new members, generating deepfake imagery, and refining cyberattacks. A post on a pro-IS website last month urged supporters to make AI part of their operations, pointing to the technology's ease of use, a development that has heightened concerns about its potential for recruitment.
Analysts say IS, which once controlled territory in Iraq and Syria and now operates as a decentralized network, recognized years ago that social media could be used to recruit followers and spread disinformation. The group is now testing how generative AI can expand its reach: for loosely organized groups or lone actors with web access, AI can produce propaganda or deepfake content at scale, amplifying their influence and sowing confusion.
Since the advent of widely accessible programs such as ChatGPT, militant groups have increasingly used generative AI to create realistic-looking photos and videos. When combined with social media algorithms, such content can recruit new believers, confuse or frighten opponents and spread propaganda at an unprecedented scale. Two years ago, fabricated images of bloodied babies circulated amid the Israel-Hamas war; last year, after an IS affiliate attack in Russia, AI-crafted propaganda videos spread widely.
Researchers at SITE Intelligence Group note IS has created deepfake audio of its own leaders reciting scripture and has used AI to translate messages into multiple languages, illustrating a growing toolkit.
Experts caution that militant groups still lag behind nations such as China, Russia or Iran in the sophistication of their AI use, describing their efforts as aspirational. Marcus Fowler, chief executive of Darktrace Federal, notes that AI makes many tasks easier for any adversary, even small, poorly funded groups.
Still, the risks are not theoretical. Hackers already use synthetic audio and video for phishing campaigns and can deploy AI to craft malicious code or automate elements of cyberattacks. There is also concern that AI could help militant groups produce biological or chemical weapons, a risk highlighted in the Department of Homeland Security's updated Homeland Threat Assessment released earlier this year.
Lawmakers have floated proposals to counter the trend. Sen. Mark Warner, the senior Democrat on the Senate Intelligence Committee, said policymakers should make it easier for AI developers to share information about how their products are being used by bad actors, including extremists, criminals or foreign spies. House lawmakers discussed how to address AI risks during hearings on extremist threats, and legislation that passed the House would require homeland security officials to assess the AI risks posed by such groups each year.
Rep. August Pfluger, the bill’s sponsor, said guarding against the malicious use of AI is no different from preparing for conventional attacks and that policy and capabilities must keep pace with tomorrow's threats.
Experts emphasize that IS and other extremist groups have long exploited social media and messaging platforms, and they will continue to pursue the next technology to bolster recruitment and propaganda. As Fowler has noted, the group got on Twitter early and has consistently sought the next tool to advance its operations.
Taken together, security officials say monitoring and countering AI-enabled threats requires the same readiness and resilience as traditional risks, with ongoing coordination between government agencies, industry and researchers.