Welcome to the AI Incident Database
Incident 1414: Purported Gemini-Generated AI Images Reportedly Claimed U.S. Delta Force Soldiers Were Captured by the IRGC
“Images depicting capture of American soldiers in Iran are AI-generated”
Despite US President Donald Trump dismissing the idea of sending ground troops to Iran amid the expanding war in the Middle East, images purporting to show American soldiers captured by the Iranian Revolutionary Guard have spread across social media. However, the supposed pictures were generated by artificial intelligence, as is evidenced by visual inconsistencies and the watermark for Google's Gemini AI tool in each frame.
"Breaking: U.S. Delta Force troops are in the custody of the Iranian Revolutionary Guard," says a March 5, 2026 post sharing the images on X.
[Image: Screenshot from X taken March 6, 2026, with AI symbol added by AFP]
Similar posts spread across X and other platforms such as Facebook, circulating in English as well as Arabic, Spanish, French and other languages. Some posts claimed the soldiers depicted were the same special forces whom Trump sent in January to capture Venezuelan leader Nicolás Maduro from Caracas.
The images emerged as Iranian Foreign Minister Abbas Araghchi said March 5 that a potential US or Israeli ground invasion "would be a big disaster for them" -- a remark Trump told NBC News was a "wasted comment." The US president said putting troops on the ground in Iran would be a "waste of time" in an interview with the broadcaster March 5.
The war erupted February 28 after US-Israeli strikes killed Iran's supreme leader Ayatollah Ali Khamenei, triggering a wave of retaliatory attacks across the region.
Six US troops were killed by a drone attack in Kuwait as the war broke out, the Pentagon said. But the images purporting to show several more captured by Iranian forces are fake.
Each of the images AFP examined contains the signature sparkle-shaped watermark for Gemini, Google's AI tool, in the lower-right corner.
[Image: Screenshot from X taken March 6, 2026, with AI logo and other elements added by AFP]
Reverse image searches in Google also returned results saying the visuals were "made with Google AI," and Gemini detected SynthIDs -- invisible watermarks that Google says are meant to identify content generated or edited using its AI tools -- attached to all three of them.
[Image: Screenshot from Google taken March 6, 2026]
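For readers who want to repeat this kind of check at scale, the reverse-image-search step can be approximated programmatically. The sketch below is a minimal illustration using the Google Cloud Vision web-detection feature, which returns "best guess" labels and pages hosting matching images; it assumes the google-cloud-vision package is installed and Google Cloud credentials are configured, and suspect.jpg is a placeholder filename. It is an analogue of the manual Google search AFP describes, not the tool AFP used, and SynthID itself can only be verified with Google's own tooling.

# Hedged sketch: approximate a reverse image search with Google Cloud
# Vision web detection. Assumes google-cloud-vision is installed and
# Google Cloud credentials are configured; "suspect.jpg" is a placeholder.
from google.cloud import vision

def web_matches(path: str) -> None:
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    detection = client.web_detection(image=image).web_detection

    # Platforms that label AI imagery can surface strings such as
    # "made with Google AI" in the best-guess labels.
    for label in detection.best_guess_labels:
        print("best guess:", label.label)
    # Pages already hosting the same image help trace its origin.
    for page in detection.pages_with_matching_images[:5]:
        print("matching page:", page.url)

if __name__ == "__main__":
    web_matches("suspect.jpg")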
AFP also identified irregularities in the images that are typical of AI-generated fakes, including malformed fingers, blurred faces and inconsistent camouflage patterns and patches on the troops' uniforms. A figure in the background of one image appears to have three arms.
[Image: Screenshot from X taken March 6, 2026, with AI logo and other elements added by AFP]
AFP has debunked other misinformation about the Middle East war.
Incident 1415: Nippon Life Alleged ChatGPT Practiced Law Without a License in Illinois Disability Case
“OpenAI Sued for Unauthorized Practice of Law via ChatGPT”
Nippon Life's landmark lawsuit against OpenAI alleges ChatGPT acted as an unlicensed attorney.
Key points:
- Nippon Life Insurance Company of America has sued OpenAI in federal court, alleging ChatGPT engaged in the unauthorized practice of law by helping a former disability claimant reopen a settled case and file dozens of meritless motions.
- The lawsuit --- believed to be one of the first to accuse a major AI developer of unlicensed legal practice through a consumer chatbot --- seeks $300,000 in compensatory damages and $10 million in punitive damages.
- The case arrives as New York legislators advance a bill that would bar AI chatbots from posing as licensed professionals and give affected users a private right of action against AI platforms.
A Chicago federal court is now the stage for what legal observers are calling a landmark test of artificial intelligence's boundaries in the practice of law. Nippon Life Insurance Company of America filed suit on March 4 against OpenAI Foundation and OpenAI Group PBC, alleging that ChatGPT functioned as an unlicensed attorney when it guided a former disability claimant through a series of legal maneuvers after her case had been settled and dismissed with prejudice.
According to the complaint filed in the U.S. District Court for the Northern District of Illinois, the claimant --- Graciela Dela Torre, an employee of a logistics firm insured through Nippon --- uploaded correspondence from her former lawyer into ChatGPT in 2024. The chatbot allegedly validated her concerns, encouraged her to fire her attorney, and helped her pursue reopening a case that had already been resolved. After a judge denied that bid in February 2025, ChatGPT is alleged to have drafted a new lawsuit and dozens of subsequent motions and notices that Nippon contends had "no legitimate legal or procedural purpose."
The insurer's claims include tortious interference with contract, abuse of process, and --- most notably --- violation of Illinois's unauthorized practice of law statute. "ChatGPT is not an attorney," the complaint states bluntly, adding that despite OpenAI's widely publicized demonstrations of ChatGPT passing bar examinations, the platform "has not been admitted to practice law in the State of Illinois or in any other jurisdiction within the United States." OpenAI responded that "this complaint lacks any merit whatsoever."
The lawsuit marks a significant escalation in AI-related legal liability. Prior litigation against AI companies has largely centered on copyright, privacy, and defamation claims. A lawsuit targeting an AI developer for the unauthorized practice of law through a consumer-facing product breaks new ground, raising questions that bar associations and courts have only begun to address: at what point does an AI tool cross from providing information into providing legal counsel?
The timing is notable. A bill advancing through New York's legislature would explicitly prohibit AI chatbots from giving substantive legal advice --- or advice in any other licensed profession --- and would create a private right of action for users harmed by such conduct. The bill's sponsor, State Senator Kristen Gonzalez, noted that current law contains no explicit prohibition on a large language model representing itself as a lawyer and dispensing legal advice accordingly. The measure, which cleared the Senate's Internet and Technology Committee in February, would also prevent platforms from shielding themselves behind a disclosure that users are interacting with a "non-human chatbot."
For legal departments and law firms, the case reinforces a familiar but urgent message. Courts have already sanctioned lawyers for submitting AI-generated briefs containing hallucinated citations; now the liability exposure extends upstream to the developers of the tools themselves. OpenAI amended its usage policies in October 2024 to bar users from seeking legal advice via the platform --- a change Nippon argues came too late and underscores that the risks were foreseeable. Whether that policy change will carry legal weight is among the questions the Northern District of Illinois may ultimately have to answer.
The case is Nippon Life Insurance Company of America v. OpenAI Foundation and OpenAI Group PBC, No. 1:26-cv-02448 (N.D. Ill.).
Incident 1406: Purported AI-Generated War Footage Reportedly Circulated Widely Online During the Opening Phase of the War in Iran
“Cascade of A.I. Fakes About War With Iran Causes Chaos Online”
A torrent of fake videos and images generated by artificial intelligence has overrun social networks during the first weeks of the war in Iran.
The videos --- showing huge explosions that never happened, decimated city streets that were never attacked or troops protesting the war who do not exist --- have added a chaotic and confusing layer to the conflict online.
The New York Times identified over 110 unique A.I.-generated images and videos from the past two weeks about the war in the Middle East. The fakes covered every aspect of the fighting: They falsely depicted screaming Israelis cowering as explosions ripped through Tel Aviv, Iranians mourning their dead and American military vessels bombarded with missiles and torpedoes.
Collectively, they were seen millions of times online through networks like X, TikTok and Facebook, and countless more times within private messaging apps popular in the region and around the world.
The Times identified the A.I. content by checking for both obvious signs --- such as depictions of buildings that do not exist, garbled text and behaviors or movements that defy expectations --- and for invisible watermarks embedded within the files. The posts were also checked with multiple A.I. detector tools and compared with reports from news organizations.
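As an illustration of the first, declared-provenance layer of such a verification workflow, the minimal Python sketch below checks a downloaded image for self-identifying metadata: the EXIF Software tag and embedded provenance strings such as the IPTC digital-source-type values used for AI media. This is a crude triage heuristic under stated assumptions, not The Times's method; invisible watermarks such as SynthID live in the pixels themselves and require vendor tooling, and the marker list and filename here are illustrative.

# Crude provenance triage: look for *declared* AI-generation markers in an
# image's metadata. This heuristic cannot detect invisible pixel-level
# watermarks such as SynthID. Assumes Pillow is installed; "suspect.jpg"
# and the marker list are illustrative assumptions.
from PIL import Image

# Byte strings that commonly appear when metadata declares AI generation
# (IPTC digital-source-type values, C2PA manifest label). Not exhaustive.
DECLARED_AI_MARKERS = (
    b"trainedAlgorithmicMedia",
    b"compositeWithTrainedAlgorithmicMedia",
    b"c2pa",
)

def triage_image(path: str) -> dict:
    findings = {"exif_software": None, "declared_ai_metadata": []}
    with Image.open(path) as img:
        # EXIF tag 305 is the standard "Software" field; some generators
        # identify themselves here.
        findings["exif_software"] = img.getexif().get(305)
    with open(path, "rb") as f:
        raw = f.read()
    findings["declared_ai_metadata"] = [
        m.decode() for m in DECLARED_AI_MARKERS if m in raw
    ]
    return findings

if __name__ == "__main__":
    print(triage_image("suspect.jpg"))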
A sophisticated new wave of A.I. tools makes the fakes possible, enabling nearly anyone to create lifelike simulations of war that can deceive the naked eye for little to no cost. Similar content has spread in other conflicts, including the war between Ukraine and Russia. But this war has multiple fronts, and that has led to a proliferation of fake content since the United States and Israel first attacked Iran, according to experts.
"Even compared to when the Ukraine war broke out, things now are very different," said Marc Owen Jones, an associate professor of media analytics at Northwestern University in Qatar. "We're probably seeing far more A.I.-related content now than we ever have before."
Overall, the A.I. fakes included:
- 37 fake images and videos falsely depicting active war
- 5 fake images and videos falsely depicting war preparation
- 8 fake images and videos falsely depicting destruction
- 5 fake images and videos falsely depicting crying soldiers
- 43 memes and overt uses of A.I.
- 13 other fake images and videos
The content has become a potent informational weapon for Tehran as it seeks to shake the public's tolerance for war by depicting scenes of devastation and destruction across the region. The majority of A.I. videos about the war push pro-Iranian views, often to falsely demonstrate its military superiority and sophistication, according to a study of online activity by Cyabra, a social media intelligence company.
"The use of A.I. images of places in the Gulf --- being burnt or damaged --- becomes more important in Iran's playbook," Mr. Jones said, "because it allows them to give a sense that this war is more destructive and maybe more costly for America's allies than it might actually be."
In one of the most circulated fake videos found online, a shaky handheld scene seemingly shot from an apartment balcony in Tel Aviv shows the skyline pounded with missiles as an Israeli flag sits in the foreground. The video was viewed millions of times across platforms and was picked up by social media influencers and fringe news websites, according to a review of social media activity by The Times.
The Israeli flag in the foreground was one telltale sign that the video was A.I.-generated, experts said. To generate such videos, creators who use A.I. tools will typically write simple text instructions describing, for example, a shaky handheld video of a missile strike on Israel. The A.I. tools will then often include an Israeli flag or the Star of David to fulfill such a request. Several other A.I. videos included the flag.
There is ample genuine footage of the war being shared online, too, with cellphones and social platforms giving a real-time view of the conflict. Many of those images and videos are more subdued than the scenes made by A.I. tools.
Real footage of missile strikes was often shot from far away, typically at night, with missiles visible as little more than bright lights in the distance. Explosions in real videos are more often shown as plumes of smoke, not as fireballs, with bystanders rushing to film the scene only after the munitions meet their target.
Some A.I. videos and images, by contrast, have falsely depicted war like an over-the-top Hollywood action movie, with enormous explosions resulting in mushroom clouds, sonic booms that ripple across unnamed cities and supposed hypersonic missiles that leave glowing streaks in the sky. Real footage is sometimes enhanced by A.I. tools to make explosions appear larger and more devastating, further blurring the line between what is real and fake.
The A.I. footage has essentially created an alternate reality more suited to social media, experts said, where the exaggerated footage is more likely to find an audience.
In one instance, the A.I. fakes played an outsize role in the debate online and between governments over the fate of the U.S.S. Abraham Lincoln, an aircraft carrier deployed to the region. Iran's Islamic Revolutionary Guards Navy initially suggested on March 1 that they had successfully attacked the ship, possibly sinking it. That led to a deluge of A.I.-generated fakes depicting the ship or those like it on fire. Iranian users celebrated the footage online as evidence that their country's counteroffensive was rattling the U.S.-Israeli alliance.
The United States later said that the attack was unsuccessful and that the ship was unharmed.
Dozens of other A.I. images and videos made no effort to hide that they were fake, acting instead as a new form of digital propaganda that brought to life the political arguments typically made by governments or their propaganda arms. Those included flattering depictions of world leaders as powerful men, or dehumanizing depictions of opposition leaders.
One collection of clearly fictional videos offered a view of the Shajarah Tayyebeh elementary school, which was destroyed by the United States in an apparent errant missile strike on Feb. 28, according to a preliminary inquiry. At least 175 people were killed, most of them children, according to Iranian officials.
The A.I.-generated videos unfolded like short films, showing schoolgirls playing outside before an American fighter jet launches missiles.
Social media companies have done little to combat the scourge of A.I. videos that overwhelmed their platforms last year after OpenAI released Sora, a video-generating app that allowed anyone to create realistic fakes through a simple app. (The New York Times sued OpenAI and Microsoft in 2023, accusing them of copyright infringement of news content related to A.I. systems. The two companies have denied those claims.)
Though videos generated by many A.I. tools can include both visible and invisible watermarks labeling them as fake, those are easy to remove or obscure. Only a few of the videos identified by The Times contained such watermarks.
Elon Musk's X, which has taken a broadly permissive approach to allowing misinformation on its platform, announced last week that it would suspend accounts from receiving revenue from the platform for 90 days if they posted A.I.-generated content of "armed conflict" without labeling it as such, in a bid to stop users from profiting off the falsehoods.
But many of the Iranian-linked accounts identified by Cyabra appeared far more focused on spreading Tehran's messages than on making money.
"This is a natural front for Iran to try and exploit and it feels like this is one of the reasons it is so voluminous," said Valerie Wirtschafter, a fellow at the Brookings Institution studying foreign policy and A.I. "It's actually a tool of war."
Incident 1408: Purported Oprah Deepfake Reportedly Induced Utah Woman to Buy Misrepresented Weight Loss Supplements
“Utah woman says pricey supplement endorsed by fake Oprah is actually a common spice”
EAGLE MOUNTAIN --- Lisa Swearingen ordered several bottles of pills that were touted as a science-backed method of weight loss.
"There are four ingredients in this," Swearingen said the advertising told her. "Himalayan pink salt, quercetin, mountain root and burnt berberine."
But when those bottles showed up, she says she discovered the primary ingredient was turmeric -- a common spice -- and very little else.
She paid more than $400 for this shipment of supplements, which she thought came with a powerful endorsement: the one and only Oprah Winfrey.
"There are many videos of Oprah on TikTok and on Reel," Swearingen said.
She says she called the number on the shipping label and was told she can send the product back. But she's not satisfied and asked me to investigate.
"I received a fraudulent product," she said.
I reached out to the company behind the supplement, Prozenith, to inquire about this matter. I did not hear back.
Digging deeper, it's clear that Lisa Swearingen is not the only frustrated customer.
The Better Business Bureau has logged numerous complaints about the product in its Scam Tracker -- many of them centered on buyers receiving turmeric instead of the ingredients touted in testimonials.
Speaking of testimonials, the real Oprah, not some AI deepfake, took to social media to warn people that her name is being used to pitch weight loss products. Though she doesn't name Prozenith specifically, she does say several brands are using her likeness without her consent.
"I have nothing to do with weight loss gummies or diet pills," she said in a social media post.
"It made me mad," Swearingen said about the moment she discovered her bottles of Prozenith were chiefly turmeric.
She said she'll ship very expensive turmeric back in the hopes of a refund. But she hopes sharing her experience with me will help other Utahns.
"I feel like lots of people might get taken on this," she said.
Under federal law, if you order a product and pay with a credit card, and what shows up is not what was advertised, you can dispute the charges.
The tricky part can be proving it. In Swearingen's case, for example, she cannot track down the original ad that talked about the four "magic" ingredients.
As for the seller's website, it seems almost deliberately vague about what Prozenith actually is.
Incident 1410: Purportedly AI-Generated Explicit Images of Royal School Armagh Girls Reportedly Circulated Among Pupils
“Armagh grammar school latest to suffer from ‘sexual’ AI deepfake image sharing”
A Co Armagh grammar school is among the latest to be impacted by the spread of AI-generated deepfake images.
A police investigation is now underway after AI-generated sexual images were shared among pupils of The Royal School Armagh, on College Hill.
The principal told The Times that as soon as the school became aware of the issue it had contacted the authorities and will take all appropriate action as advised.
This follows just two weeks after a Portadown-based GAA club – Tír-na-nOg – warned parents of the dangers of falsified explicit images, after a young member of the community was “blackmailed” for money to avoid “very realistic” images being circulated online.
The appalling incident in Portadown caught the attention of Newry and Armagh MLA Justin McNulty, who subsequently contacted the Minister for Communities, Gordon Lyons, requesting that he detail any actions his Department would take to implement safeguards protecting young people in sport from being targeted by AI “deepfakes”.
Mr Lyons explained that while the safety of young people and children within our communities is a “shared responsibility by everyone in society”, addressing “emerging risks” such as AI-generated deepfakes is a “wide-ranging challenge” that will require input and action across multiple areas of government and public services.
With responsibility for safeguarding within sport in Northern Ireland, Sport NI currently has a contract with NSPCC, said the Minister.
Through the contract, Sport NI ensures legislation and best practice with respect to safeguarding is followed and that the sector is appropriately supported.
As part of this contract, the Minister explained that Sport NI will “discuss this matter with NSPCC”.
On the morning of January 15, Elon Musk’s platform X – which has been at the centre of growing controversy over the misuse of its AI model Grok – said it would no longer allow users to alter photos of real people to make them sexually explicit or suggestive in countries where such actions are illegal.
It has also limited access to the feature to paying members only.
In Northern Ireland, SDLP MLA for East Londonderry Cara Hunter has largely led the charge against the spread of explicit deepfake images ever since she herself fell prey to illicit image generation, campaigning for tougher legal action against the creation and sharing of deepfakes.
Following her lobbying, a public consultation on the matter concluded in October 2025, and its findings could now feed into the development of future legislation.
On January 13, Ms Hunter also spoke in favour of Northern Ireland following suit with the UK Government after its announcement of new laws to crack down on the creation of explicit images using AI.
Technology Secretary Liz Kendall has said creating, or trying to create, an intimate non-consensual image will be an offence from this week.
Apps allowing users to create these images will also be criminalised under the Crime and Policing Bill in the UK.
About the Database
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – November and December 2025 and January 2026
By Daniel Atherton
2026-02-02
Le Front de l'Yser (Flandre), Georges Lebacq, 1917. Trending in the AIID between the beginning of November 2025 and the end of January 2026...
The Database in Print
Read about the database at Time Magazine, Vice News, Venture Beat, Wired, Bulletin of the Atomic Scientists, and Newsweek, among other outlets.
Incident Report Submission Leaderboards
These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.
The Responsible AI Collaborative
The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The governance of the Collaborative is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application. We kindly request your financial support with a donation.
Organization Founding Sponsor
Database Founding Sponsor

Sponsors and Grants