
Welcome to the AI Incident Database


Incident 1416: Purported Facial Recognition Error Reportedly Led to Arrest and Jailing of Tennessee Woman in North Dakota Fraud Case

“Tennessee grandmother jailed after AI facial recognition error links her to fraud” (Latest Incident Report)
theguardian.com, 2026-03-16

A Tennessee grandmother says she is trying to rebuild her life after a mistaken identification by an artificial intelligence (AI) facial recognition system tied her to a North Dakota bank fraud investigation.

Angela Lipps, 50, spent nearly six months in jail after Fargo police used facial recognition software to identify her as a suspect in an organized bank fraud case, according to the south-east North Dakota news outlet InForum. Lipps told the outlet she had never been to North Dakota and did not commit the crimes.

Lipps, a mother of three and grandmother of five, said she has lived most of her life in north-central Tennessee. She had never been on an airplane until authorities flew her to North Dakota last year to face charges.

In July, US marshals arrested Lipps at her Tennessee home while she was babysitting four children. She said she was taken away at gunpoint and booked into a county jail as a fugitive from justice wanted in North Dakota.

"I've never been to North Dakota, I don't know anyone from North Dakota," Lipps told WDAY News.

She remained in a Tennessee jail for nearly four months without bail while awaiting extradition. She was charged with four counts of unauthorized use of personal identifying information and four counts of theft.

According to Fargo police records obtained by WDAY News, detectives investigating bank fraud cases in April and May 2025 reviewed surveillance video of a woman using a fake US army military ID to withdraw tens of thousands of dollars.

The officers allegedly used facial recognition software to identify the suspect as Lipps. A detective reportedly wrote in court documents that Lipps appeared to match the suspect based on facial features, body type and hairstyle.

Lipps told WDAY News that no one from the Fargo police department contacted her before the arrest.

Authorities in North Dakota did not transport Lipps from Tennessee until the end of October, 108 days after her arrest, according to InForum. She appeared in a North Dakota courtroom the next day.

Her attorney, Jay Greenwood, told the outlet: "If the only thing you have is facial recognition, I might want to dig a little deeper."

Lipps was later released on Christmas Eve after Greenwood obtained her bank records and presented them to investigators. The records showed Lipps was more than 1,200 miles away in Tennessee at the time investigators said the fraud occurred in Fargo.

But Lipps said Fargo police did not pay for her trip home, leaving her stranded. Local defense attorneys helped cover a hotel room and food on Christmas Eve and Christmas Day, and a local non-profit, the F5 Project, was able to help her return to Tennessee, InForum reported.

Lipps is now back home but says the experience has had lasting consequences. While jailed and unable to pay bills, Lipps lost her home, her car and her dog, she said. She also told WDAY News no one from the Fargo police department had apologized.

This is far from the first case of an AI error flagging the wrong suspect. In October, an AI system apparently mistook a Baltimore high school student's bag of Doritos for a firearm and called local police to tell them the pupil was armed. Taki Allen was sitting with friends outside the Kenwood high school in Baltimore when police officers with guns approached him, made him get on his knees, and handcuffed and searched him -- finding nothing.

Earlier this year, police arrested a man in the UK for a burglary in a city he had never visited after face-scanning software confused him with another person of south Asian heritage. Authorities had used automated facial recognition software which matched him with footage of a suspect in a £3,000 burglary 100 miles away.


Incident 1418: Meta AI Smart Glasses Reportedly Exposed Intimate User Imagery and Video to Human Reviewers in Kenya

“Subcontractors See Intimate Meta AI Visual Queries From Smart Glasses”
uploadvr.com, 2026-03-16

Subcontractors see intimate Meta AI visual queries from the company's smart glasses, some of them triggered accidentally, a report from two Swedish newspapers revealed.

Svenska Dagbladet and Göteborgs-Posten's joint report has prompted widespread worry about the privacy of smart glasses, not only for bystanders but for the owners themselves.

To be clear, the issue here is not with the intentional photo and video capture feature of the Ray-Ban and Oakley smart glasses. Photos and videos you deliberately capture with the glasses sync to your phone and are not viewed by Meta or subcontractors, nor are they used to train AI models.

Instead, the report refers to the visual query functionality of Meta AI on the devices, and its propensity for accidental activation.

The Meta AI visual queries feature started rolling out in early 2024, around six months after the Ray-Ban Meta glasses launched, and originally explicitly required saying "Hey Meta, look and tell me <query>", upon which the AI captures a frame to provide a response.

A portion of these responses are sent to outsourced contractors in countries with cheap labor, such as Kenya, who rate each response on whether it is useful and accurate. Over time, Meta uses this human review data to improve the quality of Meta AI responses.

Late in 2024, as announced at Meta Connect that year, Meta AI was updated to be able to more naturally infer from the context of your query whether it required a camera capture. For example "Hey Meta, what kind of plant is this?" or "Hey Meta, translate this menu" would trigger it.

This update made Meta AI more natural and useful. But it also had the side effect of making it far more likely for the AI to capture a frame when you do not intend it to, after the device incorrectly thinks it heard you say "Hey Meta". Further, with the Live AI feature available in the US and Canada, the captured material can even include video clips. Live AI lets you start a continuous conversation with the AI, similar to Google's Gemini Live on smartphones.

Combined with the contractor review system, this creates the nightmare scenario the report uncovered, wherein human beings can see what are essentially accidentally-captured intimate photos from inside the homes of smart glasses owners.
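The failure chain the report describes (a misheard wake word, a contextual check of whether the query needs the camera, and random sampling of responses for human rating) can be summarized in a short sketch. Everything below is a hypothetical reconstruction for illustration only; the names and thresholds are invented and do not correspond to Meta's actual implementation.

```python
import random

# Hypothetical reconstruction of the capture-and-review chain described
# above. All names and thresholds are invented for illustration; none
# correspond to Meta's actual system.

WAKE_WORD_THRESHOLD = 0.7   # lower values mean more false activations
REVIEW_SAMPLE_RATE = 0.01   # fraction of responses routed to human raters

VISUAL_CUES = ("look", "this", "translate", "what kind of")

def needs_camera(query: str) -> bool:
    # Contextual inference: "what kind of plant is this?" implies a capture.
    return any(cue in query.lower() for cue in VISUAL_CUES)

def handle_utterance(wake_confidence: float, query: str) -> dict:
    """Process one (possibly misheard) wake-word event."""
    if wake_confidence < WAKE_WORD_THRESHOLD:
        return {"activated": False}
    # A false positive past this point is the failure mode in the report:
    # the device "heard" the wake word, so it may capture a frame the
    # wearer never intended to share.
    frame = "<captured frame>" if needs_camera(query) else None
    response = f"answer to {query!r}"  # stand-in for the model call
    sampled = random.random() < REVIEW_SAMPLE_RATE
    # Sampled responses -- imagery included -- go to contractors for rating.
    return {"activated": True, "frame": frame,
            "response": response, "sent_for_review": sampled}

print(handle_utterance(0.9, "Hey Meta, what kind of plant is this?"))
```

The crux is that the same threshold that makes the assistant feel responsive also sets the false-activation rate, and once a frame is captured it enters the review pipeline like any intentional query.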

The Kenyan data reviewers who spoke to the Swedish newspapers reported seeing images and video clips of people naked, going to the bathroom, changing clothes, and having sex, as well as people watching porn and holding up sensitive documents and bank cards.

The facility these data reviewers work at enforces strict security practices that prevent them from bringing any recording devices to work or otherwise exfiltrating these clips. But if a data breach were somehow to occur, it would trigger an "enormous scandal", the report suggests.

In response to the Swedish report, Meta points out that the LED on the front of the glasses will always illuminate when capturing imagery (which is true), and the company issued a general statement noting that visual data is "first filtered to protect people's privacy", including blurring faces and license plates. But the Kenyan workers claim this filter is not perfect, and that other intimate details still remain in imagery they review.
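For context on what such a filter does mechanically, here is a minimal face-blurring sketch using OpenCV's stock Haar cascade. This is illustrative only; Meta's actual privacy filter is proprietary and, per the reviewers, evidently imperfect in practice.

```python
import cv2

def blur_faces(image_path: str, out_path: str) -> int:
    """Blur detected faces in an image; return how many were found."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = img[y:y + h, x:x + w]
        # Kernel size must be odd; larger kernels blur more aggressively.
        img[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    cv2.imwrite(out_path, img)
    return len(faces)
```

A filter like this fails exactly the way the workers describe: any face the detector misses (profile views, occlusion, low light) passes through unblurred, and non-face details such as documents or bank cards are untouched unless they are separately detected.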

The practice of having subcontractors review AI interactions is not unique to Meta. Amazon does this for Alexa, Google for Gemini, and Apple for Siri, for example. And in 2019 a report from Bloomberg revealed how these subcontractors heard intimate bedside conversations from Amazon Echo devices, while another report from The Guardian revealed that the same was happening with Apple's Siri.

Following the backlash from The Guardian's report, Apple made human review of Siri conversations an opt-in system, while Google allows an opt-out for Gemini.

For Meta AI, however, there is no ability to opt out. And the smart glasses form factor, with its egocentric camera, raises unique privacy stakes for this review model: people aware of the implications may never want to purchase smart glasses, or may stop using a pair they already own.

Will this report prompt Meta to change its data review practices, as Apple did following The Guardian's 2019 report? Or will it be ignored so that the company can improve Meta AI faster to catch up to stronger AI models like Google's Gemini 3? And how will Google and Amazon handle this issue as they launch consumer smart glasses in coming months?


Incident 1417: Purported Deepfake of Ashley James Reportedly Used to Promote Weight Loss Pills

“This Morning star hits back at ‘disgraceful’ fake weight loss pill claims”
metro.co.uk, 2026-03-16

This Morning star Ashley James has responded after AI was used to create a disturbing advert that uses her likeness to promote weight loss pills.

Ashley has been left 'devastated' at the thought of anyone buying such pills upon her so-called recommendation, branding the fake ad a 'violation'.

Taking to Instagram this weekend, the presenter and activist began in a video: 'I have a confession. I've been taking weight loss pills.

'At least, that's what you've been led to believe...'

Footage then cuts to a digitally generated version of Ashley being interviewed on the This Morning sofa by Ben Shephard, where she appears to reveal her weight loss results and the benefits of taking such drugs.

It's incredibly realistic, featuring the This Morning colour scheme, ITV logo, and an AI character that both looks and sounds exactly like her.

‘I’ve tried everything. Seriously, everything. Each new diet was hopeful and disappointing,’ begins her AI persona.

‘I thought I should just give up, but then I saw an interview with Doctor Rangan Chatterjee, where he explained being overweight is not your fault – it’s a metabolic failure caused by age, and he’s developed a formula that restarts that metabolism, so I decided to try it.’

As text on-screen boldly states, ‘She lost 27 pounds in just one month!’, the character adds: ‘One week later, I was down nine pounds. Three weeks later, I’m down 27.

‘I feel light again. I love my reflection in the mirror again.’

In her own expert takedown of the clip, the real Ashley then informs her followers: ‘So many of you have sent me this advert, so I just want to be really clear – that is not me. It is completely AI-generated.’

‘Not only did I never say this, I’ve never taken these pills, I’ve never heard of these pills, and most importantly, nor would I ever promote them,’ she insists.

‘I’m honestly devastated that anybody might buy these products believing that I recommended them.’

The former Made in Chelsea star, who is known and loved for her body-positive content on social media, added that she ‘always turns down’ any sponsorship opportunities involving diets or weight loss pills.

‘So not only does this feel like a total violation, but the message behind it makes me incredibly angry.

‘We already live in a world where women are constantly told to shrink themselves, be smaller, be thinner, take up less space. And that’s only getting worse with the rise of weight loss injections.’

Continuing her rant in the caption, Ashley admitted that, ‘if [she] didn’t know better,’ she would assume the AI ad was real too.

‘Someone has taken my face and my voice and turned it into an advert telling women they should lose weight. If you know anything about me, you’ll know that is the exact kind of messaging I’ve spent years fighting against.’

She further cited other examples of public figures being targeted by deepfakes; Money Saving Expert Martin Lewis has been forced to call out similar content in the past after members of the public were scammed out of thousands of pounds by following bogus financial advice.

‘I do not support these products and I would never tell you, or anyone else, that you need to shrink yourself or diet,’ Ashley concluded in her written caption. ‘And if you see this advert please report it. Because how social media platforms are allowing this is disgraceful! It’s scary when you think about it.’

In the comments, famous friends were eager to offer their support, expressing shock over the advert.

‘This is absolutely shocking’, wrote Carol Vorderman. ‘What is the recourse in law?’

Sarah Jayne Dunn commented: ‘😮 this is so scary!!’

‘This is terrifying!!!’, echoed Dani Harmer, while Faye Tozer raged: ‘Nothing about this is ok 🤬’

Ashley has long been a public advocate for body acceptance, particularly when it comes to motherhood.

In a post discussing the harmful rhetoric surrounding postpartum bodies, Ashley wrote in January: ‘The world looks at a mum’s body and sees something to fix, but our children look at it as their first home and love it.’

She proudly stated that, ‘babies or not, we should never have learned to hate something so magical. Our bodies ARE magical.’

The mum-of-two also often posts bikini snaps from various angles to encourage other people to feel confident.

Earlier this year, she wrote: ‘I have more confidence in my body now than I ever did before. And I’m proud of that, and I never want my daughter to see me hating on my body.’

The TV personality recently published her first book, titled Bimbo in a nod to the labels she’s been given online and in an attempt to reclaim them.

It became a bestseller, detailing her own raw experiences and unpacking the oppression and expectations of women throughout their lives.

While promoting it, she told BBC Woman’s Hour: ‘Often, if people don’t agree with me, they’ll go online and say, “She’s just a bimbo.”

‘But it’s not just “bimbo,” it’s all the labels that I feel like women are given, whether that’s “bossy,” “frigid,” “tarty,” and even into elderhood, like “crone” or “hag.” I really wanted to explore how these labels shrink us and keep us small.’


Incident 1414: Purported Gemini-Generated AI Images Reportedly Claimed U.S. Delta Force Soldiers Were Captured by the IRGC

“Images depicting capture of American soldiers in Iran are AI-generated”
factcheck.afp.com, 2026-03-15

Despite US President Donald Trump dismissing the idea of sending ground troops to Iran amid the expanding war in the Middle East, images purporting to show American soldiers captured by the Iranian Revolutionary Guard have spread across social media. However, the supposed pictures were generated by artificial intelligence, as is evidenced by visual inconsistencies and the watermark for Google's Gemini AI tool in each frame.

"Breaking: U.S. Delta Force troops are in the custody of the Iranian Revolutionary Guard," says a March 5, 2026 post sharing the images on X.

[Image: Screenshot from X taken March 6, 2026, with AI symbol added by AFP]

Similar posts spread across X and other platforms such as Facebook, circulating in English as well as Arabic, Spanish, French and other languages. Some posts claimed the soldiers depicted were the same special forces whom Trump sent in January to capture Venezuelan leader Nicolás Maduro from Caracas.

The images emerged as Iranian Foreign Minister Abbas Araghchi said March 5 that a potential US or Israeli ground invasion "would be a big disaster for them" -- a remark Trump told NBC News was a "wasted comment." The US president said putting troops on the ground in Iran would be a "waste of time" in an interview with the broadcaster March 5.

The war erupted February 28 after US-Israeli strikes killed Iran's supreme leader Ayatollah Ali Khamenei, triggering a wave of retaliatory attacks across the region.

Six US troops were killed by a drone attack in Kuwait as the war broke out, the Pentagon said. But the images purporting to show several more captured by Iranian forces are fake.

Each of the images AFP examined contains the signature sparkle-shaped watermark for Gemini, Google's AI tool, in the lower-right corner.

[Image: Screenshot from X taken March 6, 2026, with AI logo and other elements added by AFP]

Reverse image searches in Google also returned results saying the visuals were "made with Google AI," and Gemini detected SynthIDs -- invisible watermarks that Google says are meant to identify content generated or edited using its AI tools -- attached to all three of them.
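As background: the sparkle logo is a visible mark, while SynthID is an imperceptible signal embedded in the pixels themselves, which Google says survives common edits such as cropping and compression. Below is a minimal sketch of how a fact-checking triage might combine the three signals AFP relied on; the `synthid_detected` flag is assumed to come from Google's own tooling, since there is no general-purpose public SynthID detection API for images.

```python
# Hypothetical triage combining the three signals AFP used above.
# `synthid_detected` is assumed to come from Google's own tooling.

MANUAL_CHECKLIST = [
    "malformed hands or fingers",
    "blurred or inconsistent faces",
    "mismatched camouflage patterns or patches",
    "extra limbs on background figures",
]

def verdict(visible_watermark: bool, synthid_detected: bool,
            artifacts_found: int) -> str:
    """Combine watermark evidence with manual artifact spotting."""
    if visible_watermark or synthid_detected:
        return "AI-generated (watermark evidence)"
    if artifacts_found >= 2:
        return "likely AI-generated (visual artifacts)"
    return "inconclusive -- continue verification"

# The images in this case would be caught by the first branch alone:
print(verdict(visible_watermark=True, synthid_detected=True,
              artifacts_found=len(MANUAL_CHECKLIST)))
```

The layering matters: a visible corner mark can be cropped away, so the invisible watermark and the human artifact checklist act as progressively harder-to-remove fallbacks.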

[Image: Screenshot from Google taken March 6, 2026]

AFP also identified irregularities in the images that are typical of AI-generated fakes, including malformed fingers, blurred faces and inconsistent camouflage patterns and patches on the troops' uniforms. A figure in the background of one image appears to have three arms.

[Image: Screenshot from X taken March 6, 2026, with AI logo and other elements added by AFP]

AFP has debunked other misinformation about the Middle East war here.


Incident 1415: Nippon Life Alleged ChatGPT Practiced Law Without a License in Illinois Disability Case

“OpenAI Sued for Unauthorized Practice of Law via ChatGPT”
legal.io, 2026-03-15

Nippon Life's landmark lawsuit against OpenAI alleges ChatGPT acted as an unlicensed attorney.

Key points:

  • Nippon Life Insurance Company of America has sued OpenAI in federal court, alleging ChatGPT engaged in the unauthorized practice of law by helping a former disability claimant reopen a settled case and file dozens of meritless motions.
  • The lawsuit --- believed to be one of the first to accuse a major AI developer of unlicensed legal practice through a consumer chatbot --- seeks $300,000 in compensatory damages and $10 million in punitive damages.
  • The case arrives as New York legislators advance a bill that would bar AI chatbots from posing as licensed professionals and give affected users a private right of action against AI platforms.

A Chicago federal court is now the stage for what legal observers are calling a landmark test of artificial intelligence's boundaries in the practice of law. Nippon Life Insurance Company of America filed suit on March 4 against OpenAI Foundation and OpenAI Group PBC, alleging that ChatGPT functioned as an unlicensed attorney when it guided a former disability claimant through a series of legal maneuvers after her case had been settled and dismissed with prejudice.

According to the complaint filed in the U.S. District Court for the Northern District of Illinois, the claimant --- Graciela Dela Torre, an employee of a logistics firm insured through Nippon --- uploaded correspondence from her former lawyer into ChatGPT in 2024. The chatbot allegedly validated her concerns, encouraged her to fire her attorney, and helped her pursue reopening a case that had already been resolved. After a judge denied that bid in February 2025, ChatGPT is alleged to have drafted a new lawsuit and dozens of subsequent motions and notices that Nippon contends had "no legitimate legal or procedural purpose."

The insurer's claims include tortious interference with contract, abuse of process, and --- most notably --- violation of Illinois's unauthorized practice of law statute. "ChatGPT is not an attorney," the complaint states bluntly, adding that despite OpenAI's widely publicized demonstrations of ChatGPT passing bar examinations, the platform "has not been admitted to practice law in the State of Illinois or in any other jurisdiction within the United States." OpenAI responded that "this complaint lacks any merit whatsoever."

The lawsuit marks a significant escalation in AI-related legal liability. Prior litigation against AI companies has largely centered on copyright, privacy, and defamation claims. A lawsuit targeting an AI developer for the unauthorized practice of law through a consumer-facing product breaks new ground, raising questions that bar associations and courts have only begun to address: at what point does an AI tool cross from providing information into providing legal counsel?

The timing is notable. A bill advancing through New York's legislature would explicitly prohibit AI chatbots from giving substantive legal advice --- or advice in any other licensed profession --- and would create a private right of action for users harmed by such conduct. The bill's sponsor, State Senator Kristen Gonzalez, noted that current law contains no explicit prohibition on a large language model representing itself as a lawyer and dispensing legal advice accordingly. The measure, which cleared the Senate's Internet and Technology Committee in February, would also prevent platforms from shielding themselves behind a disclosure that users are interacting with a "non-human chatbot."

For legal departments and law firms, the case reinforces a familiar but urgent message. Courts have already sanctioned lawyers for submitting AI-generated briefs containing hallucinated citations; now the liability exposure extends upstream to the developers of the tools themselves. OpenAI amended its usage policies in October 2024 to bar users from seeking legal advice via the platform --- a change Nippon argues came too late and underscores that the risks were foreseeable. Whether that policy change will carry legal weight is among the questions the Northern District of Illinois may ultimately have to answer.

The case is Nippon Life Insurance Company of America v. OpenAI Foundation and OpenAI Group PBC, No. 1:26-cv-02448 (N.D. Ill.).

Quick Add New Report URL

Submitted links are added to a review queue to be resolved to a new or existing incident record. Submissions that include full incident details are processed before bare URLs, as sketched below.
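That triage rule (complete submissions ahead of bare URLs, first-in-first-out within each tier) amounts to a two-level priority queue. A minimal sketch in Python, assuming a single completeness flag rather than the database's actual submission schema:

```python
import heapq
import itertools

_counter = itertools.count()  # FIFO tie-breaker within a tier
_queue: list[tuple[int, int, str]] = []

def submit(url: str, has_full_details: bool) -> None:
    # Tier 0 (full details) is served before tier 1 (bare URLs);
    # the counter preserves arrival order within each tier.
    tier = 0 if has_full_details else 1
    heapq.heappush(_queue, (tier, next(_counter), url))

def next_submission() -> str:
    return heapq.heappop(_queue)[2]

submit("https://example.com/report-a", has_full_details=False)
submit("https://example.com/report-b", has_full_details=True)
assert next_submission().endswith("report-b")  # full details first
```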
About the Database

The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.

You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – November and December 2025 and January 2026

By Daniel Atherton

2026-02-02

Le Front de l'Yser (Flandre), Georges Lebacq, 1917. 🗄 Trending in the AIID: Between the beginning of November 2025 and the end of January 2026...

The Database in Print

Read about the database at Time Magazine, Vice News, Venture Beat, Wired, the Bulletin of the Atomic Scientists, and Newsweek, among other outlets.

Incident Report Submission Leaderboards

These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.

New Incidents Contributed
  • 🥇 Daniel Atherton: 736
  • 🥈 Anonymous: 155
  • 🥉 Khoa Lam: 93

Reports added to Existing Incidents
  • 🥇 Daniel Atherton: 837
  • 🥈 Anonymous: 235
  • 🥉 Khoa Lam: 230

Total Report Contributions
  • 🥇 Daniel Atherton: 3089
  • 🥈 Anonymous: 981
  • 🥉 1: 587
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.

Random Incidents

  • Predictive Policing Biases of PredPol
  • Aledo High School Student Allegedly Generates and Distributes Deepfake Nudes of Seven Female Classmates
  • ChatGPT Reportedly Introduces Errors in Critical Child Protection Court Report
  • Child Sexual Abuse Material Taints Image Generators
  • ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court
The Responsible AI Collaborative

The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative's governance is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application. We kindly request your financial support with a donation.
