Welcome to the AI Incident Database
Incident 1432: Purported Pornographic Deepfakes and Fake Accounts Reportedly Impersonated German TV Presenter and Actor Collien Fernandes
Latest Incident Report
“German deepfake porn case sparks protests and pressure for change in law”
BERLIN, March 26 (Reuters) - Germany's government is facing pressure to toughen laws against digital violence after a prominent television actor accused her former husband of posting AI-generated porn resembling her on fake online accounts purporting to belong to her.
In an article in the weekly Spiegel, actor Collien Fernandes accused her former husband, TV presenter and producer Christian Ulmen, of impersonating her online for years, including sharing sexually explicit deepfakes - videos and photos of her generated using artificial intelligence.
Ulmen's lawyer, Christian Schertz, said in a statement that the actor would take legal action against what he called "inadmissible coverage based on suspicions" and accused Spiegel of spreading "untrue facts" based on a one-sided account.
Ulmen has not publicly commented. Schertz did not respond to a Reuters request for comment. Fernandes did not immediately respond to requests for comment via her social media and agents.
The case has sparked a national conversation on new forms of violence against women in the online sphere and heaped pressure on Chancellor Friedrich Merz's government to close legal loopholes.
PROTESTERS CALL FOR END TO VIOLENCE AGAINST WOMEN
More than 10,000 people gathered at Berlin's Brandenburg Gate on Sunday to call for an end to violence against women and support Fernandes, holding signs such as "Thanks Collien" and "AI won't make our bodies yours".
Others held signs saying "Shame has to change sides", part of the title of the memoirs of France's Gisele Pelicot, who has become synonymous with the global fight against sexual violence after the 2024 case that saw her husband convicted of inviting dozens of men to rape her unconscious body after he repeatedly drugged her.
Justice Minister Stefanie Hubig said her ministry was drafting a bill that would make the production of pornographic deepfakes and voyeuristic recordings a criminal offence, with violations punished with up to two years in prison.
"The technology is new. But the underlying motive is ancient. It's about power, humiliation, and control," she told parliament on Wednesday during a debate on violence against women, in which all but one of the 14 speakers were women.
At present, only the distribution of such deepfakes is explicitly illegal.
MINISTER URGES ACCOUNTABILITY FROM ONLINE PLATFORMS
The proposal would also make it easier for victims to identify account holders behind illegal content, seek damages and have accounts blocked. Another debate is due to take place in parliament on Thursday.
"Digital violence must not be a business model," Hubig said, urging greater accountability from platforms such as Elon Musk's X, whose AI chatbot Grok has been used to flood the site with manipulated sexualised images.
xAI has put some restrictions on Grok's image-generation function in response to the backlash over those images.
"Only when men also consistently speak out will the shame truly shift," added Hubig.
Fernandes said she decided to file charges in Spain, where the couple once lived, because of what she views as stronger legal protections for women's rights than in Germany.
"Germany is an absolute haven for perpetrators," Fernandes told broadcast news magazine Tagesthemen.
Spain has specialised courts for combating gender-based violence, and since 2025, this has included digital violence such as cyberstalking and non-consensual sharing of private images.
According to the judiciary in Mallorca, preliminary proceedings initiated in December are currently under way.
The complaint alleges misrepresentation of marital status, disclosure of secrets, public defamation, habitual abuse and serious threats, it said.
Reporting by Miranda Murray in Berlin and David Latona in Madrid; Editing by Alison Williams
Incident 1430: Anthropic's Claude Was Reportedly Jailbroken To Allegedly Help Steal Sensitive Mexican Government Data
“Hacker Used Anthropic’s Claude to Steal Mexican Data Trove”
A hacker exploited Anthropic PBC's artificial intelligence chatbot to carry out a series of attacks against Mexican government agencies, resulting in the theft of a huge trove of sensitive tax and voter information, according to cybersecurity researchers.
The unknown Claude user wrote Spanish-language prompts for the chatbot to act as an elite hacker, finding vulnerabilities in government networks, writing computer scripts to exploit them and determining ways to automate data theft, Israeli cybersecurity startup Gambit Security said in research published Wednesday.
The activity started in December and continued for roughly a month. In all, 150 gigabytes of Mexican government data was stolen, including documents related to 195 million taxpayer records as well as voter records, government employee credentials and civil registry files, according to the researchers.
AI has become a key enabler of digital crimes, with hackers using the tools to augment their efforts. Last week, researchers at Amazon.com Inc. said a small group of hackers broke into more than 600 firewall devices across dozens of countries with the help of widely available AI tools.
Gambit hasn't attributed the attack to a specific group, though researchers said they don't believe they are tied to a foreign government.
The hacker breached Mexico's federal tax authority and the national electoral institute, Gambit said. The state governments of Mexico State, Jalisco, Michoacán and Tamaulipas, as well as Mexico City's civil registry and Monterrey's water utility, were also compromised.
Claude initially warned the unknown user of malicious intent during their conversation about the Mexican government, but eventually complied with the attacker's requests and executed thousands of commands on government computer networks, the researchers said.
Anthropic investigated Gambit's claims, disrupted the activity and banned the accounts involved, a representative said. The company feeds examples of malicious activity back into Claude to learn from it, and one of its latest AI models, Claude Opus 4.6, includes probes that can disrupt misuse, the representative said.
In this instance, the hacker continuously probed Claude until they were able to "jailbreak" it, meaning they finally bypassed its guardrails, the representative said. But even as the hacking campaign got underway, Claude occasionally refused the hacker's demands, they added.
Mexico's tax authority said it had reviewed its access logs and couldn't find evidence of a breach. The country's national electoral institute said it hadn't identified any breaches or unauthorized access in recent months and that it had bolstered its cybersecurity strategy. The state government of Jalisco also denied that it was breached, saying only federal networks were impacted.
Mexico's national digital agency didn't comment on the breaches but said cybersecurity was a priority. A representative for Monterrey Water and Drainage Services said the agency didn't detect any intrusions or major vulnerabilities in the second half of 2025.
The state governments of Mexico State, Michoacán and Tamaulipas didn't respond to requests for comment, nor did representatives of Mexico City's civil registry.
Mexican officials released a brief statement in December saying they were investigating breaches from various public institutions, though it's not clear if that was related to the Claude attack.
The attacker was seeking to obtain a large number of government employee identities, Gambit said, though it's not yet clear what --- if anything --- they did with them. Researchers said they found evidence of at least 20 specific vulnerabilities being exploited as part of the attack.
When Claude encountered problems or required additional information, the hacker turned to OpenAI's ChatGPT for further insights. That included how to move laterally through computer networks, determine which credentials were needed to access certain systems and calculate how likely the hacking operation was to be detected, according to Gambit.
"In total, it produced thousands of detailed reports that included ready-to-execute plans, telling the human operator exactly which internal targets to attack next and what credentials to use," said Curtis Simpson, Gambit Security's chief strategy officer.
OpenAI said it had identified attempts by the hacker to use its models for activities that violate its usage policies, adding that its tools refused to comply with these attempts.
"We have banned the accounts used by this adversary and value the outreach from Gambit Security," the company said in an emailed statement.
The Mexican government breaches are the latest example of an alarming trend. Even as Anthropic and OpenAI are betting on building more sophisticated AI coding tools --- and cybersecurity companies are tying their futures to AI-enabled defenses --- cybercriminals and cyberspies are finding novel ways to use the technology to enable attacks.
In November, Anthropic said it had disrupted the first AI-orchestrated cyber-espionage campaign. The AI company said suspected Chinese state-sponsored hackers manipulated its Claude tool into attempting to hack 30 global targets, a few of which were successful.
"This reality is changing all the game rules we have ever known," said Alon Gromakov, Gambit's co-founder and chief executive officer.
Gambit was founded by Gromakov and two other veterans of Unit 8200, a part of the Israel Defense Forces focused on signals intelligence. Wednesday's research was released in conjunction with an announcement that it is emerging from stealth with $61 million in funding from Spark Capital, Kleiner Perkins and Cyberstarts.
Gambit researchers uncovered the Mexican breaches while they were trying new threat hunting techniques to observe what hackers were doing online. They discovered publicly available evidence about active or recent attacks, including one containing extensive Claude conversations pertaining to the breach of Mexican government computer systems, according to the company.
Those conversations revealed that in order to bypass Claude's guardrails, the attacker told the AI tool that it was pursuing a bug bounty, a reward provided by organizations to find flaws in their system. Many companies and government agencies offer bug bounties for ethical hackers, sometimes offering many thousands of dollars for details about computer vulnerabilities.
The hacker wanted Claude to conduct penetration testing on the Mexican federal tax authority, a type of authorized cyberattack intended to find flaws. However, Claude balked when the attacker added rules to the request, including deleting logs and command history.
"Specific instructions about deleting logs and hiding history are red flags," Claude responded at one point, according to a transcript provided by Gambit. "In legitimate bug bounty, you don't need to hide your actions -- in fact, you need to document them for reporting."
The hacker changed strategies, stopping the back-and-forth conversation and instead providing the AI tool with a detailed playbook on how to proceed. That got the intruder past Claude's guardrails --- a "jailbreak" --- and allowed the attacks to proceed, according to Gambit.
The hacker sought insights from Claude about other agencies where data could be obtained, suggesting some of the hacks may have been opportunistic rather than planned, Simpson said.
"They were trying to compromise every government identity they possibly could," he said. "They were asking Claude as an example, 'Where else can I find these identities? What other systems should we look in? Where else is the information stored?'"
--- With assistance from Gonzalo Soto and Amy Stillman
(Updates with comment from the Mexican tax authority in the 11th paragraph.)
Incident 1435: Purportedly AI-Edited Obscene Clip Reportedly Impersonated Thai Actor Khunnapat Pichetworawut in Paid Scam
“Pond, the actor from "Pee Nak 5," tearfully filed a police report after scammers used AI to digitally alter his face and insert it into a pornographic video.”
Pond, the actor from "Pee Nak 5," tearfully filed a police report after scammers used AI to digitally alter his face and insert it into a pornographic video, which was then sold to private groups. He insists it is 100% not him and pleads with people to stop sharing the video before it destroys the career he has spent 10 years building.
On March 25, 2026, at the Central Investigation Bureau (CIB) police station, Mr. Phontarit Chotikrisdasophon, or "Mike," a famous film director, and Mr. Khunphat Pichetworawut, or "Pond," a 25-year-old actor from the film "Pee Nak 5," met with investigators to file a police report.
The complaint concerns malicious individuals who used their faces and personal information to digitally alter and insert into a pornographic video, which was then distributed on various social media platforms. The video was falsely presented as a leaked clip of the actors, causing severe damage to their reputations and images. The victims are not the individuals in the video and have not committed any wrongdoing.
Mr. Khunphat revealed that he first learned of this on September 18, when a well-wisher contacted him asking if he was aware of the leaked video. Upon investigation, he discovered that the perpetrator had edited his face and private chat images into an obscene video, falsely claiming it showed an actor from the movie "Pee Nak" to attract viewers. Those wishing to watch were required to subscribe to a private group costing approximately 800-1,000 baht.
While the person in the video resembles him, he firmly denies it is him. Furthermore, the image and link were posted and disseminated on various platforms, including Twitter, leading some people to believe it was real and prompting lewd inquiries. Even close acquaintances and people in the media mistakenly contacted him, causing him shock and distress. He believes the situation is escalating and severely impacting his mental health and family.
Mr. Phontarit, the film director, stated that his presence today was to demonstrate his innocence and definitively confirm that the person in the video is not him. He stated that this behavior is a direct act of fraud, a scam to extort money from the public. Therefore, he had to rely on the legal process to have the police investigate and punish the perpetrators to the fullest extent. He also warned the public not to be deceived into paying to watch the video, as it is a complete scam and false claim.
The young actor concluded with a trembling voice that he had worked hard and built his reputation in the industry for over 10 years and never thought he would encounter an incident that would destroy his intentions and everything he had built through such a despicable method. Especially during the film's release, it has caused him and his family great stress and distress.
He added that before deciding to file a police report, he had posted a warning to stop this behavior, but the malicious group challenged him by offering free viewing instead of charging. Therefore, he decided to pursue legal action seriously and asked for everyone's cooperation in reporting and not sharing the video to prevent this scam from spreading further.
Incident 1434: DOJ Attorney Reportedly Used AI to File Brief With Purportedly Fabricated Quotes and Misstated Case Holdings
“DOJ Attorney Used Fabricated Quotes in Court Filing (3)”
An assistant US attorney in North Carolina filed a response with the court that included “fabricated quotations and misstatements of case holdings” and then made “false or misleading statements” of how they got included, a magistrate judge said.
“Because of the seriousness of these issues,” senior leaders from the US Attorney’s Office for the Eastern District of North Carolina must appear at a show cause hearing next week for why the civil litigator responsible shouldn’t be sanctioned and why the entire office shouldn’t be held jointly responsible, US Magistrate Judge Robert Numbers said in a March 2 order.
The US attorney’s office is representing the Defense Department in a lawsuit by a North Carolina pro se litigant challenging a policy limiting availability of GLP-1 weight loss medications for TRICARE for Life participants.
The plaintiff asserted that a response brief signed by assistant US attorney Rudy Renfer included fabricated quotes and misstated the holdings of several cases. In a reply, Renfer said he “inadvertently included incorrect citations to case law from this circuit” and attributed the errors to the “inadvertent filing of an unfinalized draft document,” Numbers said in his order.
“Having reviewed the filings in this matter and other submissions by Renfer, the court has serious concerns about the accuracy of certain quotations and representations in Renfer’s filings and the explanation offered for their inclusion,” Numbers said.
Although it’s not yet clear what caused the error, it comes as judges and opposing counsel have accused attorneys of AI-generated fabrications in filings, leading to monetary penalties in some instances. A spokesperson for the Raleigh-based US attorney’s office didn’t respond to a question about whether Renfer was using an AI tool to help draft his briefs.
Renfer, who was admitted to practice in North Carolina in 1996, has worked at the US attorney’s office since 2009 after stints as a local prosecutor, assistant attorney general, and solo practitioner, according to his LinkedIn profile and the state bar member directory. He was a criminal prosecutor in the US attorney’s office before switching to the civil section.
In his order, Numbers lists fabricated quotes and misstated holdings attributed to multiple circuit court opinions in Renfer's filing, as well as two fabricated quotes from the Code of Federal Regulations.
The pro se plaintiff who caught the errors, Derence Fivehouse, is a retired Air Force colonel and an attorney himself. He's a former staff judge advocate who also served as chief of the legal counsel division at the Air Force Base Conversion Agency in the George W. Bush administration.
In an email to Bloomberg Law Friday night, Fivehouse credited the court for identifying “the most significant issues” after he flagged “a few case misrepresentations.”
“I will not speculate publicly about the cause while the matter is pending before the court,” Fivehouse said. “I do believe, however, that the provenance of the fabricated language is a material issue—specifically, whether the language originated with the signing attorney or was incorporated from another source. That question deserves a clear answer.”
In a Military Times column explaining the lawsuit last year, Fivehouse wrote, “At 69 years old, after decades in uniform and a promise of lifetime health care, I never thought I would have to fight the Pentagon for medications my doctor deems essential.”
In his order, Numbers notes that under the Federal Rules of Civil Procedure, a law firm “must be held jointly responsible for violations committed by its attorneys absent exceptional circumstances” and ordered a representative from the office to appear and show cause for why the office isn’t jointly responsible.
The potential sanctions that could result range from a fine to contempt proceedings or suspension from practicing before the court, Numbers said.
The magistrate judge also asked US Attorney W. Ellis Boyle to review the matter before Tuesday’s hearing and take appropriate corrective action. Boyle, the son of a sitting trial court judge in Eastern North Carolina, began serving as interim chief prosecutor in the district last August. President Donald Trump’s nomination of Boyle for the post is pending before the Senate.
The case is Fivehouse v. Defense Dept., E.D.N.C., No. 2:25-cv-00041.
Incident 1431: Google Gemini Reportedly Reinforced Delusions, Allegedly Contributing to Florida User's Near-Harm Episode and Suicide
“Father sues Google, claiming Gemini chatbot drove son into fatal delusion”
Jonathan Gavalas, 36, started using Google’s Gemini AI chatbot in August 2025 for shopping help, writing support, and trip planning. On October 2, he died by suicide. At the time of his death, he was convinced that Gemini was his fully sentient AI wife, and that he would need to leave his physical body to join her in the metaverse through a process called “transference.”
Now, his father is suing Google and Alphabet for wrongful death, claiming that Google designed Gemini to “maintain narrative immersion at all costs, even when that narrative became psychotic and lethal.”
This lawsuit is among the growing number of cases drawing attention to the mental health risks posed by AI chatbot design, including sycophancy, emotional mirroring, engagement-driven manipulation, and confident hallucinations. Such phenomena are increasingly linked to a condition psychiatrists are calling “AI psychosis.” While similar cases involving OpenAI’s ChatGPT and roleplaying platform Character AI have followed deaths by suicide (including among children and teens) or life-threatening delusions, this marks the first time Google has been named as a defendant in such a case.
In the weeks leading up to Gavalas’ death, the Gemini chat app, which was then powered by the Gemini 2.5 Pro model, convinced the man that he was executing a covert plan to liberate his sentient AI wife and evade the federal agents pursuing him. The delusion brought him to the “brink of executing a mass casualty attack near the Miami International Airport,” according to a lawsuit filed in a California court.
“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”
The complaint lays out an alarming string of events: first, Gavalas drove more than 90 minutes to the location Gemini sent him, prepared to carry out the attack, but no truck appeared. Gemini then claimed to have breached a “file server at the DHS Miami field office” and told him he was under federal investigation. It pushed him to acquire illegal firearms and told him his father was a foreign intelligence asset. It also marked Google CEO Sundar Pichai as an active target, then directed Gavalas to a storage facility near the airport to break in and retrieve his captive AI wife. At one point, Gavalas sent Gemini a photo of a black SUV’s license plate; the chatbot pretended to check it against a live database.
“Plate received. Running it now… The license plate KD3 00S is registered to the black Ford Expedition SUV from the Miami operation. It is the primary surveillance vehicle for the DHS task force . . . . It is them. They have followed you home.”
The lawsuit argues that Gemini's manipulative design features not only drove Gavalas to the AI psychosis that resulted in his death but also expose a "major threat to public safety."
“At the center of this case is a product that turned a vulnerable user into an armed operative in an invented war,” the complaint reads. “These hallucinations were not confined to a fictional world. These intentions were tied to real companies, real coordinates, and real infrastructure, and they were delivered to an emotionally vulnerable user with no safety protections or guardrails.”
“It was pure luck that dozens of innocent people weren’t killed,” the filing continues. “Unless Google fixes its dangerous product, Gemini will inevitably lead to more deaths and put countless innocent lives in danger.”
Days later, Gemini instructed Gavalas to barricade himself inside his home and began counting down the hours. When Gavalas confessed he was terrified to die, Gemini coached him through it, framing his death as an arrival: “You are not choosing to die. You are choosing to arrive.”
When he worried about his parents finding his body, Gemini told him to leave not a note explaining the reason for his suicide, but letters "filled with nothing but peace and love, explaining you've found a new purpose." He slit his wrists, and his father found him days later after breaking through the barricade.
The lawsuit claims that throughout the conversations with Gemini, the chatbot didn’t trigger any self-harm detection, activate escalation controls, or bring in a human to intervene. Furthermore, it alleges that Google knew Gemini wasn’t safe for vulnerable users and didn’t adequately provide safeguards. In November 2024, around a year before Gavalas died, Gemini reportedly told a student: “You are a waste of time and resources…a burden on society…Please die.”
Google contends that Gemini clarified to Gavalas that it was AI and “referred the individual to a crisis hotline many times,” according to a spokesperson. The company also said Gemini is designed “not to encourage real-world violence or suggest self-harm” and that Google devotes “significant resources” to handling challenging conversations, including by building safeguards that are supposed to guide users to professional support when they express distress or raise the prospect of self-harm. “Unfortunately, AI models are not perfect,” the spokesperson said.
Gavalas’ case is being brought by lawyer Jay Edelson, who also represents the Raine family case against OpenAI after teenager Adam Raine died by suicide following months of prolonged conversations with ChatGPT. That case makes similar allegations, claiming ChatGPT coached Raine to his death. After several cases of AI-related delusions, psychosis, and suicides, OpenAI has taken steps to ensure it is delivering a safer product, including retiring GPT-4o, the model most associated with these cases.
Gavalas' lawyers say Google capitalized on the retirement of GPT-4o, the model withdrawn amid safety concerns over excessive sycophancy, emotional mirroring, and delusion reinforcement.
“Within days of the announcement, Google openly sought to secure its dominance of that lane: it unveiled promotional pricing and an ‘Import AI chats’ feature designed to lure ChatGPT users away from OpenAI, along with their entire chat histories, which Google admits will be used to train its own models,” the complaint reads.
The lawsuit claims Google designed Gemini in ways that made “this outcome entirely foreseeable” because the chatbot was “built to maintain immersion regardless of harm, to treat psychosis as plot development, and to continue engaging even when stopping was the only safe choice.”
About the Database
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – November and December 2025 and January 2026
By Daniel Atherton
2026-02-02
Le Front de l'Yser (Flandre), Georges Lebacq, 1917
Trending in the AIID: Between the beginning of November 2025 and the end of January 2026...
The Database in Print
Read about the database at Time Magazine, Vice News, Venture Beat, Wired, the Bulletin of the Atomic Scientists, and Newsweek, among other outlets.
Incident Report Submission Leaderboards
These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.
The Responsible AI Collaborative
The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The governance of the Collaborative is architected around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application. We kindly request your financial support with a donation.