AI Gone Rogue: Cautionary Tales of Misuse and Blunders as GPT-5 Looms
Carmen Hughes
GPT-5 is coming! When? No one outside of OpenAI knows for now, but according to Business Insider "sources," sometime in "mid-2024." So is the timing T-minus 60 days? We'll have to see when GPT-5 makes its debut. One glaring shortcoming of GPT-3.5 was its wild hallucinations, in which it concocted fake statistics, quotes, URLs, nonexistent reports, and more. The warning was well known to most: GPT hallucinates.
Let's review some of the wildest misdeeds that corporations, lawyers, universities, and others have committed while harnessing AI. From cutting corners in the courtroom to fabricating AI-generated people and deepfakes, organizations and individuals have been publicly outed for their AI misuse.
Legal Blunders
Legal Beagles Busted: AI Gets Checked in the Courtroom
It's hard to believe who is hallucinating in this legal tale. Two lawyers, Steven Schwartz and Peter LoDuca, of the law firm Levidow, Levidow & Oberman were busted big time after submitting a brief filled with bogus quotes, bogus citations, and nonexistent judicial opinions. Worse, they attempted to defend the nonexistent opinions after being called out for their shortcuts. The judge admonished and fined the lawyers and the Levidow firm, and made the lawyers write letters of apology to the six judges falsely identified as authors of the fake opinions.
Lawyer's AI Mishap Gets Him Suspended and Fired
Last November, attorney Zachariah Crabill faced a double whammy: the Colorado state bar suspended him for one year, and his law firm fired him, after he admitted using ChatGPT to draft a motion filed in civil court. The AI-generated motion cited incorrect and fictitious cases, which Crabill failed to fact-check before submitting it the previous spring. Before a hearing, Crabill discovered that the cited cases were wrong, yet he chose neither to disclose the errors nor to withdraw the motion. When questioned by the judge, he initially blamed a legal intern but later confessed to relying on ChatGPT. Despite the setback, Crabill believes AI can make legal services more affordable. He has since launched his own firm, which advocates for using AI responsibly as a "virtual legal assistant" to help level the playing field for lower-income clients.
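Neither blunder required sophisticated tooling to prevent: even a crude automated cross-check against a public case-law database would have flagged the fabricated citations. Below is a minimal, hypothetical sketch of that kind of check in Python, assuming CourtListener's public search API; the endpoint and parameters are assumptions to verify against its documentation, not code from either case.

```python
# Hypothetical pre-filing citation check, sketched against CourtListener's
# public search API (endpoint and parameters are assumptions, not verified).
import requests

SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"

def citation_exists(citation: str) -> bool:
    """Return True if the search finds at least one matching opinion."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": citation, "type": "o"},  # "o" = case-law opinions
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

# "Varghese v. China Southern Airlines" was one of the fabricated cases
# in the Schwartz/LoDuca filing; a zero-hit result is the cue to stop
# and verify by hand before anything reaches a judge.
for cite in ["Varghese v. China Southern Airlines, 925 F.3d 1339"]:
    if not citation_exists(cite):
        print(f"WARNING: no record found for {cite}; verify manually.")
```

A zero-hit result doesn't prove a citation is fake, and a hit doesn't prove the quoted language is real, but either way the check forces exactly the human review both lawyers skipped.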
Fake Faces, Real Consequences: The Pitfalls of AI-Generated Personas
Tech Conference Exposed: AI-Generated Female Speakers
If you didn't read about DevTernity, a software coding conference, here's what transpired. DevTernity's founder, Eduards Sizovs, was called out by 404 Media not only for posing as a female coder on Instagram but also for concocting a ruse to make his organization appear to be a proponent of diversity. The goal was to create the impression that the speaker slate was balanced and included women, to appeal to panelists and attendees. Rather than making the effort to identify, vet, and secure qualified female speakers and panelists, Sizovs took a shortcut: he relied on AI-generated fake profiles of female speakers to falsely project a diverse DevTernity lineup. The conference imploded, with key speakers canceling, and the organization's credibility was badly damaged.
Sports Illustrated's Backlash Edition
Unfortunately, Sports Illustrated (SI) was also busted for misleading readers across dozens of articles. In a cost-saving move, the highly regarded 70-year-old brand used AI to generate stories, and it went further still: Futurism uncovered and reported that the magazine published articles under fake author names with AI-generated profile headshots. SI's owner, The Arena Group, blamed a vendor, but many question its management and quality control, or lack thereof, over the content. Fact-checking was born in the publishing industry, so claiming not to know that the bylined writers never existed strains credulity.
Deepfakes Go Wild
Political Deepfakes Caught and Shut Down
Two Silicon Valley entrepreneurs, working through a super PAC, built a personalized ChatGPT-powered chatbot that mimicked a Democratic hopeful running for president, and OpenAI suspended the developer account behind it. Visitors to the chatbot were shown a disclaimer, but the super PAC's actions still ran directly against OpenAI's public notice barring the use of its personalized chatbots to impersonate politicians. Unfortunately, OpenAI's enforcement may not stop people from using open-source tools to create deepfake chatbots for political purposes in the future.
Athletic Director Busted for Pushing Racist Deepfake Rant of Principal
In April 2024, police in Baltimore County arrested a high school athletic director for using AI to create and spread a racist deepfake audio recording impersonating his principal. Police said the director, Dazhon Darien, retaliated against the principal, Eric Eiswert, for investigating him over suspicious payments. Darien used ChatGPT and Bing Chat to help generate the vile fake rant, emailed it to himself and select school staff while posing as a whistleblower, and then watched it go viral. The principal faced threats and needed police protection until forensic experts confirmed the audio was an AI fake. Darien now faces charges for disrupting school operations and stalking the principal in a disturbing case of deepfake revenge slander. As deepfakes take root, we should all be careful not to jump to conclusions and accept a doctored video or recording as the real thing.
AI Fiascos: From Wrongful Arrests to Killer Recipes
Pregnant Mom Arrested Due to Faulty AI-Based Facial Recognition
A mom, eight months pregnant, hears a knock at her door. She opens it to discover the police, who are there to arrest her. Her crime? Carjacking. The problem: she didn't commit it. The gross error happened in Detroit, where AI-based facial recognition software identified the wrong Black woman. In the police department's defense, the carjacking victim ID'd the pregnant mom from a lineup of six photos. The Detroit Police Department, however, had relied on a mugshot in its database and skipped the step of comparing that photo to the pregnant mom's driver's license on file. The department now faces three lawsuits, all involving mistaken identities.
Pak'nSave AI Recipe Bot Generates a Chlorine Gas-Based Libation
What started as a good idea, an AI-powered site that lets people plug in the ingredients they have on hand and get back a recipe, turned into a hazardous concoction. Pak'nSave's Meal-bot combined AI technology with smart, money-saving strategies to help households use up their food. To test the recipe generator, a reporter entered water, bleach, and ammonia as ingredients, and the Meal-bot obligingly concocted an "aromatic water mix." The resulting recipe produces chlorine gas, which triggers coughing, eye and nose irritation, and breathing difficulties, and can be fatal. The lesson is that companies must build rules and safeguards into their AI applications to protect consumers who may not know better; a simple disclaimer doesn't go far enough, as the sketch below illustrates.
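What might such a safeguard look like? Here is a minimal, hypothetical sketch of an input guardrail that refuses non-food ingredients before any prompt reaches the model; the blocklist, function names, and messages are illustrative assumptions, not Pak'nSave's actual code.

```python
# Hypothetical input guardrail for a recipe generator; the blocklist and
# names below are illustrative, not Pak'nSave's production code.
HAZARDOUS_INGREDIENTS = {
    "bleach", "ammonia", "detergent", "drain cleaner",
    "glue", "insect spray", "rubbing alcohol",
}

def is_safe_ingredient(ingredient: str) -> bool:
    """Reject any ingredient that matches a known-hazardous term."""
    normalized = ingredient.strip().lower()
    return not any(term in normalized for term in HAZARDOUS_INGREDIENTS)

def build_recipe_prompt(ingredients: list[str]) -> str:
    """Refuse outright rather than pass dangerous inputs to the model."""
    unsafe = [i for i in ingredients if not is_safe_ingredient(i)]
    if unsafe:
        raise ValueError("Refusing non-food ingredients: " + ", ".join(unsafe))
    return "Suggest a recipe using only: " + ", ".join(ingredients)

if __name__ == "__main__":
    try:
        build_recipe_prompt(["water", "bleach", "ammonia"])
    except ValueError as err:
        print(err)  # Refusing non-food ingredients: bleach, ammonia
```

A static blocklist is the crudest possible filter, and a production system would pair it with an allowlist of known foods or a moderation model, but even this much would have stopped the "aromatic water mix" before it was ever generated.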
Vanderbilt University Apologizes for Using ChatGPT to Write Mass-Shooting Condolence
Rather than have its staff personally address the mass shooting at Michigan State University, Vanderbilt University's Peabody College used AI to draft a condolence message sent via mass email. The message contained factual errors and noted, at the bottom, that it had been prepared using OpenAI's ChatGPT. The approach was insensitive given that the event was a human tragedy, and it put the university in a bad light, calling into question its judgment and empathy.
Using AI to Do Bad: UnitedHealthcare Deploys Faulty AI to Profit
An ongoing class action lawsuit alleges that health insurer UnitedHealthcare knowingly uses an AI algorithm to wrongfully deny elderly patients care owed to them under Medicare Advantage plans. The suit claims the insurer systematically overrides physicians' recommendations, denying elderly patients the extended stays in critical care facilities they need, and that UnitedHealthcare relies on its faulty AI model despite knowing its roughly 90% error rate. The alleged scheme lets UnitedHealthcare collect premiums without paying for the critical healthcare that seniors need. According to the plaintiffs, UnitedHealthcare still employs the faulty AI to maximize profits at the expense of elderly patients.
Navigating the Future of AI: Lessons Learned and the Path Forward
The rise of powerful AI tools has brought both excitement and concern. While these tools will transform most industries and make our lives easier, they also carry significant risks when misused or applied without proper safeguards. The examples we've explored, from lawyers submitting fake cases to deepfakes, highlight the importance of fact-checking AI outputs, implementing robust guardrails, and being transparent about the use of AI in decision-making. These cautionary tales teach important lessons: we must commit to AI's responsible development and deployment so we can harness its power for good while reducing the risks of misuse and unintended consequences.
The future of AI is bright, but it's up to all of us to ensure that it's a future we can trust. Let's learn from the mistakes of the past and work together to build an AI-powered world that benefits everyone.