A Nebraska lawyer filed an appellate brief with 63 case citations.
57 of them had problems. 20 were full hallucinations — cases that exist in no jurisdiction.
ChatGPT invented them. He didn't check. The Nebraska Supreme Court suspended him indefinitely. His client is on the hook for $52,000.
You'd think this only happens to solo practitioners cutting corners. It doesn't. The same pattern just hit Sullivan & Cromwell — the Wall Street firm that literally advises OpenAI on the safe and ethical deployment of AI. They filed a federal brief with 40+ AI-generated errors. The firm OpenAI hires to keep its AI in line couldn't catch its own AI hallucinations.
If it's happening at S&C, it's happening in your reports, your slide decks, and your client emails. The consequences won't always be a license. Sometimes it's your job. Or your reputation. Or a $52,000 invoice with your name on it.
TLDR: A Nebraska lawyer just got suspended for filing a brief with 57 defective AI-generated citations. Sullivan & Cromwell — the firm that advises OpenAI on safe AI deployment — just got caught doing the same thing in federal bankruptcy court. Today: the 5-point fact-check protocol that catches every type of AI hallucination before it ships, plus a proof table showing it run on four different professional documents.
Quick Recap — This Week On AISS
The 30-second catchup if you missed any:
Mon — Why AI Just Stepped Off The Screen — A Tokyo robot beat pro athletes at chess, Go, and StarCraft simultaneously
Tue — Why AI Beat ER Doctors 67% to 55% — OpenAI's cheapest reasoning model out-triaged board-certified physicians
Wed — Why Apple Just Praised a Rival's AI — Mac minis sold out worldwide because of one $20 tool
Thu — Wall St Just Spent $5.5B to Skip McKinsey — Anthropic and OpenAI both raised consulting-killer war chests
A week of AI winning. Today: AI losing — and taking careers down with it.
What Actually Happened
Omaha attorney Greg Lake represented a client in a Nebraska divorce appeal. He used ChatGPT to draft his brief — including the legal research. The AI gave him 63 case citations: proper case names, court names, dates, and reporter numbers.
He filed it. Opposing counsel tried to look up the cases. Most didn't exist as cited.
Of the 63 citations, 57 were defective: 20 were full hallucinations, entirely invented cases that exist in no jurisdiction; another 4 were completely fabricated decisions; and the rest mostly cited real cases for holdings those cases never made.
His explanation made it worse. Lake initially blamed a broken laptop and a wedding-anniversary trip. He repeatedly denied using AI. Months later he reversed course and admitted he had — calling it "a grave error of judgment." The Nebraska Supreme Court was unimpressed. Indefinite suspension. His client got hit with $52,000 in opposing counsel's fees and sanctions.
This isn't an isolated case.
Why "We Have AI Governance" Isn't Enough
You'd expect this from solo practitioners. The shock is who else just got caught.
1,300+ cases since 2023
Sanctioned AI-hallucination court filings tracked by legal academic Damien Charlotin at HEC Paris. Roughly 800 come from US courts. The pace has reached "ten cases from ten different courts on a single day."

THE TWIST · WHO ELSE GOT CAUGHT
Sullivan & Cromwell — one of the most prestigious firms in America, and the law firm OpenAI hires to advise on the "safe and ethical deployment" of AI — just filed a federal bankruptcy brief with 40+ AI errors, including citations to cases that don't exist.
Their internal AI safeguards weren't followed. Opposing counsel caught it, not S&C's own review. Even the firm OpenAI pays to keep its AI in line couldn't catch its own AI hallucinations.
The pattern is now everywhere:
- A Stanford expert witness's AI-generated declaration was thrown out in federal court for fabricated sources
- Multiple Am Law 100 firms have filed sanctioned briefs with hallucinated citations — the largest single sanction is now $109,700
- Federal judges in Texas, California, and New York are requiring "AI use disclosures" on every filing
The lesson isn't "don't use AI." Every one of these professionals will keep using AI — they have to. The productivity gains are real and the competition is using it too. The lesson is that using AI without a verification protocol is now a career-ending shortcut.
If S&C with its in-house safeguards can't catch this, your team — running on Slack reminders and good intentions — definitely can't.
The Prompt: AI Fact-Check Verifier
This is the protocol to run on every AI output before it leaves your computer. Open a fresh chat (not the one that produced the content), paste the AI-generated draft, and let it run a 5-point check. You get a GREEN/YELLOW/RED report on every claim, plus a top-line ship/fix/kill verdict.
The "fresh chat" part matters. Running the verification in the same session that produced the content lets the AI rationalize its own hallucinations. Fresh context, no mercy.
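If you run the verifier through an API rather than the chat UI, "fresh chat" simply means building a brand-new message list with no history from the session that produced the draft. A minimal Python sketch, assuming an OpenAI-style chat API that takes role/content messages; the helper name and the abbreviated prompt text are illustrative, not part of any library:

```python
# "Fresh chat" = a new conversation containing only the verifier
# instructions and the draft. No prior messages are carried over,
# so the model can't rationalize hallucinations it produced earlier.

VERIFIER_PROMPT = (
    "You are an AI Fact-Check Verifier. Check every named entity, "
    "statistic, quote, logical claim, and legal reference in the "
    "content below. Mark each GREEN, YELLOW, or RED."
)

def fresh_verification_messages(draft: str) -> list[dict]:
    """Return a two-message conversation: verifier prompt + draft only."""
    return [
        {"role": "system", "content": VERIFIER_PROMPT},
        {"role": "user", "content": draft},
    ]

messages = fresh_verification_messages(
    "Smith v. Jones, 123 N.W.2d 456 (Neb. 1999) held that ..."
)
assert len(messages) == 2  # nothing leaked in from the drafting session
```

The point of the two-message shape is structural: if the list is longer than the prompt plus the draft, history from the drafting session is leaking into the review.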
You are an AI Fact-Check Verifier. I'm going to paste a piece of
AI-generated content. Your job is to flag every claim that could
be a hallucination before it ships.
First, ask me three things, one at a time, and wait for each answer:
1. What's the document type? (legal brief, business report, marketing
copy, blog post, email, internal memo, other)
2. What's the consequence of a wrong fact getting through?
(career-ending, brand-damaging, mildly embarrassing, low-stakes)
3. Who's the audience? (client, regulator, public, internal team)
Once I've answered, say: "Ready. Paste the content." Then wait.
When I paste, run this 5-point verification protocol:
1. NAMED ENTITIES — Every person, company, court, agency, institution
mentioned. Flag any you cannot verify exists. Mark each GREEN
(verified), YELLOW (likely real, hard to confirm), or RED
(no evidence of existence).
2. STATISTICS — Every number, percentage, dollar figure, date.
Flag any without a clearly attributable source. Demand citations.
3. QUOTES — Every direct quotation. Flag any that you cannot trace
to a documented source. Quotes without sources are RED by default.
4. LOGICAL CLAIMS — Every causal or comparative claim ("X causes Y,"
"X is bigger than Y"). Test each against your training data.
Flag the ones that depend on facts you can't independently verify.
5. REGULATORY/LEGAL REFERENCES — Every law, case, regulation,
standard, or precedent cited. These are the highest-risk
hallucinations. Flag any you cannot confirm by name and source.
Output format: For each item give me:
- The exact text of the claim
- Verification status (GREEN / YELLOW / RED)
- What I need to do to verify it manually if it's not GREEN
- A confidence score 1-10
End with a top-line verdict: SHIP IT, FIX THE YELLOWS, or
DO NOT FILE — and the single biggest risk if I ignore your report.
Thirty seconds of setup. Two minutes of review. The lawyer who got suspended would have had a RED on every one of his 20 fully hallucinated citations.
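The top-line verdict at the end of the prompt follows a simple rule: any RED kills the document, any YELLOW demands fixes, all GREEN ships. A sketch of that logic in Python; the claim texts and statuses below are invented examples, since the real report comes from the model:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    status: str  # "GREEN", "YELLOW", or "RED"

def verdict(claims: list[Claim]) -> str:
    """One RED fails the whole document; YELLOWs require manual fixes."""
    statuses = {c.status for c in claims}
    if "RED" in statuses:
        return "DO NOT FILE"
    if "YELLOW" in statuses:
        return "FIX THE YELLOWS"
    return "SHIP IT"

report = [
    Claim("Smith v. Jones, 123 N.W.2d 456", "RED"),  # unverifiable citation
    Claim("Filed March 2024", "YELLOW"),             # date needs manual check
    Claim("Nebraska Supreme Court", "GREEN"),        # verified entity
]
print(verdict(report))  # DO NOT FILE
```

Notice the asymmetry: a single RED outweighs any number of GREENs, which is exactly why 20 hallucinated citations out of 63 is not a 68% problem but a 100% one.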
Proof Table — Same Prompt, Four Documents
AI FACT-CHECK VERIFIER · 4 PROFILES TESTED

| Profile | Document | What It Caught | Verdict |
| --- | --- | --- | --- |
| Solo Lawyer | Appellate brief | 4 RED case citations, 2 YELLOW dates | DO NOT FILE |
| Marketing Manager | Pitch deck | 1 RED competitor stat, 3 YELLOW market sizes | FIX THE YELLOWS |
| Financial Analyst | Earnings summary | 0 RED, 2 YELLOW analyst names | SHIP IT |
| Software Engineer | Architecture doc | 1 RED API reference, 0 YELLOW | DO NOT FILE |

Same prompt. YOUR documents. Try it before your next big send.
ChatGPT didn't lose his license.
Trusting it did.
About This Newsletter
AI Super Simplified is where busy professionals learn to use artificial intelligence without the noise, hype, or tech-speak. Each issue unpacks one powerful idea and turns it into something you can put to work right away.
From smarter marketing to faster workflows, we show real ways to save hours, boost results, and make AI a genuine edge — not another buzzword.
Get every new issue at AISuperSimplified.com — free, fast, and focused on what actually moves the needle.
If you enjoyed this issue and want more like it, subscribe to the newsletter.
Brought to you by Stoneyard.com