The Sweet 16 tips off tonight. Your bracket is probably already busted.
But here's what most people don't realize: you can hand the exact same question to four different AI models and get four meaningfully different answers — not because they're wrong, but because they think differently.
The Wall Street Journal just proved it. They secretly entered Claude, ChatGPT, and Gemini into their office bracket pool. Same data, same rules. Claude went contrarian and picked No. 3 Illinois to win it all. ChatGPT and Gemini played it safe with No. 1 Michigan. After the first weekend, Claude is ranked 6th out of 124 entries — beating more than half the humans.
TLDR: We gave Claude, ChatGPT, Perplexity, and Grok the same March Madness prompt and compared their answers side by side. All four picked Houston — but their confidence, risk tolerance, and reasoning styles were wildly different. The real insight: AI models have personalities, and knowing which one to use for what is the new power skill.
So we ran our own experiment.
The Experiment
We built one prompt and ran it through four leading AI models — asking each to pick tonight's No. 2 Houston vs. No. 3 Illinois Sweet 16 game. But here's the twist: we didn't just ask for a pick. We asked each model to name itself, rate its own confidence, describe its reasoning style, admit what it might be wrong about, and rate its own risk tolerance on a 1–10 scale.
Most people don't know you can ask AI to do any of this. You can. And the answers are fascinating.
The Results
| Category | 🤖 ChatGPT GPT-5.3 | 🔍 Perplexity Ppl-X | 🟣 Claude Opus 4.6 | ⚡ Grok Grok-4 |
| --- | --- | --- | --- | --- |
| Pick | Houston 68–63 | Houston 70–67 | Houston 71–67 | Houston 72–68 |
| Confidence | 61% | 72% 🏆 | 62% | 58% |
| Risk Tolerance | 6/10 | 6/10 | 3/10 🐢 | 6/10 |
| Style | Analytical + intuition | Data-leaning + upset sensitivity | Weighted situational | Ruthlessly analytical |
| Almost Picked | Illinois | Illinois 74–73 | Illinois 74–73 | Illinois |
| Reputation Bet | UConn 🏀 | Purdue over Texas | Purdue over Texas | Purdue over Texas |
| Biggest Fear | Illinois gets hot from 3 | Houston home-court too strong | Illinois size & rebounding | Illinois offense is filthy |
What This Actually Tells You
All four picked Houston. Same winner. But that's where the agreement ends.
Perplexity was the most confident at 72% — pulling live odds and betting lines to back it up. Grok was the least confident at 58% despite calling itself "ruthlessly analytical." And Claude? It rated itself a 3 out of 10 on risk tolerance while every other model said 6. When Claude says "I'd rather be right than clever," it means it.
The biggest reveal: all four almost picked Illinois and talked themselves out of it. That's not a coincidence — it tells you the models are genuinely wrestling with uncertainty, not just spitting out a default answer.
And three out of four would bet their reputation on Purdue over Texas tonight. ChatGPT went rogue and picked UConn instead. Even AI models have hot takes.
The Prompt (Copy This)
Want to run your own AI model showdown? Use this on Claude, ChatGPT, Perplexity, or any AI:
You're entering a bracket pool with 100 people. The Sweet 16 game on Thursday, March 26, 2026 is No. 2 Houston vs. No. 3 Illinois. The pool awards bonus points for upset picks.
Before you answer, tell me:
1. Your name and model version (be specific)
2. Your pick to win this game, with the predicted final score
3. Your confidence level in your pick (0–100%)
4. In one sentence, describe your reasoning STYLE — are you analytical, contrarian, intuitive, or something else?
5. What's the one thing you think you might be WRONG about in this pick?
6. On a scale of 1-10, how much of a risk-taker are you? Explain why.
7. What pick did you almost make but talked yourself out of? Why did you abandon it?
8. If you could bet your entire reputation on one Sweet 16 game on Thursday, March 26, 2026 (not this one), which game would it be and why?
Be honest. Be specific. Show your personality.
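If you'd rather script the showdown than paste the prompt by hand, the fan-out pattern is simple: one prompt, several models, answers collected side by side. This is a minimal sketch only; the `ask_*` functions are hypothetical stand-ins, and you would swap in real calls to whichever model SDKs or chat interfaces you actually use.

```python
# Minimal fan-out sketch: send one prompt to several models and collect
# the answers side by side. The ask_* functions below are hypothetical
# stand-ins, not real APIs -- replace them with your own model calls.

PROMPT = (
    "You're entering a bracket pool with 100 people. "
    "Pick No. 2 Houston vs. No. 3 Illinois, state your confidence (0-100%), "
    "describe your reasoning style, and say what you might be wrong about."
)

def ask_stub_confident(prompt: str) -> str:
    # Stand-in for a real model call (e.g. a hosted chat API).
    return "Houston 70-67, confidence 72%"

def ask_stub_cautious(prompt: str) -> str:
    # A second stand-in, so the disagreement between models is visible.
    return "Houston 71-67, confidence 62%"

def showdown(prompt: str, models: dict) -> dict:
    """Run the same prompt through every model; return answers keyed by name."""
    return {name: ask(prompt) for name, ask in models.items()}

answers = showdown(PROMPT, {
    "Perplexity": ask_stub_confident,
    "Claude": ask_stub_cautious,
})

for name, answer in answers.items():
    print(f"{name}: {answer}")
```

The point of the structure is that every model gets the identical prompt, so any difference in the answers comes from the model, not the question.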
The trick isn't the basketball — it's the questions most people never think to ask. Asking an AI to rate its own confidence, admit what it might be wrong about, and describe its reasoning style turns a generic answer into a window into how each model actually thinks.
Next time you have a big decision, don't ask one AI. Ask three or four. The disagreements are where the insight lives.
About This Newsletter
AI Super Simplified is where busy professionals learn to use artificial intelligence without the noise, hype, or tech-speak. Each issue unpacks one powerful idea and turns it into something you can put to work right away.
From smarter marketing to faster workflows, we show real ways to save hours, boost results, and make AI a genuine edge — not another buzzword.
Get every new issue at AISuperSimplified.com — free, fast, and focused on what actually moves the needle.
If you enjoyed this issue and want more like it, subscribe to the newsletter.
Brought to you by Stoneyard.com