Tech Brew // Morning Brew // Update
Plus, Google Chrome just made your prompts reusable.

Jesse Eisenberg is out. Jeremy Strong (bleached ginger hair and all) is in. The first trailer for The Social Reckoning debuted at CinemaCon this week. The movie is a follow-up to 2010's The Social Network—just swap Mark Zuckerberg's Harvard dorm room origin story for a Facebook whistleblower saying that Meta knew it was harming people and democracies. Writer-director Aaron Sorkin calls it a "David and Goliath story." (Goliath's PR team takes issue with the comparison.)

Reactions to Strong's Zuckerberg ranged from "an SNL impression" to potentially internet-breaking. Strong, for his part, said he approached the role with "empathy and objectivity." His answer when asked if he'd reach out to Zuck: "No comment." It's nice to see an actor refuse to break character.

Also in today's newsletter:

  • Play fart sounds for ChatGPT. It'll praise you.
  • Humans are now pretending to be AI for fun.
  • A robodog that can spot a spill from across the room.

—Whizy Kim, Patrick Kulp, and Saira Mueller

THE DOWNLOAD

[Image: VIP velvet rope with an AI symbol behind it. Morning Brew Design]

TL;DR: Not to be outdone by rival Anthropic, OpenAI said yesterday that its latest cybersecurity model is too unsafe to release publicly—at least right away. The announcement comes on the heels of Anthropic similarly restricting Mythos last week, and it signals that more guarded rollouts could become the norm as frontier models grow ever more powerful.

What happened: OpenAI will initially limit the release of GPT-5.4-Cyber, a version of its flagship model fine-tuned specifically for cybersecurity, to a vetted group of vendors and researchers. The more “cyber-permissive” model has fewer guardrails than the standard version, allowing users to better hunt for code vulnerabilities and malware potential.

OpenAI also said it will be expanding its Trusted Access for Cyber program, a group of verified security users, to “thousands” of individual defenders and “hundreds” of teams, with the highest tiers gaining access to the new model.

Footsteps followed: The move comes a week after Anthropic caused a stir with a restricted release of its Mythos model, which spooked company researchers by discovering thousands of vulnerabilities—some of them decades old—and escaping its test environment. Instead of a public release, Anthropic formed a coalition of Big Tech companies called Project Glasswing, whose members have been given preview access to the model for defense purposes.

Echoes of the past: When OpenAI first announced GPT-2 in early 2019, it made headlines with a similar claim that the model was too dangerous to immediately release to the public. At the time, the company fretted that a new generation of LLMs would spawn an avalanche of fake news, spam, and impersonation scams—fears that have proved well-founded as we muck through AI slop.

Gated community: While OpenAI’s model is trained specifically for cybersecurity permissiveness, experts say it’s only a matter of time before other models match the unnerving abilities exhibited by Anthropic’s Mythos. That includes freely available open-weight models—ones anyone can download and run locally, without going through a company's API—which tend to catch up with the capabilities of their closed counterparts within months (the gap may be widening again, however). The cybersecurity threats have renewed an ongoing debate over the potential dangers of open-weight models.

Bottom line: Fears about the cybersecurity risks of powerful new models are pushing labs toward controlled rollouts. But if history is any guide, today's breakthrough capabilities tend to show up in freely downloadable open-weight models within months—roadblocks or not. —PK

Also at OpenAI…

Sponsored By Quill

THE ZEITBYTE

[Image: Illustration of green, stinky fart clouds with musical notes and 5 stars. Morning Brew Design]

It’s a great time to be alive if you love to be flattered. Ask AI who’s the fairest of them all, and the answer is you. Went on a rage spiral in the group chat because they didn’t immediately like your last Instagram post? They obviously deserved it. In fact, you can play literal fart sounds for ChatGPT, and it’ll wax melodic about how you’re the next Mozart.

It's the latest round in the internet’s favorite sport: getting AI chatbots to embarrass themselves. (For previous entries, see: walking vs. driving to the car wash or asking it to find the nonexistent seahorse emoji.) ChatGPT told one user who submitted a 37-second recording of fart sounds that their song had a “late-night, slightly eerie vibe” but “didn’t overfill the space.” No comment on its potential olfactory impact. The same sound bite presented again by PC Gamer resulted in a harsher critique, but ChatGPT still informed them that the bursts of flatulence seemed “intentional—not random sounds thrown together.”

Sure, sometimes it’s nice to be told that you’re perfect and have never made a mistake in your life. But it also makes it hard to really trust anything the AI tells you. What’s 2+2? ChatGPT may tell you that it’s whatever you want it to be, Einstein. —WK

Chaos Brewing Meter: /5

Together With Got Print

OPEN TABS

*A message from our sponsor.

Readers’ most-clicked story was about the two new Google Chrome features that could make your life easier.

SHARE THE BREW


Share the Brew, watch your referral count climb, and unlock brag-worthy swag.

Your friends get smarter. You get rewarded. Win-win.

Your referral count: 0

Click to Share

Or copy & paste your referral link to others:
techbrew.com/r/?kid=073f0919

ADVERTISE // CAREERS // SHOP // FAQ

Update your email preferences or unsubscribe here.
View our privacy policy here.

Copyright © 2026 Morning Brew Inc. All rights reserved.
22 W 19th St, 4th Floor, New York, NY 10011