Tech Brew // Morning Brew // Update
Plus, people are sabotaging their companies’ AI efforts

Stars—they love a YouTube rabbit hole, just like us. The hottest duo at Coachella this weekend wasn’t Beyoncé and Jay-Z, but Justin Bieber’s MacBook and his YouTube Premium subscription. Clad in a hoodie and shorts, the pop star used his laptop to dig up ancient internet clips—like a loop of himself slamming into a glass door in 2010—all of it blessedly ad-free, with Bieber providing live commentary. Other highlights from his 30-minute set within a set: He took song requests from a live chat, sang along with his 14-year-old self performing “Baby,” and griped when the wi-fi got spotty (relatable).

Turns out the most talked-about Coachella performance has a production budget of just over $1,000: $15.99/month for the premium video subscription and a grand for the mid-range laptop. (AppleCare sold separately.)

Also in today's newsletter:

  • Stop letting your photos accidentally reveal your location.
  • Office workers stay one step ahead.
  • OpenAI’s latest internal memo accuses Anthropic of juicing its numbers.

—Whizy Kim and Alex Carr

THE DOWNLOAD

Tim Cook

Nic Coury/Getty Images

TL;DR: Apple is developing its first smart glasses, set to launch in 2027, and new details suggest a deliberate play to out-execute Meta rather than out-innovate it. And Apple may end up owning the category, despite arriving late, thanks to features like iPhone integration and the company’s design obsession.

What’s coming: Bloomberg first reported that Apple's glasses, internally called N50, are display-free, making them closer to Meta's Ray-Ban collab than anything resembling Apple’s Vision Pro headset (which is…for the best). Reminder: The Vision Pro launched in 2024 for $3,499, sold poorly, and gave some wearers literal black eyes.

New details reveal that the smart glasses will be built for everyday use: photos, video, music, calls, and a hands-free Siri. Apple is testing at least four frame styles (including one that apparently resembles Tim Cook’s pair), with color options like black, blue, and brown. The camera system will use vertically oriented oval lenses, and the frames will be made of “luxurious” acetate instead of plastic. Apple is expected to officially unveil the project in late 2026, with retail availability in 2027.

The larger puzzle: Per Bloomberg, the glasses are one piece of Apple’s three-part AI wearables push, which also includes next-gen AirPods and a camera-equipped pendant. Every device in that lineup takes in the world around you and feeds that information directly into Siri and Apple Intelligence. And unlike Meta, which leaned on lens makers for its frames, Apple is going it alone on design.

Is Zuck freaking out?: He's probably not hitting the jiu-jitsu mat to take out his anger just yet. (Also worth noting: It’s not the first time these tech CEOs have gone head to head.) After all, Meta's Ray-Ban glasses have become an unexpected hit, selling out repeatedly. Meta has the runway, the partnerships, and a loyal user base.

But Apple has the iPhone. If Apple’s glasses are properly integrated with an updated Siri, they’ll become an extension of what’s already in a billion people’s pockets. And historically, seamless ecosystem integration is how Apple has won categories it didn't create. The Apple Watch wasn't the first smartwatch, and AirPods weren't the first wireless earbuds. But both became the default.

The catch: Apple’s success in the glasses category largely depends on a Siri that actually works—something Apple has been promising and underdelivering on for years.

Bottom line: Apple entering smart glasses isn't a surprise. The question is whether it’s diving into a category that Meta’s already won or one that's still up for grabs. But Apple’s track record shows it doesn’t need to get there first; it just needs to get it right, so the smart money might be on Cook and Co. —AC

Sponsored By PayMore

LIFE HACK

Smile. Your photos are tracking you

If you've ever sent a photo to a stranger buying your old couch off Marketplace, a photo of a leak to a plumber you found on Yelp, or a photo of your dog being cute to someone you met on a dating app…you may have also just told the recipient exactly where you are.

Every photo taken on an iPhone or Android comes loaded with metadata: hidden information baked into the file, including (as long as your camera has location access) the precise coordinates of where it was snapped. Search any city you've visited in your Photos app and watch every picture from that trip instantly surface. Great for finding that blurry photo you took of the Colosseum after too many limoncellos during your study abroad. Less great when you're accidentally broadcasting your home address.
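
If you’re curious what that baked-in data actually looks like, here’s a rough sketch in Python using the third-party Pillow library (pip install Pillow). The file name and camera-app tag are made up for illustration, and we plant a sample tag ourselves rather than using a real phone photo:

```python
from PIL import Image

# Hypothetical stand-in for a photo off your phone: a tiny JPEG
# with one piece of metadata baked into the file.
img = Image.new("RGB", (4, 4), "navy")
exif = Image.Exif()
exif[0x0131] = "Acme Camera App"  # 0x0131 is the standard "Software" EXIF tag
img.save("snapshot.jpg", exif=exif)

# Anyone who receives the file can read that metadata right back out.
received = Image.open("snapshot.jpg")
print(received.getexif().get(0x0131))  # Acme Camera App
# On a real phone photo, the GPS coordinates live in their own EXIF
# sub-directory, reachable via received.getexif().get_ifd(0x8825).
```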

Surprisingly, social media isn’t the main risk here. Instagram, TikTok, X, and Facebook all remove your location data from the public-facing version of your photo, so people can’t pull your coordinates from a post. (Though, it’s worth noting that the platforms do have your original file.) But email is the real culprit: Attach a photo, and the full metadata travels with it, untouched.

The good news: Apple Photos lets you remove your location data directly from the photo. Open the app, tap any photo you’ve taken, and tap Info in the middle of the navigation bar on the bottom. You’ll see a map of where the photo was taken; tap Adjust, then tap No Location. Google Photos doesn't offer this for locations automatically added by your camera, so your workaround there is a third-party tool from the Play Store or on the web, which can scrub metadata before you share. Or, if you want the laziest possible fix (no judgment): Take a screenshot of the photo before sending it. A screenshot is just pixels, no metadata.
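
If you'd rather script the scrub than dig through menus, the screenshot trick has a one-function equivalent: copy only the pixels into a fresh image, which starts life with zero metadata. A minimal sketch using the third-party Pillow library (pip install Pillow); the function and file names are our own:

```python
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Save a copy of the photo that carries pixels only: no EXIF,
    and therefore no GPS coordinates."""
    img = Image.open(src_path)
    clean = Image.new(img.mode, img.size)  # blank image, no metadata
    clean.putdata(list(img.getdata()))     # copy the pixel values across
    clean.save(dst_path)

# e.g. strip_metadata("couch_for_sale.jpg", "couch_for_sale_clean.jpg")
```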

Finally, there’s always the nuclear option: Go into your phone's settings and revoke your camera's location access entirely. Say goodbye to your searchable photo map, but if privacy is the priority, it's the cleanest fix. —AC

THE ZEITBYTE

Woman looking mischievous in office at computer

Getty Images

Here’s how to spot an AI saboteur at work: They skip AI training and tell you they’ll circle back next quarter. They generate low-quality outputs on purpose, then shrug and declare that the tech is just not there yet. In a recent survey of 2,400 knowledge workers in the US and Europe, nearly one-third of employees admitted to undermining their company’s AI rollout in some way. The tactics vary—some, for example, intentionally feed public chatbots their company’s proprietary info—but the motivating force is FOBO (that’s “fear of becoming obsolete,” of course).

The reasons workers cite for their anti-bot civil disobedience are a mix of existential dread and valid frustrations. Not just fear that AI will take their job, but that it devalues their creativity, that their company’s AI strategy is poorly executed, and that the tools pushed onto them are bad. To no one’s surprise, the most rebellious cohort was Gen Z, 44% of whom copped to sabotaging their firm’s AI strategy.

Also according to the report, about three-quarters of the C-suite say employee sabotage poses a serious threat to their company’s future. At the same time, roughly the same share of execs confessed that their AI strategy is more for show—to impress investors and generate LinkedIn content—than a real internal playbook. In other words, it’s hard to blame the arsonists when the house was already on fire. —WK

Chaos Brewing Meter: /5

OPEN TABS


SHARE THE BREW

Share the Brew, watch your referral count climb, and unlock brag-worthy swag.

Your friends get smarter. You get rewarded. Win-win.

Copy & paste your referral link to others:
techbrew.com/r/?kid=073f0919

         
ADVERTISE // CAREERS // SHOP // FAQ

Update your email preferences or unsubscribe here.
View our privacy policy here.

Copyright © 2026 Morning Brew Inc. All rights reserved.
22 W 19th St, 4th Floor, New York, NY 10011