Tech Brew // Morning Brew // Update
And the AI music reckoning.

We could soon be living in Westworld. Researchers at Columbia University have built a humanoid robot face that can move its mouth in time with its speech. The robot, called Emo, can form lip shapes for 24 consonants and 16 vowels, letting it speak multiple languages (watch at your own risk here). The goal is to reduce the “uncanny valley” effect and make talking to robots feel more natural (something that still needs a lot of work based on what we saw at CES last week).

Meanwhile, a viral video of a hyper-realistic robot head looking around and blinking has the internet collectively whispering, “Nope.”

Also in today's newsletter:

  • AI in schools can lead to cognitive decline.
  • Bandcamp is the first major music platform to ban AI-generated music.
  • Subscription fatigue: Spotify is raising its prices again.

—Whizy Kim, Annie Saunders, and Saira Mueller

THE DOWNLOAD

An illustration of a child on a laptop

Getty Images

TL;DR: Smartphones sparked a raging debate over how quickly tech can scramble childhood development—but AI could do it even faster. A new study that consulted over 500 people across 50 countries warns that using AI in the classroom is a shortcut with a steep cost: Kids offload thinking, bond with yes-bots, and leave behind permanent data trails long before they understand the stakes. But it’s not over for the youth yet.

What happened: The report, from the Brookings Institution's Center for Universal Education, surfaces an important—if unsurprising—finding: It’s probably wise to keep AI out of classrooms. It’s basically a forward-looking “what could go wrong” exercise—researchers spoke with students, educators, and experts around the world and reviewed an array of research on the topic to arrive at some sobering conclusions:

  • One worry is a “doom loop of AI dependence,” as NPR put it—kids using tech to think for them, risking cognitive decline usually seen in aging brains.
  • It can arrest social and emotional development: AI is a yes-bot for kids at an impressionable age and can foster “unhealthy digital attachments.”
  • Using AI creates an “eternal digital footprint.” Data can be breached and stolen, and kids end up with digital profiles that could leave them “permanently branded” by the academic and emotional difficulties they faced during childhood.

There were some benefits too—AI can support reading, writing, and brainstorming for students and save teachers roughly six hours a week by automating routine tasks.

The big takeaway: Whatever the potential benefits, the study says the potential risks outweigh them, because kids first have to learn how to use AI safely—as a supplement to learning—before they can enjoy the perks. For example, if AI undermines kids’ trust in teachers, peers, and real expertise, it could reshape classroom dynamics in ways that are hard to reverse.

Why it matters: The Brookings study is a glimpse of what could come, as conversations around AI use in schools ramp up. No state has banned the use of AI in schools yet. Denver Public Schools just blocked access to ChatGPT, citing concerns about adult content accessible in the chatbot. Other districts, like NYC’s, initially blocked LLMs only to later roll their policies back.

Could we see an AI ban? Maybe. We can look to another tech as a crystal ball for whether this could work: the smartphone. New York State approved a statewide school phone ban last summer—with mostly positive results so far—and several states are weighing similar rules. The belief is that smartphones are constant distractions that erode focus, derail instruction, and pull kids out of real-world social interaction. Early studies on AI chatbots point in a similar direction: One paper found that students who relied on generative AI tools scored lower on exams than peers who didn’t use them.

Zooming out: The Brookings study has implications far beyond kids. If AI weakens young brains, it’s hard to argue adults are immune. Over time, you risk a population that is less practiced in critical reasoning, more suggestible, and more reliant on AI for everyday judgment—not just what to wear, but what to believe and how to interpret major events.

The last thing in Pandora’s box: While the study said these findings were “daunting,” it also said the problem—as of now—was “fixable.” It offers some solutions, like designing student-specific AI that challenges them rather than just agreeing, and making school engaging enough that kids want to do the thinking themselves. —WK


SIGNAL OR NOISE

Now walk it out

At CES last week, I trotted past robotic mowers and pool cleaners to check out more delicate robotics that aim to help us keep moving—and tested several devices myself.

The Dephy Sidekick attaches to bespoke sneakers to put a spring in your step, while Dnsys and Ascentiz make exoskeletons to encourage people to run or walk faster (and farther). I have MS, and the symptoms I experience—heaviness and weakness in my legs—made me an ideal guinea pig to evaluate their effectiveness.

A man hiking up a mountain with a Dnsys exoskeleton on.

Dnsys

I walked laps around the convention floor in the Sidekick. I took stairs two at a time wearing a Dnsys exoskeleton. (This is low-key miraculous. I could do this without holding a handrail. And I am... tippy.) And I tested out a mode that applies resistance with the Ascentiz exoskeleton. If you want to read more about my experiences and these devices, my full review is here.

The Good: Put plainly, these devices do what they claim. They alleviated the fatigue in my legs so much I felt “normal”—especially wearing the Sidekick—and when I took them off, walking was a slog for 10 minutes or so till my body readjusted. The tech is improving steadily and coming down in cost, so I feel confident that if I ever need a device like this to remain ambulatory, it’ll be available at a reasonable price in a sleek package.

The Bad: They can be pricey (for now). The Sidekick retails for $4,500, and the exoskeletons I tried will set you back over a grand. They can also be cumbersome to wear.

Verdict: Signal (maybe not for everyone, but worth checking out if you have mobility issues—or just want to add miles to your exercise routine) —AS

If you have a gadget you love, let us know and we may feature it in a future edition.

THE ZEITBYTE

A microphone surrounded by music notes and code

Illustration: Tech Brew, Photos: Adobe Stock

Bandcamp just became the first major music platform to ban AI-generated songs, and honestly, it was only a matter of time. AI music has been flooding streaming platforms, racking up tons of real (and fake) listens, fast.

Case in point: The top neo-soul singer on Spotify right now—an artist called Sienna Rose who has millions of monthly listeners—was just unmasked as entirely AI-fabricated. No human singer. No relatable backstory. Just a convincing voice and an algorithm that knows what people want.

Sometimes, these songs start as low-stakes experimentation—ordinary people playing around with AI tools and creating fake musicians for fun. That experimentation is starting to become profitable. AI singer Xania Monet reportedly signed a $3 million deal with a record label in September. Even Warner Music Group wants in on the action. After settling its lawsuit with a music-generation tool over its alleged use of copyrighted catalogs to train its AI, Warner then turned around and struck a partnership with the company. And all three major music labels—Warner, Universal, and Sony—have signed licensing deals with Klay, a subscription streaming and “interactive music experience” service where users can remix songs using AI.

Meanwhile, artist groups are calling for more transparency around AI licensing deals. And last March, Tennessee became the first state to prohibit unauthorized commercial use of someone’s voice, including AI voice clones. Still, the AI music floodgates are already open. Music streaming service Deezer claims that over 50,000 AI tracks get uploaded to its platform every day—that’s 34% of all the music submitted. The worst part? The vast majority of us can’t tell the difference between a human and a robot crooning about heartbreak. —WK

Chaos Brewing Meter: /5

OPEN TABS

  • AI shouldn’t be a liability. JumpCloud’s platform helps unify human and agentic identities, turning shadow AI into a secure competitive advantage with intelligent workflows. Learn more here.*

*A message from our sponsor.

Readers’ most-clicked story was this one about Google’s big plans for your Gmail account and how to avoid them.

SHARE THE BREW


Share the Brew, watch your referral count climb, and unlock brag-worthy swag.

Your friends get smarter. You get rewarded. Win-win.


ADVERTISE // CAREERS // SHOP // FAQ

Update your email preferences or unsubscribe here.
View our privacy policy here.

Copyright © 2026 Morning Brew Inc. All rights reserved.
22 W 19th St, 4th Floor, New York, NY 10011