Plus: Europe’s largest AI acquisition.
Welcome back to The Prompt.

Three years ago, Zach Xia got married spontaneously in Hawaii. The ceremony was amazing. The photos, less so: some weren’t shot from the right angle, others had poor lighting or caught his eyes closed. Now, his startup can fix that.

Last year Xia teamed up with his former Adobe colleague Cecilia Zhang to start Phota Labs, which has developed an app that lets people upload ordinary snapshots and turn them into professional-looking photos with generative AI. AI-based photography is a crowded market, full of startups and tech giants alike, including the duo’s former employer, Adobe, and Google, which recently launched a popular AI image editing tool called Nano Banana. So the duo decided to focus on the biggest challenge with AI-generated photos: the person in them rarely looks anything like you. Phota Labs addresses this both by fine-tuning its image generation models on third-party AI models and by asking users to upload a set of images of themselves in different settings and scenes so that it can better grasp a person’s likeness, something that wouldn’t be possible with public training data.

The two intend for the app to be used to retouch, uplift and “fix” regular images rather than create hyperrealistic, uncanny valley-like images from scratch, CEO Zhang told Forbes. Use cases are already evident: LinkedIn portraits, restaurant images or real estate listings, she added. The startup has raised $5.6 million in seed funding from Andreessen Horowitz at an undisclosed valuation.

But ethical concerns around AI-generated imagery linger as the line between the real and the fake continues to blur. These images could be used to deceive people or to spin up nonconsensual deepfakes of individuals. Plus, the industry is still in the early stages of developing techniques like watermarking to clearly identify and label AI-generated images. (Phota Labs is experimenting with a few, too.) Xia said he wouldn’t want Instagram to label images as “AI-generated” if they resemble the photos his software is retouching. Zhang added that “the interpretation that any image that is AI-generated is fake is too simple,” and that what qualifies as a real image will soon change, too. “At the end of the day it’s what the photo means to us,” she said.

Let’s get into the headlines.
|
Microsoft is struggling with its marquee Office 365 Copilot, which has been bogged down by technical challenges, such as AI models that misinterpret Excel files, and by slow rollouts of AI features. That has led some customers to pass on buying the software because, they say, the ROI doesn’t justify the price, The Information reported. The tech giant has also nixed previously announced features. In light of Copilot’s recent difficulties, the company has made leadership changes and is offering discounts on the product, per the report.

Plus: More than 200 contractors who worked on labelling and evaluating Google’s AI products were fired last month, Wired reported. The layoffs came after some of the workers, who were contracted through a company called GlobalLogic, voiced concerns about substandard wages and working conditions and began discussing ways to organize into a union. Things escalated quickly, and in February workers were banned from using the company’s social channels. For years, AI data annotators have taken issue with low wages and poor working conditions.
|
In recent weeks, a string of lawsuits has been filed against AI companies like OpenAI and Character AI, alleging that their conversational chatbots are dangerous for teenagers, especially those in mental distress. The suits have been filed by the families of teenagers who spent hours confiding in these AI companions before taking their own lives. The family of a 16-year-old boy, who they allege was coached toward suicide by ChatGPT, testified at a Senate Judiciary hearing today. “ChatGPT radically shifted his behavior,” Matt Raine, the boy’s father, said. Last week, the Federal Trade Commission launched an investigation into Alphabet, Meta, xAI, Character AI, OpenAI and other firms to assess how they make their systems safe for children.

Plus: Disney, NBCUniversal and Warner Bros Discovery sued Chinese AI startup MiniMax AI, alleging that its “bootlegging business model” uses thousands of iconic copyrighted characters like Shrek, Spider-Man and Bugs Bunny to train and advertise its generative AI software, and that it offers them to users for free.
|
Oracle cofounder and CTO Larry Ellison had a good week. The 81-year-old tech executive added more than $100 billion to his net worth on September 10, firmly establishing him as the second richest man in the world after Elon Musk, thanks to a 41% jump in shares of his cloud infrastructure company. Oracle has signed a $300 billion deal with ChatGPT maker OpenAI to provide compute, the Wall Street Journal reported. In its latest quarterly earnings, the company said it expects its cloud infrastructure revenue to balloon 77% this fiscal year to $18 billion.
|
HR software giant Workday acquired Swedish AI startup Sana for $1.1 billion, the biggest known acquisition of an AI startup in Europe. Founded in 2016 by Forbes 30 Under 30 alum Joel Hellermark, Sana develops agents for specific tasks like drafting emails, taking notes and filling out forms. Under the deal, Workday will make Sana’s AI systems available to its 70 million users and 11,000 customers. “AI today is a distribution game,” Hellermark told Forbes, adding that Workday was previously among Sana’s customers.

Also notable: Data labelling startup Invisible Technologies has raised $100 million in a funding round led by Vanara Capital at a valuation of more than $1 billion, the company told Forbes.
|
|
|
|
In June 2022, the San Mateo County Sheriff’s Office organized a “day of fun” down at the Coyote Point Shooting Range. The department’s IT team and staff from C3 AI, a $2.4 billion market cap company founded by billionaire Tom Siebel, were invited to celebrate their work on a new surveillance system intended to unify police data from agencies across the county into a single app. After lunch, C3 AI staff could practice shooting targets at the same facility where the cops trained. The choice of venue was not accidental: In 2018, Siebel’s foundation gave $4.5 million to the county to fund the range’s refurbishment.

Working closely with C3 AI and its longtime partner, Amazon Web Services, the Sheriff’s Office wanted to connect masses of data from at least 15 different agencies under its jurisdiction, drawn from sources like license plate readers, surveillance cameras, arrest records, 911 calls and historical police databases. At the core of Project Sherlock, named for Arthur Conan Doyle’s fictional detective, was artificial intelligence that could piece the data together, find leads for investigators and give situational awareness to cops on the beat. The ultimate aim was to dramatically speed up the grunt work of policing.

For C3 AI, Sherlock was set to be a flagship contract for the company’s local government division. The project was so crucial that Siebel himself was deeply involved in the early stages of the work, down to writing sections of the contract and hosting San Mateo cops at his offices in Redwood City.

But $12 million and three years later, Project Sherlock has struggled with usability issues, feature oversights and significant delays, in some cases well over a year, according to public records obtained by Forbes. A timeline of the planned rollout, obtained via public records, showed that 14 agencies were due to go live with Sherlock by January this year at the latest. Seven of those agencies (Atherton, Belmont, East Palo Alto, Foster City, Menlo Park, San Bruno and South San Francisco) told Forbes they didn’t have access to a fully functional product and therefore hadn’t seen any investigative benefit.

Read the full story on Forbes.
|
|
Bizarre AI-generated anti-Trump videos have racked up more than 2.2 billion views on YouTube, making them some of the most-watched political content on the platform in 2025, Forbes found. Many of the short-form videos are skits featuring cartoonish depictions of Putin, Khamenei and Kim Jong Un as tricksters who prank, imprison or kill Trump and Netanyahu.
|
|