Welcome to Runtime! Today on Product Saturday: OpenAI makes it easier to use ChatGPT with office productivity tools, Broadcom previews new networking chips that take aim at Nvidia, and the quote of the week. (Was this email forwarded to you? Sign up here to get Runtime each week.)
Ship it

Connect the dots: If generative AI is going to make the impact in the enterprise that the investors who have funneled nearly $60 billion into OpenAI over the last several years believe it will, current and potential customers will need more help connecting data from across their companies. OpenAI introduced new tools this week designed to do exactly that, forging new links between several popular storage services and ChatGPT. The new "connectors" will allow ChatGPT Enterprise users "to access company data stored in Dropbox, Box, SharePoint, OneDrive, and Google Drive directly through ChatGPT, eliminating the need to switch between applications," according to VentureBeat. That sounds like it would create yet another awkward situation between OpenAI and Microsoft, but SaaS companies have been building integrations for competitive software products for a very long time.

Speaking of connections: At a deeper level of the enterprise, companies use data pipelines to shuttle corporate data back and forth between storage and applications. Fivetran grew quickly over the last several years thanks to an array of pre-built connectors that customers can use to establish those pipelines between data tools and storage, but it's always the edge cases that cause the most problems. This week Fivetran introduced a new Connector SDK that allows companies to "build reliable, secure pipelines for virtually any application, internal API, or legacy system" without having to manage a lot of custom infrastructure on their own, it said in a press release. As Benn Stancil noted, Snowflake's new Openflow service appears to take direct aim at Fivetran's business when it comes to getting data into Snowflake, but giving customers new flexibility could help Fivetran play defense.

Hatchet job: Nvidia doesn't just own the market for AI GPUs; it has also been the primary supplier of networking chips for AI data centers for a long time. Broadcom has aspirations to take more of that business for itself as Ethernet chips start to rival the performance of Nvidia's InfiniBand technology, and this week it introduced the Tomahawk 6 chip in hopes of doing just that. Tomahawk 6 features "the world's first 102.4 Terabits/sec of switching capacity in a single chip – double the bandwidth of any Ethernet switch currently available on the market," Broadcom said in a press release. A lot of data-center operators would prefer to use Ethernet for networking their AI servers, given that it's an open standard and more versatile, but for several years InfiniBand's performance on AI workloads has simply been too good to pass up.

Gateway drug: The competition among enterprise software companies to establish themselves as the central management layer of their customers' agentic AI strategies is well underway, even if those customers still aren't entirely sure what they want and need in an AI agent. This week Workday made another bid to play that role: the Agent Gateway is a new specification for how developers should build their agents to connect to Workday, and the company also launched the AI Agent Partner Network with several significant enterprise software companies and consulting partners. Nobody really knows how customers will decide to manage AI agents, but Workday is leaning into the idea that AI agents are just automated employees, and that its people-management software is therefore a logical place to coordinate that activity.

What's French for "vibe": Mistral flies a bit more under the radar than its U.S.-based frontier-model competitors because Silicon Valley is an insular place, but it has raised $1.25 billion in funding and more than holds its own against deep-pocketed rivals. This week it jumped on the "vibe coding" bandwagon with the launch of Mistral Code, a new AI-driven coding assistant that works with editors like Visual Studio Code and JetBrains. "Our goal with Mistral Code is simple: deliver best-in-class coding models to enterprise developers, enabling everything from instant completions to multi-step refactoring—through an integrated platform deployable in the cloud, on reserved capacity, or air-gapped on-prem GPUs," the company said in a blog post. Mistral emphasized that its assistant comes with a single service-level agreement, in contrast to competitors whose SLAs are "spread across one vendor for the plug-in, another for the model, and a third for infra," it said.
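To make the Fivetran Connector SDK news above a little more concrete: custom-connector frameworks generally ask the developer to supply only a sync function that reads from the source and yields rows plus resumable checkpoints, while the platform handles scheduling, retries, and delivery. The sketch below illustrates that pattern in plain Python; all names (`Upsert`, `Checkpoint`, `run_sync`, `my_connector`, `FAKE_API`) are hypothetical stand-ins for illustration, not Fivetran's actual API.

```python
# Minimal sketch of the custom-connector pattern behind SDKs like
# Fivetran's Connector SDK. Hypothetical names throughout -- this is
# not the real Fivetran API, just the shape of the idea.
from dataclasses import dataclass
from typing import Callable, Iterator

@dataclass
class Upsert:
    """A row to write (or overwrite) in the destination table."""
    table: str
    row: dict

@dataclass
class Checkpoint:
    """A safe resume point: state the framework persists for the next sync."""
    state: dict

# A connector is just a generator: given the last saved state, it yields
# Upsert operations and Checkpoints.
SyncFn = Callable[[dict], Iterator[object]]

def run_sync(sync: SyncFn, state: dict) -> tuple[list[Upsert], dict]:
    """Drive one sync pass, collecting rows and the final checkpointed state.
    (In a real SDK, the framework does this part for you.)"""
    rows: list[Upsert] = []
    for op in sync(state):
        if isinstance(op, Upsert):
            rows.append(op)
        elif isinstance(op, Checkpoint):
            state = op.state
    return rows, state

# Stand-in for an internal API or legacy system with incrementing record ids.
FAKE_API = [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}]

def my_connector(state: dict) -> Iterator[object]:
    """Incremental sync: emit only records newer than the saved cursor."""
    cursor = state.get("last_id", 0)
    for record in FAKE_API:
        if record["id"] > cursor:
            yield Upsert(table="records", row=record)
            cursor = record["id"]
    yield Checkpoint(state={"last_id": cursor})

# First sync pulls everything; a second sync with the saved state pulls nothing.
rows, new_state = run_sync(my_connector, {})
```

The appeal of this division of labor is exactly what Fivetran is pitching: the customer writes only the source-specific logic for their edge-case system, and the platform supplies the "reliable, secure" infrastructure around it.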
Stat of the week

There have never been so many tools available to manage the complex task of securely running an enterprise on the cloud, and maybe that's a bad thing. According to Check Point's 2025 Cloud Security Report released this week, 71% of survey respondents are using ten or more security tools, while 16% are somehow using more than 50, and the number of alerts produced by those tools is becoming impossible to manage.
Quote of the week

"We are very generally committed to meeting our customers where they are, giving them choice without overwhelming them with options. This is something that I think the cloud providers have overdone. We're very careful, very intentional, very opinionated on when we introduce an option because it's worthwhile and it's easier for our customers." — Snowflake executive vice president Christian Kleinerman, in a press conference Monday at Snowflake Summit, describing the company's product-development strategy.
The Runtime roundup

Cisco urged customers to patch a serious vulnerability in its Identity Services Engine that could expose customers running that software on cloud providers to a potential breach.

SAP will not prevent Celonis customers from accessing their SAP data in exchange for Celonis retracting a demand for a preliminary injunction, the companies announced, although the broader antitrust dispute remains active.
Thanks for reading — see you Tuesday!