Your weekly dose of Seriously Risky Business news is written by Tom Uren and edited by Patrick Gray. This week's edition is sponsored by Knocknoc. You can hear a podcast discussion of this newsletter by searching for "Risky Business News" in your podcatcher or subscribing via this RSS feed.
The Trump administration is grappling with whether to give US intelligence agencies a bigger role in the assessment of new AI models, according to The Washington Post.

Ideas about AI regulation within the administration appear to be in a state of flux. Politico reported on Tuesday last week that the administration was considering a government vetting process before new models were released. By Thursday, the administration was distancing itself from tighter regulation, and by Friday a lobbyist told Politico that "there is no clarity" because "different factions within the White House have different views about what should happen".

Amongst that chaos, the National Cyber Director pitched a center within the Office of the Director of National Intelligence for the evaluation of new AI models. The intelligence community has deep expertise in cyber security and AI and their associated national security risks and benefits, so that does make a lot of sense.

Meanwhile, the Department of Commerce is already home to the Center for AI Standards and Innovation (CAISI). The Center was spun up as the US AI Safety Institute in 2024 before being renamed last year, and it has a head start on building the expertise and infrastructure for testing and evaluating models. But on Friday CAISI took down a website announcing new voluntary agreements with Google, Microsoft and xAI that allowed it to test their models before release, because of what The Washington Post describes as "sensitivity" from the White House. The Post's sources went on to describe the conflict between Commerce and national security aides as a "knife fight".

Of course, AI is about more than just cyber security and potentially has serious implications across the entire economy. There is no single government agency that contains all the expertise needed to review models.
For example, AI companies are concerned about the biological and chemical capabilities of models and whether they could be used to assist in the creation of weapons of mass destruction. These are serious concerns, but not exactly the expertise that NSA holds.

Despite that, the intelligence community should obviously be involved in assessing models. Where specialist expertise exists in that community, the US government should draw on it. But the intelligence community shouldn't own model assessment entirely. AI's implications are simply too wide-ranging.

LEO Satellite Constellations Are a Must Have

Russian firm Bureau 1440 has begun launching the country's answer to Starlink, a new Low Earth Orbit (LEO) satellite constellation called Rassvet, reports Wired.

Rassvet, Russian for 'daybreak', will not match Starlink in many respects, but the Russian government is investing billions of dollars to develop it as a sovereign capability. Rassvet's total funding is USD$5.7 billion, with the Russian Ministry of Communications reportedly providing about $1.3 billion. By comparison, Amazon has spent more than USD$10 billion on its satellite constellation, Amazon Leo, which has not yet launched its commercial service.

The planned size of Rassvet's constellation is far smaller than either Leo's or Starlink's. Bureau 1440 is aiming for around 900 satellites by 2035, with 16 launched so far. By contrast, Amazon says it will launch three thousand satellites for Leo by 2029, and Starlink currently has nine thousand active satellites.

Using thousands of satellites to provide global coverage is great for delivering fast broadband internet worldwide, but for Russia, having a rudimentary sovereign, domestically focussed capability may be more important than having a great one.
Even though Starlink was notionally denied to Russian forces because of geoblocks and sanctions, its military was able to use it throughout the war in Ukraine until SpaceX began allowlisting Ukrainian terminals earlier this year. The countermeasure was prompted by Russia using Starlink terminals to control long-range drone strikes deep within Ukrainian territory. Allowlisting had an immediate impact on Russian effectiveness and left its military searching for ground-based alternatives such as long-range Wi-Fi bridges.

Even a relatively small constellation of LEO satellites could provide some military users in occupied Ukraine with high-bandwidth, low-latency links, albeit intermittently throughout the day. Bureau 1440 already achieves 48 Mbit/s downlinks and 12 Mbit/s uplinks at around 40 milliseconds latency. Specialist space publication KeepTrack says this is roughly equivalent to early Starlink capacity before its constellation was built out, suggesting that the underlying communications technology is sound.

Beyond potential military application in Ukraine, Rassvet is optimised for high-latitude coverage and will do a far better job than Starlink of servicing northern Russia. The biggest hurdle for Rassvet, however, is Russia's limited launch capacity, which is running at just 20 launches per year for all customers.

China, by contrast, is launching two satellite constellations: Guowang, 'State Network', is a state-backed initiative, and Qianfan, 'Thousand Sails', is a commercial-first system. Each constellation is planned to consist of almost 13,000 satellites, and at the end of 2025 about 400 satellites were in orbit across the two systems. That's a lot of satellites still to go, but China isn't launch-constrained in the way that Russia is. This year it is on track for 140-odd launches, and by March it had successfully launched 34 orbital missions, compared to the US's 29, of which SpaceX alone was responsible for 22.
To date, China has used single-use rockets to build out its launch capability, but its companies are now working on reusable rockets. Its single-use launches aren't as reliable, or as safe from a space debris perspective, as American and European ones, but they're getting the job done until reusable rockets come online.

The European Union, meanwhile, is developing IRIS, a smaller constellation that occupies a combination of low and medium Earth orbits. The idea is that it will achieve good coverage and speeds at lower cost, with the trade-off being less overall capacity and higher latency at times.

IRIS is under development, but in the short term the Anglo-French Eutelsat OneWeb LEO constellation is a sovereign option for European governments looking for space internet. The OneWeb network currently has 630 satellites in orbit, making it the second-most mature LEO satellite constellation after Starlink. OneWeb will eventually be rolled into IRIS as the foundational LEO portion of the network. Compared to Starlink, OneWeb is more expensive and slower, but perhaps that's just the price of sovereign capability.

In 2025 the German government revealed it had been funding Ukrainian access to OneWeb. At the time, even Eutelsat's chief executive Eva Berneke told Reuters that access to OneWeb was more about having a plan B than replacing Starlink. The shakiness of the transatlantic alliance has shown us that having a plan B is important.

The funny thing about all this is that the LEO space internet race wasn't a geopolitical competition kicked off by a Presidential pronouncement. It was kicked off by Elon Musk wanting to launch lots and lots of rockets to occupy Mars. Now everyone needs their own constellation. It's a funny old world.

AI Is "Industrialising" Cyber Threats

Google's latest AI threat tracker report details how threat actors are industrialising and scaling anonymous access to premium-tier models to enable large-scale misuse.
In other words, even though AI companies try to prevent abuse, threat actors have developed an entire ecosystem to bypass safeguards and take advantage of free trials. In our view, this is the most significant section of Google's report from a national security perspective.

Both state-sponsored and cyber crime groups are using an "emerging ecosystem of custom middleware, proxy relays, and automated registration pipelines designed to bypass safety guardrails and billing constraints". The idea is that these techniques let them maintain anonymous access to premium model tiers to "effectively industrialise their adversarial workflows". That doesn't sound good at all!

To add insult to injury, the threat actors are actually subsidised, as their abuse techniques involve cycling through new accounts. They're continually taking advantage of new free trial periods!

Google isn't explicit about how effective these techniques are, but it concerns us that advanced adversaries are successfully using the latest models for malicious purposes. This could include directly malicious uses such as enhancing cyber attacks, or could involve distilling advanced models. Either way, not great!

This is an area where the government should bring the AI companies together to try to understand the extent and success of these safety guardrail bypasses, develop best practices and countermeasures, and perhaps even run disruption operations. Without government involvement, we are a bit worried that companies will portray AI guardrails as strong and robust while pushing models out to collect more revenue. OpenAI's Trusted Access approach for allowing access to GPT5.5, its latest model, sounds pretty good. But is it effective against the guardrail bypass ecosystem that Google describes?

Google's report also describes all the other ways that threat actors are experimenting with AI and doing dumb or clever stuff. Experimenting is to be expected. But bypassing guardrails at scale?
That needs a closer look.

Watch James Wilson and Tom Uren discuss this edition of the newsletter:

Three Reasons to Be Cheerful This Week:

- Linux kernel killswitch proposed: This proposed new security feature would allow admins to disable vulnerable kernel functions until patches are available. It comes in the wake of two local privilege escalation vulnerabilities in the past weeks, Copy Fail and Dirty Frag. We don't know if this feature will make it into Linux, but we like that it has the potential