How’s AI self-regulation going?

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Yesterday, on July 21, President Joe Biden announced he is stepping down from the race against Donald Trump in the US presidential election.

But AI nerds may remember that exactly a year ago, on July 21, 2023, Biden was posing with seven top tech executives at the White House. He’d just negotiated a deal in which they agreed to eight voluntary commitments, among the most prescriptive rules targeted at the AI sector at that time. A lot can change in a year!

The voluntary commitments were hailed as much-needed guidance for the AI sector, which was building powerful technology with few guardrails. Since then, eight more companies have signed the commitments, and the White House has issued an executive order that expands upon them—for example, with a requirement that developers share safety test results for new AI models with the US government if the tests show that the technology could pose a risk to national security. 

US politics is extremely polarized, and the country is unlikely to pass AI regulation anytime soon. So these commitments, along with some existing laws such as antitrust and consumer protection rules, are the best the US has in terms of protecting people from AI harms. To mark the one-year anniversary of the voluntary commitments, I decided to look at what’s happened since. I asked the original seven companies that signed the voluntary commitments to share as much as they could on what they have done to comply with them, cross-checked their responses with a handful of external experts, and tried my best to provide a sense of how much progress has been made. You can read my story here.

Silicon Valley hates being regulated and argues that regulation hinders innovation. Right now, the US is relying on the tech sector’s goodwill to protect consumers from harm, but these companies can change their policies anytime it suits them and face no real consequences. And that’s the problem with nonbinding commitments: they are easy to sign and just as easy to forget.

That’s not to say they don’t have any value. They can be useful in creating norms around AI development and placing public pressure on companies to do better. In just one year, tech companies have implemented some positive changes, such as AI red-teaming, watermarking, and investment in research on how to make AI systems safe. However, these sorts of commitments are opt-in only, and that means companies can always just opt back out again. Which brings me to the next big question for this field: Where will Biden’s successor take US AI policy? 

The debate around AI regulation is unlikely to go away if Donald Trump wins the presidential election in November, says Brandie Nonnecke, the director of the CITRIS Policy Lab at UC Berkeley. 

“Sometimes the parties have different concerns about the use of AI. One might be more concerned about workforce effects, and another might be more concerned about bias and discrimination,” says Nonnecke. “It’s clear that it is a bipartisan issue that there need to be some guardrails and oversight of AI development in the United States,” she adds. 

Trump is no stranger to AI. While in office, he signed an executive order calling for more investment in AI research and asking the federal government to use more AI, coordinated by a new National AI Initiative Office. He also issued early guidance on responsible AI. If he returns to office, he is reportedly planning to scrap Biden’s executive order and put in place his own AI executive order that reduces AI regulation and sets up a “Manhattan Project” to boost military AI. Meanwhile, Biden keeps calling for Congress to pass binding AI regulations. It’s no surprise, then, that Silicon Valley’s billionaires have backed Trump.


Now read the rest of The Algorithm

Deeper Learning

A new weather prediction model from Google combines AI with traditional physics

Google DeepMind researchers have built a new weather prediction model called NeuralGCM. It combines machine learning with more conventional techniques, potentially yielding accurate forecasts at a fraction of the current cost and bridging a divide between traditional physics and AI that has grown among weather prediction experts over the last several years.

What’s the big deal? New machine-learning techniques that predict weather by learning from years of past data are extremely fast and efficient, but they can struggle with long-term predictions. General circulation models, on the other hand, which have dominated weather prediction for the last 50 years, use complex equations to model changes in the atmosphere; they give accurate projections but are exceedingly slow and expensive to run. Experts are divided on which tool will be most reliable going forward, but the new model from Google attempts to combine the two. The result is a model that can produce quality predictions faster with less computational power. Read more from James O’Donnell here.
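To make the hybrid idea concrete, here’s a toy sketch in Python. It is emphatically not NeuralGCM’s actual code, just an illustration of the general pattern under my own assumptions: a cheap finite-difference “physics” step advances the state, and a stand-in for a trained neural network adds a learned correction on top.

```python
import numpy as np

# Toy hybrid forecast step (illustrative only, not NeuralGCM's code):
# a coarse physics solver advances the state, and a learned component
# corrects for effects the simplified equations miss.

def physics_step(state, dt=0.1, diffusivity=0.05):
    """Coarse physics: 1-D diffusion via finite differences, standing
    in for a general circulation model's atmospheric equations."""
    laplacian = np.roll(state, -1) - 2 * state + np.roll(state, 1)
    return state + dt * diffusivity * laplacian

def learned_correction(state, weights):
    """Stand-in for a trained neural network: a tiny nonlinear model
    mapping the local state to a corrective tendency."""
    return np.tanh(state * weights)

def hybrid_step(state, weights):
    """One forecast step: physics first, learned correction second."""
    return physics_step(state) + learned_correction(state, weights)

# Roll a toy "temperature field" forward a few steps.
state = np.sin(np.linspace(0, 2 * np.pi, 64))
weights = np.full(64, 0.01)  # pretend these came from training
for _ in range(10):
    state = hybrid_step(state, weights)
print(state[:5])
```

The appeal of a design like this is that the physics term keeps long-range forecasts stable while the learned term cheaply captures patterns the coarse equations can’t, which is how a hybrid can be both faster and accurate.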

Bits and Bytes

It may soon be legal to jailbreak AI to expose how it works
It could soon become easier to break technical protection measures on AI systems in order to probe them for bias and harmful content and to learn about the data they were trained on, thanks to an exemption to US copyright law that the government is currently considering. (404 Media)

The data that powers AI is disappearing fast
Over the last year, many of the most important web sources for AI training data, such as news sites, have blocked companies from scraping their content. An MIT study found that 5% of all data, and 25% of data from the highest-quality sources, has been restricted. (The New York Times)
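As background, one common mechanism sites use to signal such restrictions is the decades-old robots.txt protocol. Here’s a minimal sketch of how a compliant crawler might check permission before fetching a page, using Python’s standard urllib.robotparser; the site URL and bot name are hypothetical placeholders.

```python
from urllib import robotparser

# Minimal sketch: check a site's robots.txt before scraping.
# The URL and user-agent string below are hypothetical.
rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

# A compliant AI-training crawler asks permission per user agent.
print(rp.can_fetch("ExampleAIBot", "https://www.example.com/articles/1"))
```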

OpenAI is in talks with Broadcom to develop a new AI chip 
OpenAI CEO Sam Altman is busy working on a new chip venture that would reduce OpenAI’s dependence on Nvidia, which has a near-monopoly on AI chips. The company has talked with many chip designers, including Broadcom, but it’s still a long shot that could take years to work out. If it does, it could significantly boost the computing power OpenAI has available to build more powerful models. (The Information)