California Gov. Newsom Vetoes Controversial Frontier AI Bill as Non-Responsive to “Actual Risks”
1 hour watch | October.10.2024
This week, we hosted a conversation about the future of AI regulation across the 50 states following California Governor Gavin Newsom’s veto of the controversial AI safety bill, SB 1047. Key takeaways are below.
Joining us were Shannon Yavorsky, co-leader of Orrick's AI practice; Jeremy Kudon, Orrick public policy partner; Tom Kemp, Silicon Valley author, entrepreneur and seed investor; and Ryan Harkins, senior director of public policy at Microsoft.
Shannon Yavorsky: Welcome everyone. Thanks for joining us today for this discussion of SB 1047 and the future of AI legislation. I'm Shannon Yavorsky, a partner in Orrick's San Francisco office, and I co-lead our AI practice. I'm thrilled to be here with you today to set the stage. SB 1047 was a significant piece of legislation that would have placed pretty strict guardrails on certain AI companies. It was passed by the California Legislature but was vetoed last Sunday by Gov. Gavin Newsom. The bill has been a flashpoint in the debate over AI legislation, particularly in Silicon Valley, and has garnered reactions from various stakeholders, including tech giants and public policy experts.
On one hand, you have Elon Musk offering public support for SB 1047, saying it was a tough call and would upset some people, but that California should probably pass the law. Then you had Anthropic offering cautious support for the law but proposing a long list of amendments. On the other hand, you had Dr. Fei-Fei Li, a Stanford professor widely considered to be the godmother of AI, commenting that the bill would have significant unintended consequences that would stifle innovation. You also had San Francisco Mayor London Breed weighing in, saying more work needs to be done to bring together industry, government and community stakeholders before moving forward with AI legislation. So, it really sparked a heated debate about how to regulate AI effectively.
Today, we're going to talk about SB 1047, what the veto means, and where we're headed with AI legislation both within California and across the U.S. states. I'm going to hand it over to Jeremy Kudon, my partner in public policy, to do some intros.
Jeremy Kudon: Thank you, Shannon. Thank you to everyone who could join us on such short notice today. We are thrilled to be able to talk to you about this incredibly important outcome. And you know, as you're about to hear, Gov. Newsom's veto of SB 1047 is not going to be the final word on comprehensive AI regulation in the state legislatures. This is just going to be the tip of the iceberg. Not only do we suspect that AI will return to the California Legislature in 2025, but there are 49 other states that are not just waiting in the wings, but many are already out of the gates and turning for home.
So, I want to walk quickly through our agenda. My partner, Shannon, who you just heard from, will begin with an overview of the current state of AI legislation and regulations. Shannon is a partner in Orrick's San Francisco office and heads up our cyber, privacy and data innovation practice. She was also recently recognized, and congratulations for this, as a 2024 Woman Leader in Tech Law for her pioneering work in AI compliance and strategy.
We're then going to turn to Tom Kemp for a view of AI from the inside. For those of you who don't already know Tom, he is a Silicon Valley author, entrepreneur, and seed investor. He's also been recognized for his contributions to major privacy and AI laws, earning him commendations from the state senates of both California and Texas. For anyone who knows anything about those two chambers – Tom, you may be the only thing in the world that they actually agree upon.
I'll then take the baton, and I'm going to spend a few minutes talking about what's behind the phenomenon of AI bills in state legislatures, how the number of AI bills compares to other white-hot issues in the states, and then help those of you on the webinar identify the states that will truly matter on this issue in 2025.
Finally, we are very privileged to have Ryan Harkins with us today. Ryan is senior director of public policy for Microsoft. In that role, he works on issues relating to technology and civil liberties, including state and federal privacy legislation, the regulation of AI, and voting rights. Ryan has been with Microsoft since 2007, and he is a non-resident fellow at Stanford Center for Internet and Society and serves as an adjunct professor at Seattle University School of Law. Ryan is going to provide us with an insider's perspective on the state of AI legislation, where he sees it heading in 2025 and beyond, and perhaps most importantly, ways that everyone on this webinar can engage on this issue in the state legislatures and at the federal level. Shannon, I'm going to kick it back to you.
Shannon: Thanks, Jeremy. Before we dive into California and the U.S., we wanted to take a moment to talk about the global context for AI legislation.
At the core of this debate is a really delicate challenge: how to ensure AI safety without hindering innovation. As AI technologies advance, they are increasingly viewed as critical to economic power and global influence. This pursuit of AI supremacy is often compared to a modern-day arms race, where nations and industries are competing for technological leadership, but also the potential to reshape the global economy and geopolitical dynamics. So balancing innovation with regulation is really critical.
There are two distinct global approaches to AI legislation. The first is creating a standalone law, like the EU AI Act, designed to be a single law addressing AI technologies. Brazil has also followed the EU approach. The second involves empowering existing regulatory authorities to oversee AI, leveraging their current jurisdiction to address AI as it falls within their established frameworks. In the U.S., for example, the CFPB, the FTC and the EEOC have all made it clear that they will enforce their respective laws and regulations to promote responsible AI innovation. Other countries have taken a similar path. The UK, for example, published a white paper saying it does not intend to enact horizontal AI regulation in the near future; instead, it supports a principles-based framework for existing sector-specific regulators to interpret and apply to the development or use of AI within their domains. Australia and Israel have adopted similar approaches, promoting responsible AI innovation through policy and sector-specific guidelines that address core issues and ethical principles.
So that is a really high-level overview of where we are globally with respect to AI legislation. You have the single-law approach on the one hand and, on the other, the approach of letting existing bodies regulate AI. There is a lot of noise. There are hundreds of laws being proposed across the U.S. states and globally, and they range from the mundane to attempts to regulate risks seen as potentially catastrophic. What is helpful is that the laws are generally thematic: they fall into specific buckets of safety, transparency – making sure that AI-generated content is labeled as such – and accountability. Those are some of the core themes we are seeing emerge in this legislation.
Moving to California: California often looks to what is happening in Europe. It looked to the GDPR, the European data protection legislation, as inspiration for the first comprehensive consumer privacy law in the U.S., the CCPA, and it is often a first mover on legislation. Now we are in a situation where California is home to 32 of the top 50 AI companies, so this is really important to the California economy writ large, and there is tremendous focus on what is happening in the California Legislature as it relates to AI. With that, I'm going to hand it over to Tom to share his insights on SB 1047 and the broader implications for AI legislation.
Tom Kemp: Thank you. In terms of my biography, I have historically worked on California privacy laws. I worked on Prop 24, and most recently, over the last few years, I've worked with Sen. Becker on two major pieces of legislation. Last year, it was the California Delete Act, which regulates data brokers and provides a common platform through which consumers can delete their information. This year, I worked on the California AI Transparency Act. I proposed and drafted that, and I advised Sen. Becker on that piece of legislation. I should say that I don't speak for Sen. Becker, and the other thing to point out is that even when you put forth an initial proposal, it goes through the legislative sausage-making process. What I originally drafted was significantly modified, and in the California Assembly there is a privacy subcommittee that really likes to put its fingerprints on bills in this area, not only in privacy but also in the AI space.
Let's talk about the bills that didn't get through, and then I'll give an overview of the bills that did get signed – there were 18 of those. I would also be remiss if I did not talk about all the activity happening at the California Privacy Protection Agency. As Shannon said, SB 1047 sucked all the oxygen out of the room. It was an AI safety bill targeting the largest models – those where developers are spending a hundred million dollars on training. Right now, people are not spending that much money, but it is very conceivable that they will in the near future. At a high level, what was it focused on? The first category was models that could cause mass casualties. You would initially say to yourself, yes, we should have safety requirements as they relate to bioweapons, radioactive or nuclear weaponry, chemical agents and so on. The other issue was that it also covered models that could cause half a billion dollars' worth of damage. You could see some model causing a change in a stock price or affecting a financial transaction, and that significantly broadens the bill's scope.
The fundamental issue with SB 1047 was that it was incredibly prescriptive. That prescriptiveness included pre-training requirements, security measures, kill switches, testing procedures, pre-deployment requirements, post-deployment audit requirements, post-deployment compliance and incident reporting. It also created new government bodies to oversee all of this. The bill itself was very complex. Gov. Newsom said it did not go far enough: he felt that safety requirements should also apply to critical decision-making, which was covered in the Bauer-Kahan bill, AB 2930, and to the use of sensitive data, which is being addressed to some degree by the regulations coming out of the CPPA. That was the reason he gave for forming a team of three experts to advise him.
The other two major AI bills that didn't get through were the aforementioned AB 2930 from Assemblymember Bauer-Kahan, which would have required impact assessments looking at bias and discrimination and mandated transparency for automated decision-making, and, from Assemblymember Wicks, a much more prescriptive version of the SB 942 approach I worked on, which would have required watermarking, modifications to camera software, significantly larger fines, A/B testing if watermarking had been removed, etc.
There were 18 bills in total that were signed. Some law firms do their assessment and say there were 17; I'm calling it 18, and I'll tell you why. Here are the broad categories of AI bills that came through. The first interesting thing is that California now has a uniform definition of AI. Thematically, instead of broad omnibus bills, the vast majority of the legislation focused on specific harms, such as deepfakes and pornography – dealing with what's been happening in our schools, with images being made of teachers and fellow students – and significant concerns about elections and disinformation. There was also a telemarketing bill requiring that robocalls generated by AI be labeled as such.
The area I focused on was transparency and labeling. SB 942, the California AI Transparency Act, requires covered providers to (a) give users the option to put a manifest marker on content – audio, images and video – that marks it as AI-generated. That is an option for consumers. What is required is that the large AI providers embed a latent disclosure that is not readily discernible and labels the content. It could be in the form of metadata associated with the files created, to help people address the question: is it real or is it synthetic? The third aspect of the bill, which I modeled after Sen. Schatz and Sen. Kennedy's AI labeling act, is the concept of an AI detection tool, which applies only to a covered provider's own creations. There was some confusion that the large AI providers would have to detect any and all content and determine whether it was AI-generated; no, the tool applies only to their own content. The bill also requires an API for the AI detection tool. I envision that eventually a developer or an open-source project could provide a generic webpage where people upload content and it queries 30, 40, 50 AI providers, asking each whether it produced the content. Also, keep in mind that the AI detection tool works in tandem with the manifest and latent disclosures, so it facilitates the ability to read the disclosures that are required. There are also some downstream licensing requirements associated with the bill. In the end, I think the tech industry saw this as a less prescriptive bill than what Assemblymember Wicks put forth. There was no formal opposition, and it was signed by Gov. Newsom.
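To make the idea of a latent disclosure and a detection tool concrete, here is a minimal illustrative sketch in Python. It is not drawn from the statute or from any provider's actual implementation; every name in it (LatentDisclosure, embed_latent_disclosure, detect_disclosure, the marker format) is hypothetical. It simply shows the flow Tom describes: a machine-readable disclosure travels with the content, and a detection tool reads it back.

```python
# Illustrative sketch only: a hypothetical "latent" (machine-readable)
# disclosure attached to content, plus a toy detection tool that reads
# it back. SB 942 does not prescribe this format or these names.
import json
from dataclasses import dataclass, asdict


@dataclass
class LatentDisclosure:
    provider: str          # name of the covered GenAI provider (hypothetical field)
    generated_by_ai: bool  # the core fact the disclosure conveys
    model: str             # model identifier (hypothetical field)
    created_at: str        # ISO-8601 timestamp (hypothetical field)


MARKER = b"\n---AI-DISCLOSURE---\n"  # hypothetical delimiter, not from the statute


def embed_latent_disclosure(content: bytes, disclosure: LatentDisclosure) -> bytes:
    """Attach a JSON disclosure block to the content payload.

    A real provider would embed this in the file's standard metadata
    container; naive concatenation is used here only to show that the
    disclosure travels with the content.
    """
    return content + MARKER + json.dumps(asdict(disclosure)).encode()


def detect_disclosure(content: bytes) -> dict | None:
    """Toy 'AI detection tool': return the disclosure if one is present."""
    if MARKER in content:
        return json.loads(content.split(MARKER, 1)[1])
    return None


if __name__ == "__main__":
    original = b"<image bytes>"
    tagged = embed_latent_disclosure(
        original,
        LatentDisclosure(
            provider="ExampleAI",
            generated_by_ai=True,
            model="example-model-1",
            created_at="2024-10-10T00:00:00Z",
        ),
    )
    print(detect_disclosure(tagged))    # disclosure dict: content is synthetic
    print(detect_disclosure(original))  # None: no disclosure found
```

In practice, a provider would use an established metadata or provenance standard rather than appending bytes to a file, but the shape of the flow is the same: embed a disclosure at generation time, then read it back through a detection tool or API.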
There was another transparency bill, backed by an organization called the Transparency Coalition, which requires developers of GenAI systems to post documentation about the data used to train their systems. It is an interesting bill because there are 12 areas in which developers have to disclose information, including the sources and owners of the datasets, whether they contain copyrighted material, whether the data was purchased or licensed, whether it includes personal information or aggregated consumer information, and the time period over which it was collected.
Last but not least, we have to look at what's going on in California in the context of AI regulations. Stepping back: I worked on Prop 24, the CPRA, which says the CPPA will create regulations regarding automated decision-making – which, in this context, means AI. The CPPA has drafted at least 20 pages of regulations. Over the last six months, it has been hotly debated within the CPPA board whether this is too much and whether the California Legislature should be taking the lead instead. There was supposed to be a board meeting last Friday to discuss this, but it got pushed to November, so maybe there are still more discussions happening behind the scenes. One concern, for example, is that unlike Colorado, where a consumer can object if a decision is made entirely by AI, the language is looser in California. The point is, you have to look at what's happening in AI not only in the context of these 18 laws, but also in the context of what the CPPA is doing. If approved, it will be a pretty in-depth set of regulations.
Jeremy: I do have some questions. Tell us more about the disclosure act. What was the debate over text – whether AI-generated text should have to carry a disclosure? I'm sure for schools, this has become a big issue.
Tom: It's difficult. Google's SynthID can do a much better job. We had conversations with industry – it was a very collaborative process, and we were able to get to the point where no formal opposition was lodged with the governor – but text is a tough issue. At the very minimum, we wanted some sort of disclosure, but it just became too difficult. One thing I also wanted that got cut was the ability for consumers to ask a chatbot whether it was AI or not. But 942 was not designed to solve every question about whether content is real or synthetic. We did this bill knowing there were over 30 bills submitted, and some were focused on specific harms like deepfakes and pornography, or elections and disinformation. We wanted to provide a general platform and tools for consumers, but we recognize there are specific harms that do need more focused bills.
Jeremy: With 1047's veto and the promise from both the governor and Sen. Wiener to work on a new bill, how can people engage in the process to have more of a voice? That was a bit of a frustration from the industry's perspective – a bill passed in May that not many members of the industry being regulated had any idea about.
Tom: I think Wiener would probably disagree with that, because I testified three times on behalf of 942, and oftentimes 1047 was in the same committee. I think he went through how many times he talked with people. But the issue is that no one really knows what the trio of experts will do, how empowered they will be, and to what extent they will take industry feedback. One of the people you mentioned actually opposed the bill, so it's to be determined what the trio of experts will do and how much power they have. But Gov. Newsom said he will continue to work with the Legislature on this critical matter during its next session. And this past year, the Legislature proposed over 30 AI bills; 18 got through.
So, I would not be surprised if Wiener came back with another AI safety bill and Bauer-Kahan with her bias and discrimination bill. I would not be surprised if there are another 25 or 30 bills, and there could be a stalking horse to Wiener's AI safety bill, much like Wicks and Becker each had a labeling bill. That is very possible. And then you have to consider whether or not the AI regulations come through, because one of the issues that Newsom brought up was the coverage of sensitive personal information. A lot of things are going to happen… But I do know the Legislature is going to come back with another 25-30 bills. I'm confident, based on what people are talking about, that there is going to be another wave of AI-related laws.
Jeremy: Are there other states that you think will take on the disclosure act or follow the disclosure act? Do you think this will be something all 50 states will have, or is it enough that California has it that the industry will essentially be subject to it everywhere?
Tom: You could argue that California's law should cover all the large providers. But I do know for a fact that a lot of the civil society groups are looking to develop model laws based on some of the things that were passed in California. I would not be surprised if, out of the 18 bills, three, four or five proposed model state laws emerge that heavily leverage them. So, I think in the next two or three months you will see four or five model bills proposed by some of these civil society groups and shopped around. It is very likely you'll see model bills on specific harms – elections, deepfakes, labeling, and transparency. We have not gotten to the stage of having a significant AI safety law passed that could then become a cut-and-paste model for other states.
Jeremy: Well, thank you so much, Tom. This bill is so important, especially in a world of deep fakes and the danger of those. So, thank you so much for everything you did on this.
I want to start with something that was once a fairly provocative statement. Today it is accepted as a matter of fact, and that is, that it's become almost impossible to pass a bill through Congress. And if you don’t believe me, just ask our friends in the crypto and cannabis industries. These days everything, and I mean everything, is done in the 50 state legislatures. So, if you like your laws or regulations with a side of chaos, welcome to the United States in 2024. As Shannon alluded to earlier, few issues have created more chaos than the regulation of artificial intelligence.
What has happened in the states with respect to AI is simply unprecedented. Let's start with 2022, the year that ChatGPT went mainstream. That year, 67 AI bills were introduced. That is a decent number. A year later, that number nearly tripled to 190 bills. That is a much bigger number. Then a staggering 725 bills were introduced in the past nine months of 2024, and that is just a silly number. How silly? Let's compare AI to two other issues that have dominated the state legislatures for the past eight years or so: sports betting and data privacy. With sports betting, the high watermark was 150 bills in 2020. There was a lot more activity earlier on with sports betting, probably because of the pent-up demand – a federal law had to be struck down before states could actually do anything on it.
Then you have comprehensive data privacy, an issue that many of you are familiar with. Many of you have transitioned from data privacy to AI, or handle both right now. The high watermark there was 300 bills in 2020. Not many bills passed that year anyway, because of COVID. But if you look at the years after that, we are hovering around 200 bills. It took a couple of years for comprehensive data privacy legislation to really get momentum in the states; we did not see real action until the last two years. With AI, you are seeing the same trend, and I think we are only going to go even higher than 725.
That raises the question: why is AI so popular in the state legislatures? There has been a lot of ink spilled over the years about what motivates a legislator to introduce a bill and what leads to a bill being passed – a lot of conversation about contributions and outsized influence. I have been doing this for 15 years in state legislatures, helping companies pass bills or sometimes shape legislation. The number one factor for any legislator will always come down to: is this something that my voters or the voters in my district care about? And for governors, is it something that my state cares about? This January 2024 poll from the AI Policy Institute tells you pretty much all you need to know. When asked what kind of candidate they would prefer, 76% of voters said they prefer a candidate who says the government should pass legislation regulating AI because the risks it poses are so large. I see polls on all sorts of issues all the time, and you just never see numbers quite like this. I don't think we could get 76% of people to agree on what day of the week it is, much less on a policy issue like this.
Then again, maybe it's not that surprising. For the past 40 years, popular culture has led people to believe that AI is dangerous and could lead to something like, well, the Terminator. It is going to take time to change that perception, and until it changes, I think it's safe to assume that we're going to continue to see between 600 and 800 AI bills in the state legislatures each year. Many of these bills are deepfake bills, something that Tom's legislation will help address, so we are really talking about maybe 100 or 200 comprehensive bills. But for any of you who have tried to track this issue with monitoring services or on your own, those 700 bills will still show up when you're searching for AI legislation. That makes it almost impossible to track legislation and identify the bills that actually matter. If you have 700 bills that you're tracking, you can miss an SB 1047 – the bill that is actually going to gain steam.
What I'd like to do is walk through how to make this number more manageable. You are still going to have 600 to 800 bills next year; there is no way to stop that, and that is going to be the case for the next two or three years. But what I try to do for clients is make this manageable by limiting that number to the eight to 12 states that have the highest probability of passing a bill like SB 1047 – one that actually means something to your business – and that could potentially serve as a precedent for other states to follow. That is not just going to reduce the number of bills you need to follow; as Ryan is going to discuss in a few moments, it will give you a chance to help shape those bills.
When companies ask me to do this on any issue, I look for something that was similar, an analog issue. For car sharing, an issue we worked on with Turo and Getaround, we looked at what states had Uber and Lyft bills and took action on those bills. With sports betting, we looked at which states passed fantasy sports bills. It was almost aligned; 24 of the 25 states that passed online sports betting had fantasy sports bills. With video streaming taxes, we used taxes on cable or satellite. Our success rate is about 80%.
With AI, I can't think of any better example or analog than data privacy. The history of the issue of data privacy is almost identical to AI. Everyone started by thinking that data privacy would be regulated by Congress or there would be comprehensive legislation at the federal level. That did not happen. So, Europe passed the GDPR, California followed with the CCPA, and 19 other states since then have passed their own comprehensive data privacy laws. Some look like California, some don't. That is important because something very similar has been happening here. We have the EU AI Act, we almost had SB 1047, and we have all these states following that same path.
Let's populate our map to account for this analog. We have already eliminated 30 states from consideration just by focusing on states that have passed comprehensive data privacy legislation. I think we should add two more states to the mix: Illinois and New York. While neither has been successful in passing data privacy yet, Illinois has passed its biometric law, which is almost a predecessor to CCPA, and New York is one of the only states that actually requires crypto companies to get a license to operate in the state. Aside from California, it had the most AI bills in the country in 2024.
This is still far too many states. What I want to do is get us down to 14 states. Here we go. I removed Montana, Delaware, Iowa, Kentucky, Nebraska, New Hampshire, and Rhode Island. They are all great states; they just don't meet one of the criteria, which is that they're not that big. And even the bigger ones are not usually looked at as precedential states – that is, states that other states will follow. The states you see in red are all states that we believe are precedential. Where you see the little siren next to a state, those are states we think could be particularly active in 2025. California, of course, will come back. Colorado should have a siren: the governor has said he is going to amend the comprehensive AI law they passed this year, and they are the only state in the country with a comprehensive AI law. Oregon – we have heard a lot of chatter about doing something here. Texas is a two-year legislature, so next year is the only year in the next two years that they will be dealing with AI, and they just passed a data privacy law. Connecticut was an early data privacy state and almost passed a bill very similar to Colorado's this year.
That still leaves 14 states, and I recognize that is still a lot for many of you, especially the startups on this call today. I've got one more trick for you, and then I'm going to turn it over to Ryan. The way I organize every state legislative campaign at a national level is by creating tiers. I usually go up to four tiers, but we are going to do three today.
Tier 1: These are the states that are most likely to act. They are too big to ignore, and there are other motivating factors. They are also the precedential states – states that others will look to and say, "Oh, I want to do AI. Let's see what Maryland did, or what California did, or Texas did." As you can see from this list, there are eight states in this group. If you are only going to focus on eight states, focus on these. That will likely take your 750 bills down to maybe 75-80 bills. That is a much more manageable number to track and, more importantly, only eight states you actually have to engage in, however you choose to do that.
Tier 2: These are states that have the potential to act. Many of them are some of the biggest states in the country, and they carry precedent risk, but based on their experience with data privacy – a similar issue – the other issues those legislatures are dealing with, or just the dynamics of the legislature, they sit in the second tier. Florida, for instance, has a much bigger issue it is dealing with right now, and that could end up taking over its entire session next year. But look at that list: Florida, Illinois, Indiana, New York, Tennessee, and Virginia.
Tier 3: There is definitely a potential to act. It is not out of the realm of possibility that Delaware could pass a comprehensive AI bill. It is just not going to be as precedential as the other states will be. This is not about how important the state is versus another. It is also about just your time and your ability to engage. This is one of the hardest things about dealing with 50 state legislatures.
With that, I'm going to turn it over to Ryan, who is going to give his perspective. He has been on the front lines of this for at least the last three years. He'll also talk about how you can engage – I think, Ryan, you are going to cover that as well.
Ryan Harkins: Sure. Thank you, Jeremy. And first, thank you to Shannon and the Orrick team for having me to talk about one of my favorite topics. Jeremy put his finger on the fact that there is a confluence of both a policy problem and a political problem. Take the latter first – and this has been the case for tech regulation for several years: if you go and talk to any state lawmaker, and this is true of Republicans and Democrats, and true whether they're from a big state like California or a smaller state like Delaware, they will tell you that they are motivated to act in this space in part out of a concern that Congress has not and will not act. We have seen this in other spaces. We at Microsoft have been calling, for instance, for comprehensive privacy legislation since 2005, and we are still waiting for Congress to act, despite the fact that, as Jeremy put it, 20 states have now enacted comprehensive privacy laws. And we will inevitably see more states enact comprehensive privacy laws this next year. Inaction at the federal level is leading to action in state capitals.
There are a host of legitimate policy concerns that constituents, and then ultimately, lawmakers are raising about AI. We at Microsoft are obviously very excited and bullish about the potential of AI. But we also acknowledge concerns people are raising and are clear-eyed about those concerns. My team was also tracking over 700 pieces of legislation in state capitals related to AI this last year. That is not an accident. We are seeing that state lawmakers are increasingly collaborating with one another across states. Many of the lawmakers that are active in AI legislation were also the same people who were active in sponsoring privacy laws. Several of them have organized a working group, an informal working group. Sen. James Maroney from Connecticut, Sen. Rodriguez from Colorado, Rep. Capriglione from Texas, among others, are part of this effort to focus on a number of different issue areas. They formed subcommittees. They are proposing legislation. They are motivated to ensure that what, in their view, happened with social media, meaning that it was largely unregulated, does not happen with AI. They produced bills like Connecticut SB 2, which the Colorado AI Act was ultimately modeled upon. What we have heard is that they intend to run similar bills or have similar bills introduced and run in multiple states next year, which means that there will be a lot more activity for people like Jeremy, Tom, Shannon, and me to work on.
In some ways, as California goes, perhaps the rest of the country will go. The range of different bills we saw in Sacramento this last year was, in a lot of ways, a microcosm of the range of different types of bills we saw across the country. SB 1047 was, to put it mildly, a bit polarizing. As was noted earlier, virtually all of tech, the VC community, and others raised serious concerns with the bill. We ultimately were not surprised by Gov. Newsom's veto. Just reading the tea leaves leading up to his decision, we thought it seemed likely he would veto it, although it was far from a foregone conclusion. It is really worth reading his veto statement. Gov. Newsom did not simply come out and say that the idea of implementing safety and security protocols for AI is a bad idea altogether. He instead focused, in particular, on the threshold for regulation – the fact that the trigger was, by and large, the cost and number of computations a model would engage in. He went on to say that he thinks there is a role for California to play in this space. Not only that, but he has started an initiative with some very influential players – Dr. Fei-Fei Li of Stanford, Jennifer Chayes, the Dean of UC Berkeley's College of Computing, Data Science, and Society, and others – to lead efforts to figure out what California should do when it comes to safety protocols for the deployment of GenAI.
That is a long way of saying that I think it would behoove the industry – and there are industry players doing this – to think hard not just about what you don't like or oppose, but about what you can be for. That is an approach we at Microsoft have tried to take on a number of different issues and are trying to take in the AI space. We engaged with Sen. Wiener throughout the legislative process on SB 1047. We were asked very early on by Gov. Newsom and by Sen. Wiener to provide feedback on the proposal. I think Sen. Wiener is a very smart, thoughtful, and dedicated lawmaker, and his bill, for those of you who followed it closely, changed dramatically through the course of the session, and we certainly appreciated all of the engagement that he and his office had with us. Ultimately, we told him that we couldn't support the bill because, fundamentally, we think issues relating to safety and security protocols for models need to be handled at the federal level. Although I usually agree with lawmakers who roll their eyes about whether the Feds will do anything on relevant tech issues, I actually think there's more happening in DC on model safety and security than is perhaps widely recognized.
We are without question going to see more legislation on a range of different topics, not just in California, but in other states across the country. I think it would behoove the industry to get involved. If you are not a member of a trade association, you should be one. If you don't have a lobbyist or aren't engaged in the process, particularly in your home state, you should get engaged because there is a real need to educate policymakers about how the technology works, to educate policymakers about the upside the technology brings, while also being willing to sit down and acknowledge problems and to try to work on solutions. It reminds me, Jeremy, of the old phrase that I have heard before, that no matter what happens, the lawyers win out in the end. There will be plenty of work this next year, and moving forward, for people who work in this space.
Jeremy: Ryan, thank you. I'm so interested to hear about the work at the federal level. Can you talk a little more about that? Because you heard me at the beginning just basically saying nothing ever will happen. But it is encouraging to hear that this might be the exception that proves the rule.
Ryan: There is a lot of activity that has transpired, certainly over the past 12 months, on the creation of safety and security standards for models. You can go all the way back to the voluntary commitments the White House announced in July of last year from leading AI companies to advance the discussion around safety and security of frontier systems. Those include commitments to test systems for safety and security risks. Those voluntary commitments ultimately fed into the Executive Order that the Biden Administration issued in October of 2023. The United States government has worked with the G7 to develop a code of conduct for developers of advanced AI. We have seen the federal government stand up the U.S. AI Safety Institute, which is part of a growing network of safety institutes around the world; they are going to convene in San Francisco at the end of November to launch work streams on how to do capability evaluations and risk assessments for advanced systems. This was one of the concerns we raised with Sen. Wiener about his bill: while we are all concerned about the potential for critical harms relating to AI systems, the science and measurement techniques that will be required to evaluate those types of risks and potential harms aren't yet there. There is more work that needs to be done to develop those things. We have even seen Congress – and I share a healthy dose of skepticism about what Congress can or cannot do – stand up AI task forces and begin to draft bipartisan AI bills. There is a lot of activity happening in DC and internationally to advance this particular part of the AI conversation.
Jeremy: Great. I'm going to ask one more question before I open it up to the entire panel. You mentioned influential trade associations broadly – I was wondering if you have a few you could recommend?
Ryan: It really depends on the state that is relevant and where you're operating. In some respects, all politics is local. In California, I certainly think the CalChamber has done a good job of organizing the business community's feedback and concerns on a range of different bills. We are a prominent member of BSA, the Business Software Alliance, which I think is best in class in terms of its substantive chops and the like.
There is a broader recognition of the role that states are playing in the policy space and perhaps the outsized influence that state capitals have right now, and the extent to which what is happening in one venue is influencing what happens in another. We know that state lawmakers are collaborating with one another and sharing ideas on what regulation should look like in this space, and not just through traditional groups like NCSL or CSG, but through this task force or working group they have stood up.
We also know that state lawmakers, in particular, are being influenced by what's happening in the EU, both indirectly because they can see the news and read the EU AI Act, but also in some ways directly. We have had lawmakers in Sacramento tell us that they are, in fact, communicating with and sharing ideas with EU lawmakers in Brussels. We know the EU has an office in San Francisco and a senior envoy to the United States focusing on digital and tech issues.
If for no other reason than the sort of precedential effect, and the fact that what is happening in state capitals is influencing, perhaps, what happens in DC, in Brussels, and elsewhere, and vice versa, it's worth the business community and tech more broadly getting more engaged in these policy conversations in state capitals.
Jeremy: Great. I'll give a quick plug, and then I'm going to go right to Shannon and Tom to ask them a couple of questions. If you are wondering, "Hey, how do I track this?" – just go to Orrick.com. On the home page is an AI Resource Center. You can just click on it; it tracks legislation. It is a good way to keep in touch with what's going on. We are also going to add the names of the governor, senate president, and speaker for each of the 14 states we identified today. We will have that up in the next several days.
If worse comes to worst and you can't find the right trade association, feel free to reach out. You can also write a letter and tell lawmakers your view. In about 20% of cases, the staffer monitoring that inbox will read your email and might reach out and say, "Hey, I'd like to hear your view on this," because AI is incredibly complicated, and they're looking to hear the views of industry from people like you and, obviously, people like Ryan. That is just another way – more of a do-it-yourself way. Again, feel free to reach out. That is the plug.
Let's now go to Shannon. Early on in the conversation, there was a question about 1047 and whether this was a good or a bad thing for the AI industry. I know that is a loaded question. I understand, as you pointed out at the beginning, there are both sides to this, maybe even three sides. What is your view? Was this a good or bad thing that happened for the industry?
Shannon: Thanks, Jeremy. I think, as Gov. Newsom said, the bill was really well-intentioned – it certainly put safety at the center of the legislation. But it didn't take into account whether an AI system is deployed in high-risk environments, involves critical decision-making, or uses sensitive data. Instead, it applied really stringent standards to even basic functions, which just didn't seem right. At this point, it is really important that the legislation that gets passed is very thoughtful. Similar to what Newsom said, we need guardrails, but they need to be clear and calibrated to the risks. I think the trio – the panel of experts – is going to go away and consider how to contour a law that addresses the flaws we saw in SB 1047.
Jeremy: Tom, do you have a view?
Tom: I think 1047 was too prescriptive and maybe too broad. It was probably too much of a platform. In California, the only major tech platform law was the CCPA, and that had behind it the threat of Alastair Mactaggart putting an initiative on the ballot in 2018. Just wearing my policy hat and thinking about what you can actually get through, I think it would have been better to chunk the AI safety bill up into maybe two, three or four bills and snap them on for specific harms – making it more of a Lego building block that builds on top of existing pieces, like the CCPA as amended by the CPRA.
What I've personally found is that if you try to introduce your own omnibus platform, it gets everyone involved. If there's objection to one piece or part, then the whole thing goes down. From a realpolitik perspective, it could have been chunked up. You look at examples like the California Delete Act, which was built on top of the pre-existing CCPA and the Data Broker registry, and we snapped on top of that. Sen. Stern came out with a cyberbullying bill that snapped on top of a pre-existing piece of legislation.
To what Shannon said, what it was trying to accomplish was definitely laudable. I just think it could have focused more on particular harms and been less prescriptive, as opposed to proposing too broad a platform for tech regulation – those are very difficult things to get through. Hopefully, Sen. Wiener, another legislator, or the trio of people that Newsom appointed can follow the approach that actually worked, which was the 17 or 18 specific-harm bills that got through. For the most part, there was great collaboration between policymakers and industry to get those through. A good example is SB 942, the bill I worked on: there was great collaboration, and there was acknowledgment that there is harm happening – in this particular case, deepfakes and consumers not being clear about what's real and what's synthetic.
Jeremy: Ryan, I'm going to end with you. What are the five states that you think people should be focused on the most? Or maybe even three. I gave eight to 14; I hope that's not too hard.
Ryan: It is a fair question. We are still thinking through on our end how to prioritize all the activity we're going to see next year. Just off the top of my head, California obviously will continue to be important, and I think several of the states you mentioned will be important. Colorado passed the Colorado AI Act, but Gov. Polis, Attorney General Weiser, and Sen. Rodriguez have all indicated that they want to go back and revisit it next year, so that certainly is a place worth paying attention to. Sen. Maroney in Connecticut continues to be very influential in this space, and he is someone who, from our vantage point, is very thoughtful, reasonable, and hardworking. He will inevitably come back with the next iteration of SB 2, a bill that we at Microsoft supported on the record – we thought it was a very reasonable framework. New York and Texas are both states certainly worth paying attention to. We know that in Texas, Rep. Capriglione is working on drafting a bill of some sort and has been accepting stakeholder feedback on what it could look like. With over 700 bills introduced last year and more expected this next year, there will inevitably be states we're not anticipating where we'll see bills that are active and moving.
Jeremy: This is great. Hopefully, we can all get together in January or February and revisit this – we'll have a follow-up for everyone. Thank you so much; I appreciate everybody who was able to make it. Thank you, Tom, Ryan, and of course, Shannon. Really appreciate your time.
Ryan: Thanks, all.