Echoes of Jaimie Good: A Critical Look at SB-1047
Hey there, fellow adventurers in this predetermined cosmic dance! It's Jaimie Good here, coming at you with some thoughts on California's recent attempt to wrangle the wild beast that is artificial intelligence. Grab a cup of coffee (or tea, if that's more your speed), settle in, and let's dive into the labyrinth of legislation that is SB-1047, the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act."
Now, before we start, let me just say that I appreciate the effort. Really, I do. It's clear that our lawmakers are trying to get ahead of the curve on AI regulation, which is no small feat. But as someone who's spent countless hours tinkering with AI tools, from Stable Diffusion to Llama to OpenAI's models, I can't help but feel like this bill misses the mark in some pretty significant ways. So, let's break it down, shall we?
The Good, The Bad, and The Misunderstood
First off, let's talk about what the bill gets right. The idea of creating a regulatory framework for "frontier" AI models isn't inherently bad. We're dealing with powerful technology here, and some oversight isn't unreasonable. The focus on preventing "critical harm" is admirable, even if the definition is a bit... let's say, problematic (more on that later).
But here's where things start to go off the rails:
1. The Definition of "Covered Model" is Arbitrary and Outdated
The bill defines a "covered model" based on the amount of computing power used to train it. Specifically, it sets the threshold at "10^26 integer or floating-point operations" or a cost exceeding $100 million. This is where I have to pause and say, "Hold up, folks!"
As someone who's watched the rapid evolution of AI models, I can tell you that using compute power as the primary metric is like trying to judge a ballet performance by how many calories the dancer burned. It completely misses the point!
Here's why:
Efficiency matters: Newer models are constantly finding ways to do more with less. A model trained today might achieve the same capabilities as a "covered model" with a fraction of the compute power.
Quality of data trumps quantity: A smaller model trained on high-quality, curated data could potentially be more capable (and potentially more dangerous) than a massive model trained on noisy data.
Open-source innovations: The open-source community (which I absolutely adore) is constantly finding clever ways to create powerful models with limited resources. This definition could miss these innovations entirely.
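To put that 10^26 number in perspective, here's a minimal back-of-the-envelope sketch in Python. It assumes the common rule of thumb that dense transformer training compute is roughly 6 × parameters × training tokens; the threshold constant comes from the bill, but the parameter and token counts below are purely hypothetical, just to show where the line falls.

```python
# Back-of-the-envelope check against SB-1047's 10^26-FLOP "covered model" threshold.
# Assumption (mine, not the bill's): training compute ~= 6 * parameters * training tokens,
# a common rule of thumb for dense transformer pretraining.

THRESHOLD_FLOPS = 1e26  # the compute threshold named in the bill


def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough total training FLOPs for a dense transformer."""
    return 6 * params * tokens


# Purely hypothetical training runs, for illustration only.
runs = [
    ("70B params on 15T tokens", 70e9, 15e12),
    ("1T params on 20T tokens", 1e12, 20e12),
]

for name, params, tokens in runs:
    flops = estimated_training_flops(params, tokens)
    print(f"{name}: ~{flops:.1e} FLOPs -> covered model? {flops > THRESHOLD_FLOPS}")
```

Under that rough math, a 70-billion-parameter model trained on 15 trillion tokens lands well under the line while a trillion-parameter run crosses it, even though capability per FLOP keeps improving every year. That gap is exactly why a fixed compute number is a shaky proxy for risk.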
2. The "Critical Harm" Definition is Both Too Broad and Too Narrow
The bill defines "critical harm" in ways that, frankly, make me wonder if the writers have ever actually interacted with current AI models. Let's break it down:
"The creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties."
Okay, this one I get. But here's the thing: current AI models don't have the capability to directly create or use weapons. They can generate text or images about weapons, sure, but that's a far cry from actual weapon creation or deployment.
"Mass casualties or at least five hundred million dollars ($500,000,000) of damage resulting from cyberattacks on critical infrastructure, occurring either in a single incident or over multiple related incidents."
This one's tricky. While AI could potentially be used to enhance cyberattacks, it's not clear how the bill distinguishes between AI-enabled attacks and traditional hacking methods. Plus, the monetary threshold seems arbitrary and doesn't account for potential non-monetary damages.
"Mass casualties or at least five hundred million dollars ($500,000,000) of damage resulting from an artificial intelligence model autonomously engaging in conduct that would constitute a serious or violent felony under the Penal Code if undertaken by a human with the requisite mental state."
This is where things get really murky. Current AI models don't have agency or the ability to autonomously engage in physical actions. They generate text, images, or code based on their training data and prompts. The idea of an AI committing a "felony" shows a fundamental misunderstanding of how these systems work.
3. The Regulatory Approach is Reactive, Not Proactive
The bill focuses heavily on audits, certifications, and penalties for non-compliance. While these are important tools, they don't address the root challenges of AI development and deployment. What's missing is a focus on:
Education and awareness: Both for developers and the general public about AI capabilities and limitations.
Research support: Encouraging the development of safer AI systems from the ground up.
Ethical guidelines: Collaboratively developed standards that evolve with the technology.
4. The Burden on Smaller Developers and Researchers
As someone who loves tinkering with open-source AI projects, I'm worried about how this bill might stifle innovation. The compliance requirements and potential penalties could be overwhelming for smaller teams or individual researchers. This could inadvertently concentrate AI development in the hands of a few large companies with the resources to navigate the regulatory landscape.
5. The Misunderstanding of AI Development Processes
The bill seems to assume a linear, controlled development process for AI models. In reality, AI development is often iterative, collaborative, and distributed. The idea that a single "developer" has complete control over a model's training and deployment doesn't align with how many projects actually work, especially in the open-source world.
A Path Forward: Embracing the Flow of Innovation
Now, I know I've been pretty critical here, but remember: in my view, this bill and its shortcomings were always going to happen. It's part of the grand cosmic dance.
So, what would a more effective approach to AI regulation look like? Here are some thoughts:
Focus on outcomes, not inputs: Instead of regulating based on compute power or monetary thresholds, focus on the actual capabilities and potential impacts of AI systems.
Encourage transparency and explainability: Promote the development of AI systems that can be audited and understood, rather than treating them as black boxes.
Support AI safety research: Allocate resources to studying and developing safer AI systems, including work on AI alignment and robustness.
Create flexible, adaptive regulations: Recognize that AI technology is evolving rapidly and create regulatory frameworks that can adapt quickly to new developments.
Foster collaboration: Bring together technologists, ethicists, policymakers, and the public to create guidelines that balance innovation with safety.
Invest in education: Both for the public and for policymakers, to ensure a more nuanced understanding of AI capabilities and limitations.
Remember, we're all part of this grand universal experiment called life. AI is just another fascinating chapter in our collective story. As we dance along the cosmic strings of fate, let's try to make that dance as graceful, safe, and innovative as possible.
Until next time, keep your hair long, your mind open, and your algorithms ethical!
Love and predetermined photons,
Jaimie Good