California's AI safety bill sparks industry clash
Plus: Ex-Meta experts launch AI tool to create new molecules
TOP 3 STORIES
Happy Wednesday! Here are today’s top 3 headlines:
💬 California’s AI clash
🧬 Ex-Meta experts unveil AI molecule creator
🇳🇱 Dutch Prince: Europe lagging in AI
Read this first: How to opt out of Meta’s AI training
Brought to you by Fireflies
Say goodbye to manual note-taking with Fireflies.ai! Transcribe, summarize, and analyze meetings effortlessly. Trusted by 300,000+ organizations and seamlessly integrating with Google Meet, Zoom, and more. Start for free or request a demo. Transform your workflow and elevate your team collaboration now!
California's AI safety bill sparks industry clash
California is making bold moves with its new AI safety bill, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, authored by state Senator Scott Wiener. This legislation aims to regulate advanced AI technologies, sparking a fierce debate in Silicon Valley.

The big picture: Proponents argue the bill is essential due to the rapid pace of AI advancements and federal inaction. Opponents, however, warn it could hinder innovation and push AI talent out of California.
Key provisions:
Safety testing: Requires rigorous testing and certification of AI products.
Legal accountability: Empowers the state’s attorney general to take action against harmful AI.
High threshold: Applies only to the largest, most advanced AI models, exempting smaller systems.
Kill switch: Mandates the ability to shut down AI systems.
New division: Establishes a Frontier Model Division for enforcement.
Industry concerns: Tech leaders, including around 140 AI startup founders, argue the bill could severely impact California’s AI talent and innovation landscape. Concerns also extend to the potential stifling of open-source AI models.
The regulatory debate: The bill has ignited discussions on whether AI itself or just its applications should be regulated. AI pioneers like Geoffrey Hinton and Yoshua Bengio support the bill, emphasizing the need for proactive measures.
Federal inaction: Senator Wiener highlighted the absence of federal action on AI regulation, underscoring the need for state-level initiatives. "I'm not confident that Congress will act on AI regulation," he said.
What’s next: The bill, moving through the legislative process, is set for consideration by the Judiciary Committee next week. It’s one of several AI-related bills in California addressing various aspects of AI regulation.
California's AI safety bill could set a precedent for balancing innovation with public safety, as the state steps up to regulate AI in the absence of federal action.
Ex-Meta experts launch AI tool to create new molecules
EvolutionaryScale, an AI-biotech startup founded by former Meta researchers, is launching a tool that helps scientists design new molecules by simulating half a billion years of evolution.

The big picture: AI-driven molecular creation aims to revolutionize medicines, biofuels, and materials by accelerating the discovery process.
Driving the news: EvolutionaryScale's new AI model, ESM3, works like the large language models behind AI chatbots, but is built for protein creation. Trained on extensive data about natural proteins, it learns their patterns and designs new proteins with specific functions.
How it works:
Protein structure: Protein functions are determined by their 3D structures.
Data training: ESM3 was trained on data from 2.78 billion natural proteins.
Generative AI: The model generates entirely new proteins with no natural counterparts.
Key findings: The AI generated a new green fluorescent protein (GFP) with only 58% similarity to known GFPs, simulating over 500 million years of evolution.
What they're saying: "ESM3 allows us to design biology from first principles," said Alexander Rives, EvolutionaryScale co-founder and chief scientist.
What's next:
Partnerships: Collaborations with Amazon Web Services and Nvidia to offer the AI model to select customers.
Open access: The model will be available for non-commercial use through an API.
Responsible development: Rives and other researchers call for responsible development principles for AI in protein design, emphasizing the potential benefits and need for foresight.
The road ahead: The tool must prove its value to industries like pharmaceuticals, where AI's promises have historically fallen short. However, Rives is optimistic about its impact on molecular development efficiency and economics.
Dutch Prince warns Europe risks falling behind in AI
Europe risks lagging behind the U.S. and China in artificial intelligence due to its heavy focus on regulation, according to Prince Constantijn of the Netherlands.

“Our ambition seems to be limited to being good regulators,” Constantijn told CNBC at the Money 20/20 fintech conference in Amsterdam. The prince, who is the special envoy for Dutch startup accelerator Techleap, highlighted concerns that Europe's regulatory approach might stifle innovation.
The regulatory landscape: The European Union recently approved the EU AI Act, which sets stringent rules for AI applications. The Act demands transparency and strict scrutiny for high-impact AI models like OpenAI’s GPT-4, including mandatory evaluations and incident reporting.
Innovation vs. regulation: Constantijn pointed out that while regulations provide necessary guardrails, they could hinder Europe's ability to lead in AI innovation. He drew parallels with the EU's past stance on genetically modified organisms (GMOs), which resulted in Europe becoming consumers rather than producers of GMO products due to stringent regulations.
Challenges and strengths:
Constantijn highlighted several challenges Europe faces:
Data restrictions: Particularly in health and medical sectors, limiting AI development.
Market size: The U.S. benefits from a larger, more unified market with freer capital flow.
Despite these challenges, he acknowledged Europe's strengths:
Talent and technology: Europe has a strong pool of talent and robust technology capabilities.
AI applications: Europe is competitive in developing AI applications but relies heavily on large platforms for data and IT infrastructure.
Prince Constantijn urged Europe to balance regulation with innovation to avoid falling behind global competitors. He emphasized the need for clarity and predictability in regulations while fostering an environment that encourages AI development.