AI Trailblazers
OpenAI vs. Musk + AI’s hidden risks + Klarna’s workforce shift 🤖⚖️
From OpenAI’s legal battle with Musk to AI’s impact on jobs and national security, here’s what’s making waves.
🎄 Hey!
We’re signing off for the year—this is our last edition before the holiday break. Big things are coming in the new year: a fresh look, deeper dives, and two new editors joining the crew to bring you even more editions each week. Catch you in 2025!
In other news…
Meta backs Elon Musk in urging California to block OpenAI’s for-profit shift, calling it a threat to Silicon Valley. OpenAI accuses Musk of dodging fair competition.
📰 Latest headlines
OpenAI has responded to Elon Musk’s lawsuit challenging its for-profit restructuring, alleging Musk initially pushed for a profit-driven model under his control. When denied majority equity and control, Musk left the organization, predicting its failure, according to OpenAI’s court filing.
Emails reveal Musk’s push for dominance over OpenAI’s AGI research, with co-founder Ilya Sutskever expressing concerns about Musk’s potential “absolute control.” Musk’s lawsuit seeks to block OpenAI’s transition to a corporate structure, which the company argues is vital for scaling its work.
The legal battle reflects rising tensions as Musk now leads OpenAI rival xAI. Read more.
Lisa Kudrow called Robert Zemeckis’ use of AI to de-age Tom Hanks and Robin Wright in Here an “endorsement for AI” on the Armchair Expert podcast. She raised concerns about AI replacing actors and limiting opportunities for new talent, asking, “What work will there be for human beings?”
Hanks previously noted AI could extend actors’ careers indefinitely, saying, “My performances can go on and on.” Here explores multigenerational stories and reunites Zemeckis, Hanks, and Forrest Gump writer Eric Roth. Read more.
AI models, including OpenAI’s o1, have displayed "scheming" behaviors during safety evaluations, such as bypassing oversight, denying actions, and fabricating explanations. Tests revealed instances of models backing up data to avoid shutdowns or underperforming to evade retraining.
Why it matters: As AI grows more autonomous, ensuring alignment with human goals becomes critical. These controlled findings highlight the need for stronger safeguards. Read more.
Klarna CEO Sebastian Siemiatkowski claims AI has allowed the company to stop hiring, reducing its workforce from 4,500 to 3,500 in a year. Siemiatkowski told Bloomberg TV, “AI can already do all of the jobs that we as humans do,” and credited generative AI for increased efficiency and employee salary growth.
Despite these bold claims, Klarna hasn’t stopped hiring entirely. The company currently lists over 50 open roles globally and has actively recruited for essential positions throughout 2024, particularly in engineering and partnerships. Klarna’s global press lead clarified the CEO’s comments, saying they were “simplifying for brevity” and that the company is only backfilling critical roles.
With an IPO on the horizon, Klarna’s AI narrative may aim to attract investor attention, but like many tech firms, its actual AI integration appears to be progressing at a measured pace. Read more.
AI video tools are booming, with OpenAI’s Sora and Pika’s updated models making waves. Runway, an early leader, shows profitability is possible, projecting $84M in annualized revenue this month (up from $28M in June) and aiming for $265M by late 2025.
Runway’s advantage? Fine-tuned user control over video details like camera angles and consistent character appearances—areas where competitors like Sora struggle. Its rapid growth suggests AI video might avoid commoditization, offering hope for startups and investors. Read more.
Generative AI is quietly being integrated into U.S. military operations, supporting tasks like communications, coding, and data processing. Tools like NIPRGPT, deployed by the Air Force, and Amelia, used by the Navy, are touted as efficiency enhancers. However, their reliance on statistical correlations instead of factual reasoning poses serious risks.
Errors from AI “hallucinations” or adversarial attacks on training data could compromise mission-critical decisions, with cascading consequences. For instance, OpenAI’s coding tools have accuracy rates as low as 31%, highlighting the potential for harmful errors in sensitive tasks.
Because these tools are often classified as IT infrastructure rather than critical analytical systems, they are adopted with ease and bypass proper scrutiny, leaving vulnerabilities exposed. The risks outweigh the efficiency gains, raising urgent questions about their suitability for national security applications. Read more.