Week 20: What’s Happening in AI
OpenAI today. Resources from YC, founders, and more. Distributed training meets AI policy. Plus, what if deepfakes go nuclear?
Welcome to this week’s edition of What’s Happening in AI, covering the latest in policy, responsible AI, and where AI is going, for the Responsible Innovation Labs community.
Below: Founder tips on building boards and companies. Distributed training meets AI policy. US and UK cybersecurity guidelines. AI Insight Forum this week. Plus, what if deepfakes go nuclear?
First, we know you’ve all read thousands of words on this already, but here are a few notes on the OpenAI story that we found especially interesting.
Consider:
Culture and mission orientation: “The big thing is really just maintaining the culture and the mission orientation as we grow…how do you maintain that focus at scale” (Brad Lightcap, OpenAI COO; more here)
Leadership: “It should just be good self-reflection and good leadership to anticipate the failure points so as to avoid them…People did not imagine this was possible” (Josh Wolfe, Lux Capital Partner)
Corporate structure: “The leadership crisis raised questions about the company’s governance structure,” including its hybrid governance (more here)
Looking ahead, will risks around concentration of power, governance, and open systems get more focus?
What’s next:
OpenAI shares immediate priorities: advancing research and full-stack safety efforts, continuing product delivery, and building out a diverse board and improved governance structures (including an independent investigation)
Policy
Americas:
Europe:
US, Britain, and over a dozen other countries sign an AI framework urging developers to work securely
European Central Bank: Reports of AI ending human labour may be greatly exaggerated
Asia:
Responsible AI
Company-building:
The 10-year “overnight” success story of Casetext (Y Combinator)
Responsible AI Commitments and Protocol for Startups and Investors (RIL)
Governance:
Founder tips on building boards (Brian Halligan, HubSpot Co-Founder)
Privacy and security:
What every healthcare founder should know before running ads (PhaseLab)
Assessing the security posture of a widely used vision model: YOLOv7 (Trail of Bits; more here)
Product and alignment:
Model alignment protects against accidental harms, not intentional ones (Princeton; ANU)
Europe’s first independent research lab dedicated to AI open science (Kyutai)
ICYMI: Everything you need to know about the NIST AI Risk Management Framework (Trustible)
Where AI is going
Capabilities:
DeepMind’s method for distributed training ‘breaks AI policy’ (Jack Clark, Anthropic Co-Founder)
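Worth unpacking the policy point: compute-governance proposals generally assume frontier-scale training happens in one well-connected data center, while low-communication methods let workers train mostly independently and synchronize only occasionally. Below is a minimal local-SGD sketch of that idea on a toy regression problem; the data, hyperparameters, and plain parameter averaging are our own illustration, not DeepMind’s implementation (which uses an outer optimizer on the deltas).

```python
# Minimal sketch of low-communication distributed training (local SGD,
# the idea behind DiLoCo-style methods). Toy linear regression stands in
# for a real model; all names and numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

K, N, D = 4, 1024, 8              # workers, examples per worker, features
w_true = rng.normal(size=D)
shards = []
for _ in range(K):                # each worker holds its own data shard
    X = rng.normal(size=(N, D))
    y = X @ w_true + 0.01 * rng.normal(size=N)
    shards.append((X, y))

def local_steps(w, X, y, steps=50, lr=0.01, batch=64):
    """Run many SGD updates on one worker's shard with no communication."""
    w = w.copy()
    for _ in range(steps):
        idx = rng.integers(0, len(y), size=batch)
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch
        w -= lr * grad
    return w

# Outer loop: workers sync once per round, not once per step -- this is
# what lets training span loosely connected compute.
w_global = np.zeros(D)
for r in range(20):
    local = [local_steps(w_global, X, y) for X, y in shards]
    delta = np.mean([w - w_global for w in local], axis=0)  # one cheap sync
    w_global += delta
    mse = np.mean([np.mean((X @ w_global - y) ** 2) for X, y in shards])
    print(f"round {r:2d}  mse {mse:.5f}")
```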
Society and technology:
US Rep. Hill: We expect to pass crypto oversight and stablecoin bills in early 2024
China will cooperate with US on military talks, AI, but warns of Taiwan 'abyss'
Stack and ecosystem:
How much does it cost to use an LLM? (Tom Tunguz; a back-of-the-envelope sketch follows this list)
Opinion: Y Combinator's future in software (Samo Burja)
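On the Tunguz question, the arithmetic is simple once you fix per-token prices. Here is a back-of-the-envelope sketch with placeholder rates; the prices below are hypothetical assumptions, not any provider’s published pricing:

```python
# Rough monthly cost of calling a hosted LLM API. The per-token prices
# are placeholder assumptions, not any provider's published rates.
PRICE_PER_1K_INPUT = 0.001    # USD per 1K input tokens (assumed)
PRICE_PER_1K_OUTPUT = 0.002   # USD per 1K output tokens (assumed)

def monthly_cost(requests_per_day: int, input_tokens: int, output_tokens: int) -> float:
    """Estimate monthly spend for a steady request volume."""
    per_request = (input_tokens / 1000) * PRICE_PER_1K_INPUT \
                + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT
    return requests_per_day * 30 * per_request

# e.g. 10K requests/day, 1K-token prompts, 500-token completions -> $600/mo
print(f"${monthly_cost(10_000, 1_000, 500):,.2f} per month")
```

The takeaway holds regardless of the exact rates: cost scales linearly with token volume, so prompt length and completion caps are the main levers.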
Releases, raises, more:
Releases: Stability AI’s SDXL Turbo (text-to-image), Vulavula (AI for African languages), Amazon Q (assistant), Meta’s CICERO (agent), Pika (AI video creation and editing)
Raises: AI21 ($208M, enterprise AI), Cradle ($24M, protein design), Artisan AI ($2.3M, agents), Wormhole ($225M, Web3)
More: Imbue and Dell partner on a $150M high-performance computing system
Extra bits
A data-driven look at the rise of AI (Cerebral Valley)
3D model prediction from 2D images (Adobe Research)
What are AI chatbots actually doing when they ‘hallucinate’? (Northeastern)
ICYMI: GraphCast: AI model for faster, more accurate weather forecasting (DeepMind)