Week 21: What's Happening in AI
Stories on the latest in policy, responsible AI, and where AI is going for the RIL community. Plus, is there middle ground between e/acc and decel?
First, how industry leaders are shaping practical resources on responsible AI (more here).
Policy
US Senate AI Insight Forums #8 and #9 wrapped up: “Senate co-hosts indicated that key committees will begin ramping up efforts to craft bipartisan AI legislation in the coming months.” Among the first such efforts, a House committee released a landmark report arguing for much more extensive investment restrictions in US-China private markets.
EU AI Act: deal on comprehensive rules for trustworthy AI: “Safeguards agreed on general purpose AI; limitation for the use of biometric ID systems by law enforcement; bans on social scoring and AI used to manipulate/exploit user vulnerabilities” – full text to be published in 2024 (more here)
Commentary on China’s AI regulation: “Beijing Internet Court’s ruling that content generated by AI can be covered by copyright has caused a stir in the AI community, not least because it clashes with the stances adopted in other major jurisdictions, including the US”
Brookings’ cluster analysis of national AI strategies: finds “certain countries are prioritizing the realization of the promises of AI while others are more concerned with mitigating its risks” after rating 30+ countries as high, medium, or low on six factors (e.g., data management)
Responsible AI
Startups are the missing perspective in the AI debate: “It's incumbent, not just for the administration, but members of Congress, to get out and hear the earliest-stage innovators, what they're worried about and have a frank conversation.” – Gaurab Bansal, RIL
OpenAI outlines its work on Safety Systems: “addresses emerging safety issues and develops new fundamental solutions to enable the safe deployment of our most advanced models and future AGI, to make AI that is beneficial and trustworthy”
Intel goes after responsible AI with ethics index patent: aiming to prevent legally and ethically inappropriate uses of AI, “the system uses a ‘trust mapping engine’ that’s given policy thresholds as guidelines for mapping training data to inference data”
Resources: Converge 2 company applications open
Where AI is going
How Sam Altman's OpenAI drama highlighted the debate splitting Silicon Valley: Are you an e/acc or decel?: “That's not to say there isn't a middle ground…‘If you haven't built responsibly and with trust you stand a high risk of wiping out your shareholders,’ said Gaurab Bansal, the executive director of RIL. His advice to Silicon Valley: ‘I would encourage any watchers of the industry to not get stuck in a divisive set of rhetoric’” (more here)
Vertical AI: “We’ve seen vertical-focused startups think outside of traditional SaaS models and instead employ strategies like embedded payments (Toast and Shopify), advertising, and B2B marketplaces. Adoption of AI will accelerate this transformation”
Releases: Mistral’s platform services; Cerebras’ gigaGPT; Microsoft’s Phi-2; Google’s AI Studio; Delphi; demo of Tesla’s Optimus Gen 2
Raises: Chalk ($10M seed, data); Hyperlane ($6M seed, user behavior); Barnyard Games ($3.4M seed, games); Andalusia Labs ($48M Series A, digital asset risk); PursueCare ($20M Series B, telehealth); open source AI grants; Playground ($410M fund); Partech (€360M fund)
More: OpenAI partners with Axel Springer to deepen use of AI in journalism (publishing house of POLITICO, Business Insider, more); Microsoft and labor unions form ‘historic’ alliance on AI
ICYMI
5 steps to build your first GTM playbook (from SaaStr’s Workshop Wednesday series)
Hilary Mason, Hidden Door co-founder, on deterministic vs. probabilistic systems, and how AI is changing storytelling
IBM and Meta launch the AI Alliance to support open innovation and open science in AI