Posts about DeepMind
The Center for AI Safety released a 22-word statement this morning warning about the risks of AI. My full breakdown is here, but all the key points are included below for Reddit discussion as well.
Lots of media publications are talking about the statement itself, so I wanted to add analysis and context that's more helpful to the community.
What does the statement say? It's just 22 words:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
View it in full and see the signers here.
Other statements have come out before. Why is this one important?
Yes -- the most notable prior statement was the open letter calling for a 6-month pause on the development of new AI systems. Over 34,000 people have signed that one to date.
This one draws from a notably broader swath of the AI industry (more below), including leading AI executives and AI scientists.
The statement's simplicity, plus the time that has passed since the last letter, has given more people room to think about the state of AI -- and leading figures are now ready to go public with their views.
Who signed it? And more importantly, who didn't sign this?
Leading industry figures include:
Sam Altman, CEO OpenAI
Demis Hassabis, CEO DeepMind
Emad Mostaque, CEO Stability AI
Kevin Scott, CTO Microsoft
Mira Murati, CTO OpenAI
Dario Amodei, CEO Anthropic
Geoffrey Hinton, Turing Award winner and a pioneer of neural networks
Plus numerous other executives and AI researchers across the space.
Notable omissions (so far) include:
Yann LeCun, Chief AI Scientist Meta
Elon Musk, CEO Tesla/Twitter
The number of signatories from OpenAI, DeepMind and more is notable. Stability AI CEO Emad Mostaque was one of the few figures on this list who also signed the prior letter calling for the 6-month pause.
How should I interpret this event?
AI leaders are increasingly "coming out" about the dangers of AI. These concerns are no longer being discussed only in private.
There's broad agreement that AI poses risks on the order of threats like nuclear weapons.
What is not clear is how AI can be regulated. Most proposals are early-stage (like the EU's AI Act) or purely theoretical (like OpenAI's call for international cooperation).
Open-source may pose a challenge for global cooperation as well. If everyone can cook up AI models in their basements, how can AI truly be aligned to safe objectives?
TL;DR: everyone agrees it's a threat -- but now the real work needs to start, and navigating a fractured world with low trust and high politicization will prove a daunting challenge. We've seen glimmers that AI can become a bipartisan topic in the US -- so now we'll see whether it can align the world around some level of meaningful cooperation.
P.S. If you like this kind of analysis, I offer a free newsletter that tracks the biggest issues and implications of generative AI tech. It's sent once a week and helps you stay up-to-date in the time it takes to have your Sunday morning coffee.
I came across a fascinating research paper published by Google's DeepMind AI team.
A full breakdown of the paper is available here, but I've included summary points below for the Reddit community.
What did Google's DeepMind do?
They adapted their AlphaGo AI (which defeated the world Go champion a few years ago using "weird" but successful strategies) into AlphaDev, an AI focused on generating code.
The same "game" approach worked: the AI treated a complex basket of computer instructions like they're game moves, and learned to "win" in as few moves as possible.
DeepMind discovered new algorithms for sorting 3-item and 5-item lists. The 5-item sort in particular ran roughly 70% faster than the existing routine (a rough sketch of what these tiny sorts look like is below).
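For a concrete picture of what was being optimized, here's a minimal C++ sketch of a conventional 3-element sorting network. This is my own illustration, not DeepMind's discovered code: AlphaDev worked at the assembly level on routines like this, where eliminating even a single instruction counts as a win.

```cpp
#include <algorithm>
#include <cstdio>

// A conventional 3-element sorting network: three compare-exchange
// steps, each compiling down to just a few instructions. This is NOT
// the sequence AlphaDev found -- it's a baseline sketch of the kind
// of tiny, fixed-length routine the AI was optimizing.
void sort3(int& a, int& b, int& c) {
    if (a > b) std::swap(a, b);  // compare-exchange 1
    if (b > c) std::swap(b, c);  // compare-exchange 2: c is now the max
    if (a > b) std::swap(a, b);  // compare-exchange 3: a and b in order
}

int main() {
    int x = 3, y = 1, z = 2;
    sort3(x, y, z);
    std::printf("%d %d %d\n", x, y, z);  // prints: 1 2 3
}
```

Because these fixed-size sorts are called constantly as base cases inside larger sorting routines, shaving instructions here pays off everywhere.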
Why should I pay attention?
Sorting algorithms are common building blocks in more complex algorithms and in software generally. A simple sort is probably executed trillions of times a day, so even small gains add up to vast savings.
Computer chips are hitting a performance wall as nano-scale transistors run into physical limits. Optimization improvements, rather than more transistors, are a viable pathway towards increased computing speed.
The sorting routines in LLVM's libc++ (a core C++ standard library) hadn't been updated in a decade. Plenty of humans have tried to improve them, and progress had largely stalled. This marks the first time AI-designed code has been contributed to the library.
The solution DeepMind devised was creative. Google's researchers originally thought AlphaDev had made a mistake -- but then realized it had found a solution no human being had contemplated.
The main takeaway: AI has a new role -- finding "weird" and "unexpected" solutions that humans haven't conceived of.
The same thing happened in Go, where human grandmasters didn't understand AlphaGo's strategies until it proved it could win.
DeepMind's AI also mapped out 98.5% of known proteins in 18 months, which could usher in a new era of drug discovery as AI proves more capable and creative than human scientists at tasks like this.
As the new generation of AI products requires even more computing power, broad-based efficiency improvements could be one way of helping alleviate challenges and accelerate progress.
I mean, this is the official beginning of the Singularity, right?
A multi-modal robot that can learn from videos
From the article:
“It can pick up a new task with as few as 100 demonstrations because it draws from a large and diverse dataset. This capability will help accelerate robotics research, as it reduces the need for human-supervised training, and is an important step towards creating a general-purpose robot.”
What do we…do?