Jan 13, 2023
Andy and Dave discuss the latest in AI and autonomy news and
research, including a report from Stanford's Institute for
Human-Centered AI that assesses progress (or lack thereof) in
implementing the three
pillars of America's strategy for AI innovation. The Department of
Energy is offering up a total of $33M for research in leveraging
AI/ML for nuclear fusion. China’s Navy appears to have launched a
naval mothership for aerial drones. China is also set to introduce
regulation on "deepfakes," requiring users to give consent and
prohibiting use of the technology for fake news, among many other things.
Xiamen University and other researchers publish a
“multidisciplinary open peer review dataset” (MOPRD), aiming to
provide ways to automate the peer review process. Google executives
issue a “code red” for Google’s search business over the success of
OpenAI’s ChatGPT. New York City schools have blocked access for
students and teachers to ChatGPT unless it involves the study of
the technology itself. Microsoft plans to launch a version of Bing
that integrates ChatGPT into its answers. And the International
Conference on Machine Learning bans authors from using AI tools
like ChatGPT to write scientific papers (though still allows the
use of such systems to “polish” writing). In February, an AI from
DoNotPay will likely be the first to represent a defendant in
court, telling the defendant what to say and when. In research, the
UCLA Departments of Psychology and Statistics demonstrate that
analogical reasoning can emerge from large language models such as
GPT-3, showing a strong capacity for abstract pattern induction.
Research from Google Research, Stanford, UNC Chapel Hill, and DeepMind
shows that certain abilities emerge in large language models
only once they reach a sufficient number of parameters and a large
enough dataset. And finally, John H. Miller publishes Ex Machina through
the Santa Fe Institute Press, examining the topic of coevolving
machines and the origins of the social universe.
https://www.cna.org/our-media/podcasts/ai-with-ai