Sep 3, 2021
Andy and Dave discuss the latest in AI news, including an
overview of Tesla’s “AI Day,” which, among other things, introduced
the Dojo supercomputer specialized for ML, the HydraNet single
deep-learning model architecture, and a “humanoid robot,” the Tesla
Bot. Researchers at Brown University introduce neurograins,
grain-of-salt-sized wireless neural sensors, using nearly 50 of
them to record neural activity in a rodent. The Associated
Press reports on the flaws in ShotSpotter’s AI gunfire detection
system, and one case in which such evidence sent a man to jail
for almost a year before a judge dismissed the case. The Department
of the Navy releases its Science and Technology Strategy for
Intelligent Autonomous Systems (publicly available), including an
Execution Plan (available only through government channels). The
National AI Research Resource Task Force extends its deadline for
public comment in order to elicit more responses. The Group of
Governmental Experts on Certain Conventional Weapons holds its
first 2021 session for the discussion of lethal autonomous weapons
systems; their agenda has moved on to promoting a common
understanding and definition of LAWS. And Stanford’s Center for
Research on Foundation Models publishes a manifesto: On the
Opportunities and Risks of Foundation Models, seeking to establish
high-level principles for massive models (such as GPT-3) upon which
many other AI capabilities build. In research, Georgia Institute of
Technology, Cornell University, and IBM Research AI examine how the
“who” in Explainable AI (e.g., people with or without a background
in AI) shapes the perception of AI explanations. And Alvy Ray Smith
pens the book of the week, with A Biography of the Pixel, examining
the pixel as the “organizing principle of all pictures, from cave
paintings to Toy Story.”
Follow the link below to visit our website and explore the links mentioned in the episode.