Oct 19, 2018
Andy and Dave discuss the “Transparency by Design network” (TbD-net), research from MIT Lincoln Laboratory that uses a collection of modular neural networks, each performing a specific image-identification subtask. The resulting output overlays heat-map blobs on objects in an image, which lets a human analyst see how each module is interpreting the image (and use that information to further improve the model’s accuracy).

In research from DeepMind and the University of Oxford, researchers tackle the difficulty neural networks have in manipulating numerical values that fall outside the range encountered during training. They created a Neural Accumulator (NAC) and a Neural Arithmetic Logic Unit (NALU), which in essence represent numerical quantities as individual neurons without a nonlinearity, allowing a system to learn to represent and manipulate numbers in a systematic way.

Georgia Tech has developed a machine-learning method for automating the generation of novel video games, using Super Mario Bros., Mega Man, and Kirby’s Adventure as inputs. And Kate Crawford and Vladan Joler have created a massive visualization of the many processes that make an Amazon Echo work, in “Anatomy of an AI System.”

DARPA celebrates its 60th anniversary with a 184-page publication highlighting its research over the last six decades; Google launches a “What-If Tool” for probing datasets without writing code; and Neural Networks and Learning Machines (3rd Edition) by Simon Haykin is available for free. Robin R. Murphy curates information on “Robotics Through Science Fiction” (and more); all of the keynotes and presentations from the Joint Multi-Conference on Human-Level Artificial Intelligence are available online, likely requiring a week of vacation to view them all; and the 11th International Conference on Swarm Intelligence will be held in Rome at the end of October 2018.
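The Neural Accumulator idea from the DeepMind/Oxford work can be sketched in a few lines: a purely linear layer whose effective weights are squashed toward {-1, 0, +1}, biasing it to learn exact addition and subtraction that extrapolates beyond the training range. The parameters below are hand-set for illustration (a trained NAC would learn similar values); this is a minimal sketch, not the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nac(x, W_hat, M_hat):
    """Neural Accumulator: effective weights tanh(W_hat) * sigmoid(M_hat)
    saturate near -1, 0, or +1, so the layer computes signed sums of its
    inputs with no nonlinearity on the output."""
    W = np.tanh(W_hat) * sigmoid(M_hat)
    return x @ W.T

# Hand-set (not learned) parameters, chosen so row 0 computes a + b
# and row 1 computes a - b.
W_hat = np.array([[10.0, 10.0],
                  [10.0, -10.0]])
M_hat = np.array([[10.0, 10.0],
                  [10.0, 10.0]])

x = np.array([1000.0, 250.0])   # values far outside a typical [0, 1] training range
print(nac(x, W_hat, M_hat))     # approximately [1250., 750.]
```

Because the output is just a constrained linear map, the same weights keep working on inputs far larger than anything seen in training, which is the extrapolation property the paper targets; the full NALU adds a gated multiplicative path (exp of a NAC over log-inputs) for multiplication and division.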