Aug 2, 2019
Continuing in research, Andy and Dave discuss work from Imperial College and the Samsung AI Centre that can take a single image of any face and, using a GAN, create realistic speech-driven facial animations. From the Conference on Computer Vision and Pattern Recognition, researchers present an algorithm that can learn an individual's style of conversational gesture and then produce plausible gestures to accompany other audio input. And research in Nature examines 3.3 million materials-science abstracts with unsupervised word embeddings to capture "latent knowledge." The survey paper of the week looks at the reproducibility of machine learning in health-related fields, and finds that health consistently lags behind other subfields of machine learning. Safety First for Automated Driving, with input from 11 authors, identifies guiding principles for making autonomous cars safe; among its findings, the report notes that verification and validation of these systems are still lacking in the existing literature. The Berkman Klein Center at Harvard compiles an infographic on all of the published AI "principles" from governments, industry, and other organizations. The "classic paper" of the week is Alan Turing's 1948 paper "Intelligent Machinery." The 36th International Conference on Machine Learning releases over 150 videos from its June session. CognitionX 2019 releases a video on managing security in an insecure world. Manlio De Domenico and Hiroki Sayama (and many others!) provide an interactive site for explaining and exploring complexity. Wendy Anderson and August Cole explore what war in the late 2020s might look like for the Secretary of Defense in The Secretary of Hyperwar. And for click-bait of the week, astrophysicists are "baffled" by their AI-driven simulation of the universe.