Jan 10, 2020
In research, Andy and Dave discuss a new idea from Schmidhuber, which introduces Upside-Down Reinforcement Learning, where no value functions or policy search are necessary, essentially transforming reinforcement learning into a form of supervised learning. Research from OpenAI demonstrates a "double descent" phenomenon in deep learning tasks, where test performance first improves with model size, then temporarily gets worse, and then gets better again as the model grows larger still. Tortoise Media provides yet-another-AI-index, but with a nifty GUI for exploration. August Cole explores a future conflict in Arctic Night. And Richard Feynman provides thoughts (from 1985) on whether machines will be able to think.
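The core trick of Upside-Down RL can be sketched in a few lines: rather than learning a value function, train a "behavior function" that maps (observation, desired return, desired horizon) to an action, using ordinary supervised learning on logged episodes. The toy environment, linear model, and `act` command interface below are illustrative assumptions, not Schmidhuber's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logged episodes: each step stores (obs, action, reward).
# obs is a scalar; a random behavior policy picks actions 0/1,
# and reward is 1 when the action matches the sign of obs.
episodes = []
for _ in range(200):
    obs, steps = rng.normal(), []
    for _ in range(10):
        action = int(rng.integers(0, 2))
        reward = 1.0 if action == int(obs > 0) else 0.0
        steps.append((obs, action, reward))
        obs = rng.normal()
    episodes.append(steps)

# Build a supervised dataset: input = (obs, return-to-go, horizon),
# target = the action that was actually taken at that step.
X, y = [], []
for steps in episodes:
    rewards = [r for _, _, r in steps]
    for t, (obs, action, _) in enumerate(steps):
        X.append([obs, sum(rewards[t:]), len(steps) - t])
        y.append(action)
X, y = np.array(X), np.array(y)

# Fit any supervised model; a least-squares linear model stands in
# here for the neural "behavior function" in the paper.
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

def act(obs, desired_return, horizon):
    """Command the agent: 'achieve this return within this horizon'."""
    features = np.array([obs, desired_return, horizon, 1.0])
    return int(features @ w > 0.5)
```

At execution time the agent is simply commanded with a high desired return, turning "maximize reward" into "imitate the steps that historically achieved the return you asked for."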
Twitter Throwdown: On 23 December, Yoshua Bengio and Gary Marcus will have a debate on the Best Way Forward for AI.