Principles of neural network design
Francois Belletti, CS294 RISE


  1. Principles of neural network design Francois Belletti, CS294 RISE

  2. Human brains as metaphors of statistical models
     Biological analogies → Machine learning instantiations
     The visual cortex of mammals → Deep convolutional neural networks
     Multiple sensing channels → Multimodal neural networks
     Memory and attention → LSTMs and GRUs

  3. Neural Networks For Computer Vision

  4. Neural Networks in Computer Vision Neural networks for classification of handwritten digits
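A handwritten-digit classifier like the one on this slide boils down to a few matrix multiplications followed by a softmax. The sketch below is a minimal forward pass with randomly initialized (untrained) weights; the 784→128→10 layer sizes are illustrative, not taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative weights; a real model would learn these from MNIST.
W1 = rng.normal(0, 0.01, (128, 784))  # hidden layer: 784 pixels -> 128 units
b1 = np.zeros(128)
W2 = rng.normal(0, 0.01, (10, 128))   # output layer: 128 units -> 10 digits
b2 = np.zeros(10)

def predict(image):
    """Forward pass: flattened 28x28 image -> probabilities over 10 digits."""
    h = relu(W1 @ image + b1)
    return softmax(W2 @ h + b2)

probs = predict(rng.random(784))
digit = int(np.argmax(probs))   # the network's guess
```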

  5. Learning Mechanism: Correction of Mistakes Nature used a single tool to get to today’s success: mistakes
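The simplest concrete instance of mistake-driven learning is the classic perceptron rule (my example, not the slides'): the weights change only when the model misclassifies an example, and the change pushes the decision boundary toward that example. The toy data below is assumed to be linearly separable.

```python
import numpy as np

# Toy 2-class data (assumed linearly separable, for illustration only).
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])

w = np.zeros(2)
b = 0.0
for _ in range(10):                      # a few passes over the data
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:       # mistake: example on the wrong side
            w += yi * xi                 # correct the weights toward it
            b += yi

preds = np.sign(X @ w + b)               # matches y once training converges
```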

  6. Modularity Is Back-Prop’s Perk for Software Eng. Back-propagation is a recursive algorithm
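The modularity point can be made concrete: each layer only implements its own `forward` and `backward` rules, and back-propagation chains them recursively in reverse order. The two-method layer interface below is an assumption for illustration, not any specific framework's API.

```python
# Each layer knows only its local derivative; back-prop composes them.

class Linear:
    """y = w * x (scalar, to keep the chain rule easy to follow)."""
    def __init__(self, w):
        self.w = w
    def forward(self, x):
        self.x = x
        return self.w * x
    def backward(self, grad_out):
        self.grad_w = grad_out * self.x   # gradient w.r.t. the parameter
        return grad_out * self.w          # gradient passed to previous layer

class Square:
    """y = x ** 2."""
    def forward(self, x):
        self.x = x
        return x ** 2
    def backward(self, grad_out):
        return grad_out * 2 * self.x

layers = [Linear(3.0), Square()]

# Forward through the stack: y = (3 * 2)^2 = 36.
out = 2.0
for layer in layers:
    out = layer.forward(out)

# Backward in reverse order, starting from dL/dy = 1.
grad = 1.0
for layer in reversed(layers):
    grad = layer.backward(grad)

# Check against calculus: d/dx (3x)^2 = 18x = 36 at x = 2.
```

Swapping a layer for another only requires that it expose the same two methods, which is exactly the software-engineering perk the slide alludes to.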

  7. Image Classification

  8. Successful Architecture In Computer Vision An example of a wide network: AlexNet
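The widths in an architecture like AlexNet follow from a simple piece of arithmetic: a convolution over an n×n input with k×k filters, stride s, and padding p produces feature maps of side (n − k + 2p)/s + 1. Using AlexNet's commonly cited first layer (227×227 effective input, 11×11 filters, stride 4, no padding):

```python
def conv_out(n, k, s, p=0):
    """Spatial side of a conv/pool output: (n - k + 2p) // s + 1."""
    return (n - k + 2 * p) // s + 1

side = conv_out(227, 11, 4)        # AlexNet conv1 -> 55x55 feature maps
pooled = conv_out(side, 3, 2)      # 3x3 max pool, stride 2 -> 27x27
```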

  9. Understanding What Happens Within A Deep NN Examining convolution filter banks Examining activations

  10. Determining A Neuron’s Speciality Images that triggered the highest activations of a neuron:

  11. Another Successful Architecture For CV “We need to go deeper”, Inception:

  12. State of the Art

  13. Recurrent Architectures

  14. Learning To Leverage Context Memory in Recurrent Architectures: LSTM (Long Short-Term Memory network)
      Input x, output y, context c (memory)
      At each time step t, the input x is concatenated with the previous output y; three gates (memorization gate, output gate, forget gate) update the context c, which is carried forward to the next step
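The gate structure described on this slide can be written out directly with the standard LSTM equations. The sketch below is one time step in numpy; the stacked-weight layout and the sizes are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One LSTM step: gates decide what to forget, memorize, and output.

    x: input, h: previous output y, c: previous context (memory).
    W, b hold the four gates' parameters stacked row-wise (assumed layout).
    """
    n = h.size
    z = W @ np.concatenate([x, h]) + b    # concatenation of input and output
    f = sigmoid(z[0:n])                   # forget gate: keep or erase memory
    i = sigmoid(z[n:2 * n])               # memorization (input) gate
    o = sigmoid(z[2 * n:3 * n])           # output gate
    g = np.tanh(z[3 * n:4 * n])           # candidate memory content
    c_new = f * c + i * g                 # updated context
    h_new = o * np.tanh(c_new)            # new output y
    return h_new, c_new

rng = np.random.default_rng(0)
nx, nh = 3, 4                             # illustrative sizes
W = rng.normal(0, 0.1, (4 * nh, nx + nh))
b = np.zeros(4 * nh)

h, c = np.zeros(nh), np.zeros(nh)
for t in range(5):                        # unroll over a short sequence
    h, c = lstm_step(rng.random(nx), h, c, W, b)
```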

  15. Other recurrent architectures Gated recurrent units:
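For comparison with the LSTM, a gated recurrent unit uses two gates instead of three and merges the context into the hidden state itself. One step, with illustrative parameter shapes:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h, Wz, Wr, Wh):
    """One GRU step: two gates, no separate context vector.

    Parameter shapes and the lack of bias terms are simplifications
    for illustration.
    """
    xh = np.concatenate([x, h])
    z = sigmoid(Wz @ xh)                  # update gate: how much to rewrite
    r = sigmoid(Wr @ xh)                  # reset gate: how much history to use
    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h]))  # candidate state
    return (1 - z) * h + z * h_tilde      # interpolate old and new state

rng = np.random.default_rng(0)
nx, nh = 3, 4
Wz = rng.normal(0, 0.1, (nh, nx + nh))
Wr = rng.normal(0, 0.1, (nh, nx + nh))
Wh = rng.normal(0, 0.1, (nh, nx + nh))

h = np.zeros(nh)
for t in range(5):
    h = gru_step(rng.random(nx), h, Wz, Wr, Wh)
```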

  16. Why Is Context Important? In language, most grammars are not context-free End-to-end translation, Alex Graves

  17. Context Is Also Important In Control Remembering what just happened is important for decision making

  18. Memory is necessary for localization Recent experiments in asynchronous deep RL: LSTMs for maze running Memory comes at a cost: substantial RAM or VRAM is required

  19. Conclusion: the distributed brain

  20. Interaction is crucial in enabling AI

  21. Playing versus computers before beating humans

  22. Bootstrapping by interaction Why would two androids casually chat with one another?

  23. The distributed brain at the edge Distributed RL is reminiscent of the philosophical omega point of knowledge

  24. Multiple Input Neural Networks

  25. Multi Inputs For Inference YouTube video auto-encoding

  26. Multiple Input Control: Multiplexing Inputs
      Inputs: front camera, rear camera, radar, odometry
      Per-input branch: Conv → Relu → Max pool, Conv → Relu → Max pool, fully connected layer
      Branch outputs concatenated → fully connected layer → Softmax
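The pattern on this slide, separate branches per sensor whose features are concatenated before the final layers, can be sketched compactly. The branch here is a single fully connected layer with Relu (the slide's Conv/Max-pool stages are abbreviated), and the sensor sizes and the 5-way action output are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def branch(x, W):
    """Per-sensor branch, abbreviated to one fully connected layer + Relu."""
    return relu(W @ x)

# One branch per input channel; feature sizes are illustrative.
sensors = {
    "front_camera": rng.random(16),
    "rear_camera": rng.random(16),
    "radar": rng.random(8),
    "odometry": rng.random(4),
}
weights = {name: rng.normal(0, 0.1, (8, x.size)) for name, x in sensors.items()}

# Process each input separately, concatenate, then classify into actions.
features = np.concatenate([branch(x, weights[n]) for n, x in sensors.items()])
W_out = rng.normal(0, 0.1, (5, features.size))   # 5 control actions (assumed)
action_probs = softmax(W_out @ features)
```

Because each branch maps its sensor into a common feature size before concatenation, sensors with very different dimensionalities (a camera frame vs. a handful of odometry readings) can feed the same head.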

  27. Multiplexing In The Human Brain
