  1. Neural Networks with Google’s TensorFlow Shuo Zhang Computational discourse analysis 11/22/16

  2. Overview 1. Neural Networks basics 2. Neural Networks specifics 3. Neural Networks with Google’s TensorFlow 4. Coreference: Singleton classification example

  3. Resources • Deep learning course (Google) @ Udacity • Machine learning course (Stanford, Andrew Ng) @ Coursera • Neural network course (Geoffrey Hinton) @ Coursera

  4. 1. NN basics

  5. From linear to non-linear classifier

  6. Pros and cons of linear models • Pros: fast; numerically stable; the derivative is constant • Cons: limited to modeling additive features; multiplicative or higher-order features lead to a huge parameter space, not suitable for non-linear mapping • Conclusion: we want to keep the parameters within linear functions but still be able to do non-linear mapping efficiently

  7. From logistic regression to neural networks

  8. Inserting a non-linear layer: Rectified Linear Unit (ReLU)

  9. Intuition: how NN makes non-linear mapping possible

  10. Types of neural networks • Feed-forward • Feedback • Self-Organizing Map (SOM) • ...

  11. 2. NN specifics

  12. Multinomial logistic regression as the basic unit in NN

  13. Softmax – turns the outputs of linear functions into a probability vector (see the sketch below)
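A minimal NumPy sketch of softmax (illustrative code, not from the slides):

```python
import numpy as np

def softmax(logits):
    """Convert a vector of linear-function outputs (logits) into a
    probability vector that sums to 1."""
    # Subtract the max for numerical stability; the result is unchanged.
    exps = np.exp(logits - np.max(logits))
    return exps / np.sum(exps)

print(softmax(np.array([2.0, 1.0, 0.1])))  # ~[0.659 0.242 0.099]
```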

  14. One-hot encoding
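A NumPy sketch of one-hot encoding (illustrative; the helper name `one_hot` is my own):

```python
import numpy as np

def one_hot(labels, num_classes):
    """Encode integer class labels as one-hot row vectors."""
    encoded = np.zeros((len(labels), num_classes))
    encoded[np.arange(len(labels)), labels] = 1.0
    return encoded

print(one_hot([0, 2, 1], num_classes=3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```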

  15. Cross entropy – measuring similarity between prediction and gold label
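A matching sketch of the cross-entropy measure, where `predicted` is a softmax output and `gold` is a one-hot label (names are my own, not from the slides):

```python
import numpy as np

def cross_entropy(predicted, gold):
    """D(S, L) = -sum_i L_i * log(S_i): small when the softmax output S
    puts high probability on the gold class L."""
    # Clip to avoid log(0) for confidently wrong predictions.
    predicted = np.clip(predicted, 1e-12, 1.0)
    return -np.sum(gold * np.log(predicted))

gold = np.array([0.0, 1.0, 0.0])
print(cross_entropy(np.array([0.1, 0.8, 0.1]), gold))  # ~0.22 (good prediction)
print(cross_entropy(np.array([0.8, 0.1, 0.1]), gold))  # ~2.30 (bad prediction)
```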

  16. Putting it together again
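Putting the pieces together as code: a hedged sketch of the full linear function → softmax → cross-entropy pipeline, reusing the `softmax()`, `one_hot()`, and `cross_entropy()` helpers defined above; the weights and data are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
num_features, num_classes = 4, 3

W = rng.normal(scale=0.1, size=(num_classes, num_features))  # weights
b = np.zeros(num_classes)                                    # biases
x = rng.normal(size=num_features)                            # one input example
gold = one_hot([1], num_classes)[0]                          # gold class = 1

logits = W @ x + b         # linear function: WX + b
probs = softmax(logits)    # probability vector
loss = cross_entropy(probs, gold)
print(probs, loss)
```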

  17. MLR to NN

  18. ReLU – a non-linear activation function to put in the hidden layer. ReLU is one of many choices of a non-linear activation function. https://en.wikipedia.org/wiki/Activation_function
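For concreteness, a one-line NumPy version of ReLU:

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: max(0, x), applied element-wise."""
    return np.maximum(0.0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))  # [0.  0.  0.  1.5]
```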

  19. Training a neural network • Basically similar to training a linear model: optimize a cost function using a method like gradient descent • Example cost function for a logistic-based activation (see the formula below)
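The slide's example cost function is not reproduced in this transcript; for reference, the standard cross-entropy cost for a logistic (sigmoid) activation, as taught in the Coursera courses listed earlier, is:

```latex
J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \Big[ y^{(i)} \log h_\theta(x^{(i)})
          + \big(1 - y^{(i)}\big) \log\big(1 - h_\theta(x^{(i)})\big) \Big],
\qquad h_\theta(x) = \frac{1}{1 + e^{-\theta^\top x}}
```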

  20. Cost function – this is universal for linear classifiers and NNs • The cost function is a function of the parameters that captures the difference between the predicted and gold labels, so we want to minimize it. • How to minimize it? Use gradient descent: at each iteration, adjust the weights. • How to adjust the weights? Subtracting the gradient (the derivative) moves you toward a minimum.

  21. Gradient descent • Keep in mind that W is a matrix, so we need to compute the partial derivative of the cost with respect to each element of W; each of these partials is itself a sum over the training examples (see the sketch below).
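Continuing the pipeline sketch above, a minimal example of one gradient-descent update (the closed-form softmax/cross-entropy gradient used here is standard; variable names carry over from the earlier sketches):

```python
import numpy as np

# For softmax + cross-entropy, the gradient of the loss with respect to the
# logits is (probs - gold); from there, each element of W and b gets its
# own partial derivative.
learning_rate = 0.5

probs = softmax(W @ x + b)
grad_logits = probs - gold            # dJ/dlogits
grad_W = np.outer(grad_logits, x)     # dJ/dW: one partial per element of W
grad_b = grad_logits                  # dJ/db

# Subtracting the gradient moves the weights toward a minimum of the cost.
W -= learning_rate * grad_W
b -= learning_rate * grad_b
```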

  22. Gradient descent flavors • Batch GD: the classic approach; sums the derivatives over all training examples at each iteration to perform one update to the weights; very slow but more stable; almost never used today • Stochastic GD: takes only one example at each iteration and uses the gradient computed from that example to adjust the weights; fast, but less stable behavior • Mini-batch GD (in between): takes a mini-batch of examples (such as 100 to 2000), and sums their derivatives to perform an update; balances stability and speed (with good results); the most used today (see the sketch below)
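A hedged sketch of the mini-batch flavor, continuing the softmax classifier from the sketches above with synthetic data; the batch size and learning rate are assumed values, not the course's:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, num_features))          # synthetic training set
y = rng.integers(0, num_classes, size=1000)        # synthetic gold labels
batch_size, learning_rate = 100, 0.1

for step in range(500):
    idx = rng.integers(0, len(X), size=batch_size)  # sample a mini-batch
    Xb, Yb = X[idx], one_hot(y[idx], num_classes)
    Z = Xb @ W.T + b                                # logits, one row per example
    P = np.exp(Z - Z.max(axis=1, keepdims=True))    # row-wise softmax...
    P /= P.sum(axis=1, keepdims=True)               # ...normalized per row
    G = (P - Yb) / batch_size                       # averaged logit gradients
    W -= learning_rate * (G.T @ Xb)                 # summed example derivatives
    b -= learning_rate * G.sum(axis=0)
```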

  23. Neural network training: forward/backward propagation • Intuition from the linear classifier – repeat: compute an output; compute the error; adjust the weights (my implementation in Octave; see the Python sketch below)
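The slide shows the speaker's own Octave implementation, which this transcript does not include; below is a hedged Python analogue of the same repeat loop for a one-hidden-layer ReLU network, reusing `softmax()`, `x`, and `gold` from the sketches above (biases omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = 8
W1 = rng.normal(scale=0.1, size=(hidden, num_features))
W2 = rng.normal(scale=0.1, size=(num_classes, hidden))

for step in range(100):
    # Forward: compute an output.
    h = np.maximum(0.0, W1 @ x)          # ReLU hidden layer
    probs = softmax(W2 @ h)
    # Compute the error at the output (softmax + cross-entropy gradient).
    delta2 = probs - gold
    # Backward: propagate the error through the ReLU.
    delta1 = (W2.T @ delta2) * (h > 0)
    # Adjust the weights.
    W2 -= 0.1 * np.outer(delta2, h)
    W1 -= 0.1 * np.outer(delta1, x)
```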

  24. 3. Neural Networks with Google’s TensorFlow https://www.youtube.com/watch?v=oZikw5k_2FM

  25. Setup https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html

  26. Get started
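A minimal "get started" sketch written against the r0.11-era graph/session API that the setup link above targets; the data here is random placeholder data, not from the course:

```python
import numpy as np
import tensorflow as tf  # written against the r0.11-era graph/session API

num_features, num_classes = 784, 10

# Placeholders for a batch of inputs and their one-hot labels.
x = tf.placeholder(tf.float32, shape=[None, num_features])
y = tf.placeholder(tf.float32, shape=[None, num_classes])

# A softmax linear classifier: logits = xW + b.
W = tf.Variable(tf.zeros([num_features, num_classes]))
b = tf.Variable(tf.zeros([num_classes]))
logits = tf.matmul(x, W) + b

loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())  # r0.11 name for the initializer
    # Random stand-in data; real code would feed notMNIST batches here.
    batch_x = np.random.rand(100, num_features).astype(np.float32)
    batch_y = np.eye(num_classes)[np.random.randint(0, num_classes, 100)]
    for step in range(100):
        _, l = sess.run([train_step, loss], feed_dict={x: batch_x, y: batch_y})
    print("final loss:", l)
```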

  27. Hyperparameter tuning (loss curve) • Number of hidden nodes • Learning rate • Batch size • Number of steps • Overfitting (illustrative settings below)
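In a training script like the sketch above, these knobs typically appear as named constants; the values below are assumptions for illustration, not the course's settings:

```python
# Hyperparameters to tune while watching the loss curve (assumed values):
num_hidden_nodes = 1024  # number of hidden nodes
learning_rate = 0.5      # step size for gradient descent
batch_size = 128         # mini-batch size
num_steps = 3001         # number of training steps
# Overfitting shows up as training loss falling while validation loss rises.
```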

  28. Google Udacity course example: notMNIST

  29. Example code for the notMNIST dataset (Udacity) • https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/udacity (This set of IPython notebooks is only a partial implementation, since it is meant to be an assignment to be completed. To view a complete implementation, refer to the .ipynb and HTML files I uploaded to the corpling server.)
