Episode 21: Additional optimisation strategies for deep learning - a podcast by Francesco Gadaleta

from 2017-09-18T14:23:16


In the last episode, How to master optimisation in deep learning, I explained some of the most challenging tasks in deep learning, along with methodologies and algorithms that improve the speed of convergence of a minimisation method.
I explored the family of gradient descent methods, though not exhaustively, giving a list of approaches that deep learning researchers consider in different scenarios. Every method has its own benefits and drawbacks, depending largely on the type of data and its sparsity. Still, there is one method that, at least empirically, seems to be the best approach so far.
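To make the family resemblance concrete, here is a minimal sketch of the update rule that all of these methods build on: parameters move against the gradient of the loss, scaled by a learning rate. The toy quadratic loss and the hyperparameter values are illustrative assumptions, not something taken from the episode.

```python
# Minimal sketch of the vanilla gradient-descent update shared by the whole
# family of methods discussed in the episode. The loss, learning rate and
# number of steps below are assumed values for illustration only.
import numpy as np

def loss(w):
    return float(np.sum(w ** 2))   # toy convex loss

def grad(w):
    return 2 * w                   # gradient of the toy loss

w = np.array([3.0, -2.0])          # initial parameters (assumed)
lr = 0.1                           # learning rate (assumed)

for step in range(50):
    w = w - lr * grad(w)           # move against the gradient

print(loss(w))                     # loss shrinks toward zero
```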


Feel free to listen to the previous episode, share it, re-broadcast it, or just download it for your commute.


In this episode I would like to continue that conversation about additional strategies for optimising gradient descent in deep learning, and introduce you to a few tricks that might come in handy when your neural network stops learning from data, or when the learning process slows down so much that it seems to have reached a plateau even when you feed in fresh data.
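As a taste of the kind of trick meant here, one common remedy for a plateau is to shrink the learning rate once the validation loss stops improving. The sketch below is a hypothetical, simplified version of that idea; the patience, decay factor and loss history are assumed values, and it is not necessarily the recipe covered in the episode.

```python
# Hypothetical sketch: reduce the learning rate when the validation loss
# plateaus. Thresholds, factor and patience are assumed values.
def reduce_lr_on_plateau(val_losses, lr, factor=0.5, patience=3, min_delta=1e-4):
    """Return a (possibly smaller) learning rate based on recent validation losses."""
    if len(val_losses) <= patience:
        return lr
    best_before = min(val_losses[:-patience])
    recent_best = min(val_losses[-patience:])
    # If the last `patience` epochs brought no meaningful improvement, decay lr.
    if recent_best > best_before - min_delta:
        return lr * factor
    return lr

# Usage with a fabricated loss history that flattens out.
lr = 0.1
history = [1.0, 0.6, 0.45, 0.44, 0.44, 0.44, 0.44]
for epoch in range(1, len(history) + 1):
    lr = reduce_lr_on_plateau(history[:epoch], lr)
print(lr)  # 0.05: halved once the plateau is detected
```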
