Compressing deep learning models: distillation (Ep.104) - a podcast by Francesco Gadaleta

from 2020-05-20T08:04:10


Running large deep learning models on limited hardware or edge devices is often prohibitive. There are methods that compress large models by orders of magnitude while maintaining similar accuracy at inference time.


In this episode I explain one of the first such methods: knowledge distillation.
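As a rough illustration of the idea (not the exact recipe discussed in the episode), here is a minimal PyTorch sketch of knowledge distillation in the spirit of Hinton et al. (2015): a small student network is trained to match the teacher's softened output distribution as well as the ground-truth labels. The teacher/student architectures, temperature T, and mixing weight alpha below are assumed, illustrative values.

```python
# Minimal knowledge distillation sketch (assumptions: architectures, T, alpha are illustrative)
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(784, 1200), nn.ReLU(), nn.Linear(1200, 10))  # large model
student = nn.Sequential(nn.Linear(784, 50), nn.ReLU(), nn.Linear(50, 10))      # compressed model

T, alpha = 4.0, 0.7  # softmax temperature and soft/hard loss mix (assumed values)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distillation_step(x, y):
    """One training step: match the teacher's softened outputs and the true labels."""
    with torch.no_grad():
        teacher_logits = teacher(x)  # teacher is frozen during distillation
    student_logits = student(x)

    # Soft-target loss: KL divergence between softened distributions, scaled by T^2
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

    # Hard-target loss: standard cross-entropy on the ground-truth labels
    hard_loss = F.cross_entropy(student_logits, y)

    loss = alpha * soft_loss + (1 - alpha) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call with random data standing in for a real dataset
x_batch = torch.randn(32, 784)
y_batch = torch.randint(0, 10, (32,))
print(distillation_step(x_batch, y_batch))
```

After training, only the much smaller student is deployed, which is what makes inference feasible on constrained hardware.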


 Come join us on Slack


