Regularization - a podcast by Ben Jaffe and Katie Malone

Published 2016-10-03T02:13:50


Lots of data is usually seen as a good thing. And it is a good thing--except when it's not. In many fields, a problem arises when you have many, many features, especially if the number of examples to learn from is comparatively small: supervised machine learning algorithms break down, or learn spurious or uninterpretable patterns. What to do? Regularization can be one of your best friends here--it's a method that penalizes overly complex models, keeping the effective complexity (and dimensionality) of your model under control.
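To make the idea concrete, here is a minimal sketch of the many-features, few-examples situation described above, using L1 ("lasso") regularization, which adds a penalty proportional to the sum of the absolute values of the coefficients. The specific tools and numbers are assumptions for illustration, not from the episode: scikit-learn's LinearRegression and Lasso, 200 features versus 50 examples, and an arbitrary penalty strength alpha=0.1.

    import numpy as np
    from sklearn.linear_model import LinearRegression, Lasso

    rng = np.random.default_rng(0)
    n_samples, n_features = 50, 200   # far more features than examples

    # Synthetic data where only 5 of the 200 features actually matter
    X = rng.normal(size=(n_samples, n_features))
    true_coef = np.zeros(n_features)
    true_coef[:5] = [3.0, -2.0, 1.5, 4.0, -1.0]
    y = X @ true_coef + 0.1 * rng.normal(size=n_samples)

    # Unregularized least squares happily fits all 200 coefficients,
    # picking up spurious patterns in the noise
    ols = LinearRegression().fit(X, y)

    # L1-regularized regression penalizes large coefficients and drives
    # most of them to exactly zero, keeping the model simple
    lasso = Lasso(alpha=0.1).fit(X, y)

    print("nonzero coefficients, OLS:  ", int(np.sum(np.abs(ols.coef_) > 1e-6)))
    print("nonzero coefficients, lasso:", int(np.sum(np.abs(lasso.coef_) > 1e-6)))

Because the L1 penalty zeroes out most coefficients entirely, it doubles as a form of feature selection, which is one reason it's a popular way to keep high-dimensional models under control.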
