
Day 5: 2020.04.16
Paper: Reducing the Dimensionality of Data with Neural Networks
Category: Model/Belief Net/Deep Learning

This is also a classic in Deep Learning. Like the one I read yesterday, it provides insight into neural network pre-training.

Training a deep network is hard:

  • Large initial weights -> typically converge to poor local minima
  • Small initial weights -> gradients in early layers are too tiny -> infeasible to train a network with many hidden layers, i.e. a deep network
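The second bullet can be made concrete with a short NumPy sketch (all names and scales here are my own, not from the paper): backpropagate an error signal through a 10-layer sigmoid network and compare the gradient norm that reaches the first layer when the weights start tiny versus at a moderate scale.

```python
import numpy as np

def gradient_norm_at_first_layer(n_layers, weight_scale, width=64, seed=0):
    """Forward a random input through a deep sigmoid net, then backprop a
    unit error from the top and return the gradient norm reaching layer 1."""
    rng = np.random.default_rng(seed)
    Ws = [rng.normal(0, weight_scale, (width, width)) for _ in range(n_layers)]
    h, acts = rng.normal(size=width), []
    for W in Ws:
        h = 1.0 / (1.0 + np.exp(-(W @ h)))   # sigmoid activation
        acts.append(h)
    g = np.ones(width)                        # error signal at the top
    for W, a in zip(reversed(Ws), reversed(acts)):
        g = W.T @ (g * a * (1.0 - a))         # sigmoid derivative is a(1-a)
    return np.linalg.norm(g)

small = gradient_norm_at_first_layer(n_layers=10, weight_scale=0.01)
moderate = gradient_norm_at_first_layer(n_layers=10, weight_scale=1.0 / 8.0)
print(small, moderate)
```

With tiny initial weights the gradient shrinks by a near-constant factor per layer, so after ten layers almost nothing is left for the early layers to learn from.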

In this paper, the focus is on deep autoencoder networks for dimensionality reduction.

By using the “pre-training” approach described in the paper, training a deep autoencoder network becomes feasible, and it achieves much better performance than PCA.
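To illustrate the greedy layer-wise idea, here is a minimal sketch: each layer is pre-trained to reconstruct the previous layer's output before the layers are stacked. Note the paper pre-trains with restricted Boltzmann machines; this sketch substitutes plain tied-weight autoencoder layers to keep it short, and all dimensions and data are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_layer(X, n_hidden, epochs=50, lr=0.1):
    """Fit a one-hidden-layer tied-weight autoencoder to reconstruct X;
    return the encoder parameters and the hidden codes for the next layer."""
    n_in = X.shape[1]
    W = rng.normal(0, 0.1, (n_in, n_hidden))
    b, c = np.zeros(n_hidden), np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W + b)                 # encode
        R = H @ W.T + c                        # decode with tied weights
        err = R - X                            # reconstruction error
        dH = (err @ W) * H * (1 - H)           # backprop through encoder
        W -= lr * (X.T @ dH + err.T @ H) / len(X)
        b -= lr * dH.mean(axis=0)
        c -= lr * err.mean(axis=0)
    return W, b, sigmoid(X @ W + b)

# Toy data: 200 points in 10-D that actually live near a 2-D surface.
Z = rng.normal(size=(200, 2))
X = np.tanh(Z @ rng.normal(size=(2, 10)))

# Greedily stack 10 -> 6 -> 2: each layer trains on the previous codes.
codes, stack = X, []
for n_hidden in (6, 2):
    W, b, codes = pretrain_layer(codes, n_hidden)
    stack.append((W, b))

print(codes.shape)  # (200, 2): the low-dimensional codes
```

In the paper, this layer-by-layer phase only finds a good starting point; the stacked encoder and its mirrored decoder are then "unrolled" and fine-tuned jointly with backpropagation.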

Written by Chun-kit Ho

cloud architect@ey | full-stack software engineer | social innovation | certified professional solutions architect in aws & gcp
