ML Paper Challenge Day 9, 10 — Rethinking the Inception Architecture for Computer Vision
3 min read · Apr 21, 2020
Day 9 — 10: 2020.04.20–21
Paper: Rethinking the Inception Architecture for Computer Vision
Category: Model/CNN/Deep Learning/Image Recognition
This paper reads like an elaboration of yesterday's paper, “Going deeper with convolutions,” explaining how to scale CNNs in efficient ways.
It first lays out the general design principles of the network.
- Avoid representational bottlenecks, especially early in the network.
“One should avoid bottlenecks with extreme compression. In general the representation size should gently decrease from the inputs to the outputs before reaching the final representation used for the task at hand.”
- Higher dimensional representations are easier to process locally within a network.
“Increasing the activations per tile in a convolutional network allows for more disentangled features,” which in turn makes the network train faster.
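To make the first principle concrete, here is a minimal sketch (not from the paper) that compares the representation size, counted as H × W × C per feature map, across two hypothetical layer schedules: one where the size gently decreases, and one with an early extreme bottleneck of the kind the paper warns against. The specific shapes are illustrative assumptions, not the paper's architecture.

```python
def representation_sizes(layers):
    """Return the representation size H * W * C for each (H, W, C) feature map."""
    return [h * w * c for (h, w, c) in layers]

# Gentle decrease: spatial resolution halves while channel count grows,
# so the total representation size falls off gradually (hypothetical shapes).
gentle = [(112, 112, 64), (56, 56, 128), (28, 28, 256),
          (14, 14, 512), (7, 7, 1024)]

# Extreme bottleneck: an aggressive early reduction compresses the
# representation too soon, then the network has to re-expand it.
bottleneck = [(112, 112, 64), (14, 14, 64), (7, 7, 1024)]

for name, schedule in [("gentle", gentle), ("bottleneck", bottleneck)]:
    print(name, representation_sizes(schedule))
```

In the gentle schedule each layer roughly halves the representation size, while in the bottleneck schedule the size collapses by a factor of about 64 in one step before growing again, which is the kind of extreme compression the design principle advises against.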