
Day 9 — 10: 2020.04.20–21
Paper: Rethinking the Inception Architecture for Computer Vision
Category: Model/CNN/Deep Learning/Image Recognition

This paper reads like a follow-up to yesterday's paper, "Going deeper with convolutions", explaining how to scale up CNNs in efficient ways.

It first lays out the general design principles of the network:

  • Avoid representational bottlenecks, especially early in the network.
    One should avoid bottlenecks with extreme compression. In general, the representation size should gently decrease from the inputs to the outputs before reaching the final representation used for the task at hand.
  • Higher dimensional representations are easier to process locally within a network.
    Increasing the activations per tile in a convolutional network allows for more disentangled features, which makes the network train faster.
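The first principle can be checked with simple arithmetic: trace the activation-map sizes through a stack of layers and confirm there is no sudden, extreme compression. Below is a minimal sketch; the layer stack is a hypothetical example loosely modeled on an Inception-style stem (the exact kernels and channel counts here are illustrative, not taken from the paper's tables).

```python
def conv_out(size, kernel, stride=1, pad=0):
    # standard convolution/pooling output-size formula
    return (size + 2 * pad - kernel) // stride + 1

# hypothetical stack: (kernel, stride, pad, out_channels)
layers = [
    (3, 2, 0, 32),   # conv 3x3, stride 2
    (3, 1, 0, 32),   # conv 3x3
    (3, 1, 1, 64),   # conv 3x3, padded
    (3, 2, 0, 64),   # max-pool 3x3, stride 2
]

size, channels = 299, 3  # e.g. a 299x299 RGB input
for kernel, stride, pad, out_c in layers:
    size, channels = conv_out(size, kernel, stride, pad), out_c
    print(f"{size}x{size}x{channels} = {size * size * channels} activations")
```

Each step shrinks the spatial grid only moderately (299 → 149 → 147 → 147 → 73) while the channel count grows, so the total representation size tapers off gradually rather than collapsing through a narrow bottleneck.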


Written by Chun-kit Ho

cloud architect@ey | full-stack software engineer | social innovation | certified professional solutions architect in aws & gcp
