LeNet
LeNet, also called LeNet-5, is a simple convolutional neural network; LeNet-5 is the most popular network of the LeNet family.
You can find the variants of the LeNet network on this page.
Network features:
- 61,706 parameters
- 5 layers (3 convolutional + 2 fully connected)
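To make the layer stack concrete, here is a minimal PyTorch sketch of it. This is an illustrative re-implementation, not the exact formulation of the paper: the class name `LeNet5`, the tanh activations, and the average-pooling layers are assumptions commonly used in modern reproductions.

```python
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    """Sketch of the LeNet-5 stack: 3 convolutional + 2 fully connected layers."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),     # Conv1: 1x32x32 -> 6x28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                    # Pool1: 6x28x28 -> 6x14x14
            nn.Conv2d(6, 16, kernel_size=5),    # Conv2: 6x14x14 -> 16x10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                    # Pool2: 16x10x10 -> 16x5x5
            nn.Conv2d(16, 120, kernel_size=5),  # Conv3 (C5): 16x5x5 -> 120x1x1
            nn.Tanh(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(120, 84),                 # FC1 (F6)
            nn.Tanh(),
            nn.Linear(84, num_classes),         # FC2: output layer
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNet5()
# Total number of trainable parameters: 61,706
print(sum(p.numel() for p in model.parameters() if p.requires_grad))
```

Counting the trainable parameters of this sketch gives 61,706, which matches the table below.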
Architecture
Parameters
Layer | Activation Shape | Filter Shape | Weights | Biases | Parameters |
---|---|---|---|---|---|
Input | 1x32x32 | - | 0 | 0 | 0 |
Conv1 | 6x28x28 | 5x5 | 150 | 6 | 156 |
Pool1 | 6x14x14 | - | 0 | 0 | 0 |
Conv2 | 16x10x10 | 5x5 | 2,400 | 16 | 2,416 |
Pool2 | 16x5x5 | - | 0 | 0 | 0 |
Conv3 | 120x1x1 | 5x5 | 48,000 | 120 | 48,120 |
FC1 | 84x1x1 | - | 10,080 | 84 | 10,164 |
FC2 | 10x1x1 | - | 840 | 10 | 850 |
Total | | | | | 61,706 |
See [2] for an explanation of the parameter calculations.
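As a sketch of those calculations, the snippet below recomputes the table (the helper names are illustrative; like the table above, it assumes each convolutional layer is fully connected to all feature maps of the previous layer, unlike the partially connected C3 layer of the original paper):

```python
def conv_params(kernel, in_channels, out_channels):
    # Each output channel has one kernel per input channel, plus one bias.
    return kernel * kernel * in_channels * out_channels + out_channels

def fc_params(in_features, out_features):
    # One weight per input-output pair, plus one bias per output unit.
    return in_features * out_features + out_features

layers = {
    "Conv1": conv_params(5, 1, 6),     # 156
    "Conv2": conv_params(5, 6, 16),    # 2,416
    "Conv3": conv_params(5, 16, 120),  # 48,120
    "FC1":   fc_params(120, 84),       # 10,164
    "FC2":   fc_params(84, 10),        # 850
}

print(layers)
print("Total:", sum(layers.values()))  # Total: 61,706
```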
Last Convolutional Layer
In some representations of the LeNet architecture, there appear to be 2 convolutional layers and 3 fully connected layers. But in the paper [1], the authors explain that layer C5 (the last convolutional layer in the architecture shown on this page) is convolutional rather than fully connected: if the input image were bigger, with everything else kept constant, the feature map dimension of C5 would be larger than 1x1.
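This behaviour is easy to check with a small sketch (the use of PyTorch is an assumption; the layer shapes follow the table above): with a 32x32 input the last convolutional layer outputs a 120x1x1 feature map, while a 36x36 input, with everything else kept constant, yields 120x2x2.

```python
import torch
import torch.nn as nn

conv_stack = nn.Sequential(
    nn.Conv2d(1, 6, 5), nn.AvgPool2d(2),   # Conv1 + Pool1
    nn.Conv2d(6, 16, 5), nn.AvgPool2d(2),  # Conv2 + Pool2
    nn.Conv2d(16, 120, 5),                 # Conv3 (C5)
)

print(conv_stack(torch.zeros(1, 1, 32, 32)).shape)  # torch.Size([1, 120, 1, 1])
print(conv_stack(torch.zeros(1, 1, 36, 36)).shape)  # torch.Size([1, 120, 2, 2])
```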
Fully Connected Layer with 84 units
In their paper [1], the authors explained the choice of 84 units for the first fully connected layer (named F6 in the paper). The purpose of this layer is to “represent a stylized image of the corresponding character class drawn on a 7x12 bitmap (hence the number 84). Such a representation is not particularly useful for recognizing isolated digits, but it is quite useful for recognizing strings of characters taken from the full printable ASCII set” [1]. Combined with a linguistic post-processor, confusions between similar ASCII character representations (e.g. 1 and I) can be corrected.
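In the paper, the output layer then consists of Euclidean RBF units: each class unit computes the squared distance between the 84-dimensional F6 vector and a parameter vector set from that class's stylized 7x12 bitmap. The sketch below only illustrates this distance computation; the prototype values are random placeholders, not the actual stylized bitmaps.

```python
import torch

num_classes, bitmap_h, bitmap_w = 10, 7, 12   # 7 x 12 = 84, hence 84 units in F6

# Placeholder prototypes; in the paper these are hand-designed stylized
# character bitmaps with values in {-1, +1}, one per output class.
prototypes = torch.sign(torch.randn(num_classes, bitmap_h * bitmap_w))

def rbf_output(f6):
    """Squared Euclidean distance between F6 activations (shape [batch, 84])
    and each class prototype; the smallest distance marks the predicted class."""
    return ((f6.unsqueeze(1) - prototypes) ** 2).sum(dim=-1)

f6 = torch.tanh(torch.randn(4, bitmap_h * bitmap_w))  # dummy F6 activations
print(rbf_output(f6).argmin(dim=1))                    # predicted class per sample
```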