
Activation map. With the help of a feature map, we can do the following:

             •  Reduce the image size to make processing faster and more efficient.
             •  Focus on the particular features that can help us in processing the image.
                For example, biometric devices or smartphones gather important information by recognising facial
                features such as the eyes, nose and mouth instead of scanning the whole face.

          u   Rectified Linear Unit Function (ReLU): The Rectified Linear Unit Function is the next layer of a CNN after the
              Convolution layer. As you know, the feature map is extracted by the Convolution layer and is then passed on
              to the ReLU layer. The basic function of this layer is to replace all the negative values in the feature map
              with zero. In other words, this layer introduces non-linearity into the feature map.
              ReLU is a non-linear activation function that is commonly used in deep neural networks. This function
              can be represented as:
                        f(x) = max(0, x), where x = input value
              Here, you can see that the output of ReLU is the maximum of two values, i.e., zero and the input value. The
              output is zero when the input value is negative; otherwise, it is equal to the input value. Let us understand
              the concept of negative and positive values with the help of an example. Suppose you have a 3×3 matrix of an
              input image:

                                                 –2             5            –2
                                                  0             3            –3
                                                  1             4             0

              Here, you can see that negative values exist in the matrix. These values are replaced with zero by the ReLU
              layer. After applying ReLU, the output matrix will be:

                                                  0             5             0
                                                  0             3             0
                                                  1             4             0
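              The effect of the ReLU layer can be seen in a few lines of code. The following is a minimal sketch, written
              in Python with NumPy, that applies f(x) = max(0, x) element-wise to the 3×3 matrix shown above:

              import numpy as np

              feature_map = np.array([[-2, 5, -2],
                                      [ 0, 3, -3],
                                      [ 1, 4,  0]])

              relu_output = np.maximum(0, feature_map)   # negative values become zero
              print(relu_output)
              # [[0 5 0]
              #  [0 3 0]
              #  [1 4 0]]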

          u   Pooling Layer: The working of the Pooling layer is similar to that of the Convolution layer. Basically, the
              Pooling layer is responsible for reducing the spatial size of the Convolved Feature while still retaining the
              important features.
              The Pooling layer plays an important role in CNNs because it performs various kinds of tasks, such as:
              a.  Makes the image smaller and more manageable.
              b.  Makes the image more resistant to small transformations, distortions and translations.
              The two types of pooling which can be performed on an image are as follows (a short code sketch of both
              appears after the list):

          u   Max Pooling: Max Pooling computes the maximum value of the elements in the portion of the image covered
              by the kernel. Thus, the output of the max pooling layer is a feature map that contains the most prominent
              features of the previous feature map.

          u   Average Pooling: As its name implies, average pooling computes the average of the elements present in the
              portion of the image covered by the kernel. Thus, average pooling produces the average of the features
              present in a patch.
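              The difference between the two can be seen in a small sketch, written in Python with NumPy. The 4×4
              feature map and the 2×2 window with stride 2 used here are illustrative values, not taken from the text:

              import numpy as np

              feature_map = np.array([[1, 3, 2, 4],
                                      [5, 6, 1, 2],
                                      [7, 2, 8, 0],
                                      [3, 1, 4, 9]])

              # Split the 4x4 map into non-overlapping 2x2 patches.
              patches = feature_map.reshape(2, 2, 2, 2).swapaxes(1, 2)

              max_pooled = patches.max(axis=(2, 3))    # most prominent feature in each patch
              avg_pooled = patches.mean(axis=(2, 3))   # average of the features in each patch

              print(max_pooled)   # [[6 4]
                                  #  [7 9]]
              print(avg_pooled)   # [[3.75 2.25]
                                  #  [3.25 5.25]]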
