
Keras layers

Keras provides a wide range of predefined layers and also lets you create your own custom layers. Layers are the major building blocks of a Keras model. Each layer receives an input, performs some computation on it, and produces a transformed output, which is then fed as input to the next layer.

The Keras core layers include the Dense layer, which computes a dot product of the inputs and weights plus a bias; the Activation layer, which applies an activation function to its input; the Dropout layer, which randomly sets a fraction of its input units to zero at each training update to reduce overfitting; the Lambda layer, which wraps an arbitrary expression as a Layer object; and several others.
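The sketch below illustrates these core layers using the tensorflow.keras API; the layer sizes, dropout rate, and lambda expression are illustrative assumptions rather than values from the text.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Dense(64, input_shape=(100,)),    # dot product of inputs and weights, plus a bias
    layers.Activation('relu'),               # applies an activation function element-wise
    layers.Dropout(0.5),                     # zeroes a fraction of inputs at each training update
    layers.Lambda(lambda x: x * 2.0),        # wraps an arbitrary expression as a Layer object
    layers.Dense(10, activation='softmax'),
])
model.summary()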

The Keras convolution layers use filters to create feature maps and are available from 1D to 3D. They include the most common variants, such as cropping and transposed convolution layers, for each dimension. The 2D convolution, inspired by the visual cortex, is widely used for image recognition.
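As a rough sketch of these variants (assuming a 28x28 single-channel input; the filter counts and kernel sizes are made up for illustration):

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, kernel_size=(3, 3), activation='relu',
                  input_shape=(28, 28, 1)),              # 2D convolution for image data
    layers.Conv2DTranspose(16, kernel_size=(3, 3)),      # transposed convolution
    layers.Cropping2D(cropping=((1, 1), (1, 1))),        # crops rows and columns from the feature maps
])
model.summary()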

The downsampling layers, better known as pooling layers, also run from 1D to 3D and include the most common variants, such as max and average pooling. The locally connected layers act like convolution layers, except that their weights are not shared. The noise layers help reduce overfitting. The recurrent layers, which include simple, gated (GRU), and LSTM variants, are used in applications such as language processing.
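A minimal sketch of pooling, noise, and recurrent layers with tensorflow.keras; the input shapes and unit counts are assumptions for illustration only.

from tensorflow.keras import layers, models

cnn = models.Sequential([
    layers.Conv2D(16, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D(pool_size=(2, 2)),   # keeps the maximum of each 2x2 window
    layers.GaussianNoise(0.1),               # adds noise during training to curb overfitting
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])

rnn = models.Sequential([
    layers.LSTM(32, input_shape=(10, 8)),    # recurrent layer, e.g. for language processing
    layers.Dense(1, activation='sigmoid'),
])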

The following are common methods that every Keras layer provides:

  • get_weights(): It returns the layer's weights as a list of NumPy arrays.
  • set_weights(weights): It sets the layer's weights from a list of NumPy arrays with the same shapes as the output of get_weights().
  • get_config(): It returns a dictionary containing the layer's configuration, so that the layer can be re-instantiated from its config, as shown below.
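For example (a minimal sketch using a Dense layer; the sizes are arbitrary):

from tensorflow.keras import layers

layer = layers.Dense(32)
layer.build(input_shape=(None, 16))          # create the kernel and bias weights

weights = layer.get_weights()                # list of NumPy arrays: [kernel, bias]
layer.set_weights(weights)                   # arrays must match the shapes from get_weights()

config = layer.get_config()                  # dictionary describing the layer
restored = layers.Dense.from_config(config)  # re-instantiate the layer from its config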

Alternatively,
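you can rebuild the layer through the generic deserialization helper in keras.layers (a minimal sketch, shown here with tf.keras):

from tensorflow.keras import layers

layer = layers.Dense(32)
config = layer.get_config()
restored = layers.deserialize({'class_name': layer.__class__.__name__,
                               'config': config})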

If the layer is not a shared layer, i.e., it has a single node, then you can get its input tensor, output tensor, input shape, and output shape through the following attributes (see the sketch after this list):

  • input
  • output
  • input_shape
  • output_shape
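For instance, once a layer has been called on a single input (a minimal sketch with tf.keras; the shapes are arbitrary):

from tensorflow.keras import layers, models

inputs = layers.Input(shape=(16,))
dense = layers.Dense(8)
outputs = dense(inputs)          # the layer now has exactly one node
model = models.Model(inputs, outputs)

print(dense.input)               # input tensor
print(dense.output)              # output tensor
print(dense.input_shape)         # (None, 16)
print(dense.output_shape)        # (None, 8)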

Otherwise, if the layer has several nodes, you can use the following methods instead (see the sketch after this list):

  • get_input_at(node_index)
  • get_output_at(node_index)
  • get_input_shape_at(node_index)
  • get_output_shape_at(node_index)
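For example, calling the same layer on two different inputs creates two nodes (a minimal sketch with the Keras 2 / tf.keras API; the shapes are arbitrary):

from tensorflow.keras import layers

shared = layers.Dense(8)               # one layer reused on two inputs -> two nodes

a = layers.Input(shape=(16,))
b = layers.Input(shape=(16,))
out_a = shared(a)                      # node 0
out_b = shared(b)                      # node 1

print(shared.get_output_at(0))         # output tensor of the first call
print(shared.get_input_shape_at(1))    # input shape of the second call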

Keras groups its predefined layers into the following categories, each covered in its own section:

  • Core Layer
  • Convolution Layer
  • Pooling Layer
  • Locally Connected Layer
  • RNN Layer
  • Noise Layer
  • Layer Wrapper
  • Normalization Layer
  • Embedding Layer
  • Advanced Activation Layer

