ML – List of Deep Learning Layers

To specify the architecture of a neural network in which all the layers are connected sequentially, create an array of layers directly. To specify an architecture in which layers can have multiple inputs or outputs, use a LayerGraph object. Use the following functions to create the different layer types.
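
For example, a minimal sketch of a sequential architecture defined as a layer array (the input size, filter count, and class count below are illustrative placeholders, not values from this article):

    % Plain layer array: every layer feeds the next one
    layers = [
        imageInputLayer([28 28 1])                    % 28x28 grayscale images
        convolution2dLayer(3, 16, 'Padding', 'same')  % 16 filters of size 3x3
        batchNormalizationLayer
        reluLayer
        maxPooling2dLayer(2, 'Stride', 2)
        fullyConnectedLayer(10)                       % 10 output classes
        softmaxLayer
        classificationLayer];

For branched topologies, wrap the layers in a LayerGraph (layerGraph(layers)) and wire additional connections with connectLayers, as sketched in the Combination Layers section below.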

Input Layers:

imageInputLayer
  • Inputs images to a network.
  • Applies data normalization.

sequenceInputLayer
  • Inputs sequence data to a network.
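
A hedged sketch of creating these input layers (the image size and feature count are placeholder values):

    % Image input: 28x28 grayscale, with zero-center normalization
    inLayer = imageInputLayer([28 28 1], 'Normalization', 'zerocenter');

    % Sequence input: 12 features per time step
    seqLayer = sequenceInputLayer(12);
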
Learnable Layers:

convolution2dLayer
  • Applies sliding filters to the input.
  • It convolves the input by moving the filters along the input vertically and horizontally, computing the dot product of the weights and the input, and then adding a bias term.

transposedConv2dLayer
  • It upsamples feature maps.

fullyConnectedLayer
  • Multiplies the input by a weight matrix and then adds a bias vector.

lstmLayer
  • It is a recurrent neural network (RNN) layer that enables support for time series and sequence data in a network.
  • It performs additive interactions, which can help improve gradient flow over long sequences during training.
  • It is best suited for learning long-term dependencies.
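
A minimal sketch of creating these learnable layers (all sizes and counts are illustrative placeholders):

    conv  = convolution2dLayer(3, 32, 'Padding', 'same');  % 32 filters, 3x3
    tconv = transposedConv2dLayer(4, 32, 'Stride', 2);     % upsamples feature maps by 2
    fc    = fullyConnectedLayer(10);                       % weight matrix plus bias vector
    lstm  = lstmLayer(100, 'OutputMode', 'last');          % 100 hidden units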

Activation Layers:

reluLayer
  • It performs a threshold operation on each element of the input, where any value less than zero is set to zero.

leakyReluLayer
  • It performs a simple threshold operation, where any input value less than zero is multiplied by a fixed scalar.

clippedReluLayer
  • It performs a simple threshold operation, where any input value less than zero is set to zero.
  • Any value above the clipping ceiling is set to that clipping ceiling.
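
A hedged sketch of these activation layers (the scale and ceiling values are placeholders):

    relu    = reluLayer;                 % negative values set to zero
    leaky   = leakyReluLayer(0.01);      % negative values multiplied by 0.01
    clipped = clippedReluLayer(6);       % negatives zeroed, values above 6 clipped to 6
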
Normalization and Dropout Layers:

batchNormalizationLayer
  • It normalizes each input channel across a mini-batch.
  • The layer first normalizes the activations of each channel by subtracting the mini-batch mean and dividing by the mini-batch standard deviation.
  • Then, the layer shifts the input by a learnable offset and scales it by a learnable scale factor.
  • Use batch normalization layers between convolutional layers and nonlinearities, such as ReLU layers, to speed up training of convolutional neural networks and reduce the sensitivity to network initialization.

crossChannelNormalizationLayer
  • It carries out channel-wise normalization.

dropoutLayer
  • It randomly sets input elements to zero with a given probability.
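
A minimal sketch of these layers (the window size and dropout probability are placeholder values):

    bn   = batchNormalizationLayer;             % learnable offset and scale
    lrn  = crossChannelNormalizationLayer(5);   % normalize across a 5-channel window
    drop = dropoutLayer(0.5);                   % zero out elements with probability 0.5
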
Pooling Layers:

averagePooling2dLayer
  • It performs downsampling by dividing the input into rectangular pooling regions and computing the average value of each region.

maxPooling2dLayer
  • It performs downsampling by dividing the input into rectangular pooling regions and computing the maximum of each region.

maxUnpooling2dLayer
  • It unpools the output of a max pooling layer.
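
A hedged sketch of these pooling layers (pool sizes and strides are placeholders); to unpool later, the max pooling layer must be created with 'HasUnpoolingOutputs' set to true so that it records the pooling indices:

    avgPool = averagePooling2dLayer(2, 'Stride', 2);
    maxPool = maxPooling2dLayer(2, 'Stride', 2, 'HasUnpoolingOutputs', true);
    unpool  = maxUnpooling2dLayer;   % reverses maxPool using its recorded indices
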
Combination Layers:

additionLayer
  • It adds multiple inputs element-wise.
  • Specify the number of inputs to the layer when you create it.
  • The inputs have the names 'in1', 'in2', ..., 'inN', where N is the number of inputs.
  • Use the input names when connecting or disconnecting the layer from other layers using connectLayers or disconnectLayers.
  • All inputs to an addition layer must have the same dimensions.

depthConcatenationLayer
  • It takes multiple inputs that have the same height and width.
  • It concatenates them along the third dimension (the channel dimension).
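
A minimal sketch of a residual-style connection using additionLayer and connectLayers (all layer names and sizes are illustrative):

    layers = [
        imageInputLayer([32 32 3], 'Name', 'in')
        convolution2dLayer(3, 16, 'Padding', 'same', 'Name', 'conv1')
        reluLayer('Name', 'relu1')
        additionLayer(2, 'Name', 'add')];   % relu1 auto-connects to 'add/in1'

    lgraph = layerGraph(layers);
    % Route the conv1 output around relu1 into the second addition input
    lgraph = connectLayers(lgraph, 'conv1', 'add/in2');
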
Output Layers:

softmaxLayer
  • It applies a softmax function to the input.

classificationLayer
  • It holds the name of the loss function the software uses for training the network for multiclass classification.

regressionLayer
  • It holds the name of the loss function the software uses for training the network for regression, and the response names.
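
A hedged sketch of typical output stages (the class and response counts are placeholders):

    % Classification: softmax followed by a cross-entropy classification output
    classOutput = [
        fullyConnectedLayer(10)
        softmaxLayer
        classificationLayer];

    % Regression: a regression output computing half-mean-squared-error loss
    regOutput = [
        fullyConnectedLayer(1)
        regressionLayer];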