📜  nn.Dropout - Python

📅  Last modified: 2023-12-03 15:17:53.155000             🧑  Author: Mango


The nn.Dropout module in PyTorch randomly zeroes elements of the input tensor with a specified probability during training. This helps prevent overfitting and improves the generalization performance of deep learning models.

Syntax
torch.nn.Dropout(p: float = 0.5, inplace: bool = False)
Parameters
  • p: probability of an element being zeroed. Default value is 0.5 (see the sketch after this list).
  • inplace: if set to True, the operation is performed in-place. Default value is False.
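
As a quick illustration of the p parameter (a minimal sketch; the input values are made up for demonstration), note that during training PyTorch also scales the surviving elements by 1/(1 - p), so the expected magnitude of each activation is preserved:

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)   # each element is zeroed with probability 0.5
x = torch.ones(8)          # toy input of all ones
out = drop(x)              # a standalone module starts in training mode, so dropout is applied
print(out)                 # surviving entries are scaled to 1 / (1 - 0.5) = 2.0

After calling drop.eval(), the module becomes a no-op and returns the input unchanged.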
Usage

Example usage of nn.Dropout:

import torch
import torch.nn as nn

# Define a model
model = nn.Sequential(
    nn.Linear(20, 512),
    nn.Dropout(0.2), # during training, randomly zero 20% of the previous layer's outputs
    nn.ReLU(inplace=True),
    nn.Linear(512, 10),
)

# Forward pass
x = torch.randn(1, 20)
output = model(x)
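
Dropout is only applied while the model is in training mode. The following sketch (reusing the model and x defined above) shows how model.eval() disables it for inference:

# Training mode: ~20% of the first Linear layer's outputs are zeroed
model.train()
out_train = model(x)

# Evaluation mode: dropout becomes an identity operation, nothing is dropped
model.eval()
out_eval = model(x)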

Benefits

Using nn.Dropout during training provides several benefits:

  • Helps avoid overfitting by randomly dropping activations during training
  • Improves the generalization performance of the model
  • Discourages co-adaptation of neurons, so the network does not rely too heavily on any single feature

Overall, nn.Dropout is a simple and effective tool for improving the generalization and accuracy of deep learning models.