Example code
```python
import torch
import torch.nn as nn

x = torch.randn(10, 24)                          # (batch, in_features)
fc = nn.Linear(in_features=24, out_features=48)  # affine transform y = xA^T + b
y = fc(x)
print(y)
print(y.shape)  # torch.Size([10, 48])
```
Containers
| Layer | Description |
| --- | --- |
| Module | Base class for all neural network modules. |
| Sequential | A sequential container. |
| ModuleList | Holds submodules in a list. |
| ModuleDict | Holds submodules in a dictionary. |
| ParameterList | Holds parameters in a list. |
| ParameterDict | Holds parameters in a dictionary. |
Example 1
```python
import torch
import torch.nn as nn

class LinearModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 1000)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout()
        self.fc2 = nn.Linear(1000, 10)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.dropout(out)
        out = self.fc2(out)
        return out

model = LinearModel()
x = torch.randn(100, 784)
y = model(x)
print(y.shape)  # torch.Size([100, 10])
```
Example 2
```python
import torch
import torch.nn as nn

class LinearModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Sequential chains the layers; calling it runs them in order
        self.seq = nn.Sequential(
            nn.Linear(784, 1000), nn.ReLU(), nn.Dropout(), nn.Linear(1000, 10)
        )

    def forward(self, x):
        return self.seq(x)

model = LinearModel()
x = torch.randn(100, 784)
y = model(x)
print(y.shape)  # torch.Size([100, 10])
```
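The table above also lists ModuleList and ModuleDict. Below is a minimal sketch of ModuleList (layer sizes chosen arbitrarily); unlike a plain Python list, it registers each submodule so their parameters appear in model.parameters().

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, sizes=(784, 1000, 10)):
        super().__init__()
        # ModuleList registers each Linear so its parameters are tracked
        self.layers = nn.ModuleList(
            nn.Linear(sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1)
        )

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = layer(x)
            if i < len(self.layers) - 1:  # ReLU between layers, not after the last
                x = torch.relu(x)
        return x

model = MLP()
x = torch.randn(100, 784)
print(model(x).shape)  # torch.Size([100, 10])
```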
Convolution Layers
| Layer | Description |
| --- | --- |
| nn.Conv1d | Applies a 1D convolution over an input signal composed of several input planes. |
| nn.Conv2d | Applies a 2D convolution over an input signal composed of several input planes. |
| nn.Conv3d | Applies a 3D convolution over an input signal composed of several input planes. |
| nn.ConvTranspose1d | Applies a 1D transposed convolution operator over an input image composed of several input planes. |
| nn.ConvTranspose2d | Applies a 2D transposed convolution operator over an input image composed of several input planes. |
| nn.ConvTranspose3d | Applies a 3D transposed convolution operator over an input image composed of several input planes. |
| nn.LazyConv1d | A torch.nn.Conv1d module with lazy initialization of the in_channels argument. |
| nn.LazyConv2d | A torch.nn.Conv2d module with lazy initialization of the in_channels argument. |
| nn.LazyConv3d | A torch.nn.Conv3d module with lazy initialization of the in_channels argument. |
| nn.LazyConvTranspose1d | A torch.nn.ConvTranspose1d module with lazy initialization of the in_channels argument. |
| nn.LazyConvTranspose2d | A torch.nn.ConvTranspose2d module with lazy initialization of the in_channels argument. |
| nn.LazyConvTranspose3d | A torch.nn.ConvTranspose3d module with lazy initialization of the in_channels argument. |
| nn.Unfold | Extracts sliding local blocks from a batched input tensor. |
| nn.Fold | Combines an array of sliding local blocks into a large containing tensor. |
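As a quick illustration of the most commonly used of these, here is a minimal Conv2d sketch (channel counts and image size chosen arbitrarily):

```python
import torch
import torch.nn as nn

# 3 input channels -> 16 output channels, 3x3 kernel; padding=1 keeps H and W
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
x = torch.randn(8, 3, 32, 32)   # (batch, channels, height, width)
y = conv(x)
print(y.shape)  # torch.Size([8, 16, 32, 32])
```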
Pooling Layers
| Layer | Description |
| --- | --- |
| nn.MaxPool1d | Applies a 1D max pooling over an input signal composed of several input planes. |
| nn.MaxPool2d | Applies a 2D max pooling over an input signal composed of several input planes. |
| nn.MaxPool3d | Applies a 3D max pooling over an input signal composed of several input planes. |
| nn.MaxUnpool1d | Computes a partial inverse of MaxPool1d. |
| nn.MaxUnpool2d | Computes a partial inverse of MaxPool2d. |
| nn.MaxUnpool3d | Computes a partial inverse of MaxPool3d. |
| nn.AvgPool1d | Applies a 1D average pooling over an input signal composed of several input planes. |
| nn.AvgPool2d | Applies a 2D average pooling over an input signal composed of several input planes. |
| nn.AvgPool3d | Applies a 3D average pooling over an input signal composed of several input planes. |
| nn.FractionalMaxPool2d | Applies a 2D fractional max pooling over an input signal composed of several input planes. |
| nn.FractionalMaxPool3d | Applies a 3D fractional max pooling over an input signal composed of several input planes. |
| nn.LPPool1d | Applies a 1D power-average pooling over an input signal composed of several input planes. |
| nn.LPPool2d | Applies a 2D power-average pooling over an input signal composed of several input planes. |
| nn.LPPool3d | Applies a 3D power-average pooling over an input signal composed of several input planes. |
| nn.AdaptiveMaxPool1d | Applies a 1D adaptive max pooling over an input signal composed of several input planes. |
| nn.AdaptiveMaxPool2d | Applies a 2D adaptive max pooling over an input signal composed of several input planes. |
| nn.AdaptiveMaxPool3d | Applies a 3D adaptive max pooling over an input signal composed of several input planes. |
| nn.AdaptiveAvgPool1d | Applies a 1D adaptive average pooling over an input signal composed of several input planes. |
| nn.AdaptiveAvgPool2d | Applies a 2D adaptive average pooling over an input signal composed of several input planes. |
| nn.AdaptiveAvgPool3d | Applies a 3D adaptive average pooling over an input signal composed of several input planes. |
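A short sketch contrasting fixed-window pooling with adaptive pooling (shapes chosen arbitrarily):

```python
import torch
import torch.nn as nn

x = torch.randn(8, 16, 32, 32)

# Fixed-window max pooling halves the spatial size
pool = nn.MaxPool2d(kernel_size=2, stride=2)
print(pool(x).shape)      # torch.Size([8, 16, 16, 16])

# Adaptive pooling targets an output size regardless of input size
adaptive = nn.AdaptiveAvgPool2d(output_size=(1, 1))
print(adaptive(x).shape)  # torch.Size([8, 16, 1, 1])
```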
Padding Layers
| Layer | Description |
| --- | --- |
| nn.ReflectionPad1d | Pads the input tensor using the reflection of the input boundary. |
| nn.ReflectionPad2d | Pads the input tensor using the reflection of the input boundary. |
| nn.ReflectionPad3d | Pads the input tensor using the reflection of the input boundary. |
| nn.ReplicationPad1d | Pads the input tensor using replication of the input boundary. |
| nn.ReplicationPad2d | Pads the input tensor using replication of the input boundary. |
| nn.ReplicationPad3d | Pads the input tensor using replication of the input boundary. |
| nn.ZeroPad1d | Pads the input tensor boundaries with zero. |
| nn.ZeroPad2d | Pads the input tensor boundaries with zero. |
| nn.ZeroPad3d | Pads the input tensor boundaries with zero. |
| nn.ConstantPad1d | Pads the input tensor boundaries with a constant value. |
| nn.ConstantPad2d | Pads the input tensor boundaries with a constant value. |
| nn.ConstantPad3d | Pads the input tensor boundaries with a constant value. |
| nn.CircularPad1d | Pads the input tensor using circular padding of the input boundary. |
| nn.CircularPad2d | Pads the input tensor using circular padding of the input boundary. |
| nn.CircularPad3d | Pads the input tensor using circular padding of the input boundary. |
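A minimal sketch of two padding styles (shapes arbitrary); the 2D padding layers accept either a single int for all sides or a (left, right, top, bottom) tuple:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 4, 4)

# Pad 1 element on every side; values are mirrored from the border
pad = nn.ReflectionPad2d(1)
print(pad(x).shape)   # torch.Size([1, 3, 6, 6])

# Pad (left, right, top, bottom) = (2, 2, 1, 1) with zeros
zpad = nn.ZeroPad2d((2, 2, 1, 1))
print(zpad(x).shape)  # torch.Size([1, 3, 6, 8])
```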
Normalization Layers
| Layer | Description |
| --- | --- |
| nn.BatchNorm1d | Applies Batch Normalization over a 2D or 3D input. |
| nn.BatchNorm2d | Applies Batch Normalization over a 4D input. |
| nn.BatchNorm3d | Applies Batch Normalization over a 5D input. |
| nn.LazyBatchNorm1d | A torch.nn.BatchNorm1d module with lazy initialization. |
| nn.LazyBatchNorm2d | A torch.nn.BatchNorm2d module with lazy initialization. |
| nn.LazyBatchNorm3d | A torch.nn.BatchNorm3d module with lazy initialization. |
| nn.GroupNorm | Applies Group Normalization over a mini-batch of inputs. |
| nn.SyncBatchNorm | Applies Batch Normalization over an N-dimensional input. |
| nn.InstanceNorm1d | Applies Instance Normalization. |
| nn.InstanceNorm2d | Applies Instance Normalization. |
| nn.InstanceNorm3d | Applies Instance Normalization. |
| nn.LazyInstanceNorm1d | A torch.nn.InstanceNorm1d module with lazy initialization of the num_features argument. |
| nn.LazyInstanceNorm2d | A torch.nn.InstanceNorm2d module with lazy initialization of the num_features argument. |
| nn.LazyInstanceNorm3d | A torch.nn.InstanceNorm3d module with lazy initialization of the num_features argument. |
| nn.LayerNorm | Applies Layer Normalization over a mini-batch of inputs. |
| nn.LocalResponseNorm | Applies local response normalization over an input signal. |
| nn.RMSNorm | Applies Root Mean Square Layer Normalization over a mini-batch of inputs. |
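A minimal sketch contrasting BatchNorm2d (normalizes each channel across the batch and spatial dims) with LayerNorm (normalizes each sample independently); shapes are arbitrary:

```python
import torch
import torch.nn as nn

x = torch.randn(8, 16, 32, 32)

# BatchNorm2d: num_features must match the channel dimension
bn = nn.BatchNorm2d(num_features=16)
print(bn(x).shape)  # torch.Size([8, 16, 32, 32])

# LayerNorm: normalized_shape must match the trailing dims of each sample
ln = nn.LayerNorm(normalized_shape=[16, 32, 32])
print(ln(x).shape)  # torch.Size([8, 16, 32, 32])
```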
Recurrent Layers
| Layer | Description |
| --- | --- |
| nn.RNNBase | Base class for RNN modules (RNN, LSTM, GRU). |
| nn.RNN | Apply a multi-layer Elman RNN with tanh or ReLU non-linearity to an input sequence. |
| nn.LSTM | Apply a multi-layer long short-term memory (LSTM) RNN to an input sequence. |
| nn.GRU | Apply a multi-layer gated recurrent unit (GRU) RNN to an input sequence. |
| nn.RNNCell | An Elman RNN cell with tanh or ReLU non-linearity. |
| nn.LSTMCell | A long short-term memory (LSTM) cell. |
| nn.GRUCell | A gated recurrent unit (GRU) cell. |
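A minimal LSTM sketch with batch_first=True (sizes arbitrary):

```python
import torch
import torch.nn as nn

# Input size 10, hidden size 20, 2 stacked layers
lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, batch_first=True)
x = torch.randn(4, 7, 10)        # (batch, seq_len, input_size)
output, (h_n, c_n) = lstm(x)
print(output.shape)  # torch.Size([4, 7, 20]) - hidden state at every time step
print(h_n.shape)     # torch.Size([2, 4, 20]) - final hidden state per layer
```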
Transformer Layers
| Layer | Description |
| --- | --- |
| nn.Transformer | A transformer model. |
| nn.TransformerEncoder | TransformerEncoder is a stack of N encoder layers. |
| nn.TransformerDecoder | TransformerDecoder is a stack of N decoder layers. |
| nn.TransformerEncoderLayer | TransformerEncoderLayer is made up of self-attn and feedforward network. |
| nn.TransformerDecoderLayer | TransformerDecoderLayer is made up of self-attn, multi-head-attn and feedforward network. |
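A minimal sketch stacking encoder layers (hyperparameters chosen arbitrarily):

```python
import torch
import torch.nn as nn

# One encoder layer (self-attention + feed-forward), stacked 6 times
layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=6)
x = torch.randn(4, 10, 512)  # (batch, seq_len, d_model)
y = encoder(x)
print(y.shape)  # torch.Size([4, 10, 512])
```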
Linear Layers
| Layer | Description |
| --- | --- |
| nn.Identity | A placeholder identity operator that is argument-insensitive. |
| nn.Linear | Applies an affine linear transformation to the incoming data: y = xA^T + b. |
| nn.Bilinear | Applies a bilinear transformation to the incoming data: y = x_1^T A x_2 + b. |
| nn.LazyLinear | A torch.nn.Linear module where in_features is inferred. |
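LazyLinear is handy when in_features is awkward to compute by hand (e.g. after a stack of convolutions); a minimal sketch:

```python
import torch
import torch.nn as nn

# in_features is left unspecified and inferred from the first batch
fc = nn.LazyLinear(out_features=32)
x = torch.randn(10, 24)
y = fc(x)              # in_features is materialized as 24 here
print(y.shape)         # torch.Size([10, 32])
print(fc.in_features)  # 24
```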
Dropout Layers
| Layer | Description |
| --- | --- |
| nn.Dropout | During training, randomly zeroes some of the elements of the input tensor with probability p. |
| nn.Dropout1d | Randomly zero out entire channels. |
| nn.Dropout2d | Randomly zero out entire channels. |
| nn.Dropout3d | Randomly zero out entire channels. |
| nn.AlphaDropout | Applies Alpha Dropout over the input. |
| nn.FeatureAlphaDropout | Randomly masks out entire channels. |
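Dropout behaves differently in training and evaluation mode, which is one reason the earlier examples register it inside an nn.Module; a minimal sketch:

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(2, 8)

drop.train()    # training mode: zero elements with probability p
print(drop(x))  # surviving elements are scaled by 1/(1-p) = 2.0

drop.eval()     # evaluation mode: dropout is a no-op
print(drop(x))  # all ones
```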
Sparse Layers
| Layer | Description |
| --- | --- |
| nn.Embedding | A simple lookup table that stores embeddings of a fixed dictionary and size. |
| nn.EmbeddingBag | Compute sums or means of 'bags' of embeddings, without instantiating the intermediate embeddings. |
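A minimal Embedding sketch (vocabulary size and dimension chosen arbitrarily):

```python
import torch
import torch.nn as nn

# Lookup table: 1000 token ids, each mapped to a 64-dim vector
emb = nn.Embedding(num_embeddings=1000, embedding_dim=64)
ids = torch.tensor([[1, 2, 4, 5], [4, 3, 2, 9]])  # (batch, seq_len)
print(emb(ids).shape)  # torch.Size([2, 4, 64])
```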
Vision Layers
| Layer | Description |
| --- | --- |
| nn.PixelShuffle | Rearrange elements in a tensor according to an upscaling factor. |
| nn.PixelUnshuffle | Reverse the PixelShuffle operation. |
| nn.Upsample | Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data. |
| nn.UpsamplingNearest2d | Applies a 2D nearest neighbor upsampling to an input signal composed of several input channels. |
| nn.UpsamplingBilinear2d | Applies a 2D bilinear upsampling to an input signal composed of several input channels. |
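A minimal sketch of PixelShuffle and Upsample (shapes arbitrary):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 4, 8, 8)

# PixelShuffle trades channels for resolution: (C*r^2, H, W) -> (C, H*r, W*r)
shuffle = nn.PixelShuffle(upscale_factor=2)
print(shuffle(x).shape)  # torch.Size([1, 1, 16, 16])

# Upsample doubles the spatial size by interpolation
up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
print(up(x).shape)       # torch.Size([1, 4, 16, 16])
```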
Shuffle Layers
| Layer | Description |
| --- | --- |
| nn.ChannelShuffle | Divides and rearranges the channels in a tensor. |
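A minimal ChannelShuffle sketch that makes the interleaving visible:

```python
import torch
import torch.nn as nn

# Split 4 channels into 2 groups and interleave them: [0,1,2,3] -> [0,2,1,3]
shuffle = nn.ChannelShuffle(groups=2)
x = torch.arange(4).reshape(1, 4, 1, 1).float()
print(shuffle(x).flatten())  # tensor([0., 2., 1., 3.])
```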
DataParallel Layers (multi-GPU, distributed)
| Layer | Description |
| --- | --- |
| nn.DataParallel | Implements data parallelism at the module level. |
| nn.parallel.DistributedDataParallel | Implements distributed data parallelism based on torch.distributed at the module level. |
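A minimal DataParallel sketch; DistributedDataParallel additionally requires initializing a process group via torch.distributed, so it is omitted here. On a machine without multiple GPUs this simply uses the plain module:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 5)

# Replicates the module across visible GPUs and splits the batch among them
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

x = torch.randn(32, 10)
print(model(x).shape)  # torch.Size([32, 5])
```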
This article was written by a human and polished with AI. When reposting, please credit the original source:
Introduction to Layers in PyTorch