Conv2dNormActivation
- class torchvision.ops.Conv2dNormActivation(in_channels: int, out_channels: int, kernel_size: Union[int, Tuple[int, int]] = 3, stride: Union[int, Tuple[int, int]] = 1, padding: Optional[Union[int, Tuple[int, int], str]] = None, groups: int = 1, norm_layer: Optional[Callable[..., torch.nn.Module]] = torch.nn.BatchNorm2d, activation_layer: Optional[Callable[..., torch.nn.Module]] = torch.nn.ReLU, dilation: Union[int, Tuple[int, int]] = 1, inplace: Optional[bool] = True, bias: Optional[bool] = None)[source]
Configurable block used for Convolution2d-Normalization-Activation blocks. A usage sketch follows the parameter list below.
- Parameters:
in_channels (int) – Number of channels in the input image
out_channels (int) – Number of channels produced by the Convolution-Normalization-Activation block
kernel_size (int, optional) – Size of the convolving kernel. Default: 3
stride (int, optional) – Stride of the convolution. Default: 1
padding (int, tuple or str, optional) – Padding added to all four sides of the input. Default: None, in which case it is computed as padding = (kernel_size - 1) // 2 * dilation
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
norm_layer (Callable[..., torch.nn.Module], optional) – Normalization layer that will be stacked on top of the convolution layer. If None, this layer will not be used. Default: torch.nn.BatchNorm2d
activation_layer (Callable[..., torch.nn.Module], optional) – Activation function which will be stacked on top of the normalization layer (if not None), otherwise on top of the convolution layer. If None, this layer will not be used. Default: torch.nn.ReLU
dilation (int) – Spacing between kernel elements. Default: 1
inplace (bool) – Parameter for the activation layer, which can optionally perform the operation in-place. Default: True
bias (bool, optional) – Whether to use bias in the convolution layer. By default, bias is included only if norm_layer is None.
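
A minimal usage sketch, assuming a torchvision version that exports Conv2dNormActivation from torchvision.ops. It builds the default Conv2d → BatchNorm2d → ReLU block and shows that the default padding preserves the spatial size; the input shape and channel counts are illustrative only.

```python
import torch
from torchvision.ops import Conv2dNormActivation

# Default block: 3x3 conv -> BatchNorm2d -> ReLU.
# padding defaults to (kernel_size - 1) // 2 * dilation = 1,
# so the spatial dimensions are preserved.
block = Conv2dNormActivation(in_channels=3, out_channels=16, kernel_size=3, stride=1)

x = torch.rand(1, 3, 224, 224)
out = block(x)
print(out.shape)  # torch.Size([1, 16, 224, 224])

# Since norm_layer is not None here, bias=None resolves to False and the
# convolution is created without a bias term.

# With norm_layer=None, no normalization layer is added and the convolution
# gets a bias term by default; the activation can also be swapped out.
plain = Conv2dNormActivation(3, 16, norm_layer=None, activation_layer=torch.nn.SiLU)
```

The block behaves like a small torch.nn.Sequential, so it can be dropped into a larger model wherever a conv-norm-activation stack is needed.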