WandaSparsifier
- class torchao.sparsity.WandaSparsifier(sparsity_level: float = 0.5, semi_structured_block_size: Optional[int] = None)[source]
Wanda sparsifier
Wanda (Pruning by Weights and Activations), first proposed in https://arxiv.org/abs/2306.11695, is an activation-aware pruning method. The sparsifier removes weights based on the product of the input activation norm and the weight magnitude.
This sparsifier is controlled by two parameters: 1. sparsity_level defines the proportion of sparse blocks that are zeroed out; 2. semi_structured_block_size, when set, enables semi-structured sparsity within blocks of the given size.
- Parameters:
sparsity_level – The target level of sparsity;
semi_structured_block_size – Optional block size for semi-structured sparsity;
model – The model to be sparsified (passed to prepare(), not to the constructor);
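To make the pruning rule concrete, here is a minimal sketch of the Wanda score, |W_ij| * ||X_j||_2, computed per output row. This is only an illustration of the metric from the paper, not the sparsifier's internal implementation, and all tensor names are hypothetical:

import torch

# Hypothetical linear layer: 3 output features, 4 input features.
weight = torch.randn(3, 4)          # W, shape (out_features, in_features)
acts = torch.randn(16, 4)           # calibration activations X
act_norm = acts.norm(p=2, dim=0)    # ||X_j||_2 per input feature

# Wanda importance score: |W_ij| * ||X_j||_2 (broadcast across rows).
score = weight.abs() * act_norm

# Zero out the lowest-scoring 50% of weights within each output row.
k = int(0.5 * weight.shape[1])
_, prune_idx = torch.topk(score, k, dim=1, largest=False)
mask = torch.ones_like(weight, dtype=torch.bool)
mask.scatter_(1, prune_idx, False)
pruned_weight = weight * mask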
- prepare(model: Module, config: List[Dict]) → None[source]
Prepares the model by adding parametrizations.
Note
The model is modified in place. If you need to preserve the original model, use copy.deepcopy.
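As a hedged sketch of how prepare() might be called, assuming the config format of the torch.ao.pruning sparsifier API (a list of dicts keyed by "tensor_fqn"); the model and layer names are made up for illustration:

import copy
import torch.nn as nn
from torchao.sparsity import WandaSparsifier

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
original = copy.deepcopy(model)   # prepare() modifies `model` in place

sparsifier = WandaSparsifier(sparsity_level=0.5)
# One config entry per weight tensor to sparsify, addressed by its fqn.
sparsifier.prepare(model, config=[
    {"tensor_fqn": "0.weight"},
    {"tensor_fqn": "2.weight"},
])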
- squash_mask(params_to_keep: Optional[Tuple[str, ...]] = None, params_to_keep_per_layer: Optional[Dict[str, Tuple[str, ...]]] = None, *args, **kwargs)[source]
Squashes the sparse masks into the corresponding tensors.
If either params_to_keep or params_to_keep_per_layer is set, a sparse_params dictionary will be attached to the module.
- Parameters:
params_to_keep – List of keys to save in the module, or a dict representing the modules and keys that will have sparsity parameters saved
params_to_keep_per_layer – Dict specifying the parameters to save for specific layers. The keys in the dict should be the module's fully qualified name (fqn), while the values should be a list of strings with the names of the variables to save in sparse_params
Examples
>>> # xdoctest: +SKIP("locals are undefined")
>>> # Don't save any sparse params
>>> sparsifier.squash_mask()
>>> hasattr(model.submodule1, 'sparse_params')
False
>>> # Keep sparse params per layer
>>> sparsifier.squash_mask(
...     params_to_keep_per_layer={
...         'submodule1.linear1': ('foo', 'bar'),
...         'submodule2.linear42': ('baz',)
...     })
>>> print(model.submodule1.linear1.sparse_params)
{'foo': 42, 'bar': 24}
>>> print(model.submodule2.linear42.sparse_params)
{'baz': 0.1}
>>> # Keep sparse params for all layers
>>> sparsifier.squash_mask(params_to_keep=('foo', 'bar'))
>>> print(model.submodule1.linear1.sparse_params)
{'foo': 42, 'bar': 24}
>>> print(model.submodule2.linear42.sparse_params)
{'foo': 42, 'bar': 24}
>>> # Keep some sparse params for all layers, and specific ones for
>>> # some other layers
>>> sparsifier.squash_mask(
...     params_to_keep=('foo', 'bar'),
...     params_to_keep_per_layer={
...         'submodule2.linear42': ('baz',)
...     })
>>> print(model.submodule1.linear1.sparse_params)
{'foo': 42, 'bar': 24}
>>> print(model.submodule2.linear42.sparse_params)
{'foo': 42, 'bar': 24, 'baz': 0.1}
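Finally, a sketch of a plausible end-to-end workflow, assuming the usual BaseSparsifier life cycle (prepare, calibrate, step, squash_mask) and the same hypothetical "tensor_fqn" config format as above; the calibration batches here are random stand-ins for real data:

import torch
import torch.nn as nn
from torchao.sparsity import WandaSparsifier

model = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 10))

sparsifier = WandaSparsifier(sparsity_level=0.5)
sparsifier.prepare(model, config=[
    {"tensor_fqn": "0.weight"},
    {"tensor_fqn": "2.weight"},
])

# Calibration: forward passes let the sparsifier observe input activation norms.
with torch.no_grad():
    for _ in range(8):
        model(torch.randn(32, 128))

sparsifier.step()         # compute masks from |W| * activation norm
sparsifier.squash_mask()  # fold the masks into the weight tensors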