CosineAnnealingLR

class torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max, eta_min=0.0, last_epoch=-1, verbose='deprecated')[source]

Set the learning rate of each parameter group using a cosine annealing schedule.

\eta_{max} is set to the initial lr and T_{cur} is the number of epochs since the last restart in SGDR:

\begin{aligned} \eta_t & = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right), & T_{cur} \neq (2k+1)T_{max}; \\ \eta_{t+1} & = \eta_{t} + \frac{1}{2}(\eta_{max} - \eta_{min}) \left(1 - \cos\left(\frac{1}{T_{max}}\pi\right)\right), & T_{cur} = (2k+1)T_{max}. \end{aligned}

When last_epoch=-1, the initial learning rate is set to lr. Notice that because the schedule is defined recursively, the learning rate can be simultaneously modified outside this scheduler by other operators. If the learning rate is set solely by this scheduler, the learning rate at each step becomes:

\eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right)

It has been proposed in SGDR: Stochastic Gradient Descent with Warm Restarts. Note that this only implements the cosine annealing part of SGDR, and not the restarts.
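
For illustration, here is a minimal training-loop sketch; the model, data, and hyperparameter values are hypothetical, and the final assertion simply checks the scheduler's recursive updates against the closed-form expression above:

    import math

    import torch
    from torch import nn
    from torch.optim.lr_scheduler import CosineAnnealingLR

    # Hypothetical model and data, just to make the sketch runnable.
    model = nn.Linear(10, 2)
    inputs, targets = torch.randn(32, 10), torch.randn(32, 2)

    eta_max, eta_min, T_max = 0.1, 0.001, 100
    optimizer = torch.optim.SGD(model.parameters(), lr=eta_max)
    scheduler = CosineAnnealingLR(optimizer, T_max=T_max, eta_min=eta_min)

    for epoch in range(T_max):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(inputs), targets)
        loss.backward()
        optimizer.step()
        scheduler.step()  # one scheduler step per epoch

        # No other operator touches the lr here, so the scheduler's value
        # should match the closed-form expression with T_cur = epoch + 1.
        t_cur = epoch + 1
        expected = eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t_cur / T_max))
        assert abs(scheduler.get_last_lr()[0] - expected) < 1e-8

Note that after T_max steps the learning rate reaches eta_min; further steps follow the cosine curve back up, since this class anneals periodically but never restarts.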

Parameters
  • optimizer (Optimizer) – Wrapped optimizer.

  • T_max (int) – Maximum number of iterations.

  • eta_min (float) – Minimum learning rate. Default: 0.

  • last_epoch (int) – The index of last epoch. Default: -1.

  • verbose (bool | str) –

    If True, prints a message to stdout for each update. Default: False.

    Deprecated since version 2.2: verbose is deprecated. Please use get_last_lr() to access the learning rate.
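
Because \eta_{max} is taken from each parameter group's own initial lr, groups anneal independently toward the shared eta_min. A brief sketch with two hypothetical groups:

    import torch
    from torch.optim.lr_scheduler import CosineAnnealingLR

    # Two hypothetical parameter groups with different initial lrs.
    w1 = torch.nn.Parameter(torch.zeros(3))
    w2 = torch.nn.Parameter(torch.zeros(3))
    optimizer = torch.optim.SGD(
        [{"params": [w1]}, {"params": [w2], "lr": 0.01}], lr=0.1
    )
    scheduler = CosineAnnealingLR(optimizer, T_max=50)

    optimizer.step()   # a real loop would compute gradients first
    scheduler.step()
    # Each group follows its own cosine curve from its initial lr.
    print([group["lr"] for group in optimizer.param_groups])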

get_last_lr()[source]

Return the last learning rate computed by the current scheduler.

Return type

List[float]
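
For example, with the two-group setup sketched above:

    # One value per parameter group, in the optimizer's group order.
    print(scheduler.get_last_lr())  # e.g. [0.0999..., 0.00999...]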

get_lr()[source]

Retrieve the learning rate of each parameter group.

load_state_dict(state_dict)[source]

Load the scheduler’s state.

Parameters

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().

print_lr(is_verbose, group, lr, epoch=None)[source]

Display the current learning rate.

Deprecated since version 2.4: print_lr() is deprecated. Please use get_last_lr() to access the learning rate.

state_dict()[source]

Return the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer.
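
A minimal save/restore sketch, assuming the optimizer and scheduler from the earlier examples (the checkpoint path is hypothetical):

    # Persist scheduler state alongside the optimizer so a resumed run
    # continues from the same point on the cosine curve.
    torch.save({
        "optimizer": optimizer.state_dict(),
        "scheduler": scheduler.state_dict(),
    }, "checkpoint.pt")  # hypothetical path

    checkpoint = torch.load("checkpoint.pt")
    optimizer.load_state_dict(checkpoint["optimizer"])
    scheduler.load_state_dict(checkpoint["scheduler"])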

step(epoch=None)[source]

Perform a step.
