crumpets.torch.policy module

class crumpets.torch.policy.NoopPolicy[source]

Bases: object

A no-op policy. Use it when you do not want to modify the learning rate.

step(*args, **kwargs)[source]
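
A minimal usage sketch (the model, optimizer, and training loop are illustrative assumptions, not part of this module):

    import torch
    from crumpets.torch.policy import NoopPolicy

    model = torch.nn.Linear(10, 2)  # toy model (assumption)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    policy = NoopPolicy()

    for epoch in range(5):
        # ... train one epoch ...
        policy.step()  # accepts any arguments and changes nothing;
                       # the learning rate stays at 0.1
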
class crumpets.torch.policy.PolyPolicy(*args: Any, **kwargs: Any)[source]

Bases: torch.optim.lr_scheduler._LRScheduler

A learning rate policy that follows a polynomial curve.

Parameters
  • optimizer – an optimizer object

  • num_epochs – the number of epochs this policy is defined for. Do not use it for more epochs than that; doing so may cause unexpected behaviour.

  • power – power value

  • last_epoch – the current state of the policy. Can be used to set the initial state of the policy, e.g. when switching policies during training.

get_lr()[source]
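
A usage sketch. The keyword arguments follow the documented parameters; the decay curve noted in the comment is the common "poly" convention and is an assumption about this implementation, not taken from its source:

    import torch
    from crumpets.torch.policy import PolyPolicy

    model = torch.nn.Linear(10, 2)  # toy model (assumption)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    policy = PolyPolicy(optimizer, num_epochs=30, power=0.9)

    for epoch in range(30):  # stay within num_epochs
        # ... train one epoch ...
        policy.step()
        # assumed curve: lr = base_lr * (1 - epoch / num_epochs) ** power
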
class crumpets.torch.policy.RampPolicy(*args: Any, **kwargs: Any)[source]

Bases: torch.optim.lr_scheduler._LRScheduler

This policy increases the learning rate step by step until it plateaus.

Parameters
  • optimizer – an optimizer object

  • ramp_epochs – the epoch at which the plateau is reached

  • last_epoch – the current state of the policy. Can be used to set the initial state of the policy, e.g. when switching policies during training.

get_lr()[source]
step(epoch=None, metrics=None)[source]
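
A usage sketch for warm-up, assuming the learning rate ramps up from the start of training and holds the optimizer's base value once ramp_epochs is reached:

    import torch
    from crumpets.torch.policy import RampPolicy

    model = torch.nn.Linear(10, 2)  # toy model (assumption)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    policy = RampPolicy(optimizer, ramp_epochs=5)

    for epoch in range(20):
        # ... train one epoch ...
        policy.step()  # lr increases each epoch until epoch 5,
                       # then plateaus at the base value of 0.1
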
class crumpets.torch.policy.ReduceLROnPlateau(*args: Any, **kwargs: Any)[source]

Bases: torch.optim.lr_scheduler.ReduceLROnPlateau

A policy that reduces the learning rate when training progress reaches a plateau. It inherits from torch.optim.lr_scheduler.ReduceLROnPlateau and therefore shares the same interface.

step(epoch=None, metrics=None)[source]
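
A usage sketch. Since the class shares the torch.optim.lr_scheduler.ReduceLROnPlateau interface, the constructor arguments below follow that class; that they are forwarded unchanged is an assumption:

    import torch
    from crumpets.torch.policy import ReduceLROnPlateau

    model = torch.nn.Linear(10, 2)  # toy model (assumption)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    policy = ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10)

    for epoch in range(100):
        # ... train one epoch and compute a validation loss ...
        val_loss = 0.0  # placeholder value (assumption)
        policy.step(metrics=val_loss)  # lr drops by `factor` once the
                                       # metric stops improving
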
class crumpets.torch.policy.SigmoidPolicy(*args: Any, **kwargs: Any)[source]

Bases: torch.optim.lr_scheduler._LRScheduler

A policy that follows a sigmoid curve. The learning rate is given by base_lr / (1 + math.exp(q * x)), where x = last_epoch / num_epochs - 1.

Parameters
  • optimizer – an optimizer object

  • num_epochs – the number of epochs this policy is defined for. Do not use it for more epochs than that; doing so may cause unexpected behaviour.

  • q – steepness parameter of the sigmoid; controls how sharply the learning rate changes over the schedule.

  • last_epoch – the current state of the policy. Can be used to set the initial state of the policy, e.g. when switching policies during training.

get_lr()[source]
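
A standalone sketch of the documented formula, useful for previewing the schedule before training; the base_lr, num_epochs, and q values are arbitrary:

    import math

    def sigmoid_lr(base_lr, epoch, num_epochs, q):
        # documented formula: base_lr / (1 + exp(q * x)),
        # with x = epoch / num_epochs - 1
        x = epoch / num_epochs - 1
        return base_lr / (1 + math.exp(q * x))

    for epoch in (0, 10, 20, 30):
        print(epoch, sigmoid_lr(base_lr=0.1, epoch=epoch, num_epochs=30, q=6.0))
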