Trait Optimizer

pub trait Optimizer<B>
where
    B: Backend,
{
    // Required methods
    fn step(&mut self, grads: &GradStore<B>) -> Result<Vec<Tensor<B>>, Error>;
    fn learning_rate(&self) -> f64;
    fn set_learning_rate(&mut self, lr: f64);
}

Trait that all optimizers implement.

Optimizers update model parameters given their gradients.

Type Parameters

  • B: the compute backend
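
A minimal usage sketch. Only step, learning_rate, and set_learning_rate are defined by this trait; the Cpu backend type, the Adam constructor, the forward pass, and the backward() call below are assumptions for illustration.

// Hypothetical training loop (Cpu, Adam::new, forward, and loss.backward()
// are assumed -- they are not documented on this page).
let mut optimizer = Adam::<Cpu>::new(params.clone(), 1e-3);

for _epoch in 0..10 {
    let loss = forward(&params)?;                 // assumed model forward pass
    let grads: GradStore<Cpu> = loss.backward()?; // assumed autograd call
    params = optimizer.step(&grads)?;             // updated values, same order as at construction
}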

Required Methods

fn step(&mut self, grads: &GradStore<B>) -> Result<Vec<Tensor<B>>, Error>

Perform one optimization step.

Given the current parameters and their gradients (from backward()), compute and return the updated parameter values.

Returns a vector of updated parameters in the same order as the parameters passed to the optimizer’s constructor.
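
For example (a sketch; the backward() call and the parameter bookkeeping around it are assumptions, not part of this trait):

let grads = loss.backward()?;            // assumed autograd call yielding a GradStore<B>
let updated = optimizer.step(&grads)?;   // one update; same length and order as at construction
for (param, new_value) in params.iter_mut().zip(updated) {
    *param = new_value;                  // write the updated values back into the model
}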

fn learning_rate(&self) -> f64

Return the current learning rate.

fn set_learning_rate(&mut self, lr: f64)

Set a new learning rate (useful for learning-rate scheduling).
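
For example, a simple exponential decay applied between epochs (the schedule itself is illustrative, not part of the trait):

// Decay the learning rate by 5% after each epoch.
let lr = optimizer.learning_rate();
optimizer.set_learning_rate(lr * 0.95);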

Implementors

impl<B> Optimizer<B> for Adam<B>
where B: Backend,

impl<B> Optimizer<B> for AdamW<B>
where B: Backend,

impl<B> Optimizer<B> for RAdam<B>
where B: Backend,

impl<B> Optimizer<B> for RMSProp<B>
where B: Backend,

impl<B> Optimizer<B> for SGD<B>
where B: Backend,
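
Because every implementor shares the same Optimizer<B> interface, an optimizer can be chosen at runtime behind a trait object. A sketch, assuming a Cpu backend type and hypothetical constructors:

// Select an optimizer at runtime; both values implement Optimizer<Cpu>.
let mut optimizer: Box<dyn Optimizer<Cpu>> = if use_adamw {
    Box::new(AdamW::new(params.clone(), 1e-3)) // assumed constructor
} else {
    Box::new(SGD::new(params.clone(), 1e-2))   // assumed constructor
};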