pub fn nll_loss<B>(
    log_probs: &Tensor<B>,
    targets: &Tensor<B>,
) -> Result<Tensor<B>, Error>
where
    B: Backend,
Negative Log-Likelihood Loss with integer class indices.
Computes: -mean(log_probs[i, target[i]]) for each sample i.
§Arguments
log_probs: log-probabilities [batch, num_classes] (output of log_softmax)
targets: class indices as f64 [batch], where each value must be in 0..num_classes
Typically used as: nll_loss(&logits.log_softmax(1)?, &targets)
Note: unlike cross_entropy_loss, which takes one-hot targets, this function takes integer class indices, which is more memory efficient.
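To make the formula concrete, here is a minimal numeric sketch of the same computation in Python (the crate's actual Tensor/Backend API is not shown here, so this is an illustration of the math only, not of this crate's types):

```python
import math

def log_softmax(row):
    # Numerically stable log-softmax over one row of logits:
    # subtract the max before exponentiating to avoid overflow.
    m = max(row)
    lse = m + math.log(sum(math.exp(x - m) for x in row))
    return [x - lse for x in row]

def nll_loss(log_probs, targets):
    # log_probs: per-sample log-probability rows [batch][num_classes]
    # targets: one integer class index per sample
    # Computes -mean(log_probs[i][targets[i]]), matching the formula above.
    return -sum(row[t] for row, t in zip(log_probs, targets)) / len(targets)

logits = [[2.0, 0.5, 0.1], [0.2, 3.0, 0.3]]
targets = [0, 1]
loss = nll_loss([log_softmax(r) for r in logits], targets)
print(loss)
```

As in the Rust signature, the loss is computed on log-probabilities, so the logits are passed through log_softmax first.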