pub struct QuantizedLinear<B: Backend> {
    pub in_features: usize,
    pub out_features: usize,
    /* private fields */
}
A quantized linear layer that stores weights in INT8/INT4.
During inference, weights are dequantized on the fly for computation. This saves memory (2-8× depending on bit-width) at the cost of a small dequantization overhead.
§Example
// Quantize a trained linear layer
let linear = Linear::new(256, 10, true, DType::F32, &dev)?;
// ... train ...
let qlinear = QuantizedLinear::from_linear(&linear, &QuantConfig::int8())?;
let output = qlinear.forward(&input)?; // dequantizes weights on the fly
Fields§
in_features: usize
Number of input features.
out_features: usize
Number of output features.
Implementations§
impl<B: Backend> QuantizedLinear<B>
pub fn from_linear(linear: &Linear<B>, config: &QuantConfig) -> Result<Self>
Create a QuantizedLinear from a trained Linear layer.
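A minimal sketch of converting at a lower bit-width. QuantConfig::int4() is an assumption here; only int8() appears elsewhere on this page:

// INT4 halves storage again relative to INT8, at some accuracy cost.
let qlinear4 = QuantizedLinear::from_linear(&linear, &QuantConfig::int4())?;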
pub fn new(
    weight_q: QuantizedTensor,
    bias: Option<Tensor<B>>,
    device: B::Device,
) -> Self
Create from raw quantized data and optional bias tensor.
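A hedged sketch of direct construction; QuantizedTensor::from_tensor is a hypothetical helper, since this page does not document how a QuantizedTensor is produced:

// Hypothetical: quantize an existing FP32 weight tensor, then wrap it.
let weight_q = QuantizedTensor::from_tensor(&weight, &QuantConfig::int8())?;
let qlinear = QuantizedLinear::new(weight_q, Some(bias), dev.clone());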
pub fn weight_quantized(&self) -> &QuantizedTensor
Get the quantized weight data.
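For example, to get at the raw quantized data (say, for serialization, an assumed use case):

let qw = qlinear.weight_quantized();
// Inspecting scales or zero-points depends on the QuantizedTensor API,
// which is not documented on this page.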
pub fn memory_savings_bytes(&self) -> usize
Memory saved compared to FP32 weight storage.
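As a worked example for the 256×10 INT8 layer above: FP32 storage is 256 × 10 × 4 = 10240 bytes, INT8 storage is 256 × 10 × 1 = 2560 bytes, so the savings are roughly 7680 bytes. Whether quantization metadata such as scales is subtracted is an assumption to verify against the source:

let saved = qlinear.memory_savings_bytes();
// 256 * 10 weights, 4 bytes (FP32) vs 1 byte (INT8) each:
// roughly 7680 bytes, assuming scale/zero-point metadata is not counted.
println!("saved {} bytes", saved);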
Trait Implementations§
impl<B: Backend> Debug for QuantizedLinear<B>
impl<B: Backend> Module<B> for QuantizedLinear<B>
fn forward(&self, x: &Tensor<B>) -> Result<Tensor<B>>
Compute the output tensor from the input tensor.
This defines the layer’s computation (forward pass).
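Shapes presumably follow the usual linear-layer convention, which is an assumption for this crate; Tensor::randn is likewise an assumed constructor:

// input: [batch, in_features] -> output: [batch, out_features]
let x = Tensor::randn(&[32, 256], DType::F32, &dev)?; // assumed constructor
let y = qlinear.forward(&x)?; // y: [32, 10]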
fn parameters(&self) -> Vec<Tensor<B>>
Return all trainable parameters of this module.
The optimizer uses these to update weights during training.
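Note that for a quantized layer the INT8/INT4 weights are typically not trainable, so this likely returns only the bias (if present); that is an inference, not something this page states:

let params = qlinear.parameters();
println!("{} trainable tensor(s)", params.len());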
fn set_training(&self, _training: bool)
Set training or evaluation mode.
fn is_training(&self) -> bool
Whether the module is in training mode (default: true).
fn num_parameters(&self) -> usize
Total number of scalar parameters in this module.
fn trainable_params_count(&self) -> usize
Number of trainable (variable) parameters.
fn frozen_parameters(&self) -> Vec<Tensor<B>>
Freeze all parameters: returns new parameter tensors with is_variable = false, preventing gradient accumulation.
Auto Trait Implementations§
impl<B> Freeze for QuantizedLinear<B>
impl<B> RefUnwindSafe for QuantizedLinear<B>
impl<B> Send for QuantizedLinear<B>
impl<B> Sync for QuantizedLinear<B>
impl<B> Unpin for QuantizedLinear<B>
impl<B> UnwindSafe for QuantizedLinear<B>
Blanket Implementations§
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.