Training
Training optimizes the autoencoder to minimize the reconstruction error between each input and the output the model produces from it.
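Concretely, "reconstruction error" is usually the mean squared error between an input and its reconstruction. The snippet below is illustrative only; the stand-in encoder/decoder layer sizes are assumptions for the sketch, not the project's actual architecture:

```python
import torch
import torch.nn.functional as F
from torch import nn

# Stand-in modules to illustrate the objective (not the project's own).
encoder = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 3))
decoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 28 * 28))

x = torch.rand(8, 28 * 28)    # dummy batch of flattened images
x_hat = decoder(encoder(x))   # reconstruct the input
loss = F.mse_loss(x_hat, x)   # mean squared reconstruction error
```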
Steps
- Data Preparation: Load and preprocess the dataset.
- Model Initialization: Instantiate the LitAutoEncoder model.
- Training Loop: Use PyTorch Lightning's Trainer to handle the training process (see the sketch after this list).
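These three steps map onto a few lines of Lightning code. The sketch below is a minimal illustration, not the project's documented entry point (that is `train_litautoencoder()`, described under Training API); in particular, the `LitAutoEncoder` import path and its no-argument constructor are assumptions:

```python
import lightning as L
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import MNIST

# Hypothetical import path; adjust to where LitAutoEncoder lives in this project.
from uv_datascience_project_template.lit_auto_encoder import LitAutoEncoder

# 1. Data Preparation: download MNIST and wrap it in a DataLoader.
dataset = MNIST(root="data", download=True, transform=transforms.ToTensor())
loader = DataLoader(dataset, batch_size=64)

# 2. Model Initialization: instantiate the LightningModule
#    (constructor arguments, if any, are an assumption here).
model = LitAutoEncoder()

# 3. Training Loop: delegate the optimization loop to Lightning's Trainer.
trainer = L.Trainer(max_epochs=10)
trainer.fit(model, train_dataloaders=loader)
```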
Example
from uv_datascience_project_template.train_autoencoder import train_litautoencoder
# Train the autoencoder on MNIST; the function takes no arguments and
# returns the trained encoder, decoder, and a completion flag.
encoder, decoder, is_trained = train_litautoencoder()
Training API
uv_datascience_project_template.train_autoencoder
train_litautoencoder()
Trains a LitAutoEncoder model on the MNIST dataset and returns the trained encoder, decoder, and a flag indicating training completion.
| RETURNS | DESCRIPTION |
| --- | --- |
| `tuple[Sequential, Sequential, Literal[True]]` | A tuple containing the trained encoder, decoder, and a boolean flag indicating that the model has been successfully trained. |
Source code in src/uv_datascience_project_template/train_autoencoder.py, lines 14–43.
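Because the return value is a pair of `Sequential` modules plus a completion flag, the trained networks can be used directly for inference. A minimal sketch follows; the flattened 28x28 input shape is an assumption based on the MNIST training data:

```python
import torch

from uv_datascience_project_template.train_autoencoder import train_litautoencoder

# Train, then unpack the returned modules and the completion flag.
encoder, decoder, is_trained = train_litautoencoder()

# Encode and reconstruct a dummy flattened image (shape is an assumption).
x = torch.rand(1, 28 * 28)
embedding = encoder(x)
reconstruction = decoder(embedding)
print(is_trained, embedding.shape, reconstruction.shape)
```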