# Training
Training optimizes the autoencoder to minimize the reconstruction error on the MNIST input data.
## Steps
- Data Preparation: Load and preprocess the MNIST dataset.
- Model Initialization: Instantiate the LitAutoEncoder model with encoder and decoder.
- Training Loop: Use PyTorch Lightning's Trainer to handle the training process (a minimal sketch of all three steps follows this list).
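For orientation, here is a minimal sketch of what these three steps look like with the public PyTorch Lightning API. The layer sizes, batch size, and hyperparameters below are illustrative assumptions, not this template's actual configuration; the project's own entry point is `train_litautoencoder`, documented further down.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
import lightning as L


class LitAutoEncoder(L.LightningModule):
    """Minimal Lightning autoencoder for 28x28 MNIST images."""

    def __init__(self, encoder: nn.Sequential, decoder: nn.Sequential) -> None:
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder

    def training_step(self, batch, batch_idx):
        x, _ = batch  # labels are unused for reconstruction
        x = x.view(x.size(0), -1)  # flatten 28x28 images to 784-d vectors
        x_hat = self.decoder(self.encoder(x))
        loss = nn.functional.mse_loss(x_hat, x)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


# Step 1: data preparation
dataset = datasets.MNIST("data", download=True, transform=transforms.ToTensor())
loader = DataLoader(dataset, batch_size=64)

# Step 2: model initialization with illustrative encoder/decoder sizes
encoder = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 3))
decoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 28 * 28))
model = LitAutoEncoder(encoder, decoder)

# Step 3: training loop handled by the Lightning Trainer
trainer = L.Trainer(max_epochs=10)
trainer.fit(model, train_dataloaders=loader)
```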
## Example
```python
from uv_datascience_project_template.train_autoencoder import train_litautoencoder

# Train the autoencoder; `settings` holds the model, training, and data
# configuration (see the Training API reference below).
encoder, decoder, is_trained, checkpoint_path = train_litautoencoder(settings)
```
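Note that hyperparameters such as the number of epochs and the learning rate are not passed as individual arguments; they are read from the `settings` object along with the model and data configuration.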
## Training API
### `uv_datascience_project_template.train_autoencoder`

#### `train_litautoencoder(settings)`

Trains a LitAutoEncoder model on the MNIST dataset and returns the trained encoder, the decoder, a flag indicating training completion, and the checkpoint path.
| PARAMETER | TYPE | DESCRIPTION |
| --- | --- | --- |
| `settings` | | The settings object containing model, training, and data configurations. |
| RETURNS | DESCRIPTION |
| --- | --- |
| `tuple[Sequential, Sequential, Literal[True], str]` | A tuple containing the trained encoder, the decoder, a boolean flag indicating that the model has been successfully trained, and the path to the saved model checkpoint. |
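The returned checkpoint path can be used to restore the trained model later. Below is a minimal sketch, assuming `LitAutoEncoder` is a standard `LightningModule` whose constructor takes the encoder and decoder; its import path here is hypothetical, while `load_from_checkpoint` itself is part of the PyTorch Lightning API.

```python
from uv_datascience_project_template.train_autoencoder import train_litautoencoder

# Hypothetical import path for the LitAutoEncoder class.
from uv_datascience_project_template.lit_auto_encoder import LitAutoEncoder

encoder, decoder, is_trained, checkpoint_path = train_litautoencoder(settings)

# load_from_checkpoint rebuilds the module from the saved checkpoint; the
# encoder/decoder kwargs are forwarded to the module's constructor on restore.
model = LitAutoEncoder.load_from_checkpoint(
    checkpoint_path, encoder=encoder, decoder=decoder
)
model.eval()
```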
Source code in `src/uv_datascience_project_template/train_autoencoder.py`