Autoencoder
The LitAutoEncoder is a PyTorch Lightning module designed for unsupervised learning tasks. It consists of an encoder and a decoder network. The autoencoder's role in the project is to learn a compressed, dense representation of the input data (encoding) and then reconstruct the input data from this representation (decoding). This process helps in understanding the underlying structure of the data and is useful for tasks like anomaly detection, data denoising, and dimensionality reduction.
Key Features
- Encoder: Compresses input data into a latent representation.
- Decoder: Reconstructs the input data from the latent representation.
- Loss Function: Mean Squared Error (MSE) is used to measure reconstruction quality.
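The following minimal sketch shows how these pieces fit together: the encoder maps a flattened input to a low-dimensional latent code, the decoder maps it back, and MSE measures how closely the reconstruction matches the original. The layer sizes (flattened 28 × 28 inputs, a 3-dimensional latent space) are illustrative assumptions, not necessarily the architecture used in this project.

```python
import torch
from torch import nn
import torch.nn.functional as F

# Illustrative encoder/decoder; the actual architectures in the project may differ.
encoder = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 3))
decoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 28 * 28))

x = torch.rand(8, 28 * 28)        # a batch of flattened inputs
z = encoder(x)                    # latent representation (encoding)
x_hat = decoder(z)                # reconstruction (decoding)
loss = F.mse_loss(x_hat, x)       # reconstruction quality (MSE)
```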
Autoencoder Class API
`uv_datascience_project_template.lit_auto_encoder`

`LitAutoEncoder(encoder, decoder)`

Bases: `LightningModule`

A simple autoencoder model.
| PARAMETER | DESCRIPTION |
|---|---|
| `encoder` | The encoder component, responsible for encoding input data. |
| `decoder` | The decoder component, responsible for decoding encoded data. |
Source code in `src/uv_datascience_project_template/lit_auto_encoder.py`, lines 16–19.
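As a usage illustration, the module might be constructed and trained as in the sketch below. The Lightning import path (`lightning` rather than `pytorch_lightning`), the encoder/decoder architectures, and the stand-in dataset are assumptions made for the example, not taken from the project.

```python
import lightning as L  # assumption: the project may instead depend on `pytorch_lightning`
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

from uv_datascience_project_template.lit_auto_encoder import LitAutoEncoder

# Illustrative encoder/decoder networks.
encoder = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 3))
decoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 28 * 28))

autoencoder = LitAutoEncoder(encoder, decoder)

# Random stand-in data shaped like flattened 28x28 images, with dummy labels.
dataset = TensorDataset(torch.rand(256, 28 * 28), torch.zeros(256, dtype=torch.long))
train_loader = DataLoader(dataset, batch_size=32)

trainer = L.Trainer(max_epochs=1)
trainer.fit(model=autoencoder, train_dataloaders=train_loader)
```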
`configure_optimizers()`

Configure the Adam optimizer.
Source code in `src/uv_datascience_project_template/lit_auto_encoder.py`, lines 42–45.
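Since the source listing is collapsed here, the sketch below shows the typical shape of this hook for a LightningModule; the learning rate is an assumed value, not taken from the project.

```python
import torch

# Sketch of the hook; this would live inside the LitAutoEncoder class.
def configure_optimizers(self):
    # Adam over all model parameters; lr=1e-3 is an assumed default.
    return torch.optim.Adam(self.parameters(), lr=1e-3)
```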
`training_step(batch, batch_idx)`

Performs a single training step for the model.
| PARAMETER | DESCRIPTION |
|---|---|
| `batch` | A tuple containing the input data (`x`) and the corresponding labels (`y`). |
| `batch_idx` | The index of the current batch. |
| RETURNS | DESCRIPTION |
|---|---|
| `Tensor` | The computed loss for the current training step. |
Source code in `src/uv_datascience_project_template/lit_auto_encoder.py`, lines 21–40.
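With the source listing collapsed, the sketch below gives a rough guide to how a training step for this kind of autoencoder typically looks. The attribute names (`self.encoder`, `self.decoder`), the flattening of the input, and the logging call are assumptions about the implementation, not a copy of the project's code.

```python
import torch.nn.functional as F
from torch import Tensor

# Sketch of the hook; this would live inside the LitAutoEncoder class.
def training_step(self, batch, batch_idx) -> Tensor:
    x, y = batch                  # labels are unused for reconstruction
    x = x.view(x.size(0), -1)     # flatten inputs (assumed preprocessing)
    z = self.encoder(x)           # encode to the latent representation
    x_hat = self.decoder(z)       # decode back to input space
    loss = F.mse_loss(x_hat, x)   # reconstruction loss (MSE)
    self.log("train_loss", loss)  # assumed logging call
    return loss                   # the returned loss drives the optimizer step
```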