autoencoder.models#

Module Contents#

Classes#

Encoder

Feature selection encoder

Decoder

Standard decoder. It generates a network from input_size to output_size. The layers are generated as

ConcreteAutoencoder

Trains a concrete autoencoder

BaseDecoder

Internal module used as a base for the FCNDecoder and the SphericalDecoder.

FCNDecoder

Fully Connected Network decoder

SphericalDecoder

Spherical decoder

Functions#

init_weights_orthogonal(m)

If the PyTorch module is a Linear layer, then initialize its weights according to torch.nn.init.orthogonal.

autoencoder.models.init_weights_orthogonal(m)#

If the PyTorch module is a Linear layer, then initialize its weights according to torch.nn.init.orthogonal.

Parameters

m (torch.nn.Module) – input module
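
A minimal usage sketch, assuming a plain PyTorch model; torch.nn.Module.apply calls the function on every submodule:

```python
import torch

from autoencoder.models import init_weights_orthogonal

model = torch.nn.Sequential(
    torch.nn.Linear(16, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 4),
)
# apply() visits every submodule, so each Linear layer gets
# orthogonally initialized weights.
model.apply(init_weights_orthogonal)
```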

class autoencoder.models.Encoder(input_size, output_size, max_temp=10.0, min_temp=0.1, reg_threshold=3.0, reg_eps=1e-10)#

Bases: torch.nn.Module

Feature selection encoder. Implemented according to “Concrete Autoencoders for Differentiable Feature Selection and Reconstruction” [1].

Parameters
  • input_size (int) – size of the input layer. Should be the same as the output_size of the decoder.

  • output_size (int) – size of the latent layer. Should be the same as the input_size of the decoder.

  • max_temp (float) – maximum temperature for Gumbel Softmax. Defaults to 10.0.

  • min_temp (float) – minimum temperature for Gumbel Softmax. Defaults to 0.1.

  • reg_threshold (float) – regularization threshold. The encoder will be penalized when the sum of probabilities for a selection neuron exceeds this threshold. Defaults to 3.0.

  • reg_eps (float) – regularization epsilon. Minimum value for the clamped softmax function in regularization term. Defaults to 1e-10.

property latent_features(self)#
forward(self, x)#

Uses the trained encoder to make inferences.

Parameters

x (torch.Tensor) – input data. Should be the same size as the encoder input.

Returns

encoder output of size output_size.

Return type

torch.Tensor
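
A minimal inference sketch; the sizes here are illustrative, not defaults:

```python
import torch

from autoencoder.models import Encoder

# Hypothetical sizes: select 20 out of 100 input features.
encoder = Encoder(input_size=100, output_size=20)
x = torch.rand(8, 100)  # batch of 8 samples
z = encoder(x)          # encoder output of size output_size: (8, 20)
```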

update_temp(self, current_epoch, max_epochs)#
Return type

torch.Tensor
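
Concrete autoencoders anneal the temperature exponentially from max_temp down to min_temp over training [1]; the sketch below shows that schedule and is an assumption, not necessarily the exact implementation:

```python
# Hypothetical sketch of the exponential schedule from [1]; the actual
# update_temp may differ in detail.
max_temp, min_temp = 10.0, 0.1
current_epoch, max_epochs = 50, 100
temp = max_temp * (min_temp / max_temp) ** (current_epoch / max_epochs)  # 1.0
```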

calc_mean_max(self)#
Return type

torch.Tensor

regularization(self)#

Regularization term according to https://homes.esat.kuleuven.be/~abertran/reports/TS_JNE_2021.pdf. The sum of probabilities for a selection neuron is penalized if it is larger than the threshold value. The returned value is added to the loss function.

Return type

float
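
One plausible reading of this term as code: for each input feature, sum the probability mass that all selection neurons place on it, and penalize totals above the threshold. This is a sketch under assumed shapes; the exact reduction in the implementation may differ:

```python
import torch

reg_threshold, reg_eps = 3.0, 1e-10
# Hypothetical shape: 20 selection neurons choosing among 100 input features.
logits = torch.randn(20, 100)

# Clamped softmax: each neuron's selection probabilities over the inputs.
probs = torch.clamp(torch.softmax(logits, dim=1), min=reg_eps)
# Penalize input features that accumulate more than reg_threshold of
# probability mass across all selection neurons.
penalty = torch.sum(torch.relu(probs.sum(dim=0) - reg_threshold))
```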

class autoencoder.models.Decoder(input_size, output_size, n_hidden_layers, negative_slope=0.2)#

Bases: torch.nn.Module

Standard decoder. It generates a network from input_size to output_size. The layers are generated as follows:

```python
import numpy as np

step_size = abs(output_size - input_size) // n_hidden_layers
layer_sizes = np.arange(input_size, output_size, step_size)
```
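
For example, with input_size=20, output_size=100 and n_hidden_layers=2:

```python
import numpy as np

step_size = abs(100 - 20) // 2        # 40
layer_sizes = np.arange(20, 100, 40)  # array([20, 60])
```

so the layer sizes run 20 and 60 before the final output layer of size 100 (presumably appended by the implementation).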

Parameters
  • input_size (int) – size of the latent layer. Should be the same as the output_size of the encoder.

  • output_size (int) – size of the output layer. Should be the same as input_size of the encoder.

  • n_hidden_layers (int) – number of hidden layers. If 0 then the input will be directly connected to the output.

  • negative_slope (float) – negative slope for the Leaky ReLu activation layer. Defaults to 0.2.

forward(self, x)#

Uses the trained decoder to make inferences.

Parameters

x (torch.Tensor) – input data. Should be the same size as the decoder input.

Returns

decoder output of size output_size.

Return type

torch.Tensor
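
A minimal usage sketch matching the worked example above (sizes are illustrative):

```python
import torch

from autoencoder.models import Decoder

decoder = Decoder(input_size=20, output_size=100, n_hidden_layers=2)
y = decoder(torch.rand(8, 20))  # decoder output of size output_size: (8, 100)
```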

class autoencoder.models.ConcreteAutoencoder(input_output_size=1344, latent_size=500, decoder_hidden_layers=2, learning_rate=0.001, max_temp=10.0, min_temp=0.1, reg_lambda=0.0, reg_threshold=1.0)#

Bases: pytorch_lightning.LightningModule

Trains a concrete autoencoder. Implemented according to “Concrete Autoencoders for Differentiable Feature Selection and Reconstruction” [1].

Parameters
  • input_output_size (int) – size of the input and output layer.

  • latent_size (int) – size of the latent layer.

  • decoder_hidden_layers (int) – number of hidden layers for the decoder. Defaults to 2.

  • learning_rate (float) – learning rate for the optimizer. Defaults to 1e-3.

  • max_temp (float) – maximum temperature for Gumbel Softmax. Defaults to 10.0.

  • min_temp (float) – minimum temperature for Gumbel Softmax. Defaults to 0.1.

  • reg_lambda (float) – how much weight to apply to the regularization term. If the value is 0.0 then no regularization will be applied. Defaults to 0.0.

  • reg_threshold (float) – regularization threshold. The encoder will be penalized when the sum of probabilities for a selection neuron exceeds this threshold. Defaults to 1.0.
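
A minimal training sketch with the default sizes; dm stands in for a hypothetical LightningDataModule supplying the training data:

```python
import pytorch_lightning as pl

from autoencoder.models import ConcreteAutoencoder

model = ConcreteAutoencoder(input_output_size=1344, latent_size=500)
trainer = pl.Trainer(max_epochs=100)  # epoch budget is illustrative
trainer.fit(model, datamodule=dm)     # dm: hypothetical LightningDataModule
```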

forward(self, x)#

Uses the trained autoencoder to make inferences.

Parameters

x (torch.Tensor) – input data. Should be the same size as encoder input.

Returns

(encoder output, decoder output)

Return type

tuple[torch.Tensor, torch.Tensor]
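
Since forward returns both outputs, inference with the model above looks like (sizes illustrative):

```python
import torch

x = torch.rand(8, 1344)  # batch matching input_output_size
z, x_hat = model(x)      # z: (8, 500) latent, x_hat: (8, 1344) reconstruction
```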

configure_optimizers(self)#
Return type

torch.optim.Adam

training_step(self, batch, batch_idx)#
Parameters
  • batch (torch.Tensor) –

  • batch_idx (int) –

Return type

torch.Tensor

validation_step(self, batch, batch_idx)#
Parameters
  • batch (torch.Tensor) –

  • batch_idx (int) –

Return type

torch.Tensor

test_step(self, batch, batch_idx, dataloader_idx)#
Parameters
  • batch (torch.Tensor) –

  • batch_idx (int) –

  • dataloader_idx (int) –

Return type

torch.Tensor

on_train_epoch_start(self)#
Return type

None

on_epoch_end(self)#
Return type

None

_shared_eval(self, batch, dataloader_idx, prefix)#

Calculate the loss for a batch.

Parameters
  • batch (torch.Tensor) – batch data.

  • dataloader_idx (int) – dataloader index.

  • prefix (str) – prefix for logging.

Returns

calculated loss.

Return type

torch.Tensor

class autoencoder.models.BaseDecoder(learning_rate, *args, **kwargs)#

Bases: pytorch_lightning.LightningModule

Internal module used as a base for the FCNDecoder and the SphericalDecoder.

Parameters

learning_rate (float) – Learning rate

configure_optimizers(self)#
Return type

torch.optim.Adam

training_step(self, batch, batch_idx)#
Parameters
  • batch (Dict[str, torch.Tensor]) –

  • batch_idx (int) –

Return type

torch.Tensor

validation_step(self, batch, batch_idx)#
Parameters
  • batch (torch.Tensor) –

  • batch_idx (int) –

Return type

torch.Tensor

test_step(self, batch, batch_idx, dataloader_idx)#
Parameters
  • batch (torch.Tensor) –

  • batch_idx (int) –

  • dataloader_idx (int) –

Return type

torch.Tensor

_shared_eval(self, batch, batch_idx, prefix)#

Calculate the loss for a batch.

Parameters
  • batch (torch.Tensor) – batch data.

  • batch_idx (int) – batch id.

  • prefix (str) – prefix for logging.

Returns

calculated loss.

Return type

torch.Tensor

class autoencoder.models.FCNDecoder(input_size, output_size, hidden_layers=2, learning_rate=0.001)#

Bases: BaseDecoder

Fully Connected Network decoder

Parameters
  • input_size (int) – input size of the network

  • output_size (int) – output size of the network

  • hidden_layers (int) – number of hidden layers. Defaults to 2.

  • learning_rate (float) – learning rate. Defaults to 1e-3.
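
A minimal usage sketch (sizes mirror the autoencoder defaults):

```python
import torch

from autoencoder.models import FCNDecoder

decoder = FCNDecoder(input_size=500, output_size=1344)
y = decoder(torch.rand(8, 500))  # reconstruct 1344 features from 500 latents
```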

forward(self, x)#
class autoencoder.models.SphericalDecoder(parameters_file_path, sh_degree, n_shells, learning_rate=0.001)#

Bases: BaseDecoder

Spherical decoder

Parameters
  • parameters_file_path (str) – Path string of the parameters file

  • sh_degree (int) – Spherical Harmonics degree

  • n_shells (int) – Number of b-values

  • learning_rate (float) – Learning rate. Defaults to 1e-3.
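
A minimal instantiation sketch; the file path and settings below are illustrative, and the parameters file format is project-specific:

```python
from autoencoder.models import SphericalDecoder

# Hypothetical values: the real parameters file, SH degree and shell count
# depend on the data at hand.
decoder = SphericalDecoder("parameters.yml", sh_degree=8, n_shells=4)
```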

forward(self, x)#
Parameters

x (torch.Tensor) –