Layers

astroNN provides some customized layers under the astroNN.nn.layers module, which are built on tensorflow.keras. You can just treat astroNN customized layers as conventional Keras layers.

Monte Carlo Dropout Layer

class astroNN.nn.layers.MCDropout(*args, **kwargs)[source]

Dropout layer for Bayesian Neural Network; this layer will always be on regardless of the learning phase flag

Parameters:
  • rate (float) – Dropout Rate between 0 and 1

  • disable (boolean) – Dropout on or off

Returns:

A layer

Return type:

object

History:

2018-Feb-05 - Written - Henry Leung (University of Toronto)

call(inputs, training=None)[source]
Note:

Equivalent to __call__()

Parameters:

inputs (tf.Tensor) – Tensor to be applied

Returns:

Tensor after applying the layer

Return type:

tf.Tensor

get_config()[source]
Returns:

Dictionary of configuration

Return type:

dict

MCDropout is basically Keras's Dropout layer without seed argument support. Moreover, the layer will ignore Keras's learning phase flag, so the layer always stays on, even in the prediction phase.

Dropout can be described by the following formula. Let's say the \(i\)-th neuron after activation has value \(y_i\):

\[\begin{split}r_{i} = \text{Bernoulli} (p) \\ \hat{y_i} = r_{i} * y_i\end{split}\]
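
As a quick illustration of the formula above, here is a plain NumPy sketch (not the layer's actual implementation; note that Keras' Dropout additionally rescales the kept activations by 1/(1 - rate)):

import numpy as np

rng = np.random.default_rng(0)

y = rng.normal(size=10)                 # activations y_i of some layer
p = 0.8                                 # Bernoulli parameter from the formula above

r = rng.binomial(1, p, size=y.shape)    # r_i ~ Bernoulli(p)
y_hat = r * y                           # hat{y_i} = r_i * y_i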

And here is an example of usage

def keras_model():
    # Define your Keras model here, assuming you are using the functional API
    b_dropout = MCDropout(0.2)(some_keras_layer)
    return model

If you really want to disable the dropout, you can do it by

# Define your Keras model here, assuming you are using the functional API
b_dropout = MCDropout(0.2, disable=True)(some_keras_layer)
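
Because the dropout stays on at prediction time, you can sample the model repeatedly to get a Monte Carlo estimate of the prediction and its uncertainty. A minimal sketch, where model, x and mc_num are placeholders for your own trained model, data and number of samples:

import numpy as np

mc_num = 100  # number of Monte Carlo samples, your choice
predictions = np.stack([model.predict(x) for _ in range(mc_num)])

mean_prediction = predictions.mean(axis=0)    # final prediction
model_uncertainty = predictions.std(axis=0)   # spread caused by MC dropout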

Monte Carlo Spatial Dropout Layer

MCSpatialDropout1D should be used with Conv1D and MCSpatialDropout2D should be used with Conv2D.

class astroNN.nn.layers.MCSpatialDropout1D(*args, **kwargs)[source]

Spatial 1D version of the dropout layer for Bayesian Neural Network; this layer will always be on regardless of the learning phase flag

Parameters:
  • rate (float) – Dropout Rate between 0 and 1

  • disable (boolean) – Dropout on or off

Returns:

A layer

Return type:

object

History:

2018-Mar-07 - Written - Henry Leung (University of Toronto)

call(inputs, training=None)
Note:

Equivalent to __call__()

Parameters:

inputs (tf.Tensor) – Tensor to be applied

Returns:

Tensor after applying the layer

Return type:

tf.Tensor

get_config()
Returns:

Dictionary of configuration

Return type:

dict

class astroNN.nn.layers.MCSpatialDropout2D(*args, **kwargs)[source]

Spatial 2D version of the dropout layer for Bayesian Neural Network; this layer will always be on regardless of the learning phase flag

Parameters:
  • rate (float) – Dropout Rate between 0 and 1

  • disable (boolean) – Dropout on or off

Returns:

A layer

Return type:

object

History:

2018-Mar-07 - Written - Henry Leung (University of Toronto)

call(inputs, training=None)
Note:

Equivalent to __call__()

Parameters:

inputs (tf.Tensor) – Tensor to be applied

Returns:

Tensor after applying the layer

Return type:

tf.Tensor

get_config()
Returns:

Dictionary of configuration

Return type:

dict

MCSpatialDropout1D and MCSpatialDropout2D are basically Keras's SpatialDropout layers without seed and noise_shape argument support. Moreover, the layers will ignore Keras's learning phase flag, so the layers always stay on, even in the prediction phase.

This version performs the same function as Dropout; however, it drops entire 1D feature maps instead of individual elements. If adjacent frames within feature maps are strongly correlated (as is normally the case in early convolution layers), then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout1D will help promote independence between feature maps and should be used instead.

For technical details, you can refer to the original paper arXiv:1411.4280
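
To make the difference concrete, here is a NumPy sketch (illustration only, not the layer's implementation) of the two masking schemes for a Conv1D-style input of shape (batch, steps, channels):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 100, 16))    # (batch, steps, channels), e.g. output of a Conv1D layer
rate = 0.2

# regular dropout: every element is dropped independently
element_mask = rng.binomial(1, 1 - rate, size=x.shape)
x_dropout = x * element_mask

# spatial dropout: one Bernoulli draw per feature map, broadcast over all timesteps
channel_mask = rng.binomial(1, 1 - rate, size=(32, 1, 16))
x_spatial_dropout = x * channel_mask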

And here is an example of usage

def keras_model():
    # Define your Keras model here, assuming you are using the functional API
    b_dropout = MCSpatialDropout1D(0.2)(keras_conv_layer)
    return model

If you really want to disable the dropout, you can do it by

# Define your Keras model here, assuming you are using the functional API
b_dropout = MCSpatialDropout1D(0.2, disable=True)(keras_conv_layer)

Monte Carlo Gaussian Dropout Layer

class astroNN.nn.layers.MCGaussianDropout(*args, **kwargs)[source]

Gaussian dropout layer for Bayesian Neural Network; this layer will always be on regardless of the learning phase flag, with multiplicative noise of standard deviation sqrt(rate / (1 - rate))

Parameters:
  • rate (float) – Dropout Rate between 0 and 1

  • disable (boolean) – Dropout on or off

Returns:

A layer

Return type:

object

History:

2018-Mar-07 - Written - Henry Leung (University of Toronto)

call(inputs, training=None)[source]
Note:

Equivalent to __call__()

Parameters:

inputs (tf.Tensor) – Tensor to be applied

Returns:

Tensor after applying the layer

Return type:

tf.Tensor

get_config()[source]
Returns:

Dictionary of configuration

Return type:

dict

MCGaussianDropout is basically Keras's GaussianDropout layer without seed argument support. Moreover, the layer will ignore Keras's learning phase flag, so the layer always stays on, even in the prediction phase.

MCGaussianDropout should be used with caution for Bayesian Neural Networks; see https://arxiv.org/abs/1711.02989

Gaussian Dropout can be described by the following formula. Let's say the \(i\)-th neuron after activation has value \(y_i\):

\[\begin{split}r_{i} = \mathcal{N}\bigg(1, \sqrt{\frac{p}{1-p}}\bigg) \\ \hat{y_i} = r_{i} * y_i\end{split}\]
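
The multiplicative noise above can be sketched in NumPy as follows (illustration only, not the layer's implementation):

import numpy as np

rng = np.random.default_rng(0)

y = rng.normal(size=10)                 # activations y_i
rate = 0.2
std = np.sqrt(rate / (1 - rate))        # standard deviation from the docstring above

r = rng.normal(loc=1.0, scale=std, size=y.shape)    # r_i ~ N(1, sqrt(rate / (1 - rate)))
y_hat = r * y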

And here is an example of usage

def keras_model():
    # Define your Keras model here, assuming you are using the functional API
    b_dropout = MCGaussianDropout(0.2)(some_keras_layer)
    return model

If you really want to disable the dropout, you can do it by

# Define your Keras model here, assuming you are using the functional API
b_dropout = MCGaussianDropout(0.2, disable=True)(some_keras_layer)

Error Propagation Layer

class astroNN.nn.layers.ErrorProp(*args, **kwargs)[source]

Error propagation layer that adds Gaussian noise (mean=0, std=err) from the input error tensor during the testing phase

Returns:

A layer

Return type:

object

History:

2018-Feb-05 - Written - Henry Leung (University of Toronto)

call(inputs, training=None)[source]
Note:

Equivalent to __call__()

Parameters:

inputs (list[tf.Tensor]) – a list of tensors, [input_tensor, input_error_tensor]

Returns:

Tensor after applying the layer

Return type:

tf.Tensor

get_config()[source]
Returns:

Dictionary of configuration

Return type:

dict

ErrorProp is a layer designed to do error propagation in a neural network. It acts as an identity transformation layer during the training phase but adds Gaussian noise to the input during the test phase. The idea is that if you have known uncertainty in the input, you can use this layer to understand how that input uncertainty (which this layer assumes is Gaussian) affects the output. Since this layer adds random Gaussian noise of known width to the input, you can run the model prediction a few times; the mean of those predictions will be the final prediction and the standard deviation of those predictions will be the propagated uncertainty.

ErrorProp can be imported by

from astroNN.nn.layers import ErrorProp

And here is an example of usage

def keras_model():
    # Define your Keras model here, assuming you are using the functional API
    input = Input(.....)
    input_error = Input(.....)
    input_with_error = ErrorProp()([input, input_error])
    return model
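
Since the Gaussian noise is resampled on every forward pass at test time, you can propagate the input uncertainty by running the prediction repeatedly; a sketch, where model, x and x_err are placeholders for your own trained model, input data and input uncertainty:

import numpy as np

mc_num = 100
predictions = np.stack([model.predict([x, x_err]) for _ in range(mc_num)])

final_prediction = predictions.mean(axis=0)
propagated_uncertainty = predictions.std(axis=0)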

KL-Divergence Layer for Variational Autoencoder

class astroNN.nn.layers.KLDivergenceLayer(*args, **kwargs)[source]
Identity transform layer that adds KL divergence to the final model loss.
The KL divergence is used to force the latent space to match the prior (in this case a unit Gaussian).
Returns:

A layer

Return type:

object

History:

2018-Feb-05 - Written - Henry Leung (University of Toronto)

call(inputs, training=None)[source]
Note:

Equivalent to __call__()

Parameters:

inputs (tf.Tensor) – Tensor to be applied, the concatenated tf.Tensor of mean and std in the latent space

Returns:

Tensor after applying the layer

Return type:

tf.Tensor

get_config()[source]
Returns:

Dictionary of configuration

Return type:

dict

KLDivergenceLayer is a layer designed to be used in a Variational Autoencoder. It acts as an identity transformation layer but adds the KL-divergence to the total loss.
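
For reference, with a unit Gaussian prior and a latent space parameterised by mean \(\mu_i\) and log-variance \(\log \sigma_i^2\) (as in the example below), the added KL divergence takes the usual form

\[D_{KL} = -\frac{1}{2} \sum_{i} \left( 1 + \log \sigma_i^2 - \mu_i^2 - \sigma_i^2 \right)\]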

KLDivergenceLayer can be imported by

from astroNN.nn.layers import KLDivergenceLayer

And here is an example of usage

def keras_model():
    # Define your Keras model here, assuming you are using the functional API
    z_mu = Encoder_Mean_Layer(.....)
    z_log_var = Encoder_Var_Layer(.....)
    z_mu, z_log_var = KLDivergenceLayer()([z_mu, z_log_var])
    # And then the decoder or whatever
    return model

Mean and Variance Calculation Layer for Bayesian Neural Net

class astroNN.nn.layers.FastMCInferenceMeanVar(*args, **kwargs)[source]

Take mean and variance of the results of a TimeDistributed layer, assuming axis=1 is the timestamp axis

Returns:

A layer

Return type:

object

History:
2018-Feb-02 - Written - Henry Leung (University of Toronto)
2018-Apr-13 - Update - Henry Leung (University of Toronto)
call(inputs, training=None)[source]
Note:

Equivalent to __call__()

Parameters:

inputs (tf.Tensor) – Tensor to be applied

Returns:

Tensor after applying the layer

Return type:

tf.Tensor

get_config()[source]
Returns:

Dictionary of configuration

Return type:

dict

If you want fast MC inference on GPU and you are using Keras models, you should just use FastMCInference.

FastMCInferenceMeanVar is a layer designed to be used with a Bayesian Neural Network with dropout variational inference. In general, FastMCInferenceMeanVar should be used together with FastMCInference. The advantage of the FastMCInferenceMeanVar layer is that the replicated data stays on the GPU (if any) and the mean and variance are calculated there while you are doing dropout variational inference.
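
Conceptually, the layer just reduces over the Monte Carlo axis (axis=1). A NumPy sketch of the same reduction, assuming mc_outputs has shape (batch, mc_num, output_dim):

import numpy as np

# stacked outputs of the same model under different dropout masks, shape (batch, mc_num, output_dim)
mc_outputs = np.random.normal(size=(64, 25, 10))

prediction = mc_outputs.mean(axis=1)          # Monte Carlo mean
model_uncertainty = mc_outputs.var(axis=1)    # Monte Carlo variance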

FastMCInferenceMeanVar can be imported by

from astroNN.nn.layers import FastMCInferenceMeanVar

And here is an example of usage

def keras_model():
    # Define your Keras model here, assuming you are using the functional API
    input = Input(.....)
    monte_carlo_repeat = FastMCRepeat(mc_num_here)(input)
    # some layers here; you should use MCDropout from astroNN instead of Dropout from TensorFlow :)
    result_mean_var = FastMCInferenceMeanVar()(previous_layer_here)
    return model

model.compile(loss=loss_func_here, optimizer=optimizer_here)

# Use the model to predict
output = model.predict(x)

# with dropout variational inference
# prediction and model uncertainty (variance) from the model
mean = output[0]
variance = output[1]

Repeat Vector Layer for Bayesian Neural Net

class astroNN.nn.layers.FastMCRepeat(*args, **kwargs)[source]

Prepare data for inference by repeating the input n times at axis=1

Parameters:

n (int) – Number of Monte Carlo integration samples

Returns:

A layer

Return type:

object

History:
2018-Feb-02 - Written - Henry Leung (University of Toronto)
2018-Apr-13 - Update - Henry Leung (University of Toronto)
call(inputs, training=None)[source]
Note:

Equivalent to __call__()

Parameters:

inputs (tf.Tensor) – Tensor to be applied

Returns:

Tensor after applying the layer which is the repeated Tensor

Return type:

tf.Tensor

get_config()[source]
Returns:

Dictionary of configuration

Return type:

dict

If you want fast MC inference on GPU and you are using Keras models, you should just use FastMCInference.

FastMCRepeat is a layer that repeats the input data for the Monte Carlo integration required by a Bayesian Neural Network.

FastMCRepeat is designed to be used with a Bayesian Neural Network with dropout variational inference. In general, FastMCRepeat should be used together with FastMCInferenceMeanVar. The advantage of the FastMCRepeat layer is that the replicated data stays on the GPU (if any) and the mean and variance can be calculated there while you are doing dropout variational inference.
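
The repetition itself is just a tiling of the input along a new axis=1, taking the shape from (batch, features) to (batch, n, features); a TensorFlow sketch (illustration only, not the layer's exact implementation):

import tensorflow as tf

x = tf.random.normal((32, 7514))    # (batch, features)
n = 25                              # number of Monte Carlo samples

x_repeated = tf.tile(tf.expand_dims(x, axis=1), [1, n, 1])
print(x_repeated.shape)             # (32, 25, 7514)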

FastMCRepeat can be imported by

from astroNN.nn.layers import FastMCRepeat

And here is an example of usage

def keras_model():
    # Define your Keras model here, assuming you are using the functional API
    input = Input(.....)
    monte_carlo_repeat = FastMCRepeat(mc_num_here)(input)
    # some layers here; you should use MCDropout from astroNN instead of Dropout from TensorFlow :)
    result_mean_var = FastMCInferenceMeanVar()(previous_layer_here)
    return model

model.compile(loss=loss_func_here, optimizer=optimizer_here)

# Use the model to predict
output = model.predict(x)

# with dropout variational inference
# prediction and model uncertainty (variance) from the model
mean = output[0]
variance = output[1]

Fast Monte Carlo Integration Layer for Keras Model

class astroNN.nn.layers.FastMCInference(n, model, **kwargs)[source]

Turn a model into one that does fast Monte Carlo (Dropout, Flipout, etc.) inference on GPU

Parameters:

n (int) – Number of Monte Carlo integration samples

Returns:

A layer

Return type:

object

History:
2018-Apr-13 - Written - Henry Leung (University of Toronto)
2021-Apr-14 - Updated - Henry Leung (University of Toronto)
get_config()[source]
Returns:

Dictionary of configuration

Return type:

dict

FastMCInference is designed for fast Monte Carlo inference on GPU. One of the main challenges of MC integration on GPU is that you want the data to stay on the GPU so that the whole MC integration happens there, because moving data from drives to the GPU is a very expensive operation. FastMCInference will create a new Keras model that replicates the data on the GPU, does the Monte Carlo integration and calculates the mean and variance on the GPU, and returns the result.
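
Conceptually, the new model is roughly equivalent to chaining the layers described above; a rough functional-API sketch under that assumption (keras_model and mc_num_here are placeholders, and this is not the exact implementation):

from tensorflow.keras.layers import Input, TimeDistributed
from tensorflow.keras.models import Model
from astroNN.nn.layers import FastMCRepeat, FastMCInferenceMeanVar

new_input = Input(shape=keras_model.input_shape[1:])
repeated = FastMCRepeat(mc_num_here)(new_input)        # (batch, mc_num, ...)
mc_outputs = TimeDistributed(keras_model)(repeated)    # run the model on every Monte Carlo sample
mean_var = FastMCInferenceMeanVar()(mc_outputs)        # reduce over the Monte Carlo axis
fast_mc_model = Model(inputs=new_input, outputs=mean_var)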

Benchmark (Nvidia GTX 1060 6GB): for 98,000 APOGEE spectra with 7,514 pixels each, 25 forward passes traditionally took ~270 seconds; using FastMCInference, the exact same task took only ~65 seconds.

It can only be used with a Keras model. If you are using a customised model written purely in TensorFlow, you should use FastMCRepeat and FastMCInferenceMeanVar instead.

You can import the function from astroNN by

from astroNN.nn.layers import FastMCInference

# keras_model is your Keras model with 1 output which is a concatenation of labels prediction and predictive variance
keras_model = Model(....)

# fast_mc_model is the new Keras model capable of doing fast Monte Carlo integration on GPU
# mc_num_here is the number of Monte Carlo samples (forward passes) you want
fast_mc_model = FastMCInference(mc_num_here, keras_model)

# You can just use the Keras API with the new model, such as
result = fast_mc_model.predict(.....)

# here are the result dimensions
# labels_std is the standard deviation you used to normalise the labels (if any)
predictions = result[:, :(result.shape[1] // 2), 0]  # mean prediction
mc_dropout_uncertainty = result[:, :(result.shape[1] // 2), 1] * (labels_std ** 2)  # model uncertainty
predictions_var = np.exp(result[:, (result.shape[1] // 2):, 0]) * (labels_std ** 2)  # predictive uncertainty

Gradient Stopping Layer

class astroNN.nn.layers.StopGrad(*args, **kwargs)[source]

Stop gradient backpropagation via this layer during training; it acts as an identity layer during testing by default.

Returns:

A layer

Return type:

object

History:

2018-May-23 - Written - Henry Leung (University of Toronto)

call(inputs, training=None)[source]
Note:

Equivalent to __call__()

Parameters:

inputs (tf.Tensor) – Tensor to be applied

Returns:

Tensor after applying the layer which is just the original tensor

Return type:

tf.Tensor

get_config()[source]
Returns:

Dictionary of configuration

Return type:

dict

It uses tf.stop_gradient and acts as a Keras layer.
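
A minimal sketch of what tf.stop_gradient does (illustration only):

import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = tf.stop_gradient(x * 2.0) + x    # the first term contributes no gradient

print(tape.gradient(y, x))    # 1.0 instead of 3.0, since the stopped branch is ignored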

StopGrad can be imported by

from astroNN.nn.layers import StopGrad

It can be used with keras or tensorflow.keras; you just have to import the function from astroNN

def keras_model():
    # Define your Keras model here, assuming you are using the functional API
    input = Input(.....)
    # some layers ...
    stopped_grad_layer = StopGrad()(...)
    # some layers ...
    return model

For example, if you have a model with multiple branches and you only want errors to backpropagate to one of them but not the other:

from astroNN.nn.layers import StopGrad
# we use a zeros loss just to demonstrate that StopGrad works and no error backpropagates through the StopGrad layer
from astroNN.nn.losses import zeros_loss
import numpy as np
from astroNN.shared.nn_tools import cpu_fallback
import keras

cpu_fallback()  # force tf to use CPU

Input = keras.layers.Input
Dense = keras.layers.Dense
concatenate = keras.layers.concatenate
Model = keras.models.Model

# Data preparation
random_xdata = np.random.normal(0, 1, (100, 7514))
random_ydata = np.random.normal(0, 1, (100, 25))

input2 = Input(shape=[7514])
dense1 = Dense(100, name='normaldense')(input2)
dense2 = Dense(25, name='wanted_dense')(input2)
dense2_stopped = StopGrad(name='stopgrad', always_on=True)(dense2)
output2 = Dense(25, name='wanted_dense2')(concatenate([dense1, dense2_stopped]))
model2 = Model(inputs=input2, outputs=[output2, dense2])
model2.compile(optimizer=keras.optimizers.SGD(learning_rate=0.1),
               loss={'wanted_dense2': 'mse', 'wanted_dense': zeros_loss})

weight_b4_train = model2.get_layer(name='wanted_dense').get_weights()[0]
weight_b4_train2 = model2.get_layer(name='normaldense').get_weights()[0]
model2.fit(random_xdata, [random_ydata, random_ydata])
weight_a4_train = model2.get_layer(name='wanted_dense').get_weights()[0]
weight_a4_train2 = model2.get_layer(name='normaldense').get_weights()[0]

print(np.all(weight_b4_train == weight_a4_train))
# True, the weights of the Dense layer behind StopGrad did not change, i.e. no gradient update
print(np.all(weight_b4_train2 == weight_a4_train2))
# False, the weights of the normal Dense layer changed, i.e. a gradient update happened

Boolean Masking Layer

class astroNN.nn.layers.BoolMask(*args, **kwargs)[source]

Boolean masking layer; please note it is best to flatten the input before using BoolMask

Parameters:

mask (np.ndarray) – numpy boolean array as a mask for incoming tensor

Returns:

A layer

Return type:

object

History:

2018-May-28 - Written - Henry Leung (University of Toronto)

call(inputs, training=None)[source]
Note:

Equivalent to __call__()

Parameters:

inputs (tf.Tensor) – Tensor to be applied

Returns:

Tensor after applying the layer which is just the masked tensor

Return type:

tf.Tensor

get_config()[source]
Returns:

Dictionary of configuration

Return type:

dict

BoolMask takes a numpy boolean array at layer initialization and masks the input tensor.
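
The masking is equivalent to selecting elements along the last (flattened) axis with a boolean array; a NumPy sketch (illustration only):

import numpy as np

x = np.random.normal(size=(32, 7514))    # (batch, flattened features)
mask = np.zeros(7514, dtype=bool)
mask[100:200] = True                     # keep only these 100 features

x_masked = x[:, mask]
print(x_masked.shape)                    # (32, 100)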

BoolMask can be imported by

from astroNN.nn.layers import BoolMask

It can be used with keras or tensorflow.keras; you just have to import the function from astroNN

def keras_model():
    # Define your Keras model here, assuming you are using the functional API
    input = Input(.....)
    # some layers ...
    masked_layer = BoolMask(mask=....)(...)
    # some layers ...
    return model