Layers - astroNN.nn.layers¶
astroNN provides some customized layers built on Keras and TensorFlow, so they are compatible with Keras with the TensorFlow backend or tensorflow.keras. You can just treat astroNN customized layers as conventional Keras layers.
Monte Carlo Dropout Layer¶

class astroNN.nn.layers.MCDropout(rate, disable=False, noise_shape=None, name=None, **kwargs)¶
    Dropout layer for Bayesian Neural Network; this layer is always on regardless of the learning phase flag.
    Parameters:
        rate (float) – Dropout rate between 0 and 1
        disable (boolean) – Dropout on or off
    Returns: A layer
    Return type: object
    History: 2018-Feb-05 – Written – Henry Leung (University of Toronto)

call(inputs, training=None)¶
    Note: Equivalent to __call__()
    Parameters: inputs (tf.Tensor) – Tensor to be applied
    Returns: Tensor after applying the layer
    Return type: tf.Tensor
MCDropout is basically Keras's Dropout layer without seed argument support. Moreover, the layer ignores Keras's learning phase flag, so it always stays on even in the prediction phase.
Dropout can be described by the following formula; let's say we have \(i\) neurons after activation with value \(y_i\):

\[r_{i} \sim \text{Bernoulli}(p) \\ \hat{y}_i = r_{i} \cdot y_i\]

where \(p\) is the probability of a neuron staying on.
It can be used with keras or tensorflow.keras; you just have to import the layer from astroNN:
from astroNN.nn.layers import MCDropout

def keras_model():
    # Your keras model definition here, assuming you are using the functional API
    b_dropout = MCDropout(0.2)(some_keras_layer)
    return model
If you really want to disable the dropout, you can do it by
# Your keras model definition here, assuming you are using the functional API
b_dropout = MCDropout(0.2, disable=True)(some_keras_layer)
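As a quick sanity check that the dropout really stays on at prediction time, here is a minimal runnable sketch (the shapes and rate are illustrative, not from the original docs); two predict calls on the same data should give different outputs:
import numpy as np
from astroNN.nn.layers import MCDropout
from astroNN.config import keras_import_manager

keras = keras_import_manager()  # either import keras or tf.keras
Input = keras.layers.Input
Model = keras.models.Model

# identity model containing only an always-on dropout layer
input = Input(shape=[4])
output = MCDropout(0.5)(input)
model = Model(inputs=input, outputs=output)

data = np.ones((1, 4))
# the dropout mask is resampled on every forward pass, even in the prediction
# phase, so the two outputs below should differ
print(model.predict(data))
print(model.predict(data))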
Monte Carlo Dropout with Continuous Relaxation Layer Wrapper¶

class astroNN.nn.layers.MCConcreteDropout(layer, weight_regularizer=5e-13, dropout_regularizer=0.0001, init_min=0.1, init_max=0.2, disable=False, **kwargs)¶
    Monte Carlo Dropout with Continuous Relaxation Layer Wrapper. This layer will learn the dropout probability (arXiv:1705.07832).
    Parameters: layer (keras.layers.Layer) – The layer to apply concrete dropout to
    Returns: A layer
    Return type: object
    History: 2018-Mar-04 – Written – Henry Leung (University of Toronto)
call(inputs, training=None)¶
    Note: Equivalent to __call__()
    Parameters: inputs (tf.Tensor) – Tensor to be applied
    Returns: Tensor after applying the layer
    Return type: tf.Tensor

MCConcreteDropout is an implementation of arXiv:1705.07832, modified from the authors' original implementation. Moreover, the layer ignores Keras's learning phase flag, so it always stays on even in the prediction phase. This layer should be used for experimental purposes only as it has not been tested rigorously. MCConcreteDropout is technically a layer wrapper instead of a standard layer, so it needs to take a layer as an input argument.
The main difference between MCConcreteDropout and standard Bernoulli dropout is that MCConcreteDropout learns the dropout rate during training instead of using a fixed probability. Tuning or learning the dropout rate is not a novel idea; it can be traced back to one of the original papers on variational dropout, arXiv:1506.02557. But MCConcreteDropout focuses on the role and importance of dropout as a Bayesian technique.
It can be used with keras or tensorflow.keras; you just have to import the layer from astroNN:
from astroNN.nn.layers import MCConcreteDropout

def keras_model():
    # Your keras model definition here, assuming you are using the functional API
    c_dropout = MCConcreteDropout(some_keras_layer)(previous_layer)
    return model
If you really want to disable the dropout, you can do it by
# Your keras model definition here, assuming you are using the functional API
c_dropout = MCConcreteDropout(some_keras_layer, disable=True)(previous_layer)
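For concreteness, here is a minimal sketch of wrapping a Dense layer (the shapes, optimizer, and random data are illustrative assumptions); the dropout probability is learned jointly with the weights during fit:
import numpy as np
from astroNN.nn.layers import MCConcreteDropout
from astroNN.config import keras_import_manager

keras = keras_import_manager()  # either import keras or tf.keras
Input = keras.layers.Input
Dense = keras.layers.Dense
Model = keras.models.Model

input = Input(shape=[8])
# wrap the layer to which concrete dropout should be applied
hidden = MCConcreteDropout(Dense(16, activation='relu'))(input)
output = Dense(1)(hidden)
model = Model(inputs=input, outputs=output)
model.compile(optimizer='adam', loss='mse')

# the dropout rate is optimized along with the kernel weights
model.fit(np.random.normal(0, 1, (64, 8)), np.random.normal(0, 1, (64, 1)))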
Monte Carlo Spatial Dropout Layer¶
MCSpatialDropout1D should be used with Conv1D and MCSpatialDropout2D should be used with Conv2D

class astroNN.nn.layers.MCSpatialDropout1D(rate, disable=False, **kwargs)¶
    Spatial 1D version of dropout layer for Bayesian Neural Network; this layer is always on regardless of the learning phase flag.
    Parameters:
        rate (float) – Dropout rate between 0 and 1
        disable (boolean) – Dropout on or off
    Returns: A layer
    Return type: object
    History: 2018-Mar-07 – Written – Henry Leung (University of Toronto)

call(inputs, training=None)¶
    Note: Equivalent to __call__()
    Parameters: inputs (tf.Tensor) – Tensor to be applied
    Returns: Tensor after applying the layer
    Return type: tf.Tensor

class astroNN.nn.layers.MCSpatialDropout2D(rate, disable=False, **kwargs)¶
    Spatial 2D version of dropout layer for Bayesian Neural Network; this layer is always on regardless of the learning phase flag.
    Parameters:
        rate (float) – Dropout rate between 0 and 1
        disable (boolean) – Dropout on or off
    Returns: A layer
    Return type: object
    History: 2018-Mar-07 – Written – Henry Leung (University of Toronto)

call(inputs, training=None)¶
    Note: Equivalent to __call__()
    Parameters: inputs (tf.Tensor) – Tensor to be applied
    Returns: Tensor after applying the layer
    Return type: tf.Tensor
MCSpatialDropout1D and MCSpatialDropout2D are basically Keras's SpatialDropout layers without seed and noise_shape argument support. Moreover, the layers ignore Keras's learning phase flag, so they always stay on even in the prediction phase.
This version performs the same function as Dropout; however, it drops entire 1D feature maps instead of individual elements. If adjacent frames within feature maps are strongly correlated (as is normally the case in early convolution layers), then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout1D will help promote independence between feature maps and should be used instead (likewise SpatialDropout2D for 2D feature maps).
For technical details, you can refer to the original paper arXiv:1411.4280.
It can be used with keras or tensorflow.keras; you just have to import the layer from astroNN:
from astroNN.nn.layers import MCSpatialDropout1D

def keras_model():
    # Your keras model definition here, assuming you are using the functional API
    b_dropout = MCSpatialDropout1D(0.2)(keras_conv_layer)
    return model
If you really want to disable the dropout, you can do it by
# Your keras model definition here, assuming you are using the functional API
b_dropout = MCSpatialDropout1D(0.2, disable=True)(keras_conv_layer)
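To see the difference from element-wise dropout, here is a minimal sketch (shapes illustrative) that feeds ones through the layer; whole feature maps should be zeroed rather than scattered individual elements:
import numpy as np
from astroNN.nn.layers import MCSpatialDropout1D
from astroNN.config import keras_import_manager

keras = keras_import_manager()  # either import keras or tf.keras
Input = keras.layers.Input
Model = keras.models.Model

input = Input(shape=[8, 4])  # 8 steps, 4 feature maps
output = MCSpatialDropout1D(0.5)(input)
model = Model(inputs=input, outputs=output)

# each of the 4 feature maps is either zeroed entirely or kept (and rescaled) entirely
print(model.predict(np.ones((1, 8, 4)))[0])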
Monte Carlo Gaussian Dropout Layer¶

class astroNN.nn.layers.MCGaussianDropout(rate, disable=False, name=None, **kwargs)¶
    Gaussian dropout layer for Bayesian Neural Network; this layer is always on regardless of the learning phase flag, with standard deviation sqrt(rate / (1 - rate)).
    Parameters:
        rate (float) – Dropout rate between 0 and 1
        disable (boolean) – Dropout on or off
    Returns: A layer
    Return type: object
    History: 2018-Mar-07 – Written – Henry Leung (University of Toronto)

call(inputs, training=None)¶
    Note: Equivalent to __call__()
    Parameters: inputs (tf.Tensor) – Tensor to be applied
    Returns: Tensor after applying the layer
    Return type: tf.Tensor
MCGaussianDropout is basically Keras's GaussianDropout layer without seed argument support. Moreover, the layer ignores Keras's learning phase flag, so it always stays on even in the prediction phase.
MCGaussianDropout should be used with caution for Bayesian Neural Network: https://arxiv.org/abs/1711.02989
Gaussian Dropout can be described by the following formula; let's say we have \(i\) neurons after activation with value \(y_i\):

\[r_{i} \sim \mathcal{N}\left(1, \sqrt{\frac{p}{1-p}}\right) \\ \hat{y}_i = r_{i} \cdot y_i\]

where \(p\) is the dropout rate.
It can be used with keras or tensorflow.keras; you just have to import the layer from astroNN:
from astroNN.nn.layers import MCGaussianDropout

def keras_model():
    # Your keras model definition here, assuming you are using the functional API
    b_dropout = MCGaussianDropout(0.2)(some_keras_layer)
    return model
If you really want to disable the dropout, you can do it by
# Your keras model definition here, assuming you are using the functional API
b_dropout = MCGaussianDropout(0.2, disable=True)(some_keras_layer)
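Here is a minimal sketch (shapes illustrative) checking the documented standard deviation empirically: with rate=0.2, the multiplicative noise should have standard deviation sqrt(0.2 / 0.8) = 0.5:
import numpy as np
from astroNN.nn.layers import MCGaussianDropout
from astroNN.config import keras_import_manager

keras = keras_import_manager()  # either import keras or tf.keras
Input = keras.layers.Input
Model = keras.models.Model

input = Input(shape=[1])
output = MCGaussianDropout(0.2)(input)
model = Model(inputs=input, outputs=output)

# feeding ones, the outputs are just the multiplicative Gaussian noise samples
noisy = model.predict(np.ones((100000, 1)))
print(np.std(noisy))  # should be close to sqrt(0.2 / (1 - 0.2)) = 0.5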
Monte Carlo Batch Normalization Layer¶

class astroNN.nn.layers.MCBatchNorm(disable=False, name=None, **kwargs)¶
    Monte Carlo Batch Normalization Layer for Bayesian Neural Network.
    Parameters: disable (boolean) – Batch Normalization on or off
    Returns: A layer
    Return type: object
    History: 2018-Apr-12 – Written – Henry Leung (University of Toronto)
call(inputs, training=None)¶
    Note: Equivalent to __call__()
    Parameters: inputs (tf.Tensor) – Tensor to be applied
    Returns: Tensor after applying the layer
    Return type: tf.Tensor

MCBatchNorm is a layer doing Batch Normalization, originally described in arXiv: https://arxiv.org/abs/1502.03167
MCBatchNorm should be used with caution for Bayesian Neural Network: https://openreview.net/forum?id=BJlrSmbAZ
Batch Normalization can be described by the following formula; let's say we have \(N\) neurons after activation for a layer:

\[\hat{N_{i}} = \frac{N_{i} - \text{Mean}[N]}{\sqrt{\text{Var}[N]}}\]
MCBatchNorm can be imported by
from astroNN.nn.layers import MCBatchNorm
It can be used with keras or tensorflow.keras; you just have to import the layer from astroNN:
def keras_model():
    # Your keras model definition here, assuming you are using the functional API
    b_norm = MCBatchNorm()(some_keras_layer)
    return model
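To make the formula above concrete, here is a minimal NumPy check of the normalization step (ignoring the learnable scale and shift, and adding a small epsilon for numerical stability as in the paper):
import numpy as np

N = np.random.normal(5, 2, 100)  # 100 activations with non-zero mean and non-unit variance
N_hat = (N - np.mean(N)) / np.sqrt(np.var(N) + 1e-5)
print(np.mean(N_hat), np.var(N_hat))  # approximately 0 and 1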
Error Propagation Layer¶

class astroNN.nn.layers.ErrorProp(stddev, name=None, **kwargs)¶
    Error propagation layer; it does nothing during training and adds Gaussian noise during the testing phase.
    Parameters: stddev (float) – Known 1 s.d. uncertainty in the input data
    Returns: A layer
    Return type: object
    History: 2018-Feb-05 – Written – Henry Leung (University of Toronto)
call(inputs, training=None)¶
    Note: Equivalent to __call__()
    Parameters: inputs (tf.Tensor) – Tensor to be applied
    Returns: Tensor after applying the layer
    Return type: tf.Tensor

ErrorProp is a layer designed to do error propagation in a neural network. It acts as an identity transformation layer during the training phase but adds Gaussian noise to the input during the test phase. The idea is that if you have known uncertainty in the input, you may want to understand how that input uncertainty (more specifically, this layer assumes the uncertainty is Gaussian) affects the output. Since this layer adds random Gaussian noise of known scale to the input, you can run the model prediction a few times to get a set of predictions; the mean of those predictions will be the final prediction, and the standard deviation of the predictions will be the propagated uncertainty.
ErrorProp can be imported by
from astroNN.nn.layers import ErrorProp
It can be used with keras or tensorflow.keras; you just have to import the layer from astroNN:
def keras_model():
    # Your keras model definition here, assuming you are using the functional API
    input = Input(.....)
    input_with_error = ErrorProp(some_gaussian_tensor)(input)
    return model
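Here is a minimal runnable sketch of the procedure described above (the 0.5 uncertainty and shapes are illustrative assumptions): the model is just an identity with ErrorProp, so the standard deviation of repeated predictions should recover the known input uncertainty:
import numpy as np
from astroNN.nn.layers import ErrorProp
from astroNN.config import keras_import_manager

keras = keras_import_manager()  # either import keras or tf.keras
Input = keras.layers.Input
Model = keras.models.Model

input = Input(shape=[1])
input_with_error = ErrorProp(0.5)(input)  # known 1 s.d. uncertainty of 0.5
model = Model(inputs=input, outputs=input_with_error)

data = np.zeros((1000, 1))
# predict() runs in the test phase, so Gaussian noise is added on every call
predictions = np.array([model.predict(data) for _ in range(25)])
print(predictions.mean())  # final prediction, close to 0
print(predictions.std())   # propagated uncertainty, close to 0.5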
KLDivergence Layer for Variational Autoencoder¶

class astroNN.nn.layers.KLDivergenceLayer(name=None, **kwargs)¶
    Identity transform layer that adds KL divergence to the final model losses. The KL divergence is used to force the latent space to match the prior (in this case a unit Gaussian).
    Returns: A layer
    Return type: object
    History: 2018-Feb-05 – Written – Henry Leung (University of Toronto)
call(inputs, training=None)¶
    Note: Equivalent to __call__()
    Parameters: inputs (tf.Tensor) – Tensor to be applied, a concatenated tf.Tensor of mean and std in latent space
    Returns: Tensor after applying the layer
    Return type: tf.Tensor

KLDivergenceLayer is a layer designed to be used in a Variational Autoencoder. It acts as an identity transformation layer but adds the KL divergence to the total loss.
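For reference, with a Gaussian latent posterior of mean \(\mu\) and log-variance \(\log \sigma^2\) against a unit Gaussian prior, the KL term has the standard closed form below (the exact reduction over the batch may differ in the implementation):

\[D_{KL} = -\frac{1}{2} \sum_{k} \left(1 + \log \sigma_k^2 - \mu_k^2 - \sigma_k^2\right)\]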
KLDivergenceLayer can be imported by
from astroNN.nn.layers import KLDivergenceLayer
It can be used with keras or tensorflow.keras; you just have to import the layer from astroNN:
def keras_model():
    # Your keras model definition here, assuming you are using the functional API
    z_mu = Encoder_Mean_Layer(.....)
    z_log_var = Encoder_Var_Layer(.....)
    z_mu, z_log_var = KLDivergenceLayer()([z_mu, z_log_var])
    # And then decoder or whatever
    return model
Polynomial Fitting Layer¶

class astroNN.nn.layers.PolyFit(deg=1, output_units=1, use_xbias=True, init_w=None, name=None, activation=None, kernel_regularizer=None, kernel_constraint=None)¶
    n-degree polynomial fitting layer which acts as a neural network layer to be optimized.
    Parameters:
        deg (int) – degree of the polynomial
        output_units (int) – number of output neurons
        use_xbias (bool) – If True, fit output = P(inputs) + inputs, else fit output = P(inputs)
        init_w (Union[NoneType, list]) – [Optional] list of initial weights if there are any; the list should have shape [n_degree, input_size, output_size]
        name (Union[NoneType, str]) – [Optional] name of the layer
        activation (Union[NoneType, str]) – [Optional] activation, default is ‘linear’
        kernel_regularizer (Union[NoneType, str]) – [Optional] kernel regularizer
        kernel_constraint (Union[NoneType, str]) – [Optional] kernel constraint
    Returns: A layer
    Return type: object
    History: 2018-Jul-24 – Written – Henry Leung (University of Toronto)

call(inputs)¶
    Note: Equivalent to __call__()
    Parameters: inputs (tf.Tensor) – Tensor to be applied
    Returns: Tensor after applying the layer, which is just the n-degree polynomial of the inputs, P(inputs)
    Return type: tf.Tensor
PolyFit is a layer designed to do n-degree polynomial fitting in a neural network style by treating the coefficients as neural network weights and optimizing them with a neural network optimizer. For a single input and output value, the fitted polynomial is of the following form (you can specify initial weights by init_w=[[[\(w_0\)]], [[\(w_1\)]], …, [[\(w_n\)]]]):

\[p(x) = w_0 + w_1 x + \dots + w_n x^n\]

If use_xbias=True, the layer fits \(p(x) + x\) instead.
For multiple \(i\) input values, \(j\) output values, and an n-degree polynomial, you can specify initial weights by init_w=[[[\(w_{0, 1, 0}\), \(w_{0, 1, 1}\), …, \(w_{0, 1, j}\)], [\(w_{0, 2, 0}\), \(w_{0, 2, 1}\), …, \(w_{0, 2, j}\)], …, [\(w_{0, i, 0}\), \(w_{0, i, 1}\), …, \(w_{0, i, j}\)]], …, [[\(w_{n, 1, 0}\), \(w_{n, 1, 1}\), …, \(w_{n, 1, j}\)], [\(w_{n, 2, 0}\), \(w_{n, 2, 1}\), …, \(w_{n, 2, j}\)], …, [\(w_{n, i, 0}\), \(w_{n, i, 1}\), …, \(w_{n, i, j}\)]]], and the fitted polynomial takes the following form:

\[p_j(x) = \sum_{d=0}^{n} \sum_{k=1}^{i} \left( w_{d, k, j} \cdot x_k^d \right)\]
PolyFit can be imported by
from astroNN.nn.layers import PolyFit
It can be used with keras or tensorflow.keras; you just have to import the layer from astroNN:
def keras_model():
    # Your keras model definition here, assuming you are using the functional API
    input = Input(.....)
    output = PolyFit(deg=1)(input)
    return Model(inputs=input, outputs=output)
To show that it works as a polynomial, you can refer to the following example:
import numpy as np
from astroNN.nn.layers import PolyFit
from astroNN.shared.nn_tools import cpu_fallback
from astroNN.config import keras_import_manager

keras = keras_import_manager()  # either import keras or tf.keras
cpu_fallback()  # force tf to use CPU

Input = keras.layers.Input
Model = keras.models.Model

# Data preparation
polynomial_coefficient = [0.1, 0.05]
random_xdata = np.random.normal(0, 3, (100, 1))
random_ydata = polynomial_coefficient[1] * random_xdata + polynomial_coefficient[0]

input = Input(shape=[1, ])
# set initial weights to the true coefficients
output = PolyFit(deg=1, use_xbias=False, init_w=[[[0.1]], [[0.05]]], name='polyfit')(input)
model = Model(inputs=input, outputs=output)

# predict without training (i.e. without gradient updates)
np.allclose(model.predict(random_xdata), random_ydata)
>>> True  # the layer reproduces the polynomial with the given coefficients
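The coefficients can also be learned from the data instead of being given as initial weights; here is a minimal continuation of the example above (the optimizer, epoch count, and tolerance are illustrative and may need tuning):
# same data as above, but the coefficients are now learned from scratch
input2 = Input(shape=[1, ])
output2 = PolyFit(deg=1, use_xbias=False, name='polyfit_trained')(input2)
model_trained = Model(inputs=input2, outputs=output2)
model_trained.compile(optimizer='adam', loss='mse')
model_trained.fit(random_xdata, random_ydata, epochs=200, verbose=0)

# if training has converged, the fitted polynomial approximates the data
print(np.allclose(model_trained.predict(random_xdata), random_ydata, atol=1e-2))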
Mean and Variance Calculation Layer for Bayesian Neural Net¶

class astroNN.nn.layers.FastMCInferenceMeanVar(name=None, **kwargs)¶
    Take the mean and variance of the results of a TimeDistributed layer, assuming axis=1 is the timestamp axis.
    Returns: A layer
    Return type: object
    History:
        2018-Feb-02 – Written – Henry Leung (University of Toronto)
        2018-Apr-13 – Update – Henry Leung (University of Toronto)
call(inputs, training=None)¶
    Note: Equivalent to __call__()
    Parameters: inputs (tf.Tensor) – Tensor to be applied
    Returns: Tensor after applying the layer
    Return type: tf.Tensor

If you want fast MC inference on GPU and you are using Keras models, you should just use FastMCInference.
FastMCInferenceMeanVar is a layer designed to be used with Bayesian Neural Networks with dropout variational inference. In general it should be used via FastMCInference (which applies it for you), or paired with FastMCRepeat when building the model manually. The advantage of the FastMCInferenceMeanVar layer is that you can replicate the data and calculate the mean and variance on the GPU (if any) when doing dropout variational inference.
FastMCInferenceMeanVar can be imported by
from astroNN.nn.layers import FastMCInferenceMeanVar
It can be used with keras or tensorflow.keras; you just have to import the layer from astroNN:
from astroNN.nn.layers import FastMCRepeat

def keras_model():
    # Your keras model definition here, assuming you are using the functional API
    input = Input(.....)
    monte_carlo_repeat = FastMCRepeat(mc_num_here)(input)
    # some layers here; you should use MCDropout from astroNN instead of Dropout from Tensorflow :)
    result_mean_var = FastMCInferenceMeanVar()(previous_layer_here)
    return model

model.compile(loss=loss_func_here, optimizer=optimizer_here)

# Use the model to predict with dropout variational inference
output = model.predict(x)

# prediction and model uncertainty (variance) from the model
mean = output[0]
variance = output[1]
Repeat Vector Layer for Bayesian Neural Net¶

class astroNN.nn.layers.FastMCRepeat(n, name=None, **kwargs)¶
    Prepare data for inference by repeating the input n times at axis=1.
    Parameters: n (int) – Number of Monte Carlo integration
    Returns: A layer
    Return type: object
    History:
        2018-Feb-02 – Written – Henry Leung (University of Toronto)
        2018-Apr-13 – Update – Henry Leung (University of Toronto)
call(inputs, training=None)¶
    Note: Equivalent to __call__()
    Parameters: inputs (tf.Tensor) – Tensor to be applied
    Returns: Tensor after applying the layer, which is the repeated Tensor
    Return type: tf.Tensor

If you want fast MC inference on GPU and you are using Keras models, you should just use FastMCInference.
FastMCRepeat is a layer that repeats the input data for the Monte Carlo integration required by a Bayesian Neural Network. It is designed to be used with Bayesian Neural Networks with dropout variational inference and should be used with FastMCInferenceMeanVar in general. The advantage of the FastMCRepeat layer is that you can replicate the data and calculate the mean and variance on the GPU (if any) when doing dropout variational inference.
FastMCRepeat can be imported by
from astroNN.nn.layers import FastMCRepeat
It can be used with keras or tensorflow.keras; you just have to import the layer from astroNN:
from astroNN.nn.layers import FastMCInferenceMeanVar

def keras_model():
    # Your keras model definition here, assuming you are using the functional API
    input = Input(.....)
    monte_carlo_repeat = FastMCRepeat(mc_num_here)(input)
    # some layers here; you should use MCDropout from astroNN instead of Dropout from Tensorflow :)
    result_mean_var = FastMCInferenceMeanVar()(previous_layer_here)
    return model

model.compile(loss=loss_func_here, optimizer=optimizer_here)

# Use the model to predict with dropout variational inference
output = model.predict(x)

# prediction and model uncertainty (variance) from the model
mean = output[0]
variance = output[1]
Fast Monte Carlo Integration Layer for Keras Model¶

class astroNN.nn.layers.FastMCInference(n, **kwargs)¶
    Turn a Keras model into a model for fast MC Dropout inference on GPU.
    Parameters: n (int) – Number of Monte Carlo integration
    Returns: A layer
    Return type: object
    History: 2018-Apr-13 – Written – Henry Leung (University of Toronto)
__call__(model)¶
    Parameters: model (Union[keras.Model, keras.Sequential]) – Keras model to be accelerated
    Returns: Accelerated Keras model
    Return type: Union[keras.Model, keras.Sequential]

FastMCInference is a layer designed for fast Monte Carlo inference on GPU. One of the main challenges of MC integration on GPU is that you want the data to stay on the GPU so that the entire MC integration is done there, because moving data from drives to the GPU is a very expensive operation. FastMCInference will create a new Keras model that replicates the data on the GPU, does the Monte Carlo integration and calculates the mean and variance on the GPU, and returns the result.
Benchmark (Nvidia GTX 1060 6GB): on 98,000 APOGEE spectra of 7,514 pixels each, the traditional 25 forward passes took ~270 seconds, while FastMCInference took only ~65 seconds to do the exact same task.
It can only be used with a Keras model. If you are using a customised model written purely in Tensorflow, you should use FastMCRepeat and FastMCInferenceMeanVar instead.
You can import the function from astroNN by
import numpy as np
from astroNN.nn.layers import FastMCInference

# keras_model is your keras model with 1 output which is a concatenation of labels prediction and predictive variance
keras_model = Model(....)

# fast_mc_model is the new keras model capable of fast Monte Carlo integration on GPU
fast_mc_model = FastMCInference(mc_num_here)(keras_model)

# You can just use the Keras API with the new model, such as
result = fast_mc_model.predict(.....)

# here is the result dimension; labels_std is the standard deviation used to normalize the labels, if any
predictions = result[:, :(result.shape[1] // 2), 0]  # mean prediction
mc_dropout_uncertainty = result[:, :(result.shape[1] // 2), 1] * (labels_std ** 2)  # model uncertainty
predictions_var = np.exp(result[:, (result.shape[1] // 2):, 0]) * (labels_std ** 2)  # predictive uncertainty
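To illustrate the output layout end to end, here is a minimal runnable sketch (the architecture, the 25 samples, and the shapes are illustrative assumptions; the mean sits in result[..., 0] and the variance in result[..., 1], as the indexing above suggests):
import numpy as np
from astroNN.nn.layers import FastMCInference, MCDropout
from astroNN.config import keras_import_manager

keras = keras_import_manager()  # either import keras or tf.keras
Input = keras.layers.Input
Dense = keras.layers.Dense
Model = keras.models.Model

# a tiny stochastic model: MCDropout keeps dropout on at prediction time
input = Input(shape=[4])
hidden = MCDropout(0.2)(Dense(8)(input))
output = Dense(2)(hidden)
model = Model(inputs=input, outputs=output)
model.compile(optimizer='adam', loss='mse')

fast_mc_model = FastMCInference(25)(model)  # 25 Monte Carlo forward passes
result = fast_mc_model.predict(np.random.normal(0, 1, (10, 4)))
print(result.shape)  # expected (10, 2, 2): (samples, outputs, [mean, variance])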
Gradient Stopping Layer¶

class astroNN.nn.layers.StopGrad(name=None, always_on=False, **kwargs)¶
    Stop gradient backpropagation via this layer during training; it acts as an identity layer during testing by default.
    Parameters: always_on (bool) – Default False, which stops gradients during training and acts as identity during testing. Set to True to stop gradients in every situation.
    Returns: A layer
    Return type: object
    History: 2018-May-23 – Written – Henry Leung (University of Toronto)
call(inputs, training=None)¶
    Note: Equivalent to __call__()
    Parameters: inputs (tf.Tensor) – Tensor to be applied
    Returns: Tensor after applying the layer, which is just the original tensor
    Return type: tf.Tensor

It uses tf.stop_gradient and acts as a Keras layer.
StopGrad can be imported by
from astroNN.nn.layers import StopGrad
It can be used with keras or tensorflow.keras; you just have to import the layer from astroNN:
def keras_model():
    # Your keras model definition here, assuming you are using the functional API
    input = Input(.....)
    # some layers ...
    stopped_grad_layer = StopGrad()(...)
    # some layers ...
    return model
For example, if you have a model with multiple branches and you only want the error to backpropagate through one branch but not the other:
from astroNN.nn.layers import StopGrad
# we use zeros loss just to demonstrate StopGrad works and no error backprops from the StopGrad layer
from astroNN.nn.losses import zeros_loss
import numpy as np
from astroNN.shared.nn_tools import cpu_fallback
from astroNN.config import keras_import_manager

keras = keras_import_manager()  # either import keras or tf.keras
cpu_fallback()  # force tf to use CPU

Input = keras.layers.Input
Dense = keras.layers.Dense
concatenate = keras.layers.concatenate
Model = keras.models.Model

# Data preparation
random_xdata = np.random.normal(0, 1, (100, 7514))
random_ydata = np.random.normal(0, 1, (100, 25))

input2 = Input(shape=[7514])
dense1 = Dense(100, name='normaldense')(input2)
dense2 = Dense(25, name='wanted_dense')(input2)
dense2_stopped = StopGrad(name='stopgrad', always_on=True)(dense2)
output2 = Dense(25, name='wanted_dense2')(concatenate([dense1, dense2_stopped]))
model2 = Model(inputs=input2, outputs=[output2, dense2])
model2.compile(optimizer=keras.optimizers.SGD(lr=0.1),
               loss={'wanted_dense2': 'mse', 'wanted_dense': zeros_loss})

weight_b4_train = model2.get_layer(name='wanted_dense').get_weights()[0]
weight_b4_train2 = model2.get_layer(name='normaldense').get_weights()[0]
model2.fit(random_xdata, [random_ydata, random_ydata])
weight_a4_train = model2.get_layer(name='wanted_dense').get_weights()[0]
weight_a4_train2 = model2.get_layer(name='normaldense').get_weights()[0]

print(np.all(weight_b4_train == weight_a4_train))
>>> True  # the weights of the Dense layer behind StopGrad are unchanged, i.e. no gradient update
print(np.all(weight_b4_train2 == weight_a4_train2))
>>> False  # the weights of the normal Dense layer have changed due to gradient updates