Layers

astroNN provides some customized layers under the astroNN.nn.layers module, built on tensorflow.keras. You can treat astroNN customized layers just like conventional Keras layers.

Monte Carlo Dropout Layer

class astroNN.nn.layers.MCDropout(*args, **kwargs)[source]

Dropout Layer for Bayesian Neural Network; this layer is always on regardless of the learning phase flag

Parameters
  • rate (float) – Dropout Rate between 0 and 1

  • disable (boolean) – Dropout on or off

Returns

A layer

Return type

object

History

2018-Feb-05 - Written - Henry Leung (University of Toronto)

call(inputs, training=None)[source]
Note

Equivalent to __call__()

Parameters

inputs (tf.Tensor) – Tensor to be applied

Returns

Tensor after applying the layer

Return type

tf.Tensor

get_config()[source]
Returns

Dictionary of configuration

Return type

dict

MCDropout is basically Keras’s Dropout layer without seed argument support. Moreover, the layer ignores Keras’s learning phase flag, so it always stays on even in the prediction phase.

Dropout can be described by the following formula; let’s say we have \(i\) neurons after activation with value \(y_i\)

\[\begin{split}r_{i} = \text{Bernoulli} (p) \\ \hat{y_i} = r_{i} * y_i\end{split}\]

And here is an example of usage

def keras_model():
    # Define your keras model here, assuming you are using the functional API
    b_dropout = MCDropout(0.2)(some_keras_layer)
    return model

If you really want to disable the dropout, you can do it by

# Define your keras model here, assuming you are using the functional API
b_dropout = MCDropout(0.2, disable=True)(some_keras_layer)
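Because the dropout stays on at prediction time, repeated predictions on the same input will differ, and their spread is what Monte Carlo dropout uses to estimate model uncertainty. Below is a minimal sketch of that procedure (assuming model is a compiled keras model containing MCDropout layers and x is your input data):

import numpy as np

mc_num = 100  # number of stochastic forward passes
predictions = np.stack([model.predict(x) for _ in range(mc_num)])

mean_prediction = predictions.mean(axis=0)   # Monte Carlo mean prediction
model_uncertainty = predictions.std(axis=0)  # spread caused by the always-on dropout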

Monte Carlo Dropout with Continuous Relaxation Layer Wrapper

class astroNN.nn.layers.MCConcreteDropout(*args, **kwargs)[source]
Monte Carlo Dropout with Continuous Relaxation Layer Wrapper. This layer will learn the dropout probability.
arXiv:1705.07832
Parameters

layer (keras.layers.Layer) – The layer to which concrete dropout is applied

Returns

A layer

Return type

object

History

2018-Mar-04 - Written - Henry Leung (University of Toronto)

call(inputs, training=None)[source]
Note

Equivalent to __call__()

Parameters

inputs (tf.Tensor) – Tensor to be applied

Returns

Tensor after applying the layer

Return type

tf.Tensor

get_config()[source]
Returns

Dictionary of configuration

Return type

dict

MCConcreteDropout is an implementation of arXiv:1705.07832, modified from the original implementation here. Moreover, the layer ignores Keras’s learning phase flag, so it always stays on even in the prediction phase. This layer should only be used for experimental purposes as it has not been tested rigorously. MCConcreteDropout is technically a layer wrapper instead of a standard layer, so it needs to take a layer as an input argument.

The main difference between MCConcreteDropout and standard Bernoulli dropout is that MCConcreteDropout learns the dropout rate during training instead of using a fixed probability. Tuning/learning the dropout rate is not a novel idea; it can be traced back to one of the original papers on variational dropout, arXiv:1506.02557. But MCConcreteDropout focuses on the role and importance of dropout with Bayesian techniques.

And here is an example of usage

def keras_model():
    # Define your keras model here, assuming you are using the functional API
    c_dropout = MCConcreteDropout(some_keras_layer)(previous_layer)
    return model

If you really want to disable the dropout, you can do it by

# Define your keras model here, assuming you are using the functional API
c_dropout = MCConcreteDropout(some_keras_layer, disable=True)(previous_layer)
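Since MCConcreteDropout is a wrapper, it must enclose a layer rather than be applied on its own. Below is a minimal sketch of a complete model (the layer sizes here are illustrative assumptions, not astroNN defaults):

from tensorflow import keras
from astroNN.nn.layers import MCConcreteDropout

def concrete_dropout_model(input_dim=7514, output_dim=25):
    inputs = keras.layers.Input(shape=(input_dim,))
    # the dropout probability of the wrapped Dense layer is learned during training
    hidden = MCConcreteDropout(keras.layers.Dense(128, activation='relu'))(inputs)
    outputs = keras.layers.Dense(output_dim)(hidden)
    return keras.models.Model(inputs=inputs, outputs=outputs)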

Monte Carlo Spatial Dropout Layer

MCSpatialDropout1D should be used with Conv1D and MCSpatialDropout2D should be used with Conv2D

class astroNN.nn.layers.MCSpatialDropout1D(*args, **kwargs)[source]

Spatial 1D version of the Dropout Layer for Bayesian Neural Network; this layer is always on regardless of the learning phase flag

Parameters
  • rate (float) – Dropout Rate between 0 and 1

  • disable (boolean) – Dropout on or off

Returns

A layer

Return type

object

History

2018-Mar-07 - Written - Henry Leung (University of Toronto)

call(inputs, training=None)
Note

Equivalent to __call__()

Parameters

inputs (tf.Tensor) – Tensor to be applied

Returns

Tensor after applying the layer

Return type

tf.Tensor

get_config()
Returns

Dictionary of configuration

Return type

dict

class astroNN.nn.layers.MCSpatialDropout2D(*args, **kwargs)[source]

Spatial 2D version of the Dropout Layer for Bayesian Neural Network; this layer is always on regardless of the learning phase flag

Parameters
  • rate (float) – Dropout Rate between 0 and 1

  • disable (boolean) – Dropout on or off

Returns

A layer

Return type

object

History

2018-Mar-07 - Written - Henry Leung (University of Toronto)

call(inputs, training=None)
Note

Equivalent to __call__()

Parameters

inputs (tf.Tensor) – Tensor to be applied

Returns

Tensor after applying the layer

Return type

tf.Tensor

get_config()
Returns

Dictionary of configuration

Return type

dict

MCSpatialDropout1D and MCSpatialDropout2D are basically Keras’s Spatial Dropout layers without seed and noise_shape argument support. Moreover, the layers ignore Keras’s learning phase flag, so they always stay on even in the prediction phase.

This version performs the same function as Dropout, however it drops entire 1D feature maps instead of individual elements. If adjacent frames within feature maps are strongly correlated (as is normally the case in early convolution layers) then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout1D will help promote independence between feature maps and should be used instead.

For technical detail, you can refer to the original paper arXiv:1411.4280
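To illustrate the difference, the numpy sketch below (an illustration only, not the layer’s actual implementation) contrasts element-wise dropout with spatial dropout for a single (timesteps, channels) feature map; spatial dropout zeroes whole channels at once:

import numpy as np

rng = np.random.default_rng(0)
feature_map = rng.normal(size=(8, 4))  # 8 timesteps, 4 channels
rate = 0.5

# regular dropout: every element is dropped independently
element_mask = rng.random(feature_map.shape) > rate
element_dropped = feature_map * element_mask

# spatial dropout: whole channels are dropped together (the mask broadcasts over timesteps)
channel_mask = rng.random((1, feature_map.shape[1])) > rate
spatially_dropped = feature_map * channel_mask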

And here is an example of usage

def keras_model():
    # Define your keras model here, assuming you are using the functional API
    b_dropout = MCSpatialDropout1D(0.2)(keras_conv_layer)
    return model

If you really want to disable the dropout, you can do it by

# Define your keras model here, assuming you are using the functional API
b_dropout = MCSpatialDropout1D(0.2, disable=True)(keras_conv_layer)

Monte Carlo Gaussian Dropout Layer

class astroNN.nn.layers.MCGaussianDropout(*args, **kwargs)[source]

Gaussian Dropout Layer for Bayesian Neural Network; this layer is always on regardless of the learning phase flag. The multiplicative Gaussian noise has standard deviation sqrt(rate / (1 - rate))

Parameters
  • rate (float) – Dropout Rate between 0 and 1

  • disable (boolean) – Dropout on or off

Returns

A layer

Return type

object

History

2018-Mar-07 - Written - Henry Leung (University of Toronto)

call(inputs, training=None)[source]
Note

Equivalent to __call__()

Parameters

inputs (tf.Tensor) – Tensor to be applied

Returns

Tensor after applying the layer

Return type

tf.Tensor

get_config()[source]
Returns

Dictionary of configuration

Return type

dict

MCGaussianDropout is basically Keras’s GaussianDropout layer without seed argument support. Moreover, the layer ignores Keras’s learning phase flag, so it always stays on even in the prediction phase.

MCGaussianDropout should be used with caution for Bayesian Neural Network: https://arxiv.org/abs/1711.02989

Gaussian Dropout can be described by the following formula; let’s say we have \(i\) neurons after activation with value \(y_i\)

\[\begin{split}r_{i} = \mathcal{N}\bigg(1, \sqrt{\frac{p}{1-p}}\bigg) \\ \hat{y_i} = r_{i} * y_i\end{split}\]
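A minimal numpy sketch of this multiplicative noise (an illustration of the formula, not astroNN’s implementation):

import numpy as np

rng = np.random.default_rng(0)
rate = 0.2              # dropout rate p
y = rng.normal(size=5)  # activations after a layer
r = rng.normal(loc=1.0, scale=np.sqrt(rate / (1.0 - rate)), size=y.shape)
y_hat = r * y           # activations with multiplicative Gaussian noise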

And here is an example of usage

def keras_model():
    # Define your keras model here, assuming you are using the functional API
    b_dropout = MCGaussianDropout(0.2)(some_keras_layer)
    return model

If you really want to disable the dropout, you can do it by

# Define your keras model here, assuming you are using the functional API
b_dropout = MCGaussianDropout(0.2, disable=True)(some_keras_layer)

Monte Carlo Batch Normalization Layer

class astroNN.nn.layers.MCBatchNorm(*args, **kwargs)[source]

Monte Carlo Batch Normalization Layer for Bayesian Neural Network

Parameters

disable (boolean) – MCBatchNorm on or off

Returns

A layer

Return type

object

History

2018-Apr-12 - Written - Henry Leung (University of Toronto)

call(inputs, training=None)[source]
Note

Equivalent to __call__()

Parameters

inputs (tf.Tensor) – Tensor to be applied

Returns

Tensor after applying the layer

Return type

tf.Tensor

get_config()[source]
Returns

Dictionary of configuration

Return type

dict

MCBatchNorm is a layer doing Batch Normalization originally described in arXiv: https://arxiv.org/abs/1502.03167

MCBatchNorm should be used with caution for Bayesian Neural Network: https://openreview.net/forum?id=BJlrSmbAZ

Batch Normalization can be described by the following formula; let’s say we have \(N\) neurons after activation for a layer

\[N_{i} = \frac{N_{i} - \text{Mean}[N]}{\sqrt{\text{Var}[N]}}\]
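A minimal numpy sketch of this normalization (ignoring the learnable scale/offset and the small epsilon used in practice; an illustration only):

import numpy as np

batch = np.random.normal(5.0, 2.0, size=(100, 10))  # 100 samples, 10 neurons
normalized = (batch - batch.mean(axis=0)) / np.sqrt(batch.var(axis=0))
# each neuron now has mean ~0 and variance ~1 across the batch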

MCBatchNorm can be imported by

from astroNN.nn.layers import MCBatchNorm

And here is an example of usage

def keras_model():
    # Define your keras model here, assuming you are using the functional API
    b_batchnorm = MCBatchNorm()(some_keras_layer)
    return model

Error Propagation Layer

class astroNN.nn.layers.ErrorProp(*args, **kwargs)[source]

Error propagation layer that adds Gaussian noise (mean=0, std=err) from the input error tensor during the testing phase

Returns

A layer

Return type

object

History

2018-Feb-05 - Written - Henry Leung (University of Toronto)

call(inputs, training=None)[source]
Note

Equivalent to __call__()

Parameters

inputs (list[tf.Tensor]) – a list of Tensors, i.e. [input_tensor, input_error_tensor]

Returns

Tensor after applying the layer

Return type

tf.Tensor

get_config()[source]
Returns

Dictionary of configuration

Return type

dict

ErrorProp is a layer designed to do error propagation in a neural network. It acts as an identity transformation layer during the training phase but adds Gaussian noise to the input during the test phase. The idea is that if you have known uncertainty in the input and you want to understand how that input uncertainty (this layer assumes the uncertainty is Gaussian) affects the output, then since this layer adds the known Gaussian uncertainty to the input, you can run the model prediction a few times: the mean of those predictions will be the final prediction and the standard deviation of the predictions will be the propagated uncertainty.

ErrorProp can be imported by

from astroNN.nn.layers import ErrorProp

And here is an example of usage

def keras_model():
    # Define your keras model here, assuming you are using the functional API
    input = Input(.....)
    input_with_error = ErrorProp()([input, input_error])
    return model
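Below is a minimal sketch of the prediction procedure described above (assuming model is a compiled keras model whose inputs are [input, input_error], and x, x_err are your data and its known uncertainty):

import numpy as np

mc_num = 100
predictions = np.stack([model.predict([x, x_err]) for _ in range(mc_num)])

final_prediction = predictions.mean(axis=0)       # mean of the noisy predictions
propagated_uncertainty = predictions.std(axis=0)  # standard deviation = propagated error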

KL-Divergence Layer for Variational Autoencoder

class astroNN.nn.layers.KLDivergenceLayer(*args, **kwargs)[source]
Identity transform layer that adds KL divergence to the final model losses.
The KL divergence is used to force the latent space to match the prior (in this case a unit Gaussian).
Returns

A layer

Return type

object

History

2018-Feb-05 - Written - Henry Leung (University of Toronto)

call(inputs, training=None)[source]
Note

Equivalent to __call__()

Parameters

inputs (tf.Tensor) – Tensor to be applied, concatenated tf.tensor of mean and std in latent space

Returns

Tensor after applying the layer

Return type

tf.Tensor

get_config()[source]
Returns

Dictionary of configuration

Return type

dict

KLDivergenceLayer is a layer designed to be used in a Variational Autoencoder. It acts as an identity transformation layer but adds the KL-divergence to the total loss.

KLDivergenceLayer can be imported by

from astroNN.nn.layers import KLDivergenceLayer

And here is an example of usage

def keras_model():
    # Define your keras model here, assuming you are using the functional API
    z_mu = Encoder_Mean_Layer(.....)
    z_log_var = Encoder_Var_Layer(.....)
    z_mu, z_log_var = KLDivergenceLayer()([z_mu, z_log_var])
    # And then decoder or whatever
    return model
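For reference, with the unit Gaussian prior mentioned above and latent mean \(\mu_i\) and log-variance \(\log \sigma_i^2\) (the two tensors passed to the layer in the example), the KL term added to the loss takes the usual closed form of the standard VAE formulation (stated here for orientation, not quoted from astroNN’s code):

\[D_{KL}\big(q(z)\,\|\,\mathcal{N}(0, 1)\big) = -\frac{1}{2} \sum_{i} \Big(1 + \log \sigma_i^2 - \mu_i^2 - \sigma_i^2\Big)\]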

Polynomial Fitting Layer

class astroNN.nn.layers.PolyFit(*args, **kwargs)[source]

n-deg polynomial fitting layer which acts as a neural network layer to be optimized

Parameters
  • deg (int) – degree of polynomial

  • output_units (int) – number of output neurons

  • use_xbias (bool) – If True, then fitting output=P(inputs)+inputs, else fitting output=P(inputs)

  • init_w (Union[NoneType, list]) – [Optional] list of initial weights if there is any, the list should be [n-degree, input_size, output_size]

  • name (Union[NoneType, str]) – [Optional] name of the layer

  • activation (Union[NoneType, str]) – [Optional] activation, default is ‘linear’

  • kernel_regularizer (Union[NoneType, str]) – [Optional] kernel regularizer

  • kernel_constraint (Union[NoneType, str]) – [Optional] kernel constraint

Returns

A layer

Return type

object

History

2018-Jul-24 - Written - Henry Leung (University of Toronto)

call(inputs)[source]
Note

Equivalent to __call__()

Parameters

inputs (tf.Tensor) – Tensor to be applied

Returns

Tensor after applying the layer which is just n-deg P(inputs)

Return type

tf.Tensor

get_config()[source]
Returns

Dictionary of configuration

Return type

dict

PolyFit is a layer designed to do n-degree polynomial fitting in a neural network style by treating the polynomial coefficients as neural network weights and optimizing them with the neural network optimizer. For a single input and output value, the fitted polynomial(s) take the following form (you can specify initial weights with init_w=[[[\(w_0\)]], [[\(w_1\)]], …, [[\(w_n\)]]])

\[p(x) = w_0 + w_1 * x + ... + w_n * x^n\]

For multiple input values \(i\), output values \(j\) and an n-deg polynomial, you can specify initial weights with init_w=[[[\(w_{0, 1, 0}\), \(w_{0, 1, 1}\), …, \(w_{0, 1, j}\)], [\(w_{0, 2, 0}\), \(w_{0, 2, 1}\), …, \(w_{0, 2, j}\)], …, [\(w_{0, i, 0}\), \(w_{0, i, 1}\), …, \(w_{0, i, j}\)]], …, [[\(w_{n, 1, 0}\), \(w_{n, 1, 1}\), …, \(w_{n, 1, j}\)], [\(w_{n, 2, 0}\), \(w_{n, 2, 1}\), …, \(w_{n, 2, j}\)], …, [\(w_{n, i, 0}\), \(w_{n, i, 1}\), …, \(w_{n, i, j}\)]]]),

and the polynomials take the following form

\[\begin{split}\text{output neurons from 1 to j} = \begin{cases} \begin{split} p_1(x) = \sum\limits_{i=1}^i \Big(w_{0, 1, 0} + w_{1, 1, 1} * x_1 + ... + w_{n, 1, i} * x_i^n \Big) \\ p_2(x) = \sum\limits_{i=1}^i \Big(w_{0, 2, 0} + w_{1, 2, 1} * x_1 + ... + w_{n, 2, i} * x_i^n \Big) \\ p_{...}(x) = \sum\limits_{i=1}^i \Big(\text{......}\Big) \\ p_j(x) = \sum\limits_{i=1}^i \Big(w_{0, j, 0} + w_{1, j, 1} * x_1 + ... + w_{n, j, i} * x_i^n \Big) \\ \end{split} \end{cases}\end{split}\]
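To make the nesting of init_w concrete, below is a minimal sketch (the numbers are illustrative assumptions, not defaults) for deg=1 with 2 inputs and 3 outputs: a list of (deg + 1) matrices, each of shape (input_size, output_size).

init_w = [
    [[0.1, 0.2, 0.3],   # degree-0 weights from input 1 to each of the 3 outputs
     [0.4, 0.5, 0.6]],  # degree-0 weights from input 2 to each of the 3 outputs
    [[0.7, 0.8, 0.9],   # degree-1 weights from input 1 to each of the 3 outputs
     [1.0, 1.1, 1.2]],  # degree-1 weights from input 2 to each of the 3 outputs
]
# e.g. PolyFit(deg=1, output_units=3, init_w=init_w)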

PolyFit can be imported by

from astroNN.nn.layers import PolyFit

And here is an example of usage

def keras_model():
    # Define your keras model here, assuming you are using the functional API
    input = Input(.....)
    output = PolyFit(deg=1)(input)
    return Model(inputs=input, outputs=output)

To show it works as a polynomial, you can refer to the following example:

import numpy as np
from astroNN.nn.layers import PolyFit

from astroNN.shared.nn_tools import cpu_fallback
from tensorflow import keras

cpu_fallback()  # force tf to use CPU

Input = keras.layers.Input
Model = keras.models.Model

# Data preparation
polynomial_coefficient = [0.1, -0.05]
random_xdata = np.random.normal(0, 3, (100, 1))
random_ydata = polynomial_coefficient[1] * random_xdata + polynomial_coefficient[0]

input = Input(shape=[1, ])
# set initial weights
output = PolyFit(deg=1, use_xbias=False, init_w=[[[0.1]], [[-0.05]]], name='polyfit')(input)
model = Model(inputs=input, outputs=output)

# predict without training (i.e. without gradient updates)
np.allclose(model.predict(random_xdata), random_ydata)
>>> True # True means prediction approx close enough

Mean and Variance Calculation Layer for Bayesian Neural Net

class astroNN.nn.layers.FastMCInferenceMeanVar(*args, **kwargs)[source]

Take mean and variance of the results of a TimeDistributed layer, assuming axis=1 is the timestamp axis

Returns

A layer

Return type

object

History
2018-Feb-02 - Written - Henry Leung (University of Toronto)
2018-Apr-13 - Update - Henry Leung (University of Toronto)
call(inputs, training=None)[source]
Note

Equivalent to __call__()

Parameters

inputs (tf.Tensor) – Tensor to be applied

Returns

Tensor after applying the layer

Return type

tf.Tensor

get_config()[source]
Returns

Dictionary of configuration

Return type

dict

If you want fast MC inference on GPU and you are using keras models, you should just use FastMCInference.

FastMCInferenceMeanVar is a layer designed to be used with a Bayesian Neural Network with Dropout Variational Inference. FastMCInferenceMeanVar should be used with FastMCInference in general. The advantage of the FastMCInferenceMeanVar layer is that you can copy the data and calculate the mean and variance on the GPU (if any) when you are doing dropout variational inference.

FastMCInferenceMeanVar can be imported by

from astroNN.nn.layers import FastMCInferenceMeanVar

And here is an example of usage

def keras_model():
    # Define your keras model here, assuming you are using the functional API
    input = Input(.....)
    monte_carlo_dropout = FastMCInference(mc_num_here)
    # some layer here, you should use MCDropout from astroNN instead of Dropout from Tensorflow :)
    result_mean_var = FastMCInferenceMeanVar()(previous_layer_here)
    return model

model.compile(loss=loss_func_here, optimizer=optimizer_here)

# Use the model to predict
output = model.predict(x)

# with dropout variational inference
# prediction and model uncertainty (variance) from the model
mean = output[0]
variance = output[1]

Repeat Vector Layer for Bayesian Neural Net

class astroNN.nn.layers.FastMCRepeat(*args, **kwargs)[source]

Prepare data to do inference by repeating the input n times at axis=1

Parameters

n (int) – Number of Monte Carlo integration

Returns

A layer

Return type

object

History
2018-Feb-02 - Written - Henry Leung (University of Toronto)
2018-Apr-13 - Update - Henry Leung (University of Toronto)
call(inputs, training=None)[source]
Note

Equivalent to __call__()

Parameters

inputs (tf.Tensor) – Tensor to be applied

Returns

Tensor after applying the layer which is the repeated Tensor

Return type

tf.Tensor

get_config()[source]
Returns

Dictionary of configuration

Return type

dict

If you want fast MC inference on GPU and you are using keras models, you should just use FastMCInference.

FastMCRepeat is a layer to repeat training data to do Monte Carlo integration required by Bayesian Neural Network.

FastMCRepeat is a layer designed to be used with a Bayesian Neural Network with Dropout Variational Inference. FastMCRepeat should be used with FastMCInferenceMeanVar in general. The advantage of the FastMCRepeat layer is that you can copy the data and calculate the mean and variance on the GPU (if any) when you are doing dropout variational inference.

FastMCRepeat can be imported by

from astroNN.nn.layers import FastMCRepeat

And here is an example of usage

def keras_model():
    # Define your keras model here, assuming you are using the functional API
    input = Input(.....)
    monte_carlo_dropout = FastMCRepeat(mc_num_here)
    # some layer here, you should use MCDropout from astroNN instead of Dropout from Tensorflow :)
    result_mean_var = FastMCInferenceMeanVar()(previous_layer_here)
    return model

model.compile(loss=loss_func_here, optimizer=optimizer_here)

# Use the model to predict
output = model.predict(x)

# with dropout variational inference
# prediction and model uncertainty (variance) from the model
mean = output[0]
variance = output[1]

Fast Monte Carlo Integration Layer for Keras Model

class astroNN.nn.layers.FastMCInference(n, **kwargs)[source]

Turn a model into one that performs fast Monte Carlo (Dropout, Flipout, etc.) inference on GPU

Parameters

n (int) – Number of Monte Carlo integration

Returns

A layer

Return type

object

History
2018-Apr-13 - Written - Henry Leung (University of Toronto)
2021-Apr-14 - Updated - Henry Leung (University of Toronto)
__call__(model)[source]
Parameters

model (Union[keras.Model, keras.Sequential]) – Keras model to be accelerated

Returns

Accelerated Keras model

Return type

Union[keras.Model, keras.Sequential]

get_config()[source]
Returns

Dictionary of configuration

Return type

dict

FastMCInference is a layer designed for fast Monte Carlo inference on GPU. One of the main challenges of MC integration on GPU is that you want the data to stay on the GPU and the whole MC integration to run on the GPU, because moving data from drives to the GPU is a very expensive operation. FastMCInference will create a new keras model that replicates the data on the GPU, does the Monte Carlo integration, calculates the mean and variance on the GPU, and returns the result.

Benchmark (Nvidia GTX1060 6GB): for 98,000 APOGEE spectra with 7,514 pixels each, 25 forward passes traditionally took ~270 seconds; using FastMCInference, the exact same task took only ~65 seconds.

It can only be used with Keras models. If you are using a customised model purely with Tensorflow, you should use FastMCRepeat and FastMCInferenceMeanVar.

You can import the function from astroNN by

from astroNN.nn.layers import FastMCInference

# keras_model is your keras model with 1 output which is a concatenation of labels prediction and predictive variance
keras_model = Model(....)

# fast_mc_model is the new keras model capable of doing fast Monte Carlo integration on GPU
# (the class takes the number of Monte Carlo samples, then is called on the model to accelerate)
fast_mc_model = FastMCInference(10)(keras_model)

# You can just use the keras API with the new model, such as
result = fast_mc_model.predict(.....)

# here is the result dimension
predictions = result[:, :(result.shape[1] // 2), 0]  # mean prediction
mc_dropout_uncertainty = result[:, :(result.shape[1] // 2), 1] * (self.labels_std ** 2)  # model uncertainty
predictions_var = np.exp(result[:, (result.shape[1] // 2):, 0]) * (self.labels_std ** 2)  # predictive uncertainty

Gradient Stopping Layer

class astroNN.nn.layers.StopGrad(*args, **kwargs)[source]

Stop gradient backpropagation via this layer during training; by default it acts as an identity layer during testing.

Parameters

always_on (bool) – Default False, meaning gradient stopping is on during training and off during testing. Set True to enable it in every situation.

Returns

A layer

Return type

object

History

2018-May-23 - Written - Henry Leung (University of Toronto)

call(inputs, training=None)[source]
Note

Equivalent to __call__()

Parameters

inputs (tf.Tensor) – Tensor to be applied

Returns

Tensor after applying the layer which is just the original tensor

Return type

tf.Tensor

get_config()[source]
Returns

Dictionary of configuration

Return type

dict

It uses tf.stop_gradient and acts as a Keras layer.

StopGrad can be imported by

from astroNN.nn.layers import StopGrad

It can be used with keras or tensorflow.keras; you just have to import the function from astroNN

def keras_model():
    # Define your keras model here, assuming you are using the functional API
    input = Input(.....)
    # some layers ...
    stopped_grad_layer = StopGrad()(...)
    # some layers ...
    return model

For example, if you have a model with multiple branches and you only want errors to backpropagate to one branch but not the other,

from astroNN.nn.layers import StopGrad
# we use zeros loss just to demonstrate StopGrad works and no error backprop from StopGrad layer
from astroNN.nn.losses import zeros_loss
import numpy as np
from astroNN.shared.nn_tools import cpu_fallback
from tensorflow import keras

cpu_fallback()  # force tf to use CPU

Input = keras.layers.Input
Dense = keras.layers.Dense
concatenate = keras.layers.concatenate
Model = keras.models.Model

# Data preparation
random_xdata = np.random.normal(0, 1, (100, 7514))
random_ydata = np.random.normal(0, 1, (100, 25))
input2 = Input(shape=[7514])
dense1 = Dense(100, name='normaldense')(input2)
dense2 = Dense(25, name='wanted_dense')(input2)
dense2_stopped = StopGrad(name='stopgrad', always_on=True)(dense2)
output2 = Dense(25, name='wanted_dense2')(concatenate([dense1, dense2_stopped]))
model2 = Model(inputs=input2, outputs=[output2, dense2])
model2.compile(optimizer=keras.optimizers.SGD(lr=0.1),
               loss={'wanted_dense2': 'mse', 'wanted_dense': zeros_loss})
weight_b4_train = model2.get_layer(name='wanted_dense').get_weights()[0]
weight_b4_train2 = model2.get_layer(name='normaldense').get_weights()[0]
model2.fit(random_xdata, [random_ydata, random_ydata])
weight_a4_train = model2.get_layer(name='wanted_dense').get_weights()[0]
weight_a4_train2 = model2.get_layer(name='normaldense').get_weights()[0]

print(np.all(weight_b4_train == weight_a4_train))
>>> True  # meaning all the elements from Dense with StopGrad layer are equal due to no gradient update
print(np.all(weight_b4_train2 == weight_a4_train2))
>>> False  # meaning not all the elements from normal Dense layer are equal due to gradient update

Boolean Masking Layer

class astroNN.nn.layers.BoolMask(*args, **kwargs)[source]

Boolean Masking layer; please note it is best to flatten the input before using BoolMask

Parameters

mask (np.ndarray) – numpy boolean array as a mask for incoming tensor

Returns

A layer

Return type

object

History

2018-May-28 - Written - Henry Leung (University of Toronto)

call(inputs, training=None)[source]
Note

Equivalent to __call__()

Parameters

inputs (tf.Tensor) – Tensor to be applied

Returns

Tensor after applying the layer which is just the masked tensor

Return type

tf.Tensor

get_config()[source]
Returns

Dictionary of configuration

Return type

dict

BoolMask takes a numpy boolean array at layer initialization and masks the input tensor.

BoolMask can be imported by

from astroNN.nn.layers import BoolMask

It can be used with keras or tensorflow.keras; you just have to import the function from astroNN

def keras_model():
    # Define your keras model here, assuming you are using the functional API
    input = Input(.....)
    # some layers ...
    masked_layer = BoolMask(mask=....)(...)
    # some layers ...
    return model
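A slightly more concrete sketch (the sizes and mask values here are illustrative assumptions) of masking a flattened input down to a subset of its features:

import numpy as np
from tensorflow import keras
from astroNN.nn.layers import BoolMask

mask = np.zeros(7514, dtype=bool)
mask[100:200] = True                   # keep only features 100..199

inputs = keras.layers.Input(shape=(7514,))
masked = BoolMask(mask=mask)(inputs)   # masked tensor with 100 features
outputs = keras.layers.Dense(25)(masked)
model = keras.models.Model(inputs=inputs, outputs=outputs)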

TensorInput Layer

class astroNN.nn.layers.TensorInput(*args, **kwargs)[source]

TensorInput layer

Parameters

tensor (tf.Tensor) – the tensor to return, usually a tensor generating random numbers

Returns

A layer

Return type

object

History

2020-May-3 - Written - Henry Leung (University of Toronto)

call(inputs, training=None)[source]
Note

Equivalent to __call__()

Parameters

inputs (tf.Tensor) – Tensor to be applied

Returns

Tensor after applying the layer, which is just the provided tensor

Return type

tf.Tensor

get_config()[source]
Returns

Dictionary of configuration

Return type

dict

TensorInput takes a tensorflow tensor at layer initialization and returns the tensor.

TensorInput can be imported by

from astroNN.nn.layers import TensorInput

For example, if you want to generate a random tensor as input to other layers and do not want to register it as a model input, you can

from astroNN.nn.layers import TensorInput
import numpy as np
from astroNN.shared.nn_tools import cpu_fallback
import tensorflow as tf
from tensorflow import keras

cpu_fallback()  # force tf to use CPU

Input = keras.layers.Input
Dense = keras.layers.Dense
concatenate = keras.layers.concatenate
Model = keras.models.Model

# Data preparation
random_xdata = np.random.normal(0, 1, (100, 7514))
random_ydata = np.random.normal(0, 1, (100, 25))
input1 = Input(shape=[7514])
# input2 is a random normal tensor that is not registered as a model input
input2 = TensorInput(tensor=tf.random.normal(mean=0., stddev=1., shape=tf.shape(input1)))([])
output = Dense(25, name='dense')(concatenate([input1, input2]))
model = Model(inputs=input1, outputs=output)
model.compile(optimizer=keras.optimizers.SGD(lr=0.1),
              loss='mse')
print(model.input_names)
>>> ['input_1']  # only input_1, as input_2 is not really an input that we require the user to provide