Intro

A combination of neural network layers forms a Functional.

Mathematically, a functional is a general mapping from an input set \(X\) onto some output set \(Y\). Once the parameters of this transformation are found, the mapping is called a function.

Functionals are needed to form SciModels.

A Functional is a class used to form complex architectures (mappings) from the inputs (Variables) to the outputs.

from sciann import Variable, Functional

x = Variable('x')
y = Variable('y')

Fxy = Functional('Fxy', [x, y], 
                 hidden_layers=[10, 20, 10],
                 activation='tanh')

Functionals can be plotted when a SciModel is formed. A minimum of one Constraint is needed to form the SciModel:

from sciann.constraints import Data
from sciann import SciModel

model = SciModel([x, y], Data(Fxy),
                 plot_to_file='output.png')


MLPFunctional

sciann.functionals.mlp_functional.MLPFunctional(inputs, outputs, layers)

Configures the Functional object (neural network). A short usage sketch follows below.

Arguments

  • fields: String or Field. [Sub-]network outputs. If a String is passed, the associated Field is created internally; a Field or another Functional can also be passed.
  • variables: Variable. [Sub-]network inputs. Can be of type Variable or another Functional object.
  • hidden_layers: A list indicating the number of neurons in each hidden layer, e.g. [10, 100, 20] gives three hidden layers with 10, 100, and 20 neurons, respectively.
  • activation: defaulted to "tanh". Activation function for the hidden layers. Last layer will have a linear output.
  • output_activation: defaulted to "linear". Activation function to be applied to the network output.
  • res_net: (True, False). If True, constructs a ResNet architecture. Defaulted to False.
  • kernel_initializer: Initializer of the kernel, from keras.initializers.
  • bias_initializer: Initializer of the bias, from keras.initializers.
  • kernel_regularizer: Regularizer for the kernel. To set l1 and l2 to custom values, pass [l1, l2] or {'l1':l1, 'l2':l2}.
  • bias_regularizer: Regularizer for the bias. To set l1 and l2 to custom values, pass [l1, l2] or {'l1':l1, 'l2':l2}.
  • dtype: data-type of the network parameters, can be ('float16', 'float32', 'float64'). Note: Only network inputs should be set.
  • trainable: Boolean. False if network is not trainable, True otherwise. Default value is True.

Raises

  • ValueError:
  • TypeError:
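
As a usage sketch of the arguments above (variable names and layer sizes are illustrative; only keywords documented in this list are used):

from sciann import Variable, Functional

x = Variable('x')
t = Variable('t')

# Functional with three hidden layers, tanh activations, and a ResNet-style
# architecture; the output layer stays linear (output_activation default).
u = Functional('u', [x, t],
               hidden_layers=[20, 20, 20],
               activation='tanh',
               output_activation='linear',
               res_net=True)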


Variable

sciann.functionals.variable.Variable(name=None, units=1, tensor=None, dtype=None)

Configures the Variable object for the network's input.

Arguments

  • name: String. Required, since derivatives work only with layer names.
  • units: Int. Number of features of the input variable.
  • tensor: TensorFlow tensor. An existing tensor can be passed as the input.
  • dtype: data-type of the network parameters, can be ('float16', 'float32', 'float64').

Raises


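A short usage sketch for Variable; the names are illustrative and the multi-feature input uses the units argument:

from sciann import Variable

# Scalar input in double precision.
t = Variable('t', dtype='float64')

# Input carrying two features per sample, e.g. a 2D coordinate.
xy = Variable('xy', units=2, dtype='float64')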

Field

sciann.functionals.field.Field(name=None, units=1, activation=linear, kernel_initializer=None, bias_initializer=None, kernel_regularizer=None, bias_regularizer=None, trainable=True, use_bias=True, dtype=None)

Configures the Field class for the model outputs.

Arguments

  • name: String. Assigns a layer name for the output.
  • units: Positive integer. Dimension of the output of the network.
  • activation: Callable. A callable object for the activation.
  • kernel_initializer: Initializer for the kernel. Defaulted to a normal distribution.
  • bias_initializer: Initializer for the bias. Defaulted to a normal distribution.
  • kernel_regularizer: Regularizer for the kernel. To set l1 and l2 to custom values, pass [l1, l2] or {'l1':l1, 'l2':l2}.
  • bias_regularizer: Regularizer for the bias. To set l1 and l2 to custom values, pass [l1, l2] or {'l1':l1, 'l2':l2}.
  • trainable: Boolean. Whether the parameters of the layer are trainable.
  • use_bias: Boolean. Whether to add a bias term to the layer.
  • dtype: data-type of the network parameters, can be ('float16', 'float32', 'float64').

Raises


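A sketch of a Field passed explicitly as the output of a Functional instead of a plain string; the names are illustrative:

from sciann import Variable, Functional, Field

x = Variable('x')

# Two-component output without a bias term on the last layer.
u = Field('u', units=2, use_bias=False)

Fu = Functional(u, x, hidden_layers=[10, 10], activation='tanh')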

Parameter

sciann.functionals.parameter.Parameter(val=1.0, min_max=None, inputs=None, name=None, non_neg=None)

Parameter functional, to be used for parameter inversion. Inherits from the Dense layer.

Arguments

  • val: float. Initial value for the parameter.
  • min_max: [MIN, MAX]. A range to constrain the value of the parameter. This constraint overrides the non_neg constraint if both are given.
  • inputs: Variables. List of Variables feeding the parameter.
  • name: str. A name for the Parameter layer.
  • non_neg: boolean. True (default) if only non-negative values are expected.
  • **kwargs: Additional arguments accepted by keras.layers.Dense.
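
A sketch of a Parameter used to invert an unknown coefficient; the residual construction and the diff import follow common SciANN usage, and all names are illustrative:

from sciann import Variable, Functional, Parameter
from sciann.utils.math import diff

x = Variable('x')
u = Functional('u', x, hidden_layers=[10, 10], activation='tanh')

# Unknown, non-negative coefficient initialized at 1.0 and constrained to [0, 5].
c = Parameter(val=1.0, min_max=[0.0, 5.0], inputs=[x], name='c')

# Residual u_x - c*u, to be driven to zero through a constraint on a SciModel.
residual = diff(u, x) - c * u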

eval

eval()

Evaluates the functional object for a given input.

Arguments

(SciModel, Xs): Evaluates the functional object from the beginning of the graph defined by the SciModel. Xs should match the inputs of the SciModel.

(Xs): Evaluates the functional object from the inputs of the functional. Xs should match the inputs of the functional.

Returns

NumPy array with the dimensions of the network outputs.

Raises

  • ValueError:
  • TypeError:
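
Both call forms, sketched with the Fxy functional and the model from the intro; x_data and y_data stand for NumPy arrays of matching size:

import numpy as np

x_data = np.linspace(0.0, 1.0, 100)
y_data = np.linspace(0.0, 1.0, 100)

# Evaluate directly from the functional's own inputs.
vals = Fxy.eval([x_data, y_data])

# Evaluate through the graph of an existing SciModel; inputs must match the model's.
vals = Fxy.eval(model, [x_data, y_data])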

get_weights

get_weights(at_layer=None)

Get the weights and biases of different layers.

Arguments

  • at_layer: Get the weights of a specific layer.

Returns

List of NumPy arrays.
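
For example, with the Fxy functional from the intro (passing a layer index to at_layer is an assumption for illustration):

# Weights and biases of every layer of the functional.
all_weights = Fxy.get_weights()

# Weights of a single layer only (layer index assumed for illustration).
w1 = Fxy.get_weights(at_layer=1)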


set_weights

set_weights(weights)

Set the weights and biases of different layers.

Arguments

  • weights: A list with the same structure as the output of .get_weights.

Returns

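A round-trip sketch with .get_weights, e.g. to snapshot and later restore the state of the Fxy functional from the intro:

# Snapshot the current weights.
saved = Fxy.get_weights()

# ... train, perturb, or reinitialize the network ...

# Restore the saved state; the list must match the structure returned by get_weights.
Fxy.set_weights(saved)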

reinitialize_weights

reinitialize_weights()

Re-initialize the weights and biases of a functional object.

Arguments

Returns


count_params

count_params()

Total number of parameters of a functional.

Arguments

Returns

Total number of parameters.
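
For example, combined with reinitialize_weights from above (again using Fxy from the intro):

# Total number of parameters in the functional.
print(Fxy.count_params())

# Fresh random initialization, e.g. before repeating a training run.
Fxy.reinitialize_weights()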


set_trainable

set_trainable(val, layers=None)

Set the weights and biases of a functional object as trainable or non-trainable. Note: the SciModel should be formed after this call for the change to take effect.

Arguments

  • val: Boolean (True, False).
  • layers: List of layers to be set trainable or non-trainable. Defaulted to None.

Returns

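A sketch of freezing Fxy from the intro before forming the SciModel; the layer selection passed to layers is illustrative:

# Freeze every layer of the functional.
Fxy.set_trainable(False)

# Alternatively, set only selected layers trainable (layer list assumed for illustration).
Fxy.set_trainable(True, layers=[1, 2])

# Form the SciModel after the change so it takes effect.
model = SciModel([x, y], Data(Fxy))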

split

split()

For a Functional with multiple outputs, the outputs can be split to obtain a separate functional associated with each one.

Returns

(f1, f2, ...): Tuple of split Functional objects, one associated with each output.
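
A sketch with a two-output functional; passing a list of field names follows the fields argument described above:

from sciann import Variable, Functional

x = Variable('x')

# Functional with two outputs, 'u' and 'p'.
F = Functional(['u', 'p'], x, hidden_layers=[10, 10])

# One functional per output.
u, p = F.split()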