Intro

A combination of neural network layers forms a Functional.

Mathematically, a functional is a general mapping from an input set \(X\) to an output set \(Y\). Once the parameters of this transformation are determined, the mapping is called a function.

Functionals are needed to form SciModels.

A Functional is a class used to form complex architectures (mappings) from inputs (Variables) to outputs.

from sciann import Variable, Functional

x = Variable('x')
y = Variable('y')

Fxy = Functional('Fxy', [x, y], 
                 hidden_layers=[10, 20, 10],
                 activation='tanh')

Functionals can be plotted when a SciModel is formed. A minimum of one Constraint is needed to form the SciModel:

from sciann.constraints import Data
from sciann import SciModel

model = SciModel(x, Data(Fxy), 
                 plot_to_file='output.png')

[source]

Functional

sciann.functionals.functional.Functional(fields=None, variables=None, hidden_layers=None, activation='tanh', output_activation='linear', kernel_initializer=<keras.initializers.VarianceScaling>, bias_initializer=<keras.initializers.RandomUniform>, kernel_regularizer=None, bias_regularizer=None, dtype=None, trainable=True)

Configures the Functional object (Neural Network).

Arguments

  • fields: String or Field. [Sub-]network outputs. If of type String, the associated Fields will be created internally. It can also be of type Field or Functional.
  • variables: Variable. [Sub-]network inputs. It can be of type Variable or another Functional object.
  • hidden_layers: A list indicating the number of neurons in each hidden layer, e.g. [10, 100, 20] specifies three hidden layers with 10, 100, and 20 neurons, respectively.
  • activation: Defaulted to "tanh". Activation function for the hidden layers. The last layer will have a linear output.
  • output_activation: defaulted to "linear". Activation function to be applied to the network output.
  • kernel_initializer: Initializer of the Kernel, from k.initializers.
  • bias_initializer: Initializer of the Bias, from k.initializers.
  • kernel_regularizer: Regularizer for the kernel. By default, it uses l1=0.001 and l2=0.001 regularizations. To set l1 and l2 to custom values, pass [l1, l2] or {'l1':l1, 'l2':l2}.
  • bias_regularizer: Regularizer for the bias. By default, it uses l1=0.001 and l2=0.001 regularizations. To set l1 and l2 to custom values, pass [l1, l2] or {'l1':l1, 'l2':l2}.
  • dtype: data-type of the network parameters, can be ('float16', 'float32', 'float64'). Note: Only network inputs should be set.
  • trainable: Boolean. False if network is not trainable, True otherwise. Default value is True.

Raises

  • ValueError:
  • TypeError:

[source]

Variable

sciann.functionals.variable.Variable(name=None, units=1, tensor=None, dtype=None)

Configures the Variable object for the network's input.

Arguments

  • name: String. Required as derivatives work only with layer names.
  • units: Int. Number of features of the input variable.
  • tensor: TensorFlow Tensor. An existing tensor can be passed as the input.
  • dtype: data-type of the network parameters, can be ('float16', 'float32', 'float64').

Raises


[source]

Field

sciann.functionals.field.Field(name=None, units=1, activation=<function linear>, kernel_initializer=<keras.initializers.VarianceScaling>, bias_initializer=<keras.initializers.RandomUniform>, kernel_regularizer=None, bias_regularizer=None, trainable=True, dtype=None)

Configures the Field class for the model outputs.

Arguments

  • name: String. Assigns a layer name for the output.
  • units: Positive integer. Dimension of the output of the network.
  • activation: Callable. A callable object for the activation.
  • kernel_initializer: Initializer for the kernel. Defaulted to a normal distribution.
  • bias_initializer: Initializer for the bias. Defaulted to a normal distribution.
  • kernel_regularizer: Regularizer for the kernel. By default, it uses l1=0.001 and l2=0.001 regularizations. To set l1 and l2 to custom values, pass [l1, l2] or {'l1':l1, 'l2':l2}.
  • bias_regularizer: Regularizer for the bias. By default, it uses l1=0.001 and l2=0.001 regularizations. To set l1 and l2 to custom values, pass [l1, l2] or {'l1':l1, 'l2':l2}.
  • trainable: Boolean. False to freeze the layer parameters, True otherwise.
  • dtype: data-type of the network parameters, can be ('float16', 'float32', 'float64').

Raises


[source]

Parameter

sciann.functionals.parameter.Parameter(val=1.0, min_max=None, inputs=None, name=None, non_neg=None)

A Parameter is a functional used for parameter inversion (identification). Inherited from the Dense layer.

Arguments

  • val: float. Initial value for the parameter.
  • min_max: [MIN, MAX]. A range to constrain the value of the parameter. This constraint overwrites the non_neg constraint if both are given.
  • inputs: Variables. List of Variables feeding the Parameter layer.
  • name: str. A name for the Parameter layer.
  • non_neg: boolean. True (default) if only non-negative values are expected.
  • **kwargs: keras.layer.Dense accepted arguments.