Intro
SciModel is similar to Keras' Model, prepared to make scientific model creation effortless.
The inputs are Variable objects, and the outputs are Target objects.
As an example:  
from sciann import Variable, Functional, SciModel, Data
x = Variable('x')
y = Variable('y')
Fxy = Functional('Fxy', [x, y], 
                 hidden_layers=[10, 20, 10],
                 activation='tanh')
model = SciModel([x, y], Data(Fxy))
SciModel can be trained by calling model.train. 
training_history = model.train([X_data, Y_data], Fxy_data)
The training_history object records the loss for each epoch, as well as other parameters.
Check Keras' documentation for more details.
The SciModel object also provides functionality such as predict and save.
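The recorded losses can then be inspected and the trained network evaluated. A minimal sketch continuing the example above, assuming X_data, Y_data, and Fxy_data are (N,1) numpy arrays:
losses = training_history.history['loss']   # one loss value per epoch (standard Keras History)
Fxy_pred = model.predict([X_data, Y_data])  # evaluate the trained network on the training grid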
SciModel
sciann.models.model.SciModel(inputs=None, targets=None, loss_func='mse', optimizer='adam', load_weights_from=None, plot_to_file=None)
Configures the model for training.
Arguments
- inputs: Main variables (also called inputs, or independent variables) of the network, xs. They should all be of type Variable.
- targets: list of all targets (also called outputs, or dependent variables)
    to be satisfied during the training. Expected list members are:
    - Entries of type Constraint, such as Data, Tie, etc.
    - Entries of type Functional, which can be:
        - A single Functional: will be treated as a Data constraint. The object can be just a Functional or any derivative of Functionals. An example is a PDE that is supposed to be zero.
        - A tuple of (Functional, Functional): will be treated as a Constraint of type Tie.
    - If you need to impose more complex types of constraints, or to impose a constraint partially in a specific part of the region, use the Data or Tie classes from Constraint.
    See the sketch after the Raises section for an example.
- loss_func: defaulted to "mse" or "mean_squared_error". It can be a string from the supported loss functions, i.e. ("mse" or "mae"). Alternatively, you can create your own loss function and pass the function handle (check Keras for more information).
- optimizer: defaulted to the "adam" optimizer. It can be one of the Keras accepted optimizers, e.g. "adam". You can also pass the optimizer with more details:
    - optimizer = k.optimizers.RMSprop(lr=0.01, rho=0.9, epsilon=None, decay=0.0)
    - optimizer = k.optimizers.SGD(lr=0.001, momentum=0.0, decay=0.0, nesterov=False)
    - optimizer = k.optimizers.Adam(lr=0.01, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
    Check the Keras documentation for further details.
- load_weights_from: (file_path) Instantiate the state of the model from a previously saved state.
- plot_to_file: A string file name to output the network architecture.
Raises
- ValueError: inputs must be of type Variable. targets must be of type Functional, (Functional, data), or (Functional, Functional).
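As an illustration of the targets argument, the sketch below mixes a Data constraint with a bare Functional (a PDE residual driven to zero). It assumes the diff operator from sciann.utils.math:
from sciann import Variable, Functional, SciModel, Data
from sciann.utils.math import diff

x = Variable('x')
u = Functional('u', x, hidden_layers=[10, 10], activation='tanh')
u_xx = diff(u, x, order=2)            # a derived Functional, e.g. a PDE residual
model = SciModel(x, [Data(u), u_xx])  # the bare Functional is treated as a Data constraint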
train
train(x_true, y_true=None, weights=None, target_weights=None, batch_size=64, epochs=100, learning_rate=0.001, adaptive_weights=None, adaptive_sample_weights=None, log_loss_gradients=None, shuffle=True, callbacks=[], stop_lr_value=1e-08, reduce_lr_after=None, reduce_lr_min_delta=0.0, stop_after=None, stop_loss_value=1e-08, log_parameters=None, log_functionals=None, log_loss_landscape=None, save_weights=None, default_zero_weight=0.0, validation_data=None)
Performs the training on the model. See the example after the Returns section below.
Arguments
- x_true: list of Xs associated to targets of Y. Alternatively, you can pass a Sequence object (keras.utils.Sequence). Expecting a list of np.ndarray of size (N,1) each, with N as the sample size.
- y_true: list of true Ys associated to the targets defined during model setup. Expecting the same size as the list of targets defined in SciModel.
    - To impose the targets at specific Xs only, pass a tuple of (ids, y_true) for that target.
- weights: (np.ndarray) A global sample weight to be applied to samples.
    Expecting an array of shape (N,1), with N as the sample size.
    Default value is one, to consider all samples equally important.
- target_weights: (list) A weight for each target defined in y_true.
- batch_size: (Integer) or 'None'. Number of samples per gradient update. If unspecified, 'batch_size' will default to 2^6=64.
- epochs: (Integer) Number of epochs to train the model.
    Defaulted to 100.
    An epoch is an iteration over the entire x and y data provided.
- learning_rate: (Tuple/List) (epochs, lrs). Expects a list/tuple with a list of epochs and a list of learning rates. It linearly interpolates between entries. Defaulted to 0.001 with no decay. Example: learning_rate = ([0, 100, 1000], [0.001, 0.0005, 0.00001])
- shuffle: Boolean (whether to shuffle the training data). Default value is True.
- adaptive_weights: Pass a Dict with the following keys:
    - method: "GP" or "NTK".
    - freq: Frequency of updating the weights.
    - log_freq: Frequency of logging the weights and gradients in the history object.
    - beta: The beta parameter from the Gradient Pathology paper.
- log_loss_gradients: Pass a Dict with the following keys:
    - freq: Frequency of logs. Defaulted to 100.
    - path: Path to log the gradients.
- callbacks: List of keras.callbacks.Callback instances.
- reduce_lr_after: patience to reduce the learning rate, or stop after a certain number of missed epochs. Defaulted to max(10, epochs/10).
- stop_lr_value: stop the training if the learning rate goes lower than this value. Defaulted to 1e-8.
- reduce_lr_min_delta: minimum absolute change in the total loss value that is considered a successful improvement. Defaulted to 0.001. This value affects the number of failed attempts that trigger a learning-rate reduction based on reduce_lr_after.
- stop_after: stop the training after a certain number of missed epochs. Defaulted to the total number of epochs.
- stop_loss_value: The minimum value of the total loss that stops the training automatically. Defaulted to 1e-8.
- log_parameters: Dict object expecting the following keys:
    - parameters: pass a list of parameters.
    - freq: pass the frequency of outputs.
- log_functionals: Dict object expecting the following keys:
    - functionals: List of functionals to log their training history.
    - inputs: The input grid on which to evaluate each functional. Should be of the same size as the inputs to model.train.
    - path: Path to the location where the csv files will be logged.
    - freq: Frequency of logging the functionals.
- log_loss_landscape: Dict object expecting the following keys:
    - norm: defaulted to 2.
    - resolution: defaulted to 10.
    - path: Path to the location where the csv files will be logged.
    - freq: Frequency of logging the loss landscape.
- save_weights: (dict) Dict object expecting the following keys:
    - path: defaulted to the current path.
    - freq: frequency of calling the CheckPoint callback.
    - best: If True, only saves the best loss; otherwise saves all weights every freq epochs.
- save_weights_to: (file_path) Path used to save the state of the model at the end of the training.
- save_weights_freq: (Integer) Save weights every N epochs. Defaulted to 0.
- default_zero_weight: a small number for zero sample-weight.
Returns
A Keras 'History' object after performing fitting.
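A typical call, continuing the intro example. A sketch only; the epoch count, learning-rate schedule, and adaptive-weight settings are illustrative, not recommendations:
history = model.train(
    [X_data, Y_data],
    Fxy_data,
    batch_size=64,
    epochs=1000,
    learning_rate=([0, 500, 1000], [0.001, 0.0005, 0.00001]),  # linearly interpolated schedule
    adaptive_weights={'method': 'GP', 'freq': 100},            # Gradient Pathology weighting
    reduce_lr_after=20,
    stop_loss_value=1e-8,
)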
predict
predict(xs, batch_size=None, verbose=0, steps=None)
Predict output from network.
Arguments
- xs: list of Xs associated to the model. Expecting a list of np.ndarray of size (N,1) each, with N as the sample size.
- batch_size: defaulted to None. Check Keras documentation for more information.
- verbose: defaulted to 0. Check Keras documentation for more information.
- steps: defaulted to None. Check Keras documentation for more information.
Returns
List of numpy arrays of the size of the network outputs.
Raises
ValueError if the number of xs is different from the number of inputs.
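For instance, the model from the intro can be evaluated on a fresh grid (a sketch; the test points are illustrative):
import numpy as np

x_test = np.linspace(0.0, 1.0, 101).reshape(-1, 1)  # shape (N, 1), matching Variable 'x'
y_test = np.linspace(0.0, 1.0, 101).reshape(-1, 1)  # shape (N, 1), matching Variable 'y'
Fxy_pred = model.predict([x_test, y_test])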
loss_functions
sciann.models.model.loss_functions(method='mse')
loss_functions returns the callable object to evaluate the loss.
Arguments
- method: String.
- "mse" for Mean Squared Erroror
- "mae" for Mean Absolute Erroror
- "se" for Squared Erroror
- "ae" for Absolute Erroror
- "sse" for Squared Sum Erroror
- "ase" for Absolute Sum Erroror
- "rse" for Reduce Sum Error.
Returns
Callable function that gets (y_true, y_pred) as the input and returns the loss value as the output.
Raises
ValueError if a string other than the supported loss functions listed above is passed.
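As noted under loss_func in SciModel, a custom handle with the same (y_true, y_pred) contract can be passed instead of a string. A sketch, assuming a TensorFlow/Keras backend (my_mse is a hypothetical name):
from tensorflow.keras import backend as K

def my_mse(y_true, y_pred):
    # Same contract as the built-in losses: (y_true, y_pred) -> scalar loss.
    return K.mean(K.square(y_pred - y_true))

model = SciModel([x, y], Data(Fxy), loss_func=my_mse)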