This example split-trains a neural network to generate text given a sample
document.

First, the sample document is processed into training and target formats
suitable for an LSTM network. The input will be a one-hot encoded NumPy array
with dimensions [batch size, sequence length, number of unique characters in
the text], and the target a NumPy array with dimensions
[batch size * sequence length]. The sequence length in this example is the
length of the shortest paragraph. After these NumPy arrays are created, the
data is placed into the TripleBlind Package format.
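The preprocessing described above can be sketched roughly as follows. This is an illustrative stand-in, not the example's actual script; the function name `make_arrays` and the sliding-window scheme are assumptions.

```python
import numpy as np

def make_arrays(text, seq_len):
    """One-hot encode a text into LSTM-ready input and target arrays."""
    chars = sorted(set(text))                 # the unique characters form the vocabulary
    idx = {c: i for i, c in enumerate(chars)}
    vocab = len(chars)

    xs, ys = [], []
    for start in range(len(text) - seq_len):
        window = text[start:start + seq_len]
        target = text[start + 1:start + seq_len + 1]  # same window shifted by one character
        xs.append([idx[c] for c in window])
        ys.append([idx[c] for c in target])

    batch = len(xs)
    # Input: [batch size, sequence length, unique characters], one-hot encoded.
    x = np.zeros((batch, seq_len, vocab), dtype=np.float32)
    for b, seq in enumerate(xs):
        x[b, np.arange(seq_len), seq] = 1.0
    # Target: flattened to [batch size * sequence length].
    y = np.array(ys).reshape(-1)
    return x, y
```

For example, `make_arrays("hello world", 4)` yields an input of shape (7, 4, 8) (seven windows, length 4, eight unique characters) and a flat target of shape (28,).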

The Package file is then uploaded to Organization 2's Access Point, and a new
asset is created on the TripleBlind Router for use in later computations.

A neural network structure is defined, including a split point. The split
training process then runs between the creator of the model and the owner of
the Package. At the end of the process, Organization 1 -- which initiated the
training -- owns the trained neural network, which resides on its Access
Point.
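The split-point idea can be illustrated with a minimal sketch, assuming a toy two-layer network: the layers before the split run where the data lives, and only the intermediate activations cross to the model creator's side. This is a conceptual illustration, not the TripleBlind SDK.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layers before the split point (run on the data owner's Access Point).
w1 = rng.normal(size=(16, 8))
def bottom_half(x):
    return np.maximum(x @ w1, 0.0)        # ReLU activations cross the split

# Layers after the split point (run by the model creator).
w2 = rng.normal(size=(8, 4))
def top_half(h):
    logits = h @ w2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)   # softmax over output classes

x = rng.normal(size=(2, 16))              # raw features never leave the owner
probs = top_half(bottom_half(x))          # only activations were exchanged
```

The design point is that neither side ever holds both the raw data and the full set of weights; during training, gradients flow back across the same split in the opposite direction.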

The final scripts demonstrate using the trained network for inference. The
input data is preprocessed into the same format used during training (see the
dimensions above). Both inference jobs run in a loop until the total number of
characters specified by the user is reached. For example, if max_length is set
to 10 and the starting sequence is 'arm', the scripts will run until
'arm holdin' (10 characters) is produced.
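The generation loop described above can be sketched as follows, with `next_char` standing in for the real model call (the function names here are illustrative assumptions):

```python
def generate(seed, max_length, next_char):
    """Extend seed one character at a time until max_length is reached."""
    text = seed
    while len(text) < max_length:
        text += next_char(text)   # the model predicts from the running context
    return text

# Toy predictor that replays a fixed string, for illustration only.
demo = generate("arm", 10, lambda t: " holding!"[len(t) - 3])
# demo == "arm holdin" (exactly max_length characters)
```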

The FED inference performs classification securely by moving the algorithm to
one organization's Access Point and executing it there on the new data
uploaded with the job.

The last example uses SMPC for the highest level of secure computation,
protecting both the algorithm and the data during processing.
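To give a flavor of how SMPC protects values during processing, here is a toy additive secret-sharing sketch: each secret is split into random shares, no single share reveals anything, yet sums of shares reconstruct the sum of secrets. This is a textbook illustration, not TripleBlind's actual protocol.

```python
import random

P = 2_147_483_647  # a Mersenne prime used as the modulus

def share(secret):
    """Split a secret into two additive shares mod P."""
    r = random.randrange(P)
    return r, (secret - r) % P        # individually random, jointly the secret

def reconstruct(a, b):
    return (a + b) % P

a1, a2 = share(12)                    # party A's secret, split in two
b1, b2 = share(30)                    # party B's secret, split in two
# Each holder adds the shares it has; only the combined result is revealed.
total = reconstruct((a1 + b1) % P, (a2 + b2) % P)   # 42
```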
