This example split-trains a neural network to recognize images of numbers on
the MNIST dataset of handwritten digits.

First, the MNIST dataset of 60,000 images is partitioned into two independent
datasets of 30,000 images each.  Each dataset is saved as a Package file, a
specially structured zip archive that holds the images along with their labels
for training.
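The partition-and-package step can be sketched as follows.  This is a minimal illustration using a plain zip archive via the standard library; the actual Package file layout, and the `partition_and_package` helper name, are assumptions for illustration only.

```python
import json
import zipfile

def partition_and_package(images, labels, out_prefix):
    """Split a labeled dataset in half and save each half as a zip
    archive pairing images with their labels.  This stands in for the
    Package file format; the real internal layout is assumed."""
    mid = len(images) // 2
    halves = [(images[:mid], labels[:mid]), (images[mid:], labels[mid:])]
    paths = []
    for idx, (img_half, lbl_half) in enumerate(halves, start=1):
        path = f"{out_prefix}_{idx}.zip"
        with zipfile.ZipFile(path, "w") as zf:
            # store each image as raw bytes alongside a labels manifest
            for i, img in enumerate(img_half):
                zf.writestr(f"images/{i}.raw", bytes(img))
            zf.writestr("labels.json", json.dumps(list(lbl_half)))
        paths.append(path)
    return paths
```

Each resulting archive is then self-contained, so it can be placed on its organization's Access Point independently of the other half.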

Each Package file is placed on an independent Access Point.  These represent
two independent organizations with similar data who wish to jointly build a
network without sharing their data.

A neural network architecture is defined, including a split point.  The split
training process then begins, alternating between the Access Points.  At the
end of the process, Organization One, which initiated the training, owns the
trained neural network, which resides on their Access Point.  The trained
network is also pulled down locally as a PyTorch .pth file.
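The split point divides the model into a data-side half and a coordinator-side half.  A minimal PyTorch sketch of the idea, where the layer sizes and the names `bottom`, `top`, and `split_forward` are illustrative assumptions rather than this example's actual architecture:

```python
import torch
import torch.nn as nn

# Data-side half: runs on each organization's Access Point, next to
# the raw images, up to the split point.  (Sizes are illustrative.)
bottom = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
)

# Coordinator-side half: runs where training is initiated,
# after the split point.
top = nn.Sequential(
    nn.Linear(128, 10),
)

def split_forward(x):
    """Forward pass across the split: only the 128-dim activations,
    never the raw pixels, cross the boundary between the halves."""
    smashed = bottom(x)   # computed where the data lives
    return top(smashed)   # computed by the training initiator

logits = split_forward(torch.zeros(1, 1, 28, 28))
```

During training, gradients flow back across the same boundary in the opposite direction, which is what lets the two halves learn jointly without the raw data ever leaving its Access Point.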

The Decorrelation Loss function is also demonstrated here.  This loss function
limits an attacker's ability to reconstruct training data from a final or
partially trained model, allowing collaboration even between mutually
untrusting parties.  Without such a protection, a reconstruction attack can
reverse engineer the training data, as illustrated in
https://arxiv.org/pdf/1904.01067.pdf.  Similar attacks can reconstruct data
from even complex models such as DeepFace (see
https://arxiv.org/pdf/1703.00832.pdf).
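One way such a penalty can work is to discourage statistical dependence between the raw inputs and the activations that cross the split.  The sketch below penalizes the linear cross-covariance between the two; the actual Decorrelation Loss used here may be defined differently, so treat this as an assumed stand-in that conveys the idea:

```python
import torch

def decorrelation_penalty(inputs, activations):
    """Hedged sketch of a decorrelation-style penalty: the larger the
    linear correlation between raw inputs and split-layer activations,
    the larger the loss, pushing the model to send activations that
    reveal less about the underlying data."""
    x = inputs.flatten(1)                    # (batch, input_features)
    z = activations.flatten(1)               # (batch, activation_features)
    x = x - x.mean(dim=0, keepdim=True)      # center each feature
    z = z - z.mean(dim=0, keepdim=True)
    # cross-covariance between input features and activation features
    cov = x.t() @ z / (x.shape[0] - 1)
    return (cov ** 2).mean()
```

In use, a term like this would be added to the ordinary classification loss with a weighting coefficient, trading a small amount of accuracy for resistance to reconstruction.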

The final scripts demonstrate using the trained network for inference.  A
local inference uses the .pth file to classify two images of handwritten
digits.  The FED inference performs classification securely by moving the
algorithm to one organization's Access Point to execute on their data.  The
last examples use secure multi-party computation (SMPC) for the highest level
of secure computation, protecting both the algorithm and the data during
processing.
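The local-inference path can be sketched as follows.  The helper name `classify`, the file name, and the model construction are assumptions for illustration; the real scripts load the architecture actually trained above:

```python
import torch

def classify(model, image):
    """Local inference: return the predicted digit (0-9) for one
    28x28 image tensor using an already-loaded model."""
    model.eval()                              # disable dropout/batchnorm updates
    with torch.no_grad():                     # no gradients needed for inference
        logits = model(image.unsqueeze(0))    # add a batch dimension
    return int(logits.argmax(dim=1))

# Typical usage with the pulled-down weights (path and architecture
# are illustrative):
#   model = MyDigitNet()
#   model.load_state_dict(torch.load("trained_network.pth"))
#   digit = classify(model, image_tensor)
```

The FED and SMPC paths use the same trained weights, but differ in where the computation runs and what each party can observe while it runs.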
