This example split-trains a neural network using TripleBlind Federated Learning
to recognize handwritten digits from the MNIST dataset.

First, the MNIST dataset of 60,000 images is partitioned into two independent
datasets of 30,000 images each.  Each dataset is saved into its own Package
file, a specially structured zip archive that holds the images along with
their labels for training.
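The partitioning step can be sketched as follows. This is a minimal, hedged illustration, not the TripleBlind packaging API: the `partition` helper and the stand-in image/label lists are hypothetical, and a real run would operate on actual MNIST pixel arrays before writing each half to a Package file.

```python
# Sketch: split a 60,000-sample dataset into two independent halves,
# one per organization. Names here are illustrative only.
def partition(images, labels, n_parts=2):
    """Return n_parts (images, labels) tuples of equal size."""
    size = len(images) // n_parts
    return [
        (images[i * size:(i + 1) * size], labels[i * size:(i + 1) * size])
        for i in range(n_parts)
    ]

images = list(range(60_000))       # stand-in for MNIST pixel data
labels = [i % 10 for i in images]  # stand-in for digit labels

(org1_imgs, org1_lbls), (org2_imgs, org2_lbls) = partition(images, labels)
print(len(org1_imgs), len(org2_imgs))  # 30000 30000
```

Each half would then be written to its own Package file before being placed on an Access Point.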

Each Package file is placed on an independent Access Point.  These represent
two independent organizations with similar data who wish to jointly build a
network without sharing their data.

A neural network structure is defined without a split layer.  The Federated
Learning process begins, alternating between Access Points.  When training
completes, Organization One -- who initiated the training -- owns the trained
neural network, which resides on their Access Point and is also pulled down
locally as a PyTorch .pth file.
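A model of this shape might look like the sketch below. The exact architecture is an assumption (the source does not specify layers); the point is that the network is defined as one whole model, with no split layer, and its weights can be saved locally as a standard PyTorch .pth file.

```python
import torch
from torch import nn

# Assumed architecture for illustration: a small fully connected
# classifier for 28x28 MNIST images, defined as one whole model.
model = nn.Sequential(
    nn.Flatten(),              # (N, 1, 28, 28) -> (N, 784)
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),        # one logit per digit class
)

# After training, the owner can pull the weights down locally:
torch.save(model.state_dict(), "mnist_federated.pth")
```

The saved .pth file is what the local-inference script loads later.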
The 'federated_rounds' parameter determines how many times each model trains
for the specified number of epochs.  Model weights are aggregated between
federated rounds.
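The aggregation step between rounds can be sketched as a plain weight average. This is a pure-Python stand-in under assumed names (`federated_average`, toy scalar "weights"); a real run would average PyTorch state_dict tensors via the TripleBlind service, not locally.

```python
# Sketch: between federated rounds, the per-organization weights
# are merged by element-wise averaging (federated averaging).
def federated_average(weight_sets):
    """Average a list of weight dicts key by key."""
    n = len(weight_sets)
    return {k: sum(w[k] for w in weight_sets) / n for k in weight_sets[0]}

# Toy "weights" after each org's local epochs (illustrative values).
org1 = {"fc.weight": 0.25, "fc.bias": -0.5}
org2 = {"fc.weight": 0.75, "fc.bias": 0.0}

federated_rounds = 3
for _ in range(federated_rounds):
    # ...each organization trains locally for the specified epochs...
    merged = federated_average([org1, org2])
    org1, org2 = dict(merged), dict(merged)  # both resume from the merge

print(merged)  # {'fc.weight': 0.5, 'fc.bias': -0.25}
```

Each round thus alternates local training with a merge, so both organizations start the next round from the same aggregated model.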

The final scripts demonstrate using the trained network for inference.  A
local inference uses the .pth file to categorize two images of handwritten
digits.  The FED inference performs classification securely by moving the
algorithm to Organization Two's Access Point and executing it on their data.  The
last examples use SMPC for the highest level of secure computation, protecting
both the algorithm and the data during processing.
