This example uses the Santander Customer Transaction Prediction data, from
https://www.kaggle.com/c/santander-customer-transaction-prediction/data.  This
is a 200,000-record, 200-column dataset.  The dataset is stripped of the
string ID_code column, scaled, and split into two parts: training (90%, or
180,000 records) and testing (10%, or 20,000 records).
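The preprocessing described above can be sketched as follows. This is only an illustration of the drop/scale/split steps, not the example's actual script: a small synthetic frame stands in for the real Kaggle CSV, and the scaler and split parameters are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Stand-in for the Kaggle CSV layout: ID_code, target, var_0..var_199.
# 1,000 synthetic rows replace the real 200,000-record file.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame(rng.normal(size=(n, 200)),
                  columns=[f"var_{i}" for i in range(200)])
df.insert(0, "target", rng.integers(0, 2, size=n))
df.insert(0, "ID_code", [f"train_{i}" for i in range(n)])

# Drop the string ID_code column, scale the features, split 90/10.
features = df.drop(columns=["ID_code", "target"])
scaled = StandardScaler().fit_transform(features)
x_train, x_test, y_train, y_test = train_test_split(
    scaled, df["target"].to_numpy(), test_size=0.10, random_state=42)
```

With the real data, x_train would hold the 180,000 training records and x_test the 20,000 reserved for testing.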

The training set is then split into three equal parts of 60,000 records each
and saved as Python pickle files under three fictional bank names -- SAN.pkl
for organization-one, JPM.pkl for organization-two, and PNB.pkl for
organization-three.  The column named "target" becomes the "y" label, and the
remaining columns are stored under "x" in the pickled structure.
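The sharding step above might look like the sketch below. The shard sizes are scaled down here, and the output directory is a temporary one; only the three file names and the {"x", "y"} pickle structure come from the example itself.

```python
import os
import pickle
import tempfile

import numpy as np

# Stand-ins for the preprocessed training arrays (real size: 180,000 rows).
rng = np.random.default_rng(1)
x_train = rng.normal(size=(180, 200))
y_train = rng.integers(0, 2, size=180)

outdir = tempfile.mkdtemp()

# One equal shard per fictional bank.
names = ["SAN.pkl", "JPM.pkl", "PNB.pkl"]
for name, xs, ys in zip(names,
                        np.array_split(x_train, 3),
                        np.array_split(y_train, 3)):
    with open(os.path.join(outdir, name), "wb") as f:
        # "x" holds the feature columns, "y" the "target" label.
        pickle.dump({"x": xs, "y": ys}, f)
```

Each resulting file can then be copied to its organization's infrastructure.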

Each of the .pkl files is placed on an independent Access Point representing
one of the three private bank infrastructures.

The SAN organization defines a neural network and initiates split-training,
requesting the participation of JPM and PNB.  Once the other two organizations
agree to participate, the three Access Points work together to train the
network.
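The network SAN defines is not specified here, so the sketch below is only a plausible PyTorch binary classifier for the 200-feature records; the architecture, layer sizes, and class name are all assumptions, and the split-training orchestration itself is handled by the Access Points, not shown.

```python
import torch
import torch.nn as nn

class TransactionNet(nn.Module):
    """Illustrative binary classifier for 200-feature transaction records."""

    def __init__(self, n_features: int = 200):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, 16),
            nn.ReLU(),
            nn.Linear(16, 1),  # single logit; sigmoid gives a probability
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)

model = TransactionNet()
logits = model(torch.randn(4, 200))  # forward pass on a batch of 4 records
```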

At the end of training, organization-one (SAN) has the trained network as an
asset on its Access Point and can extract it for local inference.

The final two scripts illustrate running that network against the reserved
test data: first as a local PyTorch model, then by organization-two (JPM) as
a network inference through the SAN Access Point.
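The local-inference path might be sketched like this. A trivial stand-in replaces the extracted model, and the test batch is scaled down from the 20,000 reserved records; the thresholding and eval-mode pattern are standard PyTorch, not the example's literal script.

```python
import torch
import torch.nn as nn

# Stand-in for the trained network extracted from the SAN Access Point.
model = nn.Sequential(nn.Linear(200, 1))
model.eval()  # disable training-only behavior for inference

# Stand-in for the reserved test records (real size: 20,000 rows).
x_test = torch.randn(20, 200)

with torch.no_grad():
    probs = torch.sigmoid(model(x_test)).squeeze(1)  # per-record probability
preds = (probs > 0.5).long()  # threshold into 0/1 "target" predictions
```

The network-inference variant would instead send x_test to the SAN Access Point and receive the predictions back, which this sketch does not attempt to show.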
