
Training with Secure Aggregation

Secure aggregation is one of the security features provided by Fed-BioMed. Please refer to the secure aggregation user guide for more information on the methods and techniques used. This tutorial gives an example of using secure aggregation in Fed-BioMed.

Setting up the nodes

In this tutorial, the nodes and the researcher are launched locally from a single clone of Fed-BioMed. However, it is also possible to execute the notebook cells with remotely configured components by following the instructions below.

Configuring/Installing Elements for Secure Aggregation

Fed-BioMed provides two secure aggregation schemes: LOM and Joye-Libert. While LOM requires no configuration or extra installation, Joye-Libert depends on third-party modules and on certificate configuration on top of a basic Fed-BioMed installation.

You can follow the detailed instructions for configuring a Fed-BioMed instance for secure aggregation, or apply the shortened instructions below for a basic Joye-Libert setup.

1. Create node and researcher instances

1.1 Create nodes

At least two nodes are required for an experiment that uses secure aggregation. Please execute the following commands to create two nodes.

Node 1:

fedbiomed component create -c node --path my-node

Node 2:

fedbiomed component create -c node --path my-second-node
1.2 Create researcher

Please run the command below to create the researcher component.

fedbiomed component create --component researcher

Please follow these instructions if you activate Joye-Libert secure aggregation: Joye-Libert configuration requires knowing the participating Fed-BioMed components in advance. Therefore, every component that will participate in the training should be created before any of them is started. Afterwards, each participating component can be registered with every other component.

2. Add dataset and start nodes

The next step is to add/deploy the MNIST dataset on the nodes and start them. For this step, you can follow the instructions for adding a dataset to nodes to add the MNIST dataset. After the datasets are deployed, you can start the nodes and the researcher.

For the MNIST dataset, the commands are:

fedbiomed node --path my-node dataset add --mnist
fedbiomed node --path my-second-node dataset add --mnist
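
Once the datasets are deployed, you can start each node and then the researcher environment used to run this notebook. Assuming the same CLI layout as the commands above (exact subcommands may vary between Fed-BioMed versions), this might look like:

fedbiomed node --path my-node start
fedbiomed node --path my-second-node start
fedbiomed researcher start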

Define an experiment model and parameters

Declare a torch training plan class, MyTrainingPlan, to send to the nodes for training.

import torch
import torch.nn as nn
import torch.nn.functional as F  # used in Net.forward below
from fedbiomed.common.training_plans import TorchTrainingPlan
from fedbiomed.common.data import DataManager
from torchvision import datasets, transforms


# Here we define the model to be used. 
# You can use any class name (here 'Net')
class MyTrainingPlan(TorchTrainingPlan):
    
    # Defines and returns the model
    def init_model(self, model_args):
        return self.Net(model_args = model_args)
    
    # Defines and returns the optimizer
    def init_optimizer(self, optimizer_args):
        return torch.optim.Adam(self.model().parameters(), lr = optimizer_args["lr"])
    
    # Declares and returns dependencies
    def init_dependencies(self):
        deps = ["from torchvision import datasets, transforms"]
        return deps
    
    class Net(nn.Module):
        def __init__(self, model_args):
            super().__init__()
            self.conv1 = nn.Conv2d(1, 32, 3, 1)
            self.conv2 = nn.Conv2d(32, 64, 3, 1)
            self.dropout1 = nn.Dropout(0.25)
            self.dropout2 = nn.Dropout(0.5)
            self.fc1 = nn.Linear(9216, 128)
            self.fc2 = nn.Linear(128, 10)

        def forward(self, x):
            x = self.conv1(x)
            x = F.relu(x)
            x = self.conv2(x)
            x = F.relu(x)
            x = F.max_pool2d(x, 2)
            x = self.dropout1(x)
            x = torch.flatten(x, 1)
            x = self.fc1(x)
            x = F.relu(x)
            x = self.dropout2(x)
            x = self.fc2(x)


            output = F.log_softmax(x, dim=1)
            return output

    def training_data(self):
        # Custom torch Dataloader for MNIST data
        transform = transforms.Compose([transforms.ToTensor(),
                                        transforms.Normalize((0.1307,), (0.3081,))])
        dataset1 = datasets.MNIST(self.dataset_path, train=True, download=False, transform=transform)
        train_kwargs = { 'shuffle': True}
        return DataManager(dataset=dataset1, **train_kwargs)
    
    def training_step(self, data, target):
        output = self.model().forward(data)
        loss   = torch.nn.functional.nll_loss(output, target)
        return loss
model_args = {}

training_args = {
    'loader_args': { 'batch_size': 48, }, 
    'optimizer_args': {
        "lr" : 1e-3
    },
    'epochs': 1, 
    'dry_run': False,  
    'batch_maxnum': 100  # Fast pass for development: only use (batch_maxnum * batch_size) samples
}

Declare and run the experiment

Fed-BioMed uses Low-Overhead Masking (LOM) as the default secure aggregation scheme. If you followed the configuration steps to use Joye-Libert instead of LOM, you can switch schemes by declaring the scheme as SecureAggregationSchemes.JOYE_LIBERT:

from fedbiomed.researcher.secagg import SecureAggregation, SecureAggregationSchemes
exp = Experiment(
        ...
        secagg=SecureAggregation(scheme=SecureAggregationSchemes.JOYE_LIBERT),
        ...
)

The example below will run LOM by default.

from fedbiomed.researcher.federated_workflows import Experiment
from fedbiomed.researcher.aggregators.fedavg import FedAverage
from fedbiomed.researcher.secagg import SecureAggregation, SecureAggregationSchemes
tags = ['#MNIST', '#dataset']
rounds = 2

exp = Experiment(tags=tags,
                 model_args=model_args,
                 training_plan_class=MyTrainingPlan,
                 training_args=training_args,
                 round_limit=rounds,
                 aggregator=FedAverage(),
                 node_selection_strategy=None,
                 secagg=SecureAggregation(), # or secagg=True
                 # secagg=SecureAggregation(scheme=SecureAggregationSchemes.JOYE_LIBERT),
                 save_breakpoints=True
                )

Access secure aggregation context

Please use the secagg attribute to verify that secure aggregation is set as active.

print("Is using secagg: ", exp.secagg.active)
print("Is using secagg: ", exp.secagg.active)

It is also possible to check the secure aggregation context through the secagg attribute. Since the secure aggregation context negotiation occurs during the experiment run, the context and its id should be None at this point.

print("Active: ", exp.secagg.active)
if exp.secagg.scheme == SecureAggregationSchemes.JOYE_LIBERT:
    print("Secagg Servkey ", exp.secagg.servkey)
else:
    print("Secagg context", exp.secagg.dh)
print("Active: ", exp.secagg.active) if exp.secagg.scheme == SecureAggregationSchemes.JOYE_LIBERT: print("Secagg Servkey ", exp.secagg.servkey) else: print("Secagg context", exp.secagg.dh)

Run the experiment using secure aggregation. The secure aggregation context will be created before the first training round, and it will be updated before each round whenever nodes are added to or removed from the experiment.

exp.run(increase=True)

Save the trained model to a file.

exp.training_plan().export_model('./trained_model')

Display the context after running one round of training.

Context types

In the Joye-Libert scheme, the context refers to the keys that will be used for aggregation. However, in LOM, there is no need for an aggregation key since the sum of masked inputs directly results in the aggregation of the inputs. Therefore, the context in LOM reflects the setup status of each participating node, ensuring that they have successfully created their keying material.

print("Active: ", exp.secagg.active)
if exp.secagg.scheme == SecureAggregationSchemes.JOYE_LIBERT:
    print("Secagg Servkey context: ", exp.secagg.servkey.context)
else:
    print("Secagg context", exp.secagg.dh.context)
print("Active: ", exp.secagg.active) if exp.secagg.scheme == SecureAggregationSchemes.JOYE_LIBERT: print("Secagg Servkey context: ", exp.secagg.servkey.context) else: print("Secagg context", exp.secagg.dh.context)

Change in experiment triggers re-creation of secure aggregation context

Changes such as adding a new node to the experiment will trigger an automatic secure aggregation re-setup for the next round.

# sends new dataset search request

exp.set_training_data(None, True)
exp.run_once(increase=True)

Changing arguments of secure aggregation

Setting the secagg argument to True in Experiment creates a default SecureAggregation instance. It is also possible to create a SecureAggregation instance yourself and pass it as an argument. The following arguments can be set on SecureAggregation:

  • active: True if the round will use secure aggregation. Default is True.
  • clipping_range: Clipping range used for the quantization of model parameters, meaning model weights will be bounded within [-clipping_range, clipping_range]. The default clipping range is 3, but some models can have weights greater than 3. If the clipping range is exceeded during encryption on the nodes, Experiment logs a warning message. In such cases, you can provide a higher clipping range through the clipping_range argument (see the illustrative sketch below).
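
To make the effect of clipping_range concrete, below is a minimal, illustrative sketch of clipping followed by quantization. It is not Fed-BioMed's actual implementation; the step size and integer range are assumptions chosen for the example.

import numpy as np

# Illustrative sketch only -- not Fed-BioMed's quantization code.
# Weights are clipped to [-clipping_range, clipping_range], then mapped to
# non-negative integers, since masking-based secure aggregation operates on integers.
def quantize(weights, clipping_range=3, n_levels=2**16):
    clipped = np.clip(weights, -clipping_range, clipping_range)
    step = (2 * clipping_range) / (n_levels - 1)
    return np.round((clipped + clipping_range) / step).astype(np.uint32)

weights = np.array([-0.5, 2.9, 7.0])           # 7.0 exceeds the default range of 3
print(quantize(weights))                       # 7.0 is silently clipped to 3.0
print(quantize(weights, clipping_range=100))   # a larger range avoids clipping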
from fedbiomed.researcher.secagg import SecureAggregation
secagg = SecureAggregation(
    active=True,
    clipping_range=100,
    # scheme=SecureAggregationSchemes.JOYE_LIBERT,  # if the Joye-Libert scheme has been used since the beginning of the tutorial
)
exp.set_secagg(secagg=secagg)
exp.run_once(increase=True)

Load experiment from a breakpoint

Once a breakpoint is loaded, if the secure aggregation context already exists, no new context setup will be performed.

loaded_exp = Experiment.load_breakpoint()
loaded_exp.info()
print("Active: ", exp.secagg.active)
if exp.secagg.scheme == SecureAggregationSchemes.JOYE_LIBERT:
    print("Secagg Servkey context: ", exp.secagg.servkey.context)
else:
    print("Secagg context", exp.secagg.dh.context)
print("Active: ", exp.secagg.active) if exp.secagg.scheme == SecureAggregationSchemes.JOYE_LIBERT: print("Secagg Servkey context: ", exp.secagg.servkey.context) else: print("Secagg context", exp.secagg.dh.context)
loaded_exp.run_once(increase=True)