Designing and understanding the privacy properties of our algorithm #24

@MxmUrw

Description

There are two approaches for generating the noise:

1. Sample from the continuous ("float") Gaussian distribution

2. Sample from the discrete Gaussian distribution
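A minimal sketch of the two approaches, not taken from our codebase (`sample_float_gaussian` and `sample_discrete_gaussian` are hypothetical names): approach 1 is a single call to a floating-point Gaussian sampler, while approach 2 draws an integer with probability proportional to exp(-x²/(2σ²)). The discrete sampler below naively truncates the support to a wide interval for simplicity; a production sampler (e.g. the one in Paper 2's line of work) would sample exactly without truncation.

```python
import math
import random

def sample_float_gaussian(sigma: float) -> float:
    # Approach 1: continuous ("float") Gaussian noise.
    return random.gauss(0.0, sigma)

def sample_discrete_gaussian(sigma: float, tail: int = 10) -> int:
    # Approach 2: discrete Gaussian over the integers, where integer x
    # has probability proportional to exp(-x^2 / (2 sigma^2)).
    # Naive sketch: truncate the support to [-T, T]; for tail >= 10
    # the probability mass outside this range is negligible.
    T = math.ceil(tail * sigma)
    support = list(range(-T, T + 1))
    weights = [math.exp(-(x * x) / (2.0 * sigma * sigma)) for x in support]
    return random.choices(support, weights=weights)[0]
```

The practical difference matters for secure aggregation: approach 2 produces integers natively, so the noise lives in the same discrete domain as the fixed-point encoded gradients, whereas approach 1 requires rounding the float noise afterwards.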

Papers:

  1. Deep Learning with Differential Privacy
  2. The Distributed Discrete Gaussian Mechanism for Federated Learning with Secure Aggregation

Questions:

  • Our distribution is not defined on all integers, but only on a subset, since we must prevent overflow in the finite field that encodes our fixed-point numbers. (Understand how being in a finite field instead of on the integers affects the distribution #22)
  • Paper 2 applies randomized rounding when converting the gradient vector to fixed-point representation, but this does not appear to be motivated by the privacy guarantee itself. Simply rounding toward zero should be enough for privacy: truncation can only shrink each coordinate, so the norm stays below 1 and the argument that the overall function is 1-sensitive still applies. But is this actually true?
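To make the rounding question concrete, here is a small sketch (names and the scaling factor are hypothetical, not from our code) of the norm argument for truncation: rounding toward zero satisfies |trunc(s·x)| ≤ |s·x| coordinate-wise, so the l2 norm of the integer vector is at most s·‖g‖₂. Randomized rounding, by contrast, can round a coordinate away from zero, which is presumably why Paper 2 pairs it with an inflated norm bound.

```python
import math

SCALE = 2 ** 16  # hypothetical fixed-point scaling factor

def to_fixed_point_trunc(vec):
    # Round each coordinate toward zero: |trunc(SCALE * x)| <= |SCALE * x|,
    # hence the integer vector's l2 norm is at most SCALE * ||vec||_2.
    return [math.trunc(SCALE * x) for x in vec]

g = [0.6, -0.8]  # a clipped gradient with ||g||_2 = 1
q = to_fixed_point_trunc(g)
norm = math.sqrt(sum(v * v for v in q)) / SCALE
assert norm <= 1.0  # truncation cannot increase the (rescaled) norm
```

This only shows that the sensitivity bound survives truncation; it does not address the bias that truncation introduces (every coordinate is pulled toward zero), which is the usual reason papers prefer unbiased randomized rounding.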
