we know the privacy guarantees for one training step; we need to infer those of the whole training procedure. we could:
- adjust the noise to the number of steps (if we are willing to fix the number of steps in advance instead of training until some target accuracy is reached)
- use a privacy accountant, as done here (and, i think, everywhere else)
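the accountant idea can be sketched in a few lines. this is a minimal, self-contained example (not the accountant used here), assuming each step is a plain Gaussian mechanism with sensitivity 1 and noise scale `sigma`: its Rényi DP at order `a` is `a / (2 * sigma**2)`, RDP composes additively over steps, and the total converts to (ε, δ)-DP via `eps = rdp + log(1/delta) / (a - 1)`, minimised over orders. real accountants (e.g. Opacus's `RDPAccountant`) additionally handle subsampling amplification, which this sketch ignores.

```python
import math


def epsilon_for_steps(sigma, steps, delta, orders=None):
    """Minimal RDP accountant for a non-subsampled Gaussian mechanism.

    One step at Renyi order `a` costs a / (2 * sigma**2); RDP adds up
    over `steps` steps; the (eps, delta) guarantee is the minimum over
    orders of rdp + log(1/delta) / (a - 1).
    """
    if orders is None:
        # a grid of Renyi orders; finer near 1 where the curve is steep
        orders = [1 + x / 10 for x in range(1, 100)] + list(range(11, 64))
    best = float("inf")
    for a in orders:
        rdp = steps * a / (2 * sigma ** 2)  # additive composition
        eps = rdp + math.log(1 / delta) / (a - 1)  # RDP -> (eps, delta)-DP
        best = min(best, eps)
    return best
```

note how epsilon grows with the number of steps: the accountant lets us train for as long as we like and read off the privacy cost afterwards, instead of committing to a step budget up front.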