Object detection adversary in LightningModule #115
Conversation
mart/attack/enforcer.py
Outdated
```python
def __init__(self, **modality_constraints: dict[str, dict[str, Constraint]]) -> None:
    self.modality_constraints = modality_constraints
    # Prepare for modality_dispatch().
    self.modality_func = {
```
Maybe self._enforce_modality? What's the benefit of creating a partial here?
It does feel like the partial is unnecessary, given that you can pass self._enforce and have modality_dispatch pass modality.
Good idea. I have revised modality_dispatch() to support a modality_func that accepts modality as a keyword argument.
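For readers following along, a minimal sketch of what the revised dispatcher could look like, using the signatures visible in the diffs below; this is an illustration, not the exact MART implementation:

```python
import torch

def modality_dispatch(modality_func, data, *, input, target, modality="default"):
    # modality_func accepts modality as a keyword argument, so no partial is needed.
    if isinstance(input, torch.Tensor):
        # Leaf case: apply the function for the current modality.
        return modality_func(data, input=input, target=target, modality=modality)
    if isinstance(input, dict):
        # Dict keys are modality names; recurse with each key as the modality.
        return {
            mod: modality_dispatch(modality_func, data[mod], input=inp, target=target, modality=mod)
            for mod, inp in input.items()
        }
    if isinstance(input, (list, tuple)):
        # Batch case: dispatch element-wise over paired inputs and targets.
        return [
            modality_dispatch(modality_func, d, input=inp, target=tgt, modality=modality)
            for d, inp, tgt in zip(data, input, target)
        ]
    raise ValueError(f"Unsupported data type of input: {type(input)}.")
```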
mart/attack/perturber.py
Outdated
```python
def configure_perturbation(self, input: torch.Tensor | tuple | tuple[dict[str, torch.Tensor]]):
    def create_and_initialize(data, *, input, target, modality="default"):
        # Though data and target are not used, they are required placeholders for modality_dispatch().
        pert = torch.empty_like(input, requires_grad=True)
```
I think you need to specify dtype=torch.float to override input's dtype. If input is uint8, we probably don't want the perturbation to be uint8 too.
Fixed.
I guess this won't affect mixed precision training?
That's a good point. I'm not sure and it isn't really something I have thought about.
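A quick illustration of why the dtype override matters (PyTorch refuses to track gradients on integer tensors):

```python
import torch

image = torch.zeros(3, 32, 32, dtype=torch.uint8)  # e.g. a raw, un-normalized image
# torch.empty_like(image, requires_grad=True) raises:
#   RuntimeError: Only Tensors of floating point and complex dtype can require gradients
pert = torch.empty_like(image, dtype=torch.float, requires_grad=True)  # works
```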
```python
__all__ = ["modality_dispatch"]


def modality_dispatch(
```
If you want to get really fancy you could use functools.singledispatch.
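A hedged sketch of that idea, not the PR's implementation. Note that functools.singledispatch keys on the first positional argument, so input would have to come first, unlike the modality_dispatch signature above; the name dispatch here is hypothetical:

```python
from functools import singledispatch

import torch

@singledispatch
def dispatch(input, *, modality_func, target, modality="default"):
    raise ValueError(f"Unsupported data type of input: {type(input)}.")

@dispatch.register
def _(input: torch.Tensor, *, modality_func, target, modality="default"):
    # Leaf case: tensors are handled by the per-modality function.
    return modality_func(input, target=target, modality=modality)

@dispatch.register(dict)
def _(input, *, modality_func, target, modality="default"):
    # Dict keys are modality names.
    return {
        mod: dispatch(inp, modality_func=modality_func, target=target, modality=mod)
        for mod, inp in input.items()
    }

@dispatch.register(list)
@dispatch.register(tuple)
def _(input, *, modality_func, target, modality="default"):
    # Batch case: pair each input element with its target.
    return [
        dispatch(inp, modality_func=modality_func, target=tgt, modality=modality)
        for inp, tgt in zip(input, target)
    ]
```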
Does this obviate the need for Projector and Composer to be batch aware? If so, I would remove that code.
Yes, we can make Projector and Composer simpler with modality_dispatch.
Can you remove the batch code from Projector and Composer then?
mart/attack/perturber.py
Outdated
```python
pert = torch.empty_like(inp, dtype=torch.float, requires_grad=True)
self.initializer(pert)

def configure_perturbation(self, input: torch.Tensor | tuple | tuple[dict[str, torch.Tensor]]):
    def create_and_initialize(data, *, input, target, modality="default"):
```
Can you make "default" a constant somewhere?
I made it a class constant.
mart/attack/perturber.py
Outdated
```python
# Recursively configure perturbation in tensor.
# Though only input=input is used, we have to fill the placeholders of data and target.
self.perturbation = modality_dispatch(
    modality_func, input, input=input, target=input, modality="default"
```
Why does target=None not work? target=input is confusing...
If input is a list or tuple, target=None won't work.
Can you make it work? I'm guessing you need something that zips well with input? You can use target = cycle([None]) if target is None.
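A minimal sketch of the cycle([None]) suggestion, with hypothetical names (dispatch_elementwise is illustrative only):

```python
from itertools import cycle

def dispatch_elementwise(func, input, target=None):
    if target is None:
        target = cycle([None])  # zips cleanly with an input of any length
    return [func(inp, target=tgt) for inp, tgt in zip(input, target)]

# dispatch_elementwise(lambda inp, target: inp * 2, [1, 2, 3]) == [2, 4, 6]
```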
mart/attack/perturber.py
Outdated
```python
@property
def parameter_groups(self):
    """Extract parameter groups for optimization from perturbation tensor(s)."""
    param_groups = self._parameter_groups(self.perturbation)
    return param_groups
```
Why is this a property? Is it only used for testing purposes? If so, then there's probably something wrong with the tests...
The property decorator is removed.
mart/attack/perturber.py
Outdated
```python
self.perturbation = create_and_initialize(input)
raise ValueError(f"Unsupported data type of input: {type(pert)}.")


def project(self, perturbation, *, input, target, **kwargs):
```
Rename to project_ to adhere to the in-place naming scheme.
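For context, PyTorch marks in-place operations with a trailing underscore (add_, clamp_). A hypothetical projector following that convention might look like this (LpBallProjector and its eps parameter are illustrative, not MART code):

```python
import torch

class LpBallProjector:
    """Hypothetical projector; illustrates the naming convention only."""

    def __init__(self, eps: float):
        self.eps = eps

    @torch.no_grad()
    def project_(self, perturbation, *, input, target):
        perturbation.clamp_(-self.eps, self.eps)  # mutates in place, returns None
```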
```python
return input * (1 - mask) + perturbation * mask


class MaskAdditive(Composer):
```
```python
gradient_modifier: GradientModifier | dict[str, GradientModifier] | None = None,
projector: Projector | dict[str, Projector] | None = None,
objective: Objective | None = None,
optim_params: dict[str, dict[str, Any]] | None = None,
```
Should we just make optimizer a Callable | dict[str, Callable]? That would be more uniform with the existing changes.
There is no difference between multiple optimizers and a single optimizer with multiple parameter groups; we would just have to make sure the multiple optimizers step in sync.
It will probably simplify a lot of the code changes to the parameters.
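To illustrate the equivalence being discussed above: a single optimizer with per-modality parameter groups behaves like separate per-modality optimizers stepped in lockstep (a generic PyTorch sketch, not MART code):

```python
import torch

rgb = torch.zeros(3, 8, 8, requires_grad=True)
depth = torch.zeros(1, 8, 8, requires_grad=True)

# One optimizer, two parameter groups with their own hyperparameters...
optimizer = torch.optim.SGD(
    [
        {"params": [rgb], "lr": 1.0},
        {"params": [depth], "lr": 0.1},
    ]
)
# ...so a single optimizer.step() updates both modalities in sync.
```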
```python
class Perturber(pl.LightningModule):
    """Perturbation optimization module."""

    MODALITY_DEFAULT = "default"
```
I would make this a module-level constant. That would get rid of the self. prefix in self.MODALITY_DEFAULT.
Actually, this constant should be moved to modality_dispatch.py and you should import it from there.
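Something along these lines (the exact module path is assumed from the files shown in this PR):

```python
# mart/attack/modality_dispatch.py
MODALITY_DEFAULT = "default"

# mart/attack/perturber.py
from mart.attack.modality_dispatch import MODALITY_DEFAULT

def create_and_initialize(data, *, input, target, modality=MODALITY_DEFAULT):
    ...
```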
```python
*,
input: Tensor | tuple | list[Tensor] | dict[str, Tensor],
target: torch.Tensor | dict[str, Any] | list[dict[str, Any]] | None,
modality: str = "default",
```
This should be MODALITY_DEFAULT.
```python
# When an Adversary takes input from another module in the sequence, we would
# have to specify kwargs of Adversary, and model would be a required kwarg.
outputs = model(**batch, model=None)
```
Is there an example test where this happens?
What does this PR do?
This PR adds support for an object detection adversary. Since the implementation is modality-aware, it should not be hard to implement an RGB-Depth adversary.
```bash
mart \
  experiment=ArmoryCarlaOverObjDet_TorchvisionFasterRCNN \
  fit=false \
  +trainer.limit_test_batches=1 \
  trainer=gpu \
  +attack@model.modules.input_adv_test=object_detection_rgb_mask_adversary \
  +model.test_sequence.seq005=input_adv_test \
  model.test_sequence.seq010.preprocessor=["input_adv_test"]
```

Type of change
Please check all relevant options.
Testing
Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.
```bash
pytest tests
```

Before submitting
Ran the `pre-commit run -a` command without errors.

Did you have fun?
Make sure you had fun coding 🙃