
Problem with RAFT's sequence loss when args.train_iters is 1 #44

@dinaabouzeid

```python
assert n_predictions >= 1
disp_loss = 0.0
# exclude invalid pixels and extremely large displacements
mag = torch.sum(disp_gt ** 2, dim=1).sqrt()
valid = ((valid >= 0.5) & (mag < max_disp)).unsqueeze(1)
assert valid.shape == disp_gt.shape, [valid.shape, disp_gt.shape]
assert not torch.isinf(disp_gt[valid.bool()]).any()
for i in range(n_predictions):
    assert not torch.isnan(disp_preds[i]).any() and not torch.isinf(disp_preds[i]).any()
    # We adjust loss_gamma so it is consistent for any number of Selective-RAFT iterations
    adjusted_loss_gamma = loss_gamma ** (15 / (n_predictions - 1))
    i_weight = adjusted_loss_gamma ** (n_predictions - i - 1)
```

Does the sequence loss work with a single iteration? The assertion

```python
assert n_predictions >= 1
```

allows `n_predictions == 1`, but that value triggers a division by zero a few lines later:

```
adjusted_loss_gamma = loss_gamma ** (15 / (n_predictions - 1))
                                     ~~~^~~~~~~~~~~~~~~~~~~~~
ZeroDivisionError: division by zero
```

I'm asking because I'm trying to replicate Selective-Stereo's training pipeline with another model that doesn't use iterative refinement.

Any guidance?
Thanks in advance.
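For what it's worth, one way around the division by zero is to special-case the single-prediction path so the exponential weighting schedule is only computed when there is more than one prediction. The sketch below reuses the variable names from the snippet above; the `n_predictions == 1` branch (a constant weight of 1.0) is my own assumption about sensible behavior, not something from the Selective-Stereo code:

```python
import torch

def sequence_loss(disp_preds, disp_gt, valid, max_disp=192, loss_gamma=0.9):
    """Sequence loss sketch that also handles n_predictions == 1.

    Assumption: with a single prediction there is nothing to decay,
    so its weight is simply 1.0. Everything else mirrors the snippet
    from the issue.
    """
    n_predictions = len(disp_preds)
    assert n_predictions >= 1
    disp_loss = 0.0
    # exclude invalid pixels and extremely large displacements
    mag = torch.sum(disp_gt ** 2, dim=1).sqrt()
    valid = ((valid >= 0.5) & (mag < max_disp)).unsqueeze(1)
    for i in range(n_predictions):
        if n_predictions > 1:
            # decay schedule only makes sense with >= 2 predictions
            adjusted_loss_gamma = loss_gamma ** (15 / (n_predictions - 1))
            i_weight = adjusted_loss_gamma ** (n_predictions - i - 1)
        else:
            i_weight = 1.0  # single prediction: no decay schedule
        i_loss = (disp_preds[i] - disp_gt).abs()
        disp_loss += i_weight * i_loss[valid.bool()].mean()
    return disp_loss
```

With `n_predictions == 1` the weighted sum reduces to a plain masked L1 loss, which matches what a non-iterative model would want anyway.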
