```diff
@@ -1175,8 +1175,10 @@ def __call__(
                     - (self.denoising_end * self.scheduler.config.num_train_timesteps)
                 )
             )
-            num_inference_steps = len(list(filter(lambda ts: ts >= discrete_timestep_cutoff, timesteps)))
-            timesteps = timesteps[:num_inference_steps]
+            num_inference_steps = (
+                (torch.as_tensor(timesteps)[:: self.scheduler.order] >= discrete_timestep_cutoff).sum().item()
+            )
+            timesteps = timesteps[: num_inference_steps * self.scheduler.order]
```
Comment on lines -1178 to +1181 (Contributor Author):
Technically this might change the current results with Heun, but it's necessary: otherwise the split lands in the wrong place for a Butcher tableau with non-sequential coefficients, like

```
                          RKZ.Butcher6
+0.0    | 
+0.2764 | +0.2764
+0.7236 | -0.2236 +0.9472
+0.2764 | +0.0326 +0.309  -0.0652
+0.7236 | +0.0461 +0.0    +0.1667 +0.5109
+0.2764 | +0.1206 +0.0    -0.1817 +0.1667 +0.1708
+1.0    | +0.1667 +0.0    +0.0751 -3.3877 +0.5279 +3.618 
-----------------------------------------------------------------
        | +0.0833 +0.0    +0.0    +0.0    +0.4167 +0.4167 +0.0833
```

With this tableau the old code could split at stage 3, but the following stages contain smaller timestep values, and since the refiner is not trained on earlier timesteps this would lead to worse results.
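To illustrate the difference, here is a toy sketch with made-up numbers (not the pipeline code): counting every entry against the cutoff can slice between the stages of one solver step, whereas counting one entry per solver step and slicing in multiples of `scheduler.order` always keeps whole steps.

```python
# Hypothetical order-2 solver: each solver step contributes `order` stage
# entries, and a later stage may use a smaller timestep than the first one
# (non-sequential tableau coefficients), so entries are not monotonic.
order = 2
timesteps = [800, 800, 600, 300, 400, 400]
cutoff = 500  # stop the base model once timesteps drop below this

# Old logic: count every entry >= cutoff -> can cut between stages.
old_n = len([t for t in timesteps if t >= cutoff])
old_slice = timesteps[:old_n]            # [800, 800, 600] -- splits a step

# New logic: count one entry per solver step, then slice whole steps.
real = timesteps[::order]                # [800, 600, 400]
new_n = sum(1 for t in real if t >= cutoff)
new_slice = timesteps[: new_n * order]   # [800, 800, 600, 300] -- whole steps

print(old_slice, new_slice)
```

The odd-length `old_slice` ends between the first and second stage of a solver step, which is exactly the failure mode described above.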


```diff
             # 9. Optionally get Guidance Scale Embedding
             timestep_cond = None
```
```diff
@@ -666,18 +666,11 @@ def get_timesteps(self, num_inference_steps, strength, device, denoising_start=N
                 )
             )
 
-            num_inference_steps = (self.scheduler.timesteps < discrete_timestep_cutoff).sum().item()
-            if self.scheduler.order == 2 and num_inference_steps % 2 == 0:
-                # if the scheduler is a 2nd order scheduler we might have to do +1
-                # because `num_inference_steps` might be even given that every timestep
-                # (except the highest one) is duplicated. If `num_inference_steps` is even it would
-                # mean that we cut the timesteps in the middle of the denoising step
-                # (between 1st and 2nd derivative) which leads to incorrect results. By adding 1
-                # we ensure that the denoising process always ends after the 2nd derivate step of the scheduler
-                num_inference_steps = num_inference_steps + 1
```
Comment on lines -670 to -677 (Contributor Author):

Based on this comment, the old code was hardcoded specifically for Heun's method, and anything else was 100% broken. The thing is, Heun appears to be the only higher-order singlestep solver in Diffusers, so I guess we can't add tests for this yet?

```diff
+            real_timesteps = self.scheduler.timesteps[:: self.scheduler.order]
+            num_inference_steps = (real_timesteps < discrete_timestep_cutoff).sum().item()
 
             # because t_n+1 >= t_n, we slice the timesteps starting from the end
-            t_start = len(self.scheduler.timesteps) - num_inference_steps
+            t_start = (len(real_timesteps) - num_inference_steps) * self.scheduler.order
             timesteps = self.scheduler.timesteps[t_start:]
             if hasattr(self.scheduler, "set_begin_index"):
                 self.scheduler.set_begin_index(t_start)
```
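A toy sketch of the new `t_start` computation (made-up numbers, plain Python rather than the tensor code): counting refiner steps on the strided view and multiplying back by `order` guarantees the slice starts on a solver-step boundary.

```python
# Hypothetical order-2 schedule: each solver step contributes `order` entries.
order = 2
scheduler_timesteps = [800, 800, 600, 300, 400, 400]
discrete_timestep_cutoff = 500  # the refiner handles timesteps below this

# One entry per solver step:
real_timesteps = scheduler_timesteps[::order]  # [800, 600, 400]
num_inference_steps = sum(1 for t in real_timesteps if t < discrete_timestep_cutoff)

# Start index in the full (duplicated) schedule; always a multiple of `order`,
# i.e. a solver-step boundary:
t_start = (len(real_timesteps) - num_inference_steps) * order  # 4
refiner_timesteps = scheduler_timesteps[t_start:]              # [400, 400]

print(t_start, refiner_timesteps)
```

The old `len(self.scheduler.timesteps) - num_inference_steps` mixed a per-entry length with a per-step count, which is what the removed `+1` special case was papering over for Heun.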