
Document how to use a component reading to create a new source#131

Draft
nvaytet wants to merge 3 commits into inelastic-sample from reading-source-reuse

Conversation


@nvaytet commented Mar 5, 2026

We add a section to the docs showing how to use a component reading to create a more efficient source.

We histogram the neutrons at the detector in wavelength and birth_time, smooth the histogram, and use it as a new distribution to sample from.
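The histogram-smooth-resample idea can be sketched with plain NumPy/SciPy on hypothetical event data (the actual implementation uses scipp and tof; the event arrays, bin counts, and smoothing width below are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Hypothetical detector events: wavelength (angstrom) and birth time (s)
wav = rng.normal(4.0, 0.5, size=10_000)
t0 = rng.uniform(0.0, 1.0e-3, size=10_000)

# Histogram the events in (wavelength, birth_time)
hist, wav_edges, t_edges = np.histogram2d(wav, t0, bins=(64, 32))

# Smooth the histogram with a Gaussian kernel (sigma in units of bins)
smooth = gaussian_filter(hist, sigma=2.0)

# Normalize to a probability distribution and sample new neutrons from it
p = smooth.ravel() / smooth.sum()
idx = rng.choice(p.size, size=100_000, p=p)
i, j = np.unravel_index(idx, smooth.shape)

# Jitter uniformly within each bin to avoid a gridded distribution
new_wav = wav_edges[i] + rng.random(idx.size) * np.diff(wav_edges)[i]
new_t0 = t_edges[j] + rng.random(idx.size) * np.diff(t_edges)[j]
```

The new samples follow the smoothed histogram rather than the raw events, which is what makes the resulting source cheaper to use.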


This can speed up calculations.
We should consider making the final function a classmethod of Source: Source.from_reading(...).

@nvaytet nvaytet requested a review from jokasimr March 5, 2026 14:27
@nvaytet nvaytet marked this pull request as draft March 5, 2026 15:05

jokasimr commented Mar 5, 2026

Interesting! Could give a big speedup in some cases.

However, I'd like to suggest a modification. Currently, with the new source, the probability of sampling a neutron at a point $(\lambda, t, t^o)$, where $t$ is the arrival time at the detector and $t^o$ is the birth time, changes from what it was with the original source. It might be close to the same, and maybe it does not matter in practice. But with a little bit more work we can actually make them identical.

Suppose we have a source distribution $q_0$ and we have identified a region $(\lambda, t^o) \in M$ such that outside $M$ the probability that the neutron passes the chopper cascade is essentially zero. Then we can define a new source probability density

$$q(\lambda, t^o) = \mathbb{1}_M(\lambda, t^o) \, \frac{q_0(\lambda, t^o)}{\int_M q_0(\lambda', t^{o\prime}) \, d\lambda' \, d t^{o\prime}}$$

such that the probability density of $\lambda, t^o$ given that the neutron passed the chopper cascade is the same for the source $q_0$ as it is for $q$.

Intuitively, with the new source we only increase the probability of sampling in the region where we actually need to sample, but in that region, the ratio of the probabilities of sampling two points does not change.
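This property can be checked numerically in a toy 1D example (the distribution values and region $M$ below are made up for illustration): restricting $q_0$ to $M$ and renormalizing zeroes out the probability outside $M$ but leaves all probability ratios inside $M$ unchanged.

```python
import numpy as np

# Toy 1D source distribution q0 over 10 sample points (illustrative values)
q0 = np.array([0.05, 0.1, 0.2, 0.25, 0.2, 0.1, 0.05, 0.03, 0.01, 0.01])

# Hypothetical region M where neutrons can pass the chopper cascade
in_M = np.zeros_like(q0, dtype=bool)
in_M[2:6] = True

# Restricted, renormalized source: q = 1_M * q0 / sum_M q0
q = np.where(in_M, q0, 0.0)
q /= q.sum()

# Outside M the new source never samples; inside M, the ratio of the
# probabilities of any two points is the same for q as for q0.
ratio_q0 = q0[3] / q0[4]
ratio_q = q[3] / q[4]
```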

After applying the Gaussian kernel to $p$, I'd say we have identified $M$: it is the region where $p > 0$ (or perhaps where $p > \varepsilon$ for some small $\varepsilon$, because the Gaussian kernel might introduce some nonzero weight everywhere; I'm not sure).

I'm imagining it would look something like this:

def source_from_reading(
    reading,
    original_source,
    neutrons: int,
    time_smooth_kernel: sc.Variable | None = None,
    wavelength_smooth_kernel: sc.Variable | None = None,
):
    from scipp.scipy.ndimage import gaussian_filter

    # Histogram the recorded neutrons onto the original source's grid
    p = reading.data.hist(
        wavelength=original_source.coords['wavelength'],
        birth_time=original_source.coords['birth_time'],
    )
    # Default smoothing widths: twice the bin width in each dimension
    time_binning = p.coords['birth_time'][1] - p.coords['birth_time'][0]
    wavelength_binning = p.coords['wavelength'][1] - p.coords['wavelength'][0]
    if time_smooth_kernel is None:
        time_smooth_kernel = 2 * time_binning
    if wavelength_smooth_kernel is None:
        wavelength_smooth_kernel = 2 * wavelength_binning
    p = gaussian_filter(
        p,
        sigma={
            'birth_time': time_smooth_kernel,
            'wavelength': wavelength_smooth_kernel,
        },
    )
    # Restrict the original source to the region where the smoothed histogram
    # is nonzero (small threshold to discard weight leaked by the smoothing),
    # then renormalize so q is a probability distribution
    mask = p > 1e-8
    q = original_source * mask / (original_source * mask).sum()
    return tof.Source.from_distribution(neutrons=neutrons, p=q)

