Optimize synthetic_data for large datasets #99
Refactored the `synthetic_data` method in `MarkovRandomField` to significantly reduce memory usage. Previously, the method broadcast conditional CDFs to all rows based on their parent configuration, resulting in O(N * D) memory usage, where N is the number of rows and D is the domain size of the attribute being generated. This caused OOM errors for large N and large D.

The new implementation (sketched below):

1. Identifies unique parent configurations using `np.unique`.
2. Computes conditional CDFs only for these unique configurations.
3. Groups rows by parent configuration.
4. Uses `np.searchsorted` to perform inverse transform sampling for each group, avoiding the need to materialize the full N x D array.

This approach scales roughly as O(U * D + N), where U is the number of unique parent configurations, which is typically much smaller than N. Verified with existing tests and a reproduction script based on issue #98.

Co-authored-by: ryan112358 <8495634+ryan112358@users.noreply.github.com>
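For readers skimming the diff, the four steps above map onto a short NumPy sketch. This is a minimal illustration of the strategy, not the PR's actual code: the `sample_attribute` function and the `cond_probs` callable are hypothetical stand-ins for the corresponding logic inside `MarkovRandomField.synthetic_data`.

```python
import numpy as np

def sample_attribute(parent_vals, cond_probs, rng):
    """Inverse transform sampling of one attribute, grouped by parent config.

    parent_vals: (N, P) integer array of parent column values per row.
    cond_probs:  callable mapping one unique parent configuration to a
                 length-D probability vector (hypothetical stand-in for
                 the MRF's conditional distributions).
    """
    n = parent_vals.shape[0]
    # Step 1: unique parent configurations; `inverse` maps row -> config index.
    uniq, inverse = np.unique(parent_vals, axis=0, return_inverse=True)
    out = np.empty(n, dtype=np.int64)
    u = rng.random(n)  # one uniform draw per row
    for k, config in enumerate(uniq):
        rows = np.flatnonzero(inverse == k)   # Step 3: rows with this config
        cdf = np.cumsum(cond_probs(config))   # Step 2: CDF for this config only
        # Step 4: invert the CDF; clip guards against cdf[-1] < 1 from
        # floating-point rounding. Peak memory is O(N + U * D), never N x D.
        out[rows] = np.minimum(
            np.searchsorted(cdf, u[rows], side="right"), len(cdf) - 1
        )
    return out

# Example: one parent column with 3 configs, attribute domain of size 4.
rng = np.random.default_rng(0)
parents = rng.integers(0, 3, size=(1_000_000, 1))
probs = lambda cfg: np.full(4, 0.25)  # uniform conditional, for illustration
samples = sample_attribute(parents, probs, rng)
```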
Benchmark findings:

- JIT compilation (N=1): ~27.5 s (vs. 11.6 s baseline)
- N=1,000: 0.27 s (vs. 0.17 s)
- N=10,000: 1.09 s (vs. 0.90 s)
- N=100,000: 7.46 s (vs. 8.58 s)
- N=1M: runs without OOM, though possibly slower due to overhead for small domains.

The memory reduction is the primary benefit, enabling scalability to large N and D where the previous implementation would fail.

Co-authored-by: ryan112358 <8495634+ryan112358@users.noreply.github.com>
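A timing harness along these lines would reproduce the numbers above. The sketch is hypothetical: it assumes a fitted `MarkovRandomField` instance `mrf`, and the `synthetic_data(rows=...)` signature is an assumption rather than confirmed API.

```python
import time

# Hypothetical harness: `mrf` is a fitted MarkovRandomField; the
# synthetic_data(rows=...) signature is an assumption, not confirmed API.
for n in (1, 1_000, 10_000, 100_000, 1_000_000):
    start = time.perf_counter()
    mrf.synthetic_data(rows=n)  # the first call (N=1) includes JIT compilation
    print(f"N={n}: {time.perf_counter() - start:.2f}s")
```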
Addressed memory scalability issue in `MarkovRandomField.synthetic_data` by replacing the dense broadcasting approach with a batched, unique-parent-based approach using `np.searchsorted`. This prevents OOM errors when generating large datasets with large-cardinality attributes.

PR created automatically by Jules for task 8841611121634811676 started by @ryan112358