Conversation
Others, like tpch/{spark,polars,duckdb}, don't benefit from being regularly tested, or, like runtime, are suites we don't care about.
Originally I was going to add a `pytest.mark.skip`, but then I realized that would make it hard to explicitly run certain tests. I think that instead we want to make CI more specific about what it cares about.
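To sketch the trade-off (the polars path below is illustrative, inferred from the suites mentioned above, not confirmed from the repo layout): with explicit path selection, CI only collects the suites it cares about, while anyone can still target another suite directly, which a blanket `pytest.mark.skip` would have made awkward.

```sh
# CI run: collect only the suites CI cares about
python -m pytest tests/{benchmarks,stability,workflows,tpch/test_dask.py}

# Manual run: any other suite can still be targeted explicitly;
# a pytest.mark.skip marker would have suppressed these tests here too
python -m pytest tests/tpch/test_polars.py
```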
```diff
 fi

-python -m pytest $EXTRA_OPTIONS $@
+python -m pytest $EXTRA_OPTIONS $@ tests/{benchmarks,stability,workflows,tpch/test_dask.py}
```
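Note that the appended argument relies on shell brace expansion (bash, which GitHub Actions uses by default for `run:` steps on Linux runners), so pytest receives four separate paths:

```sh
# Brace expansion happens before pytest ever sees the arguments:
echo tests/{benchmarks,stability,workflows,tpch/test_dask.py}
# prints: tests/benchmarks tests/stability tests/workflows tests/tpch/test_dask.py
```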
Won't this always ignore the scheduled runs of `--tpch-non-dask` then?
(see `benchmarks/.github/workflows/tests.yml`, line 110 in 158b07c)
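To make the concern concrete (a hypothetical sketch; the actual flag plumbing lives in tests.yml): even if a scheduled job passes `--tpch-non-dask`, the hard-coded path list restricts collection to `tpch/test_dask.py`, so the non-Dask TPC-H tests are never collected.

```sh
# Hypothetical scheduled invocation: the flag arrives via EXTRA_OPTIONS,
# but collection is already restricted to the listed paths, so non-Dask
# TPC-H files (e.g. tpch/test_polars.py) are never collected regardless.
EXTRA_OPTIONS="--tpch-non-dask"
python -m pytest $EXTRA_OPTIONS tests/{benchmarks,stability,workflows,tpch/test_dask.py}
```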
I don't have any desire to run non-Dask TPC-H benchmarks on a schedule. Is there some motivation for this? Or is this just something we're doing because historically we tend to run things on a daily basis?
If so, maybe that makes sense for projects included in git-tip, because the code used in the benchmarks changes. However, the software for these projects is pinned. I see no reason to run them on any schedule; manual runs should be enough.
Open to disagreement though if people have other thoughts.
This was only introduced by #1083, sparked by this comment from @fjetter: #1044 (comment)

> ...I want this to run somewhat regularly (every commit, once a day, etc.)
Understood. I disagree with that comment.
@fjetter any objection to not running these benchmarks on a regular basis?