Thank you for your interest in contributing to PyNumDiff! To get a sense of the project structure, poke around and digest the pertinent READMEs.
If you discover a bug or have an improvement idea or question, the place to start is the Issues page, which is really the beating heart of any project (even if you're just here to give us kudos). Take a look through the history to get a sense of what has been done and which ideas have been considered before. A lot of hard-won knowledge and tough decisions have been explored and documented in the Issues. Feel free to open new issues if we haven't covered something.
If reporting bugs, make sure you're on the latest version and that we haven't already fixed the problem. Please include some or all of:
- Descriptive title
- What happened:
  - What you were trying to do
  - What you expected to happen
  - What actually happened
- Minimal code example that reproduces the issue
- Environment information: Python and library versions (`pynumdiff`, `numpy`, `scipy`, anything salient)
- Error messages or stack traces
- Additional context (screenshots, data files, etc.)
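One quick way to gather the environment information above is a short Python snippet. This is just a convenience sketch, not an official tool; it assumes the packages are installed via pip so `importlib.metadata` can see them.

```python
import platform
import sys
from importlib.metadata import PackageNotFoundError, version

# Report the Python version and operating system.
report = [f"Python {sys.version.split()[0]} on {platform.system()}"]

# Report installed versions of the relevant packages, noting any that are missing.
for pkg in ("pynumdiff", "numpy", "scipy"):
    try:
        report.append(f"{pkg} {version(pkg)}")
    except PackageNotFoundError:
        report.append(f"{pkg} not installed")

print("\n".join(report))
```

Paste the printed lines directly into your issue.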
If we've got an ongoing or old discussion on a topic and you can find it, add to the discussion there. If your idea is otherwise within the scope of the project, start a new discussion. Let us know why you think something is necessary, and please feel free to suggest what would need to change to make it happen. The more investigation and thinking you do to show the feasibility and practicality of an idea, the more load you take off the maintainers.
Here are some things you might include:
- Descriptive title
- Problem statement: What problem does this feature solve?
- Proposed solution: How would you implement it?
- Alternatives considered: What other approaches did you consider?
- Additional context: Examples, use cases, etc.
Look for issues labeled `good first issue` if you're new to the project. Talk to us, and we can suggest things that need to be done, at varying levels of code and research difficulty.
Some issues will require digging into alternative methods of differentiation so they can be added to our collection, or comparing a new or modified method against existing ones. This kind of work requires some mathematical chops, but if you're down, we're happy about it.
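To give a flavor of what a method comparison involves: the snippet below measures how much naive finite differencing amplifies noise on a signal with a known derivative. It uses only NumPy (`np.gradient` as a stand-in differentiator, not a PyNumDiff method), and the RMSE metric is one illustrative choice among many.

```python
import numpy as np

# Simulate a signal with a known analytic derivative, then add measurement noise.
dt = 0.01
t = np.arange(0, 2 * np.pi, dt)
x_true = np.sin(t)
dxdt_true = np.cos(t)
rng = np.random.default_rng(0)
x_noisy = x_true + 0.01 * rng.standard_normal(t.size)

# Naive central differences amplify the noise by roughly a factor of 1/dt.
dxdt_naive = np.gradient(x_noisy, dt)

# Root-mean-square error against the known derivative -- the kind of
# evaluation metric a method comparison would report.
rmse = np.sqrt(np.mean((dxdt_naive - dxdt_true) ** 2))
print(f"RMSE of naive finite differences: {rmse:.3f}")
```

Running the same evaluation across several methods and noise levels is the core of this kind of comparison work.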
Bear in mind that smaller, focused PRs are generally easier to review. We encourage descriptive commit messages that explain what changed and why. Long, detailed commit messages are appreciated as they help others understand the project's history.
- Fork the repository (button on the main repo page)
- Clone down your version (`git clone https://github.com/YOUR_USERNAME/PyNumDiff.git`)
- Set its upstream to point to this repository so you can easily pull our changes (`git remote add upstream https://github.com/florisvb/PyNumDiff.git`)
- Install the package in development mode (`pip install -e .`) as well as dependencies like `numpy`, `pytest`, `cvxpy`, etc.
- Create a branch for your changes (`cd PyNumDiff; git checkout -b your-feature`)
- Make your changes and commit (`git add file; git commit -m "descriptive commit message"`)
- Update your fork with the latest changes from upstream (`git fetch upstream; git checkout master; git merge upstream/master`)
- Run the tests to make sure they pass (`pytest -s`, with the helpful `--plot` or `--bounds` flags for debugging), possibly adding new ones
- Push your code up to the cloud (`git push`)
- Submit a pull request ("PR") via the pull requests page on the website
- We'll review, leave comments, kick around further change ideas, and merge.
No strict coding style is enforced, although we consider docstrings to be very important. The project uses `pylint` for code quality checks (`pylint pynumdiff`), because we're trying to meet a high bar so that JOSS (the Journal of Open Source Software) likes us.
Once you push, GitHub Actions will kick off our continuous integration job, which runs the tests, including:
- `test_diff_methods`: broadly tests for correctness and the ability to actually differentiate
- `test_utils`: contains tests of supporting and miscellaneous functionality like simulations and evaluation metrics
- `test_optimize`: tests the hyperparameter optimization code
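If you're adding a new differentiation method, a new test might look something like the sketch below. The signature assumed here (a method returning a smoothed signal and its derivative) and the use of `np.gradient` as a stand-in for the method under test are illustrative assumptions, not the project's actual API.

```python
import numpy as np

def test_my_new_method():
    # Build a signal with a known analytic derivative.
    dt = 0.01
    t = np.arange(0, 1, dt)
    x = t ** 2
    dxdt_true = 2 * t
    # Stand-in for the method under test; replace np.gradient with your
    # method, e.g. a hypothetical `x_hat, dxdt_hat = my_new_method(x, dt)`.
    dxdt_hat = np.gradient(x, dt)
    # Assert accuracy against the known derivative within a tolerance.
    np.testing.assert_allclose(dxdt_hat, dxdt_true, atol=0.05)
```

Place new tests alongside the existing test files so the CI job picks them up automatically.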