diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md new file mode 100644 index 000000000..991077d49 --- /dev/null +++ b/.github/copilot-instructions.md @@ -0,0 +1,139 @@ +# Copilot Instructions for CSLib + +## Repository Overview + +**CSLib** is the Lean library for Computer Science (CS), formalising CS theories and tools in the Lean programming language. It provides APIs for formal verification, certified software, and connecting different CS developments. + +- **Language:** Lean 4 +- **Build System:** Lake +- **Primary Dependency:** Mathlib (leanprover-community/mathlib4) +- **Project Type:** Lean library + +## Build & Validation Commands + +**Always run commands from the repository root.** The project uses `lake`. + +### Essential Commands (in order of typical usage) + +| Command | Purpose | When to Use | +|---------|---------|-------------| +| `lake build` | Build the library | After any code change | +| `lake build --wfail --iofail` | Build with CI strictness (fails on warnings) | **Always use before committing** | +| `lake test` | Run all tests (builds CslibTests + checks init imports) | After changes to verify correctness | +| `lake lint` | Run environment linters (Batteries/Mathlib) | Before committing | +| `lake exe lint-style` | Run text-based style linters | Before committing | +| `lake exe mk_all --module --check` | Verify Cslib.lean imports all modules | After adding new files | +| `lake exe mk_all --module` | Auto-update Cslib.lean imports | After adding new files | + +### Full CI Validation Sequence + +Run these commands **in order** to replicate CI checks locally: + +```bash +lake build --wfail --iofail +lake exe mk_all --module --check +lake test +lake lint +lake exe lint-style +``` + +### Additional Commands + +| Command | Purpose | +|---------|---------| +| `lake clean` | Remove build outputs (use if build state is corrupted) | +| `lake update` | Update dependencies (rarely needed) | +| `lake exe lint-style --fix` | 
Auto-fix style errors | +| `lake exe shake Cslib` | Check for minimized imports | + +## Project Structure + +``` +/ +├── Cslib.lean # Root module (imports all library files) +├── CslibTests.lean # Root test module +├── CONTRIBUTING.md # Contribution guidelines +├── lakefile.toml # Lake configuration (linters, dependencies) +├── lean-toolchain # Lean version specification +├── lake-manifest.json # Locked dependency versions +├── Cslib/ # Main library source +│ ├── Init.lean # Must be imported by all Cslib modules +│ ├── Foundations/ # General-purpose definitions (semantics, data types, etc.) +│ ├── Computability/ # Automata and computability theory +│ ├── Languages/ # Programming language formalisations (e.g., Calculus of Communicating Systems, Lambda Calculus) +│ └── Logics/ # Logic formalisations (e.g., Linear Logic, Hennessy-Milner Logic) +├── CslibTests/ # Test files +├── scripts/ # Build and maintenance scripts +│ ├── noshake.json # Import exceptions for shake tool +│ └── nolints.json # Lint exceptions +└── .github/workflows/ # CI workflows +``` + +## Critical Requirements + +### 1. All Cslib Modules Must Import Cslib.Init +Every file in `Cslib/` must transitively import `Cslib/Init.lean`. This sets up default linters and tactics. The test suite verifies this. + +**Exceptions** (documented in `scripts/CheckInitImports.lean`): +- `Cslib.Foundations.Lint.Basic` (circular dependency) +- `Cslib.Init` itself + +### 2. New Files Must Be Added to Cslib.lean +When creating a new `.lean` file in `Cslib/`, add its import to `Cslib.lean`. Run: +```bash +lake exe mk_all --module +``` + +### 3. PR Title Convention +PR titles **must** follow the format: `type(scope): description` + +Valid types: `feat`, `fix`, `doc`, `style`, `refactor`, `test`, `chore`, `perf` + +Examples: +- `feat(LTS): add weak bisimulation` +- `fix(Lambda): correct substitution lemma` +- `doc: improve README` + +### 4. 
File Headers +Every `.lean` file must have a copyright header: +```lean +/- +Copyright (c) $YEAR $AUTHOR_NAME. All rights reserved. +Released under Apache 2.0 license as described in the file LICENSE. +Authors: $LIST_OF_AUTHORS +-/ +``` + +## Code Style + +- Follow everything written in /CONTRIBUTING.md +- Follow [Mathlib style guide](https://leanprover-community.github.io/contribute/style.html) +- Use domain-specific variable names (e.g., `State` for state types, `μ` for transition labels) +- Keep proofs readable; golfing is welcome if proofs remain clear +- Use existing typeclasses for common concepts (transitions, reductions) +- Use `module` keyword at the start of files with `public import` statements + +## Linter Configuration + +Linters are configured in `lakefile.toml`. + +## Common Patterns + +### Creating a New Module +1. Create file in appropriate `Cslib/` subdirectory +2. Add `import Cslib.Init` (or import a module that imports it) +3. Run `lake exe mk_all --module` +4. Run `lake build --wfail --iofail` +5. Run `lake test` to verify init imports + +### Adding Tests +1. Create or modify a file in `CslibTests/` +2. Add import to `CslibTests.lean` if new file +3. 
Run `lake test` + +## Trust These Instructions + +Only search for additional information if: +- A command fails with an unexpected error +- You need details about a specific module's API +- The instructions appear incomplete for your specific task diff --git a/.github/workflows/bump_toolchain_nightly-testing.yml b/.github/workflows/bump_toolchain_nightly-testing.yml index cd5022183..7bab15c8f 100644 --- a/.github/workflows/bump_toolchain_nightly-testing.yml +++ b/.github/workflows/bump_toolchain_nightly-testing.yml @@ -12,20 +12,37 @@ jobs: if: github.repository == 'leanprover/cslib' runs-on: ubuntu-latest steps: + - name: Generate app token + id: app-token + uses: actions/create-github-app-token@29824e69f54612133e76f7eaac726eef6c875baf # v2.2.1 + with: + app-id: ${{ secrets.MATHLIB_NIGHTLY_TESTING_APP_ID }} + private-key: ${{ secrets.MATHLIB_NIGHTLY_TESTING_PRIVATE_KEY }} + # The create-github-app-token README states that this token is masked and will not be logged accidentally. + - name: Checkout code uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 with: ref: nightly-testing # checkout nightly-testing branch - token: ${{ secrets.NIGHTLY_TESTING }} + token: ${{ steps.app-token.outputs.token }} - name: Get latest release tag from leanprover/lean4-nightly id: get-latest-release + env: + GH_TOKEN: ${{ steps.app-token.outputs.token }} run: | - RELEASE_TAG="$(curl -s "https://api.github.com/repos/leanprover/lean4-nightly/releases" | jq -r '.[0].tag_name')" + RELEASE_TAG=$(gh api -X GET repos/leanprover/lean4-nightly/releases \ + -f per_page=1 --jq '.[0].tag_name') + if [ -z "$RELEASE_TAG" ] || [ "$RELEASE_TAG" = "null" ]; then + echo "::error::Could not determine latest lean4-nightly release" + exit 1 + fi echo "RELEASE_TAG=$RELEASE_TAG" >> "${GITHUB_ENV}" - name: Check if nightly-testing tag exists in mathlib4-nightly-testing id: check-nightly-testing + env: + GH_TOKEN: ${{ steps.app-token.outputs.token }} run: | # Extract date from 
RELEASE_TAG (format: nightly-YYYY-MM-DD) DATE_PART=$(echo "$RELEASE_TAG" | sed 's/nightly-//') @@ -33,7 +50,7 @@ jobs: echo "NIGHTLY_TESTING_TAG=$NIGHTLY_TESTING_TAG" >> "${GITHUB_ENV}" # Check if the tag exists in leanprover-community/mathlib4-nightly-testing - if curl -s -f "https://api.github.com/repos/leanprover-community/mathlib4-nightly-testing/git/ref/tags/${NIGHTLY_TESTING_TAG}" > /dev/null; then + if gh api "repos/leanprover-community/mathlib4-nightly-testing/git/ref/tags/${NIGHTLY_TESTING_TAG}" > /dev/null 2>&1; then echo "Tag ${NIGHTLY_TESTING_TAG} exists in mathlib4-nightly-testing" echo "tag_exists=true" >> "${GITHUB_OUTPUT}" else @@ -70,10 +87,8 @@ jobs: - name: Commit and push changes if: steps.check-nightly-testing.outputs.tag_exists == 'true' run: | - # For now we reuse a bot managed by Mathlib, - # but it is fine to update this if Cslib wants to create its own bot accounts. - git config user.name "leanprover-community-mathlib4-bot" - git config user.email "leanprover-community-mathlib4-bot@users.noreply.github.com" + git config user.name "mathlib-nightly-testing[bot]" + git config user.email "mathlib-nightly-testing[bot]@users.noreply.github.com" git add lean-toolchain lakefile.toml lake-manifest.json # Don't fail if there's nothing to commit git commit -m "chore: bump to ${RELEASE_TAG} with mathlib at ${NIGHTLY_TESTING_TAG}" || true diff --git a/.github/workflows/merge_main_into_nightly-testing.yml b/.github/workflows/merge_main_into_nightly-testing.yml index 38860a8cd..3d73be5c5 100644 --- a/.github/workflows/merge_main_into_nightly-testing.yml +++ b/.github/workflows/merge_main_into_nightly-testing.yml @@ -13,13 +13,21 @@ jobs: if: github.repository == 'leanprover/cslib' runs-on: ubuntu-latest steps: - - name: Checkout nightly-testing + - name: Generate app token + id: app-token + uses: actions/create-github-app-token@29824e69f54612133e76f7eaac726eef6c875baf # v2.2.1 + with: + app-id: ${{ secrets.MATHLIB_NIGHTLY_TESTING_APP_ID }} + private-key: 
${{ secrets.MATHLIB_NIGHTLY_TESTING_PRIVATE_KEY }} + # The create-github-app-token README states that this token is masked and will not be logged accidentally. + + - name: Checkout nightly-testing uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 with: repository: leanprover/cslib ref: nightly-testing path: nightly-testing - token: ${{ secrets.NIGHTLY_TESTING }} # This secret needs repo access to leanprover/cslib + token: ${{ steps.app-token.outputs.token }} fetch-depth: 0 - name: Configure Lean @@ -33,10 +41,8 @@ jobs: - name: Configure Git User run: | cd nightly-testing - # For now we reuse a bot managed by Mathlib, - # but it is fine to update this if Cslib wants to create its own bot accounts. - git config user.name "leanprover-community-mathlib4-bot" - git config user.email "leanprover-community-mathlib4-bot@users.noreply.github.com" + git config user.name "mathlib-nightly-testing[bot]" + git config user.email "mathlib-nightly-testing[bot]@users.noreply.github.com" - name: Merge main to nightly-testing favoring nightly-testing changes run: | @@ -58,5 +64,5 @@ jobs: # If there's nothing to do (because there are no new commits from main), # that's okay, hence the '|| true'. git commit -m "Merge main into nightly-testing" || true - # Push + # Push git push origin nightly-testing diff --git a/.github/workflows/pr-title.yml b/.github/workflows/pr-title.yml index c1d1e0b3e..ff26c85c1 100644 --- a/.github/workflows/pr-title.yml +++ b/.github/workflows/pr-title.yml @@ -18,6 +18,6 @@ jobs: script: | const msg = context.payload.pull_request? 
context.payload.pull_request.title : context.payload.merge_group.head_commit.message; console.log(`Message: ${msg}`) - if (!/^(feat|fix|doc|style|refactor|test|chore|perf)(\(.*\))?: .*[^.]($|\n\n)/.test(msg)) { + if (!/^(ci|feat|fix|doc|style|refactor|test|chore|perf)(\(.*\))?: .*[^.]($|\n\n)/.test(msg)) { core.setFailed('PR title does not follow the Commit Convention (https://leanprover.github.io/lean4/doc/dev/commit_convention.html).'); } diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml new file mode 100644 index 000000000..c6a54356a --- /dev/null +++ b/.github/workflows/release.yml @@ -0,0 +1,21 @@ +name: Release + +on: + push: + tags: + - "v[0-9]+.[0-9]+.[0-9]+" + - "v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+" + +permissions: + contents: write + +jobs: + release: + runs-on: ubuntu-latest + if: github.repository == 'leanprover/cslib' + steps: + - name: Create GitHub Release + uses: softprops/action-gh-release@v2 + with: + prerelease: ${{ contains(github.ref, 'rc') }} + make_latest: ${{ !contains(github.ref, 'rc') }} diff --git a/.github/workflows/report_failures_nightly-testing.yml b/.github/workflows/report_failures_nightly-testing.yml index 3dbe4d6ca..bdbd87881 100644 --- a/.github/workflows/report_failures_nightly-testing.yml +++ b/.github/workflows/report_failures_nightly-testing.yml @@ -35,12 +35,20 @@ jobs: runs-on: ubuntu-latest steps: + - name: Generate app token + id: app-token + uses: actions/create-github-app-token@29824e69f54612133e76f7eaac726eef6c875baf # v2.2.1 + with: + app-id: ${{ secrets.MATHLIB_NIGHTLY_TESTING_APP_ID }} + private-key: ${{ secrets.MATHLIB_NIGHTLY_TESTING_PRIVATE_KEY }} + # The create-github-app-token README states that this token is masked and will not be logged accidentally. 
+ - name: Checkout code uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 with: ref: nightly-testing # checkout nightly-testing branch fetch-depth: 0 # checkout all branches so that we can push from `nightly-testing` to `nightly-testing-YYYY-MM-DD` - token: ${{ secrets.NIGHTLY_TESTING }} + token: ${{ steps.app-token.outputs.token }} - name: Update the nightly-testing-YYYY-MM-DD branch run: | toolchain="$( bump_version: print('Lean toolchain in `nightly-testing` is ahead of the bump branch.') - # Get the last message in the 'Cslib bump branch reminders' topic + # Get the last message from the bot in the 'Cslib bump branch reminders' topic. + # We narrow by sender to ignore human replies in between. + bot_email = 'github-mathlib4-bot@leanprover.zulipchat.com' request = { 'anchor': 'newest', 'num_before': 1, 'num_after': 0, - 'narrow': [{'operator': 'stream', 'operand': 'nightly-testing'}, {'operator': 'topic', 'operand': 'Cslib bump branch reminders'}], + 'narrow': [ + {'operator': 'stream', 'operand': 'nightly-testing'}, + {'operator': 'topic', 'operand': 'Cslib bump branch reminders'}, + {'operator': 'sender', 'operand': bot_email} + ], 'apply_markdown': False # Otherwise the content test below fails. } response = client.get_messages(request) messages = response['messages'] + last_bot_message = messages[0] if messages else None bump_branch_suffix = bump_branch.replace('bump/', '') failed_link = f"https://github.com/{repository}/actions/runs/{current_run_id}" payload = f"🛠️: Automatic PR creation [failed]({failed_link}). Please create a new bump/nightly-{current_version} branch from nightly-testing (specifically {sha}), and then PR that to {bump_branch}. 
" payload += "To do so semi-automatically, run the following script from Cslib root:\n\n" payload += f"```bash\n./scripts/create-adaptation-pr.sh --bumpversion={bump_branch_suffix} --nightlydate={current_version} --nightlysha={sha}\n```\n" - # Only post if the message is different - # We compare the first 160 characters, since that includes the date and bump version - if not messages or messages[0]['content'][:160] != payload[:160]: - # Log messages, because the bot seems to repeat itself... - if messages: - print("###### Last message:") - print(messages[0]['content']) + # Check if we already posted a message for this nightly date and bump branch. + # We extract these fields from the last bot message rather than comparing substrings, + # since the message also contains a run ID that differs between workflow runs. + should_post = True + if last_bot_message: + last_content = last_bot_message['content'] + # Extract nightly date and bump branch from last bot message + date_match = re.search(r'bump/nightly-(\d{4}-\d{2}-\d{2})', last_content) + branch_match = re.search(r'PR that to (bump/v[\d.]+)', last_content) + if date_match and branch_match: + last_date = date_match.group(1) + last_branch = branch_match.group(1) + if last_date == current_version and last_branch == bump_branch: + should_post = False + print(f'Already posted for nightly {current_version} and {bump_branch}') + if should_post: + if last_bot_message: + print("###### Last bot message:") + print(last_bot_message['content']) print("###### Current message:") print(payload) - else: - print('The strings match!') # Post the reminder message request = { 'type': 'stream', diff --git a/.github/workflows/shellcheck.yml b/.github/workflows/shellcheck.yml new file mode 100644 index 000000000..ea590a036 --- /dev/null +++ b/.github/workflows/shellcheck.yml @@ -0,0 +1,25 @@ +name: Shellcheck + +on: + push: + branches: + - main + paths: + - 'scripts/**/*.sh' + pull_request: + paths: + - 'scripts/**/*.sh' + 
workflow_dispatch: + +jobs: + shellcheck: + name: Check shell scripts + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + with: + sparse-checkout: | + scripts + + - name: Run shellcheck + run: find scripts -name '*.sh' -type f -print0 | xargs -0 shellcheck --severity=warning diff --git a/.gitignore b/.gitignore index 94f4582bb..65cfbacbb 100644 --- a/.gitignore +++ b/.gitignore @@ -14,3 +14,4 @@ /docs/Std-manifest.json.hash /docs/Std-manifest.json.trace .DS_Store +.claude \ No newline at end of file diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index e625fcc4c..e2591bcb7 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,3 +1,46 @@ +**Table of Contents** + +- [Contributing to CSLib](#contributing-to-cslib) +- [Contribution model](#contribution-model) +- [Style and documentation](#style-and-documentation) + - [Variable names](#variable-names) + - [Proof style and golfing :golf:](#proof-style-and-golfing-golf) + - [Notation](#notation) + - [Documentation](#documentation) +- [Design principles](#design-principles) + - [Reuse](#reuse) +- [Continuous Integration](#continuous-integration) + - [Pull Request Titles](#pull-request-titles) + - [Testing](#testing) + - [Linting](#linting) + - [Imports](#imports) +- [Getting started](#getting-started) + - [Before you start: coordination to avoid rework](#before-you-start-coordination-to-avoid-rework) + - [Finding tasks](#finding-tasks) + - [Working groups](#working-groups) + - [Proposing a new working group](#proposing-a-new-working-group) + - [Examples of welcome contributions](#examples-of-welcome-contributions) + - [Pillar 1: Formalising Computer Science in Lean](#pillar-1-formalising-computer-science-in-lean) + - [Algorithms and Data Structures](#algorithms-and-data-structures) + - [Verified data structures with time complexity (Batteries + Time Monad)](#verified-data-structures-with-time-complexity-batteries--time-monad) + - [Graph algorithms and graph 
foundations](#graph-algorithms-and-graph-foundations) + - [APIs for algorithmic paradigms](#apis-for-algorithmic-paradigms) + - [Programming Languages, Models of Computation and Interaction](#programming-languages-models-of-computation-and-interaction) + - [Logics](#logics) + - [Semantics and program equivalences](#semantics-and-program-equivalences) + - [Semantic frameworks](#semantic-frameworks) + - [Program equivalences](#program-equivalences) + - [Pillar 2: Code reasoning](#pillar-2-code-reasoning) + - [Contributing Boole examples](#contributing-boole-examples) + - [Boole specifications](#boole-specifications) + - [Issue labels for Boole](#issue-labels-for-boole) + - [Front ends for Boole](#front-ends-for-boole) + - [Back ends for Boole](#back-ends-for-boole) + - [Implementing verification paradigms](#implementing-verification-paradigms) + - [Lean automation](#lean-automation) + - [The role of AI](#the-role-of-ai) + + # Contributing to CSLib It's great that you're interested in contributing to CSLib! :tada: @@ -12,9 +55,11 @@ Each PR needs to be approved by at least one relevant maintainer. You can read t If you are adding something new to CSLib and are in doubt about it, you are very welcome to contact us on the [Lean prover Zulip chat](https://leanprover.zulipchat.com/). +If you are unfamiliar with CSLib as a whole and want to understand how to get started, please see [Getting started](#getting-started). + # Style and documentation -We generally follow the [mathlib style for coding and documentation](https://leanprover-community.github.io/contribute/style.html), so please read that as well. Some things worth mentioning and conventions specific to this library are explained next. +We generally follow the [mathlib style for coding and documentation](https://leanprover-community.github.io/contribute/style.html), so please read that as well. Some things worth mentioning and conventions specific to CSLib are explained next. 
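+As a quick orientation, a new library file typically combines these conventions: the copyright header, the `module` keyword, and a `public import` of `Cslib.Init`. A minimal sketch (the header fields and module docstring are placeholders, not real content):
+
+```lean
+/-
+Copyright (c) $YEAR $AUTHOR_NAME. All rights reserved.
+Released under Apache 2.0 license as described in the file LICENSE.
+Authors: $LIST_OF_AUTHORS
+-/
+
+module
+
+public import Cslib.Init
+
+/-! # My new module
+
+A short module docstring explaining the formalised concept,
+with a reference to the published resource it follows.
+-/
+```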
## Variable names @@ -32,6 +77,19 @@ The library hosts a number of languages with their own syntax and semantics, so - If you want notation for a common concept, like reductions or transitions in an operational semantics, try to find an existing typeclass that fits your need. - If you define new notation that in principle can apply to different types (e.g., syntax or semantics of other languages), keep it locally scoped or create a new typeclass. +## Documentation + +Document your definitions and theorems to ease both use and reviewing. +When formalising a concept that is explained in a published resource, please reference the resource in your documentation. + +# Design principles + +## Reuse + +A central focus of CSLib is providing reusable abstractions and their consistent usage across the +library. New definitions should instantiate existing abstractions whenever appropriate: a +labelled transition system should use `LTS`, etc. + # Continuous Integration There are a number of checks that run in continuous integration. Here is a brief guide that includes @@ -64,9 +122,210 @@ CSLib uses a number of linters, mostly inherited from Batteries and Mathlib. The ## Imports -CSLib tests for minimized imports using `lake exe shake`, which also comes with a `--fix` option. -Note that this tooling is not aware of imports required for tactics or typeclasses. Such imports may -be specified as exceptions in [scripts/noshake.json](/scripts/noshake.json). - There is a also a test that [Cslib.lean](/Cslib.lean) imports all files. You can ensure this by -running `lake exe mk_all` locally, which will make the required changes. +running `lake exe mk_all --module` locally, which will make the required changes. + +# Getting started + +CSLib is a community effort. To understand its scope and vision, please read the [CSLib whitepaper](https://arxiv.org/abs/2602.04846). 
+For an overview of its technical approach to reuse, continuous integration, and proof automation, please read the [Computer Science as Infrastructure paper](https://arxiv.org/abs/2602.15078). + +Key project links include: + +- Website: https://www.cslib.io/ +- GitHub issues + PRs: https://github.com/leanprover/cslib +- Open contribution board: https://github.com/leanprover/cslib/projects?query=is%3Aopen +- Community discussion (Lean Community Zulip): https://leanprover.zulipchat.com/ + - CSLib channels are the recommended place to coordinate and ask questions. + +## Before you start: coordination to avoid rework + +Most contributions are welcome as straightforward PRs. However, **for any major development**, it is strongly recommended to discuss first on Zulip (or via a GitHub issue) so that the scope, dependencies, and placement in the library are aligned. + +Examples of work that should be discussed first: + +- New cross-cutting abstractions / typeclasses / notation schemes. +- New foundational frameworks. +- Major refactorings. +- New frontend or backend components for CSLib's verification infrastructure. +- Proposals for new working groups (see below). + +## Finding tasks + +If you are looking for a concrete starting point, please look at: + +- The CSLib Zulip channels. +- Our [GitHub issues](https://github.com/leanprover/cslib/issues). + + +## Working groups + +CSLib is structured to support multiple topic-focused efforts. We organise sustained work via **working groups** (informal or formal), which typically have a topic scope and a Zulip topic/channel for coordination. + +If you want to **join** a working group, start by posting on the relevant CSLib Zulip channel describing your background and what you want to contribute. + +### Proposing a new working group + +If you want to propose a new working group, write a short proposal (Zulip message or GitHub issue is fine) that includes: + +- **Topic**: What do you want to do? 
+- **Execution plan**: What is your execution plan?
+- **Collaborators**: If a group or some people are already planning to work on the topic, list them.
+
+The goal is to keep proposals lightweight while ensuring CSLib remains coherent and reusable.
+
+## Examples of welcome contributions
+
+Here you can find some (non-exhaustive) examples of topics looking for contributions.
+
+### Pillar 1: Formalising Computer Science in Lean
+
+Pillar 1 is about the formalisation of Computer Science as reusable infrastructure. This includes, but is not limited to, models of computation, semantics, logics, algorithms, data structures, metatheory, and supporting mathematics.
+
+#### Algorithms and Data Structures
+
+##### Verified data structures with time complexity (Batteries + Time Monad)
+
+A concrete and high-impact track is to verify implementations and time complexity bounds for [data structures from Batteries](https://github.com/leanprover-community/batteries/tree/main/Batteries/Data).
+
+Examples of candidate targets:
+
+- List and Array developments
+- Binary heap
+- Binomial heap
+- Union find
+- Red-black trees
+
+##### Graph algorithms and graph foundations
+
+- Foundational definitions (directed/undirected simple graphs, etc.)
+- Core algorithms and their correctness proofs:
+  - DFS, topological sorting, SCC
+  - shortest paths, APSP
+  - max-flow
+  - minimum spanning tree
+  - spanners
+  - Gomory–Hu trees
+
+##### APIs for algorithmic paradigms
+
+Reusable APIs that support many concrete algorithms.
+
+- Divide-and-conquer
+  - Master theorem
+- Dynamic programming
+  - generic DP API patterns
+  - quadrangle inequality (Yao ’80)
+  - SMAWK algorithm
+
+#### Programming Languages, Models of Computation and Interaction
+
+- Automata (on finite and infinite words)
+- Choreographic languages
+- Lambda calculi
+- Petri nets
+- Process calculi, like CCS and pi-calculus
+- Frameworks for language encodings (compilers, etc.)
+- Proof techniques for the correctness of encodings
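+The Time Monad track mentioned above can be illustrated with a small sketch. This mirrors the `TimeM` structure that ships with the library (shown elsewhere in this diff), but is written standalone with `Nat` in place of `ℕ`; the one-tick-per-comparison cost model and the `timedMax` function are illustrative assumptions, not library API:
+
+```lean
+-- Standalone sketch of the library's `TimeM` structure (illustrative only).
+structure TimeM (α : Type) where
+  ret : α    -- the computed value
+  time : Nat -- the accumulated time cost
+
+/-- Maximum of a list, charging one time unit per comparison
+(a hypothetical cost model chosen for illustration). -/
+def timedMax : List Nat → TimeM Nat
+  | [] => ⟨0, 0⟩
+  | x :: xs =>
+    let r := timedMax xs
+    ⟨max x r.ret, r.time + 1⟩
+
+-- Correctness is proved about `.ret`, complexity about `.time`:
+example (l : List Nat) : (timedMax l).time = l.length := by
+  induction l with
+  | nil => rfl
+  | cons x xs ih => simp [timedMax, ih]
+```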
+
+#### Logics
+
+We aim to formalise a number of logics of different kinds, including linear logic, modal logics, etc.
+
+We welcome proofs of logical equivalences and metatheoretical results such as identity expansion, cut elimination, etc.
+
+Examples of interesting logics include:
+- Linear logic
+- Temporal logic
+- Separation logic
+
+#### Semantics and program equivalences
+
+##### Semantic frameworks
+- Denotational semantics
+- Operational semantics, including results on labelled transition systems and reduction systems
+
+##### Program equivalences
+
+- Bisimulation
+- May/Must testing
+- Trace equivalence
+
+### Pillar 2: Code reasoning
+
+Pillar 2 is about infrastructure for reasoning about code in mainstream programming languages via intermediate representations, VC generation, and automation.
+
+We are interested in collecting a large number of programs in Boole (see the [CSLib whitepaper](https://arxiv.org/abs/2602.04846) for Boole's vision).
+
+You can try the Boole sandbox examples at .
+
+#### Contributing Boole examples
+
+If you'd like to contribute examples, please propose and coordinate on the [Zulip channel for code reasoning](https://leanprover.zulipchat.com/#narrow/channel/563135-CSLib.3A-Code-Reasoning) first (especially if the example requires new features).
+
+We separate Boole examples into two directories:
+
+- examples that work with the current Boole back end
+- examples that are broken or contain features not yet implemented
+
+Contributions to both sets are valuable: working examples demonstrate capabilities; 'broken' examples identify missing features and bottlenecks.
+
+#### Boole specifications
+
+Currently, Boole specifications are based on Strata Core:
+
+A key long-term goal is to support specifications that reference arbitrary Lean concepts, especially those formalised as part of CSLib Pillar 1.
Designing this cleanly within the Strata framework is a challenging and valuable project.
+
+#### Issue labels for Boole
+
+If you have feature requests for Boole, file an issue with title `feat(Boole): `.
+
+For bugs, errors, or other issues, file an issue with label `Boole`.
+
+#### Front ends for Boole
+
+We are interested in developing translations from real-world programming languages to Boole.
+
+- Prototype translations are welcome to explore feasibility and identify design constraints.
+- If you want to propose a translation for inclusion in CSLib, coordinate on Zulip.
+
+We expect initial translations will be ad hoc and trusted. The eventual goal is to formalise the semantics of front ends and prove (as a Lean metatheorem) that translations preserve semantics.
+
+#### Back ends for Boole
+
+We are interested in building Boole back ends that take as input Boole programs with formal specifications and construct proof obligations in Lean, which, if proved, ensure that the program meets its specification.
+
+- A prototype translation based on a deep embedding in Strata exists, but is not fully foundational.
+- A major long-term goal is to prove Lean metatheorems showing that proving the verification conditions ensures correctness of the Boole program.
+
+Alternative directions are welcome, e.g.:
+
+- Exploring a shallow embedding approach
+- Leveraging Loom for more foundational pipelines
+
+A back end for **time complexity analysis** is also of interest.
+
+#### Implementing verification paradigms
+
+The formal methods community has a wide range of verification techniques that could be valuable in the Boole ecosystem, e.g.:
+
+- proof by refinement
+- techniques for program equivalence
+- other deductive verification paradigms
+
+#### Lean automation
+
+Since Boole back ends reduce correctness questions to Lean conjectures, automation is central.
+
+We already rely on key techniques such as `grind` and `lean-smt`.
Additional work on automation for conjectures generated from Boole is welcome, including domain-specific automation that remains performant and readable. + +#### The role of AI + +There are two primary areas where generative AI can help: + +- generating/refining specifications (at the front-end or Boole level) +- helping to prove Lean conjectures + +Other creative uses of AI are welcome, but contributions should remain reviewable and maintainable. \ No newline at end of file diff --git a/Cslib.lean b/Cslib.lean index 8c6984eb9..be342da0d 100644 --- a/Cslib.lean +++ b/Cslib.lean @@ -1,11 +1,19 @@ -module +module -- shake: keep-all -public import Cslib.Algorithms.Lean.MergeSort.MergeSort -public import Cslib.Algorithms.Lean.TimeM +public import Cslib.AlgorithmsTheory.Algorithms.ListInsertionSort +public import Cslib.AlgorithmsTheory.Algorithms.ListLinearSearch +public import Cslib.AlgorithmsTheory.Algorithms.ListOrderedInsert +public import Cslib.AlgorithmsTheory.Algorithms.MergeSort +public import Cslib.AlgorithmsTheory.Lean.MergeSort.MergeSort +public import Cslib.AlgorithmsTheory.Lean.TimeM +public import Cslib.AlgorithmsTheory.Models.ListComparisonSearch +public import Cslib.AlgorithmsTheory.Models.ListComparisonSort +public import Cslib.AlgorithmsTheory.QueryModel public import Cslib.Computability.Automata.Acceptors.Acceptor public import Cslib.Computability.Automata.Acceptors.OmegaAcceptor public import Cslib.Computability.Automata.DA.Basic public import Cslib.Computability.Automata.DA.Buchi +public import Cslib.Computability.Automata.DA.Congr public import Cslib.Computability.Automata.DA.Prod public import Cslib.Computability.Automata.DA.ToNA public import Cslib.Computability.Automata.EpsilonNA.Basic @@ -16,15 +24,25 @@ public import Cslib.Computability.Automata.NA.BuchiInter public import Cslib.Computability.Automata.NA.Concat public import Cslib.Computability.Automata.NA.Hist public import Cslib.Computability.Automata.NA.Loop +public import 
Cslib.Computability.Automata.NA.Pair public import Cslib.Computability.Automata.NA.Prod public import Cslib.Computability.Automata.NA.Sum public import Cslib.Computability.Automata.NA.ToDA public import Cslib.Computability.Automata.NA.Total +public import Cslib.Computability.Languages.Congruences.BuchiCongruence +public import Cslib.Computability.Languages.Congruences.RightCongruence public import Cslib.Computability.Languages.ExampleEventuallyZero public import Cslib.Computability.Languages.Language public import Cslib.Computability.Languages.OmegaLanguage public import Cslib.Computability.Languages.OmegaRegularLanguage public import Cslib.Computability.Languages.RegularLanguage +public import Cslib.Computability.URM.Basic +public import Cslib.Computability.URM.Computable +public import Cslib.Computability.URM.Defs +public import Cslib.Computability.URM.Execution +public import Cslib.Computability.URM.StandardForm +public import Cslib.Computability.URM.StraightLine +public import Cslib.Foundations.Combinatorics.InfiniteGraphRamsey public import Cslib.Foundations.Control.Monad.Free public import Cslib.Foundations.Control.Monad.Free.Effects public import Cslib.Foundations.Control.Monad.Free.Fold @@ -36,7 +54,9 @@ public import Cslib.Foundations.Data.OmegaSequence.Flatten public import Cslib.Foundations.Data.OmegaSequence.InfOcc public import Cslib.Foundations.Data.OmegaSequence.Init public import Cslib.Foundations.Data.OmegaSequence.Temporal +public import Cslib.Foundations.Data.RelatesInSteps public import Cslib.Foundations.Data.Relation +public import Cslib.Foundations.Data.Set.Saturation public import Cslib.Foundations.Lint.Basic public import Cslib.Foundations.Semantics.FLTS.Basic public import Cslib.Foundations.Semantics.FLTS.FLTSToLTS @@ -46,7 +66,8 @@ public import Cslib.Foundations.Semantics.LTS.Basic public import Cslib.Foundations.Semantics.LTS.Bisimulation public import Cslib.Foundations.Semantics.LTS.Simulation public import 
Cslib.Foundations.Semantics.LTS.TraceEq -public import Cslib.Foundations.Semantics.ReductionSystem.Basic +public import Cslib.Foundations.Syntax.Congruence +public import Cslib.Foundations.Syntax.Context public import Cslib.Foundations.Syntax.HasAlphaEquiv public import Cslib.Foundations.Syntax.HasSubstitution public import Cslib.Foundations.Syntax.HasWellFormed @@ -58,6 +79,7 @@ public import Cslib.Languages.CombinatoryLogic.Basic public import Cslib.Languages.CombinatoryLogic.Confluence public import Cslib.Languages.CombinatoryLogic.Defs public import Cslib.Languages.CombinatoryLogic.Evaluation +public import Cslib.Languages.CombinatoryLogic.List public import Cslib.Languages.CombinatoryLogic.Recursion public import Cslib.Languages.LambdaCalculus.LocallyNameless.Context public import Cslib.Languages.LambdaCalculus.LocallyNameless.Fsub.Basic @@ -72,8 +94,10 @@ public import Cslib.Languages.LambdaCalculus.LocallyNameless.Stlc.Safety public import Cslib.Languages.LambdaCalculus.LocallyNameless.Untyped.Basic public import Cslib.Languages.LambdaCalculus.LocallyNameless.Untyped.FullBeta public import Cslib.Languages.LambdaCalculus.LocallyNameless.Untyped.FullBetaConfluence +public import Cslib.Languages.LambdaCalculus.LocallyNameless.Untyped.LcAt public import Cslib.Languages.LambdaCalculus.LocallyNameless.Untyped.Properties public import Cslib.Languages.LambdaCalculus.Named.Untyped.Basic +public import Cslib.Logics.HML.Basic public import Cslib.Logics.LinearLogic.CLL.Basic public import Cslib.Logics.LinearLogic.CLL.CutElimination public import Cslib.Logics.LinearLogic.CLL.EtaExpansion diff --git a/Cslib/Algorithms/Lean/TimeM.lean b/Cslib/Algorithms/Lean/TimeM.lean deleted file mode 100644 index ff7f0f738..000000000 --- a/Cslib/Algorithms/Lean/TimeM.lean +++ /dev/null @@ -1,97 +0,0 @@ -/- -Copyright (c) 2025 Sorrachai Yingchareonthawornhcai. All rights reserved. -Released under Apache 2.0 license as described in the file LICENSE. 
-Authors: Sorrachai Yingchareonthawornhcai --/ - -module - -import Cslib.Init - -@[expose] public section - -/-! - -# TimeM: Time Complexity Monad -`TimeM α` represents a computation that produces a value of type `α` and tracks its time cost. - -## Design Principles -1. **Pure inputs, timed outputs**: Functions take plain values and return `TimeM` results -2. **Time annotations are trusted**: The `time` field is NOT verified against actual cost. - You must manually ensure annotations match the algorithm's complexity in your cost model. -3. **Separation of concerns**: Prove correctness properties on `.ret`, prove complexity on `.time` - -## Cost Model -**Document your cost model explicitly** Decide and be consistent about: -- **What costs 1 unit?** (comparison, arithmetic operation, etc.) -- **What is free?** (variable lookup, pattern matching, etc.) -- **Recursive calls:** Do you charge for the call itself? - -## Notation -- **`✓`** : A tick of time, see `tick`. -- **`⟪tm⟫`** : Extract the pure value from a `TimeM` computation (notation for `tm.ret`) - -## References - -See [Danielsson2008] for the discussion. --/ -namespace Cslib.Algorithms.Lean - -/-- A monad for tracking time complexity of computations. -`TimeM α` represents a computation that returns a value of type `α` -and accumulates a time cost (represented as a natural number). -/ -structure TimeM (α : Type*) where - /-- The return value of the computation -/ - ret : α - /-- The accumulated time cost of the computation -/ - time : ℕ - -namespace TimeM - -/-- Lifts a pure value into a `TimeM` computation with zero time cost. -/ -@[scoped grind =] -def pure {α} (a : α) : TimeM α := - ⟨a, 0⟩ - -/-- Sequentially composes two `TimeM` computations, summing their time costs. 
-/ -@[scoped grind =] -def bind {α β} (m : TimeM α) (f : α → TimeM β) : TimeM β := - let r := f m.ret - ⟨r.ret, m.time + r.time⟩ - -instance : Monad TimeM where - pure := pure - bind := bind - -/-- Creates a `TimeM` computation with a specified value and time cost. -The time cost defaults to 1 if not provided. -/ -@[simp, grind =] def tick {α : Type*} (a : α) (c : ℕ := 1) : TimeM α := ⟨a, c⟩ - -/-- Notation for `tick` with explicit time cost: `✓ a, c` -/ -scoped notation "✓" a:arg ", " c:arg => tick a c - -/-- Notation for `tick` with default time cost of 1: `✓ a` -/ -scoped notation "✓" a:arg => tick a - -/-- Notation for extracting the return value from a `TimeM` computation: `⟪tm⟫` -/ -scoped notation:max "⟪" tm "⟫" => (TimeM.ret tm) - -/-- A unit computation with time cost 1. -/ -def tickUnit : TimeM Unit := - ✓ () - -@[simp] theorem time_of_pure {α} (a : α) : (pure a).time = 0 := rfl - -@[simp] theorem time_of_bind {α β} (m : TimeM α) (f : α → TimeM β) : - (TimeM.bind m f).time = m.time + (f m.ret).time := rfl - -@[simp] theorem time_of_tick {α} (a : α) (c : ℕ) : (tick a c).time = c := rfl - -@[simp] theorem ret_bind {α β} (m : TimeM α) (f : α → TimeM β) : - (TimeM.bind m f).ret = (f m.ret).ret := rfl - --- this allow us to simplify the chain of monadic compositions -attribute [simp] Bind.bind Pure.pure TimeM.pure TimeM.bind - -end TimeM -end Cslib.Algorithms.Lean diff --git a/Cslib/AlgorithmsTheory/Algorithms/ListInsertionSort.lean b/Cslib/AlgorithmsTheory/Algorithms/ListInsertionSort.lean new file mode 100644 index 000000000..86c85c3ee --- /dev/null +++ b/Cslib/AlgorithmsTheory/Algorithms/ListInsertionSort.lean @@ -0,0 +1,100 @@ +/- +Copyright (c) 2026 Shreyas Srinivas. All rights reserved. +Released under Apache 2.0 license as described in the file LICENSE. 
+Authors: Shreyas Srinivas, Eric Wieser +-/ +module + +public import Cslib.AlgorithmsTheory.QueryModel +public import Cslib.AlgorithmsTheory.Algorithms.ListOrderedInsert +public import Mathlib + +@[expose] public section + +/-! +# Insertion sort in a list + +In this file we state and prove the correctness and complexity of insertion sort in lists under +the `SortOps` model. This `insertionSort` evaluates identically to the upstream version of +`List.insertionSort`. +-- + +## Main Definitions + +- `insertionSort` : Insertion sort algorithm in the `SortOps` query model + +## Main results + +- `insertionSort_eval`: `insertionSort` evaluates identically to `List.insertionSort`. +- `insertionSort_permutation` : `insertionSort` outputs a permutation of the input list. +- `insertionSort_sorted` : `insertionSort` outputs a sorted list. +- `insertionSort_complexity` : `insertionSort` takes at most n * (n + 1) comparisons and + (n + 1) * (n + 2) list head-insertions. +-/ + +namespace Cslib + +namespace Algorithms + +open Prog + +/-- The `insertionSort` algorithm on lists with the `SortOps` query model.
-/ +def insertionSort (l : List α) : Prog (SortOps α) (List α) := + match l with + | [] => return [] + | x :: xs => do + let rest ← insertionSort xs + insertOrd x rest + +@[simp] +theorem insertionSort_eval (l : List α) (le : α → α → Prop) [DecidableRel le] : + (insertionSort l).eval (sortModel le) = l.insertionSort le := by + induction l with simp_all [insertionSort] + +theorem insertionSort_permutation (l : List α) (le : α → α → Prop) [DecidableRel le] : + ((insertionSort l).eval (sortModel le)).Perm l := by + simp [insertionSort_eval, List.perm_insertionSort] + +theorem insertionSort_sorted + (l : List α) (le : α → α → Prop) [DecidableRel le] [Std.Total le] [IsTrans α le] : + ((insertionSort l).eval (sortModel le)).Pairwise le := by + simpa using List.pairwise_insertionSort _ _ + +lemma insertionSort_length (l : List α) (le : α → α → Prop) [DecidableRel le] : + ((insertionSort l).eval (sortModel le)).length = l.length := by + simp + +lemma insertionSort_time_compares (head : α) (tail : List α) (le : α → α → Prop) [DecidableRel le] : + ((insertionSort (head :: tail)).time (sortModel le)).compares = + ((insertionSort tail).time (sortModel le)).compares + + ((insertOrd head (tail.insertionSort le)).time (sortModel le)).compares := by + simp [insertionSort] + +lemma insertionSort_time_inserts (head : α) (tail : List α) (le : α → α → Prop) [DecidableRel le] : + ((insertionSort (head :: tail)).time (sortModel le)).inserts = + ((insertionSort tail).time (sortModel le)).inserts + + ((insertOrd head (tail.insertionSort le)).time (sortModel le)).inserts := by + simp [insertionSort] + +theorem insertionSort_complexity (l : List α) (le : α → α → Prop) [DecidableRel le] : + ((insertionSort l).time (sortModel le)) + ≤ ⟨l.length * (l.length + 1), (l.length + 1) * (l.length + 2)⟩ := by + induction l with + | nil => + simp [insertionSort] + | cons head tail ih => + have h := insertOrd_complexity_upper_bound (tail.insertionSort le) head le + simp_all only [List.length_cons, 
List.length_insertionSort] + obtain ⟨ih₁,ih₂⟩ := ih + obtain ⟨h₁,h₂⟩ := h + refine ⟨?_, ?_⟩ + · clear h₂ + rw [insertionSort_time_compares] + nlinarith [ih₁, h₁] + · clear h₁ + rw [insertionSort_time_inserts] + nlinarith [ih₂, h₂] + +end Algorithms + +end Cslib diff --git a/Cslib/AlgorithmsTheory/Algorithms/ListLinearSearch.lean b/Cslib/AlgorithmsTheory/Algorithms/ListLinearSearch.lean new file mode 100644 index 000000000..0a1f5c3a9 --- /dev/null +++ b/Cslib/AlgorithmsTheory/Algorithms/ListLinearSearch.lean @@ -0,0 +1,83 @@ +/- +Copyright (c) 2026 Shreyas Srinivas. All rights reserved. +Released under Apache 2.0 license as described in the file LICENSE. +Authors: Shreyas Srinivas, Eric Wieser +-/ + +module + +public import Cslib.AlgorithmsTheory.QueryModel +public import Cslib.AlgorithmsTheory.Models.ListComparisonSearch +public import Mathlib + +@[expose] public section + +/-! +# Linear search in a list + +In this file we state and prove the correctness and complexity of linear search in lists under +the `ListSearch` model. +-- + +## Main Definitions + +- `listLinearSearch` : Linear search algorithm in the `ListSearch` query model + +## Main results + +- `listLinearSearch_eval`: `listLinearSearch` evaluates identically to `List.contains`. +- `listLinearSearchM_time_complexity_upper_bound` : `listLinearSearch` takes at most `n` + comparison operations. +- `listLinearSearchM_time_complexity_lower_bound` : There exist lists on which `listLinearSearch` + needs `n` comparisons. +-/ +namespace Cslib + +namespace Algorithms + +open Prog + +open ListSearch in +/-- Linear search in lists on top of the `ListSearch` query model.
-/ +def listLinearSearch (l : List α) (x : α) : Prog (ListSearch α) Bool := do + match l with + | [] => return false + | l :: ls => + let cmp : Bool ← compare (l :: ls) x + if cmp then + return true + else + listLinearSearch ls x + +@[simp, grind =] +lemma listLinearSearch_eval [BEq α] (l : List α) (x : α) : + (listLinearSearch l x).eval ListSearch.natCost = l.contains x := by + fun_induction l.elem x with simp_all [listLinearSearch] + +lemma listLinearSearchM_correct_true [BEq α] [LawfulBEq α] (l : List α) + {x : α} (x_mem_l : x ∈ l) : (listLinearSearch l x).eval ListSearch.natCost = true := by + simp [x_mem_l] + +lemma listLinearSearchM_correct_false [BEq α] [LawfulBEq α] (l : List α) + {x : α} (x_mem_l : x ∉ l) : (listLinearSearch l x).eval ListSearch.natCost = false := by + simp [x_mem_l] + +lemma listLinearSearchM_time_complexity_upper_bound [BEq α] (l : List α) (x : α) : + (listLinearSearch l x).time ListSearch.natCost ≤ l.length := by + fun_induction l.elem x with + | case1 => simp [listLinearSearch] + | case2 => simp_all [listLinearSearch] + | case3 => + simp_all [listLinearSearch] + grind + +-- TODO: strengthen this statement; as phrased it is trivially witnessed by the empty list +lemma listLinearSearchM_time_complexity_lower_bound [DecidableEq α] [Nonempty α] : + ∃ l : List α, ∃ x : α, (listLinearSearch l x).time ListSearch.natCost = l.length := by + inhabit α + refine ⟨[], default, ?_⟩ + simp_all [ListSearch.natCost, listLinearSearch] + +end Algorithms + +end Cslib diff --git a/Cslib/AlgorithmsTheory/Algorithms/ListOrderedInsert.lean b/Cslib/AlgorithmsTheory/Algorithms/ListOrderedInsert.lean new file mode 100644 index 000000000..4a0ebfb93 --- /dev/null +++ b/Cslib/AlgorithmsTheory/Algorithms/ListOrderedInsert.lean @@ -0,0 +1,96 @@ +/- +Copyright (c) 2026 Shreyas Srinivas. All rights reserved. +Released under Apache 2.0 license as described in the file LICENSE.
+Authors: Shreyas Srinivas, Eric Wieser +-/ + +module + +public import Cslib.AlgorithmsTheory.QueryModel +public import Cslib.AlgorithmsTheory.Models.ListComparisonSort +public import Mathlib + +@[expose] public section + +/-! +# Ordered insertion in a list + +In this file we state and prove the correctness and complexity of ordered insertions in lists under +the `SortOps` model. This ordered insert is later used in `insertionSort`, mirroring the structure +of the corresponding pure Lean declarations in upstream libraries. + +-- + +## Main Definitions + +- `insertOrd` : ordered insert algorithm in the `SortOps` query model + +## Main results + +- `insertOrd_eval`: `insertOrd` evaluates identically to `List.orderedInsert`. +- `insertOrd_complexity_upper_bound` : `insertOrd` takes at most `n` comparisons + and `n + 1` list head-insertion operations. +- `insertOrd_sorted` : Applying `insertOrd` to a sorted list yields a sorted list. +-/ + +namespace Cslib +namespace Algorithms + +open Prog + +open SortOps + +/-- +Performs ordered insertion of `x` into a list `l` in the `SortOps` query model. +If `l` is sorted, then `x` is inserted into `l` such that the resulting list is also sorted.
+-/ +def insertOrd (x : α) (l : List α) : Prog (SortOps α) (List α) := do + match l with + | [] => insertHead x l + | a :: as => + if (← cmpLE x a : Bool) then + insertHead x (a :: as) + else + let res ← insertOrd x as + insertHead a res + +@[simp] +lemma insertOrd_eval (x : α) (l : List α) (le : α → α → Prop) [DecidableRel le] : + (insertOrd x l).eval (sortModel le) = l.orderedInsert le x := by + induction l with + | nil => + simp [insertOrd, sortModel] + | cons head tail ih => + by_cases h_head : le x head + · simp [insertOrd, h_head] + · simp [insertOrd, h_head, ih] + +-- to upstream +@[simp] +lemma _root_.List.length_orderedInsert (x : α) (l : List α) [DecidableRel r] : + (l.orderedInsert r x).length = l.length + 1 := by + induction l <;> grind + +theorem insertOrd_complexity_upper_bound + (l : List α) (x : α) (le : α → α → Prop) [DecidableRel le] : + (insertOrd x l).time (sortModel le) ≤ ⟨l.length, l.length + 1⟩ := by + induction l with + | nil => + simp [insertOrd, sortModel] + | cons head tail ih => + obtain ⟨ih_compares, ih_inserts⟩ := ih + rw [insertOrd] + by_cases h_head : le x head + · simp [h_head] + · simp [h_head] + grind + +lemma insertOrd_sorted + (l : List α) (x : α) (le : α → α → Prop) [DecidableRel le] [Std.Total le] [IsTrans _ le] : + l.Pairwise le → ((insertOrd x l).eval (sortModel le)).Pairwise le := by + rw [insertOrd_eval] + exact List.Pairwise.orderedInsert _ _ + +end Algorithms + +end Cslib diff --git a/Cslib/AlgorithmsTheory/Algorithms/MergeSort.lean b/Cslib/AlgorithmsTheory/Algorithms/MergeSort.lean new file mode 100644 index 000000000..a2a235984 --- /dev/null +++ b/Cslib/AlgorithmsTheory/Algorithms/MergeSort.lean @@ -0,0 +1,207 @@ +/- +Copyright (c) 2026 Shreyas Srinivas. All rights reserved. +Released under Apache 2.0 license as described in the file LICENSE. 
+Authors: Shreyas Srinivas, Eric Wieser +-/ + +module + +public import Cslib.AlgorithmsTheory.QueryModel +public import Cslib.AlgorithmsTheory.Models.ListComparisonSort +public import Cslib.AlgorithmsTheory.Lean.MergeSort.MergeSort +import all Cslib.AlgorithmsTheory.Lean.MergeSort.MergeSort +import all Init.Data.List.Sort.Basic +@[expose] public section + +/-! +# Merge sort in a list + +In this file we state and prove the correctness and complexity of merge sort in lists under +the `SortOpsCmp` model. +-- + +## Main Definitions +- `merge` : Merge algorithm for merging two sorted lists in the `SortOpsCmp` query model +- `mergeSort` : Merge sort algorithm in the `SortOpsCmp` query model + +## Main results + +- `mergeSort_eval`: `mergeSort` evaluates identically to the private `mergeSortNaive`. +- `mergeSort_sorted` : `mergeSort` outputs a sorted list. +- `mergeSort_perm` : The output of `mergeSort` is a permutation of the input list. +- `mergeSort_complexity` : `mergeSort` takes at most n * ⌈log₂ n⌉ comparisons. +-/ +namespace Cslib.Algorithms + +open SortOpsCmp + +/-- Merge two sorted lists using comparisons in the query monad.
-/ +@[simp] +def merge (x y : List α) : Prog (SortOpsCmp α) (List α) := do + match x,y with + | [], ys => return ys + | xs, [] => return xs + | x :: xs', y :: ys' => do + let cmp : Bool ← cmpLE x y + if cmp then + let rest ← merge xs' (y :: ys') + return (x :: rest) + else + let rest ← merge (x :: xs') ys' + return (y :: rest) + +lemma merge_timeComplexity (x y : List α) (le : α → α → Prop) [DecidableRel le] : + (merge x y).time (sortModelNat le) ≤ x.length + y.length := by + fun_induction List.merge x y (le · ·) with + | case1 => simp + | case2 => simp + | case3 x xs y ys hxy ihx => + suffices 1 + (merge xs (y :: ys)).time (sortModelNat le) ≤ xs.length + 1 + (ys.length + 1) by + simpa [hxy] + grind + | case4 x xs y ys hxy ihy => + suffices 1 + (merge (x :: xs) ys).time (sortModelNat le) ≤ xs.length + 1 + (ys.length + 1) by + simpa [hxy] + grind + +@[simp] +lemma merge_eval (x y : List α) (le : α → α → Prop) [DecidableRel le] : + (merge x y).eval (sortModelNat le) = List.merge x y (le · ·) := by + fun_induction List.merge with + | case1 => simp + | case2 => simp + | case3 x xs y ys ihx ihy => simp_all [merge] + | case4 x xs y ys hxy ihx => + rw [decide_eq_true_iff] at hxy + simp_all [merge, -not_le] + +lemma merge_length (x y : List α) (le : α → α → Prop) [DecidableRel le] : + ((merge x y).eval (sortModelNat le)).length = x.length + y.length := by + rw [merge_eval] + apply List.length_merge + +/-- +The `mergeSort` algorithm in the `SortOps` query model. It sorts the input list +according to the mergeSort algorithm. 
+-/ +def mergeSort (xs : List α) : Prog (SortOpsCmp α) (List α) := do + if xs.length < 2 then return xs + else + let half := xs.length / 2 + let left := xs.take half + let right := xs.drop half + let sortedLeft ← mergeSort left + let sortedRight ← mergeSort right + merge sortedLeft sortedRight + +/-- +The vanilla Lean version of merge sort; `mergeSortNaive` is extensionally equal to `mergeSort` +-/ +private def mergeSortNaive (xs : List α) (le : α → α → Prop) [DecidableRel le] : List α := + if xs.length < 2 then xs + else + let sortedLeft := mergeSortNaive (xs.take (xs.length/2)) le + let sortedRight := mergeSortNaive (xs.drop (xs.length/2)) le + List.merge sortedLeft sortedRight (le · ·) + +private proof_wanted mergeSortNaive_eq_mergeSort + [LinearOrder α] (xs : List α) (le : α → α → Prop) [DecidableRel le] : + mergeSortNaive xs le = xs.mergeSort + +private lemma mergeSortNaive_Perm (xs : List α) (le : α → α → Prop) [DecidableRel le] : + (mergeSortNaive xs le).Perm xs := by + fun_induction mergeSortNaive + · simp + · expose_names + rw [←(List.take_append_drop (x.length / 2) x)] + grw [List.merge_perm_append, ← ih1, ← ih2] + +@[simp] +private lemma mergeSort_eval (xs : List α) (le : α → α → Prop) [DecidableRel le] : + (mergeSort xs).eval (sortModelNat le) = mergeSortNaive xs le := by + fun_induction mergeSort with + | case1 xs h => + simp [h, mergeSortNaive, Prog.eval] + | case2 xs h n left right ihl ihr => + rw [mergeSortNaive, if_neg h] + have im := merge_eval left right + simp [ihl, ihr, merge_eval] + rfl + +private lemma mergeSortNaive_length (xs : List α) (le : α → α → Prop) [DecidableRel le] : + (mergeSortNaive xs le).length = xs.length := by + fun_induction mergeSortNaive with + | case1 xs h => + simp + | case2 xs h left right ihl ihr => + rw [List.length_merge] + convert congr($ihl + $ihr) + rw [← List.length_append] + simp + +lemma mergeSort_length (xs : List α) (le : α → α → Prop) [DecidableRel le] : + ((mergeSort xs).eval (sortModelNat le)).length = xs.length
:= by + rw [mergeSort_eval] + apply mergeSortNaive_length + +lemma merge_sorted_sorted + (xs ys : List α) (le : α → α → Prop) [DecidableRel le] [Std.Total le] [IsTrans _ le] + (hxs_mono : xs.Pairwise le) (hys_mono : ys.Pairwise le) : + ((merge xs ys).eval (sortModelNat le)).Pairwise le := by + rw [merge_eval] + grind [hxs_mono.merge hys_mono] + +private lemma mergeSortNaive_sorted + (xs : List α) (le : α → α → Prop) [DecidableRel le] [Std.Total le] [IsTrans _ le] : + (mergeSortNaive xs le).Pairwise le := by + fun_induction mergeSortNaive with + | case1 xs h => + match xs with | [] | [x] => simp + | case2 xs h left right ihl ihr => + simpa using ihl.merge ihr + +theorem mergeSort_sorted + (xs : List α) (le : α → α → Prop) [DecidableRel le] [Std.Total le] [IsTrans _ le] : + ((mergeSort xs).eval (sortModelNat le)).Pairwise le := by + rw [mergeSort_eval] + apply mergeSortNaive_sorted + +theorem mergeSort_perm (xs : List α) (le : α → α → Prop) [DecidableRel le] : + ((mergeSort xs).eval (sortModelNat le)).Perm xs := by + rw [mergeSort_eval] + apply mergeSortNaive_Perm + +section TimeComplexity + +open Cslib.Algorithms.Lean.TimeM + +-- TODO: reuse the work in `mergeSort_time_le`? 
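As an editorial aside (not part of the original patch): the cost-model bookkeeping above can be sanity-checked on a concrete instance. The sketch below assumes only the `Prog.time`/`sortModelNat` API used throughout this file and the `merge_timeComplexity` lemma stated earlier; it instantiates that bound at two two-element lists, where at most `2 + 2` comparisons are charged.

```lean
-- Sketch: instantiating `merge_timeComplexity` at concrete lists.
-- Merging two two-element lists is charged at most 4 comparisons
-- in the comparison-counting model `sortModelNat`.
example (a b c d : ℕ) :
    (merge [a, b] [c, d]).time (sortModelNat (· ≤ ·)) ≤ 4 :=
  merge_timeComplexity [a, b] [c, d] (· ≤ ·)
```

The general `mergeSort_complexity` bound below follows the same pattern, with the recurrence discharged by `T` and `some_algebra` from the `TimeM` development.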
+theorem mergeSort_complexity (xs : List α) (le : α → α → Prop) [DecidableRel le] : + (mergeSort xs).time (sortModelNat le) ≤ T (xs.length) := by + fun_induction mergeSort + · simp [T] + · expose_names + simp only [FreeM.bind_eq_bind, Prog.time_bind, mergeSort_eval] + grw [merge_timeComplexity, ih1, ih2, mergeSortNaive_length, mergeSortNaive_length] + set n := x.length + have hleft_len : left.length ≤ n / 2 := by + grind + have hright_len : right.length ≤ (n + 1) / 2 := by + have hright_eq : right.length = n - n / 2 := by + simp [right, n, half, List.length_drop] + rw [hright_eq] + grind + have htleft_len : T left.length ≤ T (n / 2) := T_monotone hleft_len + have htright_len : T right.length ≤ T ((n + 1) / 2) := T_monotone hright_len + grw [htleft_len, htright_len, hleft_len, hright_len] + have hs := some_algebra (n - 2) + have hsub1 : (n - 2) / 2 + 1 = n / 2 := by grind + have hsub2 : 1 + (1 + (n - 2)) / 2 = (n + 1) / 2 := by grind + have hsub3 : (n - 2) + 2 = n := by grind + have hsplit : n / 2 + (n + 1) / 2 = n := by grind + simpa [T, hsub1, hsub2, hsub3, hsplit, Nat.add_assoc, Nat.add_left_comm, Nat.add_comm] + using hs + +end TimeComplexity + +end Cslib.Algorithms diff --git a/Cslib/Algorithms/Lean/MergeSort/MergeSort.lean b/Cslib/AlgorithmsTheory/Lean/MergeSort/MergeSort.lean similarity index 70% rename from Cslib/Algorithms/Lean/MergeSort/MergeSort.lean rename to Cslib/AlgorithmsTheory/Lean/MergeSort/MergeSort.lean index c154a1083..081dbf1b7 100644 --- a/Cslib/Algorithms/Lean/MergeSort/MergeSort.lean +++ b/Cslib/AlgorithmsTheory/Lean/MergeSort/MergeSort.lean @@ -6,7 +6,7 @@ Authors: Sorrachai Yingchareonthawornhcai module -public import Cslib.Algorithms.Lean.TimeM +public import Cslib.AlgorithmsTheory.Lean.TimeM public import Mathlib.Data.Nat.Cast.Order.Ring public import Mathlib.Data.Nat.Lattice public import Mathlib.Data.Nat.Log @@ -17,7 +17,7 @@ public import Mathlib.Data.Nat.Log # MergeSort on a list In this file we introduce `merge` and `mergeSort` 
algorithms that returns a time monad -over the list `TimeM (List α)`. The time complexity of `mergeSort` is the number of comparisons. +over the list `TimeM ℕ (List α)`. The time complexity of `mergeSort` is the number of comparisons. -- ## Main results @@ -34,12 +34,12 @@ namespace Cslib.Algorithms.Lean.TimeM variable {α : Type} [LinearOrder α] /-- Merges two lists into a single list, counting comparisons as time cost. -Returns a `TimeM (List α)` where the time represents the number of comparisons performed. -/ -def merge : List α → List α → TimeM (List α) +Returns a `TimeM ℕ (List α)` where the time represents the number of comparisons performed. -/ +def merge : List α → List α → TimeM ℕ (List α) | [], ys => return ys | xs, [] => return xs | x::xs', y::ys' => do - let c ← ✓ (x ≤ y : Bool) + ✓ let c := (x ≤ y : Bool) if c then let rest ← merge xs' (y::ys') return (x :: rest) @@ -48,8 +48,8 @@ def merge : List α → List α → TimeM (List α) return (y :: rest) /-- Sorts a list using the merge sort algorithm, counting comparisons as time cost. -Returns a `TimeM (List α)` where the time represents the total number of comparisons. -/ -def mergeSort (xs : List α) : TimeM (List α) := do +Returns a `TimeM ℕ (List α)` where the time represents the total number of comparisons. -/ +def mergeSort (xs : List α) : TimeM ℕ (List α) := do if xs.length < 2 then return xs else let half := xs.length / 2 @@ -63,6 +63,11 @@ section Correctness open List +/-- Our merge computes the one already in mathlib. -/ +@[simp, grind =] +theorem ret_merge (xs ys : List α) : ⟪merge xs ys⟫ = xs.merge ys := by + fun_induction merge with grind [nil_merge, merge_right, cons_merge_cons] + /-- A list is sorted if it satisfies the `Pairwise (· ≤ ·)` predicate. 
-/ abbrev IsSorted (l : List α) : Prop := List.Pairwise (· ≤ ·) l @@ -71,12 +76,7 @@ abbrev MinOfList (x : α) (l : List α) : Prop := ∀ b ∈ l, x ≤ b @[grind →] theorem mem_either_merge (xs ys : List α) (z : α) (hz : z ∈ ⟪merge xs ys⟫) : z ∈ xs ∨ z ∈ ys := by - fun_induction merge - · exact mem_reverseAux.mp hz - · left - exact hz - · simp_all only [Bind.bind, Pure.pure] - grind + grind [List.mem_merge] theorem min_all_merge (x : α) (xs ys : List α) (hxs : MinOfList x xs) (hys : MinOfList x ys) : MinOfList x ⟪merge xs ys⟫ := by @@ -84,32 +84,22 @@ theorem min_all_merge (x : α) (xs ys : List α) (hxs : MinOfList x xs) (hys : M theorem sorted_merge {l1 l2 : List α} (hxs : IsSorted l1) (hys : IsSorted l2) : IsSorted ⟪merge l1 l2⟫ := by - fun_induction merge l1 l2 with - | case3 => - simp only [Bind.bind, Pure.pure] - grind [pairwise_cons] - | _ => simpa + grind [hxs.merge hys] theorem mergeSort_sorted (xs : List α) : IsSorted ⟪mergeSort xs⟫ := by fun_induction mergeSort xs with | case1 x => - simp only [Pure.pure] rcases x with _ | ⟨a, _ | ⟨b, rest⟩⟩ <;> grind | case2 _ _ _ _ _ ih2 ih1 => exact sorted_merge ih2 ih1 lemma merge_perm (l₁ l₂ : List α) : ⟪merge l₁ l₂⟫ ~ l₁ ++ l₂ := by - fun_induction merge with - | case1 => simp - | case2 => simp - | case3 => - simp only [Bind.bind, Pure.pure] - grind + fun_induction merge with grind [List.merge_perm_append] theorem mergeSort_perm (xs : List α) : ⟪mergeSort xs⟫ ~ xs := by fun_induction mergeSort xs with | case1 => simp | case2 x _ _ left right ih2 ih1 => - simp only [Bind.bind, ret_bind] + simp only [ret_bind] calc ⟪merge ⟪mergeSort left⟫ ⟪mergeSort right⟫⟫ ~ ⟪mergeSort left⟫ ++ ⟪mergeSort right⟫ := by apply merge_perm @@ -134,32 +124,23 @@ def timeMergeSortRec : ℕ → ℕ | 1 => 0 | n@(_+2) => timeMergeSortRec (n/2) + timeMergeSortRec ((n-1)/2 + 1) + n -/-- The ceiling of Nat.log 2 -/ -@[grind =] -def clog2 (n : ℕ) : ℕ := - if n ≤ 1 then 0 else Nat.log 2 (n - 1) + 1 +open Nat (clog) /-- Key Lemma: ⌈log2 ⌈n/2⌉⌉ ≤ ⌈log2 n⌉ - 1 
for n > 1 -/ @[grind →] -lemma clog2_half_le (n : ℕ) (h : n > 1) : clog2 ((n + 1) / 2) ≤ clog2 n - 1 := by - have := Nat.sub_mul_div (n + 1) 2 1 - grind [Nat.log_eq_zero_iff, Nat.log_div_base] +lemma clog2_half_le (n : ℕ) (h : n > 1) : clog 2 ((n + 1) / 2) ≤ clog 2 n - 1 := by + grind [Nat.clog_of_one_lt one_lt_two h] /-- Same logic for the floor half: ⌈log2 ⌊n/2⌋⌉ ≤ ⌈log2 n⌉ - 1 -/ @[grind →] -lemma clog2_floor_half_le (n : ℕ) (h : n > 1) : clog2 (n / 2) ≤ clog2 n - 1 := by +lemma clog2_floor_half_le (n : ℕ) (h : n > 1) : clog 2 (n / 2) ≤ clog 2 n - 1 := by apply Nat.le_trans _ (clog2_half_le n h) - simp only [clog2] - split_ifs - · grind - · grind - · grind - · grw [Nat.log_mono_right] - grind + apply Nat.clog_monotone + grind private lemma some_algebra (n : ℕ) : - (n / 2 + 1) * clog2 (n / 2 + 1) + ((n + 1) / 2 + 1) * clog2 ((n + 1) / 2 + 1) + (n + 2) ≤ - (n + 2) * clog2 (n + 2) := by + (n / 2 + 1) * clog 2 (n / 2 + 1) + ((n + 1) / 2 + 1) * clog 2 ((n + 1) / 2 + 1) + (n + 2) ≤ + (n + 2) * clog 2 (n + 2) := by -- 1. 
Substitution: Let N = n_1 + 2 to clean up the expression let N := n + 2 have hN : N ≥ 2 := by omega @@ -167,15 +148,19 @@ private lemma some_algebra (n : ℕ) : have t1 : n / 2 + 1 = N / 2 := by omega have t2 : (n + 1) / 2 + 1 = (N + 1) / 2 := by omega have t3 : n + 1 + 1 = N := by omega - let k := clog2 N - have h_bound_l : clog2 (N / 2) ≤ k - 1 := clog2_floor_half_le N hN - have h_bound_r : clog2 ((N + 1) / 2) ≤ k - 1 := clog2_half_le N hN + let k := clog 2 N + have h_bound_l : clog 2 (N / 2) ≤ k - 1 := clog2_floor_half_le N hN + have h_bound_r : clog 2 ((N + 1) / 2) ≤ k - 1 := clog2_half_le N hN have h_split : N / 2 + (N + 1) / 2 = N := by omega grw [t1, t2, t3, h_bound_l, h_bound_r, ←Nat.add_mul, h_split] exact Nat.le_refl (N * (k - 1) + N) /-- Upper bound function for merge sort time complexity: `T(n) = n * ⌈log₂ n⌉` -/ -abbrev T (n : ℕ) : ℕ := n * clog2 n +abbrev T (n : ℕ) : ℕ := n * clog 2 n + +lemma T_monotone : Monotone T := by + intro i j h_ij + exact Nat.mul_le_mul h_ij (Nat.clog_monotone 2 h_ij) /-- Solve the recurrence -/ theorem timeMergeSortRec_le (n : ℕ) : timeMergeSortRec n ≤ T n := by @@ -187,36 +172,29 @@ theorem timeMergeSortRec_le (n : ℕ) : timeMergeSortRec n ≤ T n := by have := some_algebra n grind [Nat.add_div_right] -@[simp] theorem merge_ret_length_eq_sum (xs ys : List α) : +theorem merge_ret_length_eq_sum (xs ys : List α) : ⟪merge xs ys⟫.length = xs.length + ys.length := by - fun_induction merge with - | case3 => - simp only [Pure.pure, Bind.bind] - grind - | _ => simp + simp @[simp] theorem mergeSort_same_length (xs : List α) : - ⟪mergeSort xs⟫.length = xs.length := by + ⟪mergeSort xs⟫.length = xs.length := by fun_induction mergeSort · simp - · simp only [Bind.bind] - grind [merge_ret_length_eq_sum] + · grind [List.length_merge] @[simp] theorem merge_time (xs ys : List α) : (merge xs ys).time ≤ xs.length + ys.length := by fun_induction merge with | case3 => - simp only [Pure.pure, Bind.bind] grind | _ => simp theorem mergeSort_time_le (xs : 
List α) : - (mergeSort xs).time ≤ timeMergeSortRec xs.length := by + (mergeSort xs).time ≤ timeMergeSortRec xs.length := by fun_induction mergeSort with | case1 => - simp only [Pure.pure] grind | case2 _ _ _ _ _ ih2 ih1 => - simp only [Bind.bind, time_of_bind] + simp only [time_bind] grw [merge_time] simp only [mergeSort_same_length] unfold timeMergeSortRec @@ -225,7 +203,7 @@ theorem mergeSort_time_le (xs : List α) : /-- Time complexity of mergeSort -/ theorem mergeSort_time (xs : List α) : let n := xs.length - (mergeSort xs).time ≤ n * clog2 n:= by + (mergeSort xs).time ≤ n * clog 2 n := by grind [mergeSort_time_le, timeMergeSortRec_le] end TimeComplexity diff --git a/Cslib/AlgorithmsTheory/Lean/TimeM.lean b/Cslib/AlgorithmsTheory/Lean/TimeM.lean new file mode 100644 index 000000000..707f9346a --- /dev/null +++ b/Cslib/AlgorithmsTheory/Lean/TimeM.lean @@ -0,0 +1,141 @@ +/- +Copyright (c) 2025 Sorrachai Yingchareonthawornhcai. All rights reserved. +Released under Apache 2.0 license as described in the file LICENSE. +Authors: Sorrachai Yingchareonthawornhcai, Eric Wieser +-/ + +module + +import Cslib.Init +public import Mathlib.Algebra.Group.Defs + +@[expose] public section + +/-! + +# TimeM: Time Complexity Monad +`TimeM T α` represents a computation that produces a value of type `α` and tracks its time cost. + +`T` is usually instantiated as `ℕ` to count operations, but can be instantiated as `ℝ` to count +actual wall time, or as more complex types in order to model more general costs. + +## Design Principles +1. **Pure inputs, timed outputs**: Functions take plain values and return `TimeM` results +2. **Time annotations are trusted**: The `time` field is NOT verified against actual cost. + You must manually ensure annotations match the algorithm's complexity in your cost model. +3. 
**Separation of concerns**: Prove correctness properties on `.ret`, prove complexity on `.time` + +## Cost Model +**Document your cost model explicitly** Decide and be consistent about: +- **What costs 1 unit?** (comparison, arithmetic operation, etc.) +- **What is free?** (variable lookup, pattern matching, etc.) +- **Recursive calls:** Do you charge for the call itself? + +## Notation +- **`✓`** : A tick of time, see `tick`. +- **`⟪tm⟫`** : Extract the pure value from a `TimeM` computation (notation for `tm.ret`) + +## References + +See [Danielsson2008] for the discussion. +-/ +namespace Cslib.Algorithms.Lean + +/-- A monad for tracking time complexity of computations. +`TimeM T α` represents a computation that returns a value of type `α` +and accumulates a time cost (represented as a type `T`, typically `ℕ`). -/ +@[ext] +structure TimeM (T : Type*) (α : Type*) where + /-- The return value of the computation -/ + ret : α + /-- The accumulated time cost of the computation -/ + time : T + +namespace TimeM + +/-- Lifts a pure value into a `TimeM` computation with zero time cost. + +Prefer to use `pure` instead of `TimeM.pure`. -/ +protected def pure [Zero T] {α} (a : α) : TimeM T α := + ⟨a, 0⟩ + +instance [Zero T] : Pure (TimeM T) where + pure := TimeM.pure + +/-- Sequentially composes two `TimeM` computations, summing their time costs. + +Prefer to use the `>>=` notation. 
-/ +protected def bind {α β} [Add T] (m : TimeM T α) (f : α → TimeM T β) : TimeM T β := + let r := f m.ret + ⟨r.ret, m.time + r.time⟩ + +instance [Add T] : Bind (TimeM T) where + bind := TimeM.bind + +instance : Functor (TimeM T) where + map f x := ⟨f x.ret, x.time⟩ + +instance [Add T] : Seq (TimeM T) where + seq f x := ⟨f.ret (x ()).ret, f.time + (x ()).time⟩ + +instance [Add T] : SeqLeft (TimeM T) where + seqLeft x y := ⟨x.ret, x.time + (y ()).time⟩ + +instance [Add T] : SeqRight (TimeM T) where + seqRight x y := ⟨(y ()).ret, x.time + (y ()).time⟩ + +instance [AddZero T] : Monad (TimeM T) where + pure := Pure.pure + bind := Bind.bind + map := Functor.map + seq := Seq.seq + seqLeft := SeqLeft.seqLeft + seqRight := SeqRight.seqRight + +@[simp, grind =] theorem ret_pure {α} [Zero T] (a : α) : (pure a : TimeM T α).ret = a := rfl +@[simp, grind =] theorem ret_bind {α β} [Add T] (m : TimeM T α) (f : α → TimeM T β) : + (m >>= f).ret = (f m.ret).ret := rfl +@[simp, grind =] theorem ret_map {α β} (f : α → β) (x : TimeM T α) : (f <$> x).ret = f x.ret := rfl +@[simp] theorem ret_seqRight {α} (x : TimeM T α) (y : Unit → TimeM T β) [Add T] : + (SeqRight.seqRight x y).ret = (y ()).ret := rfl +@[simp] theorem ret_seqLeft {α} [Add T] (x : TimeM T α) (y : Unit → TimeM T β) : + (SeqLeft.seqLeft x y).ret = x.ret := rfl +@[simp] theorem ret_seq {α β} [Add T] (f : TimeM T (α → β)) (x : Unit → TimeM T α) : + (Seq.seq f x).ret = f.ret (x ()).ret := rfl + +@[simp, grind =] theorem time_bind {α β} [Add T] (m : TimeM T α) (f : α → TimeM T β) : + (m >>= f).time = m.time + (f m.ret).time := rfl +@[simp, grind =] theorem time_pure {α} [Zero T] (a : α) : (pure a : TimeM T α).time = 0 := rfl +@[simp, grind =] theorem time_map {α β} (f : α → β) (x : TimeM T α) : (f <$> x).time = x.time := rfl +@[simp] theorem time_seqRight {α} [Add T] (x : TimeM T α) (y : Unit → TimeM T β) : + (SeqRight.seqRight x y).time = x.time + (y ()).time := rfl +@[simp] theorem time_seqLeft {α} [Add T] (x : TimeM T α) (y 
: Unit → TimeM T β) : + (SeqLeft.seqLeft x y).time = x.time + (y ()).time := rfl +@[simp] theorem time_seq {α β} [Add T] (f : TimeM T (α → β)) (x : Unit → TimeM T α) : + (Seq.seq f x).time = f.time + (x ()).time := rfl + +/-- `TimeM` is lawful so long as addition in the cost is associative and absorbs zero. -/ +instance [AddMonoid T] : LawfulMonad (TimeM T) := .mk' + (id_map := fun x => rfl) + (pure_bind := fun _ _ => by ext <;> simp) + (bind_assoc := fun _ _ _ => by ext <;> simp [add_assoc]) + (seqLeft_eq := fun _ _ => by ext <;> simp) + (bind_pure_comp := fun _ _ => by ext <;> simp) + +/-- Creates a `TimeM` computation with a time cost. -/ +def tick (c : T) : TimeM T PUnit := ⟨.unit, c⟩ + +@[simp, grind =] theorem ret_tick (c : T) : (tick c).ret = () := rfl +@[simp, grind =] theorem time_tick (c : T) : (tick c).time = c := rfl + +/-- `✓[c] x` adds `c` ticks, then executes `x`. -/ +macro "✓[" c:term "]" body:doElem : doElem => `(doElem| do TimeM.tick $c; $body:doElem) + +/-- `✓ x` is a shorthand for `✓[1] x`, which adds one tick and executes `x`. -/ +macro "✓" body:doElem : doElem => `(doElem| ✓[1] $body) + +/-- Notation for extracting the return value from a `TimeM` computation: `⟪tm⟫` -/ +scoped notation:max "⟪" tm "⟫" => (TimeM.ret tm) + +end TimeM +end Cslib.Algorithms.Lean diff --git a/Cslib/AlgorithmsTheory/Models/ListComparisonSearch.lean b/Cslib/AlgorithmsTheory/Models/ListComparisonSearch.lean new file mode 100644 index 000000000..888f223bc --- /dev/null +++ b/Cslib/AlgorithmsTheory/Models/ListComparisonSearch.lean @@ -0,0 +1,54 @@ +/- +Copyright (c) 2025 Shreyas Srinivas. All rights reserved. +Released under Apache 2.0 license as described in the file LICENSE. +Authors: Shreyas Srinivas +-/ + +module + +public import Cslib.AlgorithmsTheory.QueryModel +public import Mathlib + +@[expose] public section + +/-! 
+# Query Type for Comparison Search in Lists
+
+In this file we define a query type `ListSearch` for comparison-based searching in lists,
+whose sole query `compare` compares the head of the list with a given argument. We
+further define a model `ListSearch.natCost` for this query.
+
+--
+## Definitions
+
+- `ListSearch`: A query type for comparison-based search in lists.
+- `ListSearch.natCost`: A model for this query with costs in `ℕ`.
+
+-/
+
+namespace Cslib
+
+namespace Algorithms
+
+open Prog
+
+/--
+A query type for searching for an element in a list. It supports exactly one query
+`compare l val`, which returns `true` if the head of the list `l` is equal to `val`
+and returns `false` otherwise.
+-/
+inductive ListSearch (α : Type*) : Type → Type _ where
+  | compare (a : List α) (val : α) : ListSearch α Bool
+
+
+/-- A model of the `ListSearch` query type that assigns the cost as the number of queries. -/
+@[simps]
+def ListSearch.natCost [BEq α] : Model (ListSearch α) ℕ where
+  evalQuery
+  | .compare l x => some x == l.head?
+  cost
+  | .compare _ _ => 1
+
+end Algorithms
+
+end Cslib
diff --git a/Cslib/AlgorithmsTheory/Models/ListComparisonSort.lean b/Cslib/AlgorithmsTheory/Models/ListComparisonSort.lean
new file mode 100644
index 000000000..5aad6b123
--- /dev/null
+++ b/Cslib/AlgorithmsTheory/Models/ListComparisonSort.lean
@@ -0,0 +1,146 @@
+/-
+Copyright (c) 2026 Shreyas Srinivas. All rights reserved.
+Released under Apache 2.0 license as described in the file LICENSE.
+Authors: Shreyas Srinivas, Eric Wieser
+-/
+
+module
+
+public import Cslib.AlgorithmsTheory.QueryModel
+
+@[expose] public section
+
+/-!
+# Query Types for Comparison Sorting in Lists
+
+In this file we define two query types: `SortOps`, which is suitable for insertion sort, and
+`SortOpsCmp`, for comparison-based sorting in lists. We define a model `sortModel` for `SortOps`
+which uses a custom cost structure `SortOpsCost`. 
We define a model `sortModelNat` for `SortOpsCmp`
+which defines a `ℕ`-based cost structure.
+--
+## Definitions
+
+- `SortOps`: A query type for comparison-based sorting in lists which includes queries for
+  comparison and head-insertion into lists. This is a suitable query type for ordered insertion
+  and insertion sort.
+- `SortOpsCmp`: A query type for comparison-based sorting that only includes a comparison query.
+  This is more suitable for comparison-based sorts for which it is only desirable to count
+  comparisons.
+
+-/
+namespace Cslib
+
+namespace Algorithms
+
+open Prog
+
+/--
+A query type for comparison sorting on lists.
+-/
+inductive SortOps (α : Type) : Type → Type where
+  /-- `cmpLE x y` is intended to return `true` if `x ≤ y` and `false` otherwise.
+  The specific order relation depends on the model provided for this type. -/
+  | cmpLE (x : α) (y : α) : SortOps α Bool
+  /-- `insertHead x l` is intended to return `x :: l`. -/
+  | insertHead (x : α) (l : List α) : SortOps α (List α)
+
+open SortOps
+
+section SortOpsCostModel
+
+/--
+A cost type for counting the operations of `SortOps`, with separate fields for
+counting calls to `cmpLE` and `insertHead`.
+-/
+@[ext, grind]
+structure SortOpsCost where
+  /-- `compares` counts the number of calls to `cmpLE` -/
+  compares : ℕ
+  /-- `inserts` counts the number of calls to `insertHead` -/
+  inserts : ℕ
+
+/-- Equivalence between `SortOpsCost` and a product type. 
-/
+def SortOpsCost.equivProd : SortOpsCost ≃ (ℕ × ℕ) where
+  toFun sortOps := (sortOps.compares, sortOps.inserts)
+  invFun pair := ⟨pair.1, pair.2⟩
+  left_inv _ := rfl
+  right_inv _ := rfl
+
+namespace SortOpsCost
+
+@[simps, grind]
+instance : Zero SortOpsCost := ⟨0, 0⟩
+
+@[simps]
+instance : LE SortOpsCost where
+  le soc₁ soc₂ := soc₁.compares ≤ soc₂.compares ∧ soc₁.inserts ≤ soc₂.inserts
+
+instance : LT SortOpsCost where
+  lt soc₁ soc₂ := soc₁ ≤ soc₂ ∧ ¬soc₂ ≤ soc₁
+
+@[grind]
+instance : PartialOrder SortOpsCost :=
+  fast_instance% SortOpsCost.equivProd.injective.partialOrder _ .rfl .rfl
+
+@[simps]
+instance : Add SortOpsCost where
+  add soc₁ soc₂ := ⟨soc₁.compares + soc₂.compares, soc₁.inserts + soc₂.inserts⟩
+
+@[simps]
+instance : SMul ℕ SortOpsCost where
+  smul n soc := ⟨n • soc.compares, n • soc.inserts⟩
+
+instance : AddCommMonoid SortOpsCost :=
+  fast_instance%
+    SortOpsCost.equivProd.injective.addCommMonoid _ rfl (fun _ _ => rfl) (fun _ _ => rfl)
+
+end SortOpsCost
+
+/--
+A model of `SortOps` that uses `SortOpsCost` as the cost type for operations.
+
+While this accepts any decidable relation `le`, most sorting algorithms are only well-behaved in the
+presence of `[Std.Total le] [IsTrans _ le]`.
+-/
+@[simps, grind]
+def sortModel {α : Type} (le : α → α → Prop) [DecidableRel le] : Model (SortOps α) SortOpsCost where
+  evalQuery
+  | .cmpLE x y => decide (le x y)
+  | .insertHead x l => x :: l
+  cost
+  | .cmpLE _ _ => ⟨1,0⟩
+  | .insertHead _ _ => ⟨0,1⟩
+
+end SortOpsCostModel
+
+section NatModel
+
+/--
+A query type for comparison sorting on lists with only the comparison query. This
+is used in `mergeSort`.
+-/
+inductive SortOpsCmp.{u} (α : Type u) : Type → Type _ where
+  /-- `cmpLE x y` is intended to return `true` if `x ≤ y` and `false` otherwise.
+  The specific order relation depends on the model provided for this type. -/
+  | cmpLE (x : α) (y : α) : SortOpsCmp α Bool
+
+/--
+A model of `SortOpsCmp` that uses `ℕ` as the type for the cost of operations. 
In this model,
+comparisons are counted in a single `ℕ` value.
+
+While this accepts any decidable relation `le`, most sorting algorithms are only well-behaved in the
+presence of `[Std.Total le] [IsTrans _ le]`.
+-/
+@[simps]
+def sortModelNat {α : Type*}
+    (le : α → α → Prop) [DecidableRel le] : Model (SortOpsCmp α) ℕ where
+  evalQuery
+  | .cmpLE x y => decide (le x y)
+  cost
+  | .cmpLE _ _ => 1
+
+end NatModel
+
+end Algorithms
+
+end Cslib
diff --git a/Cslib/AlgorithmsTheory/QueryModel.lean b/Cslib/AlgorithmsTheory/QueryModel.lean
new file mode 100644
index 000000000..319807c05
--- /dev/null
+++ b/Cslib/AlgorithmsTheory/QueryModel.lean
@@ -0,0 +1,150 @@
+/-
+Copyright (c) 2025 Tanner Duve. All rights reserved.
+Released under Apache 2.0 license as described in the file LICENSE.
+Authors: Tanner Duve, Shreyas Srinivas, Eric Wieser
+-/
+
+module
+
+public import Mathlib
+public import Cslib.Foundations.Control.Monad.Free.Fold
+public import Cslib.AlgorithmsTheory.Lean.TimeM
+
+@[expose] public section
+
+/-!
+# Query model
+
+This file defines a simple query language modeled as a free monad over a
+parametric type of query operations.
+
+## Main definitions
+
+- `Model Q c`: A model type for a query type `Q : Type u → Type u` and cost type `c`
+- `Prog Q α`: The type of programs of query type `Q` and return type `α`.
+  This is a free monad under the hood
+- `Prog.eval`, `Prog.time`: concrete execution semantics of a `Prog Q α` for a given model of `Q`
+
+## How to set up an algorithm
+
+This model is a lightweight framework for specifying and verifying both the correctness
+and complexity of algorithms in Lean. To specify an algorithm, one must:
+1. Define an inductive type of queries. This type must have at least one index parameter
+   which determines the output type of the query. Additionally, it helps to have a parameter `α`
+   on which the index type depends. This way, any instance parameters of `α` can be used easily
+   for the output types. 
The signatures of `Model.evalQuery` and `Model.cost` are fixed.
+   So you can't supply instances for the index type there.
+2. Define a record of the `Model Q C` structure that specifies the evaluation and time (cost) of
+   each query.
+3. Write your algorithm as a monadic program in `Prog Q α`. With sufficient type annotations,
+   each query `q : Q` is automatically lifted into `Prog Q α`.
+
+## Tags
+query model, free monad, time complexity, Prog
+-/
+
+namespace Cslib
+
+namespace Algorithms
+
+/--
+A model type for a query type `QType` and cost type `Cost`. It consists of
+two fields, which respectively define the evaluation and cost of a query.
+-/
+structure Model (QType : Type u → Type v) (Cost : Type w) where
+  /-- Evaluates a query `q : Q ι` to return a result of type `ι`. -/
+  evalQuery : QType ι → ι
+  /-- Counts the operational cost of a query `q : Q ι` to return a result of type `Cost`.
+  The cost could represent any desired complexity measure,
+  including but not limited to time complexity. -/
+  cost : QType ι → Cost
+
+
+open Cslib.Algorithms.Lean in
+/-- Lifts `Model.evalQuery` and `Model.cost` jointly into a `TimeM Cost ι` computation. -/
+abbrev Model.timeQuery
+    (M : Model Q Cost) (x : Q ι) : TimeM Cost ι :=
+  TimeM.mk (M.evalQuery x) (M.cost x)
+
+/--
+A program is defined as a free monad over a query type `Q` operating on a base type `α`,
+which can determine the input and output types of a query. 
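+
+For example (a hypothetical sketch; `Counter` is not part of this library), the
+setup steps above might look as follows:
+```lean
+inductive Counter : Type → Type where
+  | get : Counter ℕ                 -- a single query, returning a `ℕ`
+
+def counterModel : Model Counter ℕ where
+  evalQuery | .get => 41            -- every `get` answers `41`
+  cost | .get => 1                  -- and costs one unit
+
+def readTwice : Prog Counter ℕ := do
+  let a ← Counter.get               -- queries lift into `Prog` automatically
+  let b ← Counter.get
+  pure (a + b)
+```
+Under `counterModel`, `readTwice` should evaluate to `82` with a time cost of `2`.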
+-/
+abbrev Prog Q α := FreeM Q α
+
+/--
+The evaluation function of a program `P : Prog Q α` given a model `M : Model Q Cost` of `Q`.
+-/
+def Prog.eval
+    (P : Prog Q α) (M : Model Q Cost) : α :=
+  Id.run <| P.liftM fun x => pure (M.evalQuery x)
+
+@[simp, grind =]
+theorem Prog.eval_pure (a : α) (M : Model Q Cost) :
+    Prog.eval (FreeM.pure a) M = a :=
+  rfl
+
+@[simp, grind =]
+theorem Prog.eval_bind
+    (x : Prog Q α) (f : α → Prog Q β) (M : Model Q Cost) :
+    Prog.eval (FreeM.bind x f) M = Prog.eval (f (x.eval M)) M := by
+  simp [Prog.eval]
+
+@[simp, grind =]
+theorem Prog.eval_liftBind
+    (x : Q α) (f : α → Prog Q β) (M : Model Q Cost) :
+    Prog.eval (FreeM.liftBind x f) M = Prog.eval (f <| M.evalQuery x) M := by
+  simp [Prog.eval]
+
+/--
+The cost function of a program `P : Prog Q α` given a model `M : Model Q Cost` of `Q`.
+The most common use case of this function is to compute time complexity, hence the name.
+
+In practice this is only well-behaved in the presence of `AddCommMonoid Cost`.
+-/
+def Prog.time [AddZero Cost]
+    (P : Prog Q α) (M : Model Q Cost) : Cost :=
+  (P.liftM M.timeQuery).time
+
+@[simp, grind =]
+lemma Prog.time_pure [AddZero Cost] (a : α) (M : Model Q Cost) :
+    Prog.time (FreeM.pure a) M = 0 := by
+  simp [time]
+
+@[simp, grind =]
+theorem Prog.time_liftBind [AddZero Cost]
+    (x : Q α) (f : α → Prog Q β) (M : Model Q Cost) :
+    Prog.time (FreeM.liftBind x f) M = M.cost x + Prog.time (f <| M.evalQuery x) M := by
+  simp [Prog.time]
+
+@[simp, grind =]
+lemma Prog.time_bind [AddCommMonoid Cost] (M : Model Q Cost)
+    (op : Prog Q ι) (cont : ι → Prog Q α) :
+    Prog.time (op.bind cont) M =
+      Prog.time op M + Prog.time (cont (Prog.eval op M)) M := by
+  simp only [eval, time]
+  induction op with
+  | pure a =>
+    simp
+  | liftBind op cont' ih =>
+    specialize ih (M.evalQuery op)
+    simp_all [add_assoc]
+
+section Reduction
+
+/-- A reduction structure from query type `Q₁` to query type `Q₂`. 
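+
+For example (a sketch; assuming the `SortOpsCmp` and `SortOps` query types from
+`ListComparisonSort`), the lone comparison query of `SortOpsCmp` can be mapped to
+the corresponding `SortOps` query:
+```lean
+def cmpToSort {α : Type} : Reduction (SortOpsCmp α) (SortOps α) where
+  reduce | .cmpLE x y => FreeM.liftBind (SortOps.cmpLE x y) FreeM.pure
+```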
-/
+structure Reduction (Q₁ Q₂ : Type u → Type u) where
+  /-- `reduce (q : Q₁ α)` is a program `P : Prog Q₂ α` that is intended to
+  implement `q` in the query type `Q₂`. -/
+  reduce : Q₁ α → Prog Q₂ α
+
+/--
+`Prog.reduceProg` takes a reduction structure from a query type `Q₁` to `Q₂` and extends its
+`reduce` function to programs on the query type `Q₁`.
+-/
+abbrev Prog.reduceProg (P : Prog Q₁ α) (red : Reduction Q₁ Q₂) : Prog Q₂ α :=
+  P.liftM red.reduce
+
+end Reduction
+
+end Cslib.Algorithms
diff --git a/Cslib/Computability/Automata/DA/Buchi.lean b/Cslib/Computability/Automata/DA/Buchi.lean
index f793154cf..346cfd267 100644
--- a/Cslib/Computability/Automata/DA/Buchi.lean
+++ b/Cslib/Computability/Automata/DA/Buchi.lean
@@ -29,6 +29,6 @@ of the language accepted by the same automaton.
 theorem buchi_eq_finAcc_omegaLim {da : DA State Symbol} {acc : Set State} :
     language (Buchi.mk da acc) = (language (FinAcc.mk da acc))↗ω := by
   ext xs
-  simp
+  simp +instances
 
 end Cslib.Automata.DA
diff --git a/Cslib/Computability/Automata/DA/Congr.lean b/Cslib/Computability/Automata/DA/Congr.lean
new file mode 100644
index 000000000..6301073dc
--- /dev/null
+++ b/Cslib/Computability/Automata/DA/Congr.lean
@@ -0,0 +1,67 @@
+/-
+Copyright (c) 2026 Ching-Tsun Chou. All rights reserved.
+Released under Apache 2.0 license as described in the file LICENSE.
+Authors: Ching-Tsun Chou
+-/
+
+module
+
+public import Cslib.Computability.Automata.DA.Basic
+public import Cslib.Computability.Languages.Congruences.RightCongruence
+
+@[expose] public section
+
+/-! # Deterministic automaton corresponding to a right congruence. -/
+
+namespace Cslib
+
+open scoped FLTS RightCongruence
+
+variable {Symbol : Type*}
+
+/-- Every right congruence gives rise to a DA whose states are the equivalence classes of
+the right congruence, whose start state is the class of the empty word, and whose transition
+function is concatenation on the right of the input symbol. 
Note that the transition function is +well-defined only because `c` is a right congruence. -/ +@[scoped grind =] +def RightCongruence.toDA [c : RightCongruence Symbol] : Automata.DA (Quotient c.eq) Symbol where + tr s x := Quotient.lift (fun u ↦ ⟦ u ++ [x] ⟧) (by + intro u v h_eq + apply Quotient.sound + exact right_cov.elim [x] h_eq + ) s + start := ⟦ [] ⟧ + +namespace Automata.DA + +variable [c : RightCongruence Symbol] + +/-- After consuming a finite word `xs`, `c.toDA` reaches the state `⟦ xs ⟧` which is +the equivalence class of `xs`. -/ +@[simp, scoped grind =] +theorem congr_mtr_eq {xs : List Symbol} : + c.toDA.mtr c.toDA.start xs = ⟦ xs ⟧ := by + generalize h_rev : xs.reverse = ys + induction ys generalizing xs + case nil => grind [List.reverse_eq_nil_iff] + case cons y ys h_ind => + obtain ⟨rfl⟩ := List.reverse_eq_cons_iff.mp h_rev + specialize h_ind (xs := ys.reverse) (by grind) + grind [Quotient.lift_mk] + +namespace FinAcc + +open Acceptor RightCongruence + +/-- The language of `c.toDA` with a single accepting state `s` is exactly +the equivalence class corresponding to `s`. -/ +@[simp] +theorem congr_language_eq {a : Quotient c.eq} : language (FinAcc.mk c.toDA {a}) = eqvCls a := by + ext + grind + +end FinAcc + +end Automata.DA + +end Cslib diff --git a/Cslib/Computability/Automata/NA/BuchiInter.lean b/Cslib/Computability/Automata/NA/BuchiInter.lean index 8ac33b542..9a8d51c75 100644 --- a/Cslib/Computability/Automata/NA/BuchiInter.lean +++ b/Cslib/Computability/Automata/NA/BuchiInter.lean @@ -93,6 +93,8 @@ lemma inter_freq_comp_acc_freq_acc {xs : ωSequence Symbol} {ss : ωSequence (( apply leadsTo_cases_or (q := {⟨_, b⟩ | b = false}) <;> grind [until_frequently_leadsTo_and, univ_inter] +-- TODO: fix proof to work with backward.isDefEq.respectTransparency +set_option backward.isDefEq.respectTransparency false in /-- The language accepted by the intersection automaton is the intersection of the languages accepted by the two component automata. 
-/ @[simp, scoped grind =] diff --git a/Cslib/Computability/Automata/NA/Concat.lean b/Cslib/Computability/Automata/NA/Concat.lean index ed9126777..4fb1d75e7 100644 --- a/Cslib/Computability/Automata/NA/Concat.lean +++ b/Cslib/Computability/Automata/NA/Concat.lean @@ -49,7 +49,6 @@ lemma concat_run_left {xs : ωSequence Symbol} {ss : ωSequence (State1 ⊕ Stat obtain ⟨t1, h_mtr, _⟩ := h_ind (by grind) obtain ⟨t1', h_tr, _⟩ : ∃ t1', na1.Tr t1 (xs n) t1' ∧ ss (n + 1) = inl t1' := by grind [concat, hc.trans n] - use t1' grind [LTS.MTr.stepR na1.toLTS h_mtr h_tr] lemma concat_run_left_right {xs : ωSequence Symbol} {ss : ωSequence (State1 ⊕ State2)} @@ -100,13 +99,13 @@ theorem concat_run_exists {xs1 : List Symbol} {xs2 : ωSequence Symbol} {ss2 : · obtain ⟨rfl⟩ : xs1 = [] := List.eq_nil_iff_length_eq_zero.mpr h_xs1 refine ⟨ss2.map inr, by simp only [concat]; grind [Run, LTS.ωTr], by simp⟩ · obtain ⟨s0, _, _, _, h_mtr⟩ := h1 - obtain ⟨ss1, _, _, _, _⟩ := LTS.MTr.exists_states h_mtr + obtain ⟨ss1, _, _, _, _⟩ := LTS.mTr_isExecution h_mtr let ss := (ss1.map inl).take xs1.length ++ω ss2.map inr refine ⟨ss, Run.mk ?_ ?_, ?_⟩ · grind [concat, get_append_left] · have (k) (h_k : ¬ k < xs1.length) : k + 1 - xs1.length = k - xs1.length + 1 := by grind simp only [concat] - grind [Run, LTS.ωTr, get_append_right', get_append_left] + grind [Run, LTS.ωTr, get_append_right', get_append_left, LTS.IsExecution] · grind [drop_append_of_le_length] namespace Buchi diff --git a/Cslib/Computability/Automata/NA/Loop.lean b/Cslib/Computability/Automata/NA/Loop.lean index 25fdffbb8..61dcd854a 100644 --- a/Cslib/Computability/Automata/NA/Loop.lean +++ b/Cslib/Computability/Automata/NA/Loop.lean @@ -6,7 +6,7 @@ Authors: Ching-Tsun Chou module -public import Cslib.Computability.Automata.NA.Basic +public import Cslib.Computability.Automata.NA.Total public import Cslib.Foundations.Data.OmegaSequence.Temporal @[expose] public section @@ -15,7 +15,7 @@ public import 
Cslib.Foundations.Data.OmegaSequence.Temporal namespace Cslib.Automata.NA -open Nat List Sum ωSequence Acceptor Language +open Nat Set Sum ωSequence Acceptor Language open scoped Run LTS variable {Symbol State : Type*} @@ -53,13 +53,11 @@ lemma loop_run_left_right {xs : ωSequence Symbol} {ss : ωSequence (Unit ⊕ St obtain ⟨t, _⟩ := isRight_iff.mp <| h2 1 (by grind) (by grind) obtain ⟨s, _⟩ : ∃ s, na.Tr s (xs 0) t ∧ s ∈ na.start := by grind [FinAcc.loop, h.start, h.trans 0] - use s, t grind case succ m h_ind => obtain ⟨s, t, h_mtr, _⟩ := h_ind (by grind) (by grind) obtain ⟨t', _⟩ := isRight_iff.mp <|h2 (m + 1 + 1) (by grind) (by grind) have h_tr : na.Tr t (xs (m + 1)) t' := by grind [FinAcc.loop, h.trans (m + 1)] - use s, t' grind [LTS.MTr.stepR na.toLTS h_mtr h_tr] lemma loop_run_left_right_left {xs : ωSequence Symbol} {ss : ωSequence (Unit ⊕ State)} @@ -81,30 +79,45 @@ lemma loop_run_from_left {xs : ωSequence Symbol} {ss : ωSequence (Unit ⊕ Sta /-- A run of `na.loop` containing at least one non-initial `()` state is the concatenation of a nonempty finite run of `na` followed by a run of `na.loop`. 
-/ -theorem loop_run_one_iter {xs : ωSequence Symbol} {ss : ωSequence (Unit ⊕ State)} - (h : na.loop.Run xs ss) (h1 : ∃ k, 0 < k ∧ (ss k).isLeft) : - ∃ n, xs.take n ∈ language na - 1 ∧ na.loop.Run (xs.drop n) (ss.drop n) := by - let n := Nat.find h1 - have : 0 < n ∧ (ss n).isLeft := Nat.find_spec h1 - have : ∀ k, 0 < k → k < n → (ss k).isRight := by grind [Nat.find_min h1] - refine ⟨n, ⟨?_, ?_⟩, ?_⟩ +theorem loop_run_one_iter {xs : ωSequence Symbol} {ss : ωSequence (Unit ⊕ State)} {k : ℕ} + (h : na.loop.Run xs ss) (h1 : 0 < k) (h2 : (ss k).isLeft) : + ∃ n, n ≤ k ∧ xs.take n ∈ language na - 1 ∧ na.loop.Run (xs.drop n) (ss.drop n) := by + have h1' : ∃ k, 0 < k ∧ (ss k).isLeft := by grind + let n := Nat.find h1' + have : 0 < n ∧ (ss n).isLeft := Nat.find_spec h1' + have : ∀ k, 0 < k → k < n → (ss k).isRight := by grind [Nat.find_min h1'] + refine ⟨n, by grind, ⟨?_, ?_⟩, ?_⟩ · grind [loop_run_left_right_left] · have neq : (ωSequence.take n xs).length ≠ 0 := by grind - exact neq.imp (congrArg length) + exact neq.imp (congrArg List.length) · grind [loop_run_from_left] +open List in /-- For any finite word in `language na`, there is a corresponding finite run of `na.loop`. 
-/ theorem loop_fin_run_exists {xl : List Symbol} (h : xl ∈ language na) : ∃ sl : List (Unit ⊕ State), ∃ _ : sl.length = xl.length + 1, sl[0] = inl () ∧ sl[xl.length] = inl () ∧ - ∀ k, ∀ _ : k < xl.length, na.loop.Tr sl[k] xl[k] sl[k + 1] := by + ∀ k, (_ : k < xl.length) → na.loop.Tr sl[k] xl[k] sl[k + 1] := by obtain ⟨_, _, _, _, h_mtr⟩ := h - obtain ⟨sl, _, _, _, _⟩ := LTS.MTr.exists_states h_mtr + obtain ⟨sl, _, _, _, _⟩ := LTS.mTr_isExecution h_mtr by_cases xl.length = 0 · use [inl ()] grind · use [inl ()] ++ (sl.extract 1 xl.length).map inr ++ [inl ()] - grind [FinAcc.loop] + #adaptation_note + /-- This squeeze was required moving to nightly-2026-01-28 -/ + grind only [= length_append, = length_cons, = length_nil, = length_map, = List.length_take, + = length_drop, = min_def, = getElem_append, = getElem_cons, = List.take_zero, FinAcc.loop, + = map_nil, = getElem_map, = getElem_take, = getElem_drop] + +/-- For any finite word in `language na`, there is a corresponding multistep transition +of `na.loop`. -/ +theorem loop_fin_run_mtr {xl : List Symbol} (h : xl ∈ language na) : + na.loop.MTr (inl ()) xl (inl ()) := by + obtain ⟨sl, _, _, _, h_run⟩ := loop_fin_run_exists h + suffices ∀ k, (_ : k ≤ xl.length) → na.loop.MTr (inl ()) (xl.take k) sl[k] by grind + intro k + induction k <;> grind [LTS.MTr.stepR, List.take_add_one] /-- For any infinite sequence `xls` of nonempty finite words from `language na`, there is an infinite run of `na.loop` corresponding to `xls.flatten` in which @@ -112,30 +125,12 @@ the state `()` marks the boundaries between the finite words in `xls`. 
-/ theorem loop_run_exists [Inhabited Symbol] {xls : ωSequence (List Symbol)} (h : ∀ k, (xls k) ∈ language na - 1) : ∃ ss, na.loop.Run xls.flatten ss ∧ ∀ k, ss (xls.cumLen k) = inl () := by - have : Inhabited State := by - choose s0 _ using (h 0).left - exact { default := s0 } - choose sls h_sls using fun k ↦ loop_fin_run_exists <| (h k).left - let segs := ωSequence.mk fun k ↦ (sls k).take (xls k).length - have h_len : xls.cumLen = segs.cumLen := by ext k; induction k <;> grind - have h_pos (k : ℕ) : (segs k).length > 0 := by grind [eq_nil_iff_length_eq_zero] - have h_mono := cumLen_strictMono h_pos - have h_zero := cumLen_zero (ls := segs) - have h_seg0 (k : ℕ) : (segs k)[0]! = inl () := by grind - refine ⟨segs.flatten, Run.mk ?_ ?_, ?_⟩ - · simp [h_seg0, flatten_def, FinAcc.loop] - · intro n - simp only [h_len, flatten_def] - have := segment_lower_bound h_mono h_zero n - by_cases h_n : n + 1 < segs.cumLen (segment segs.cumLen n + 1) - · grind [segment_range_val h_mono (by grind) h_n] - · have h1 : segs.cumLen (segment segs.cumLen n + 1) = n + 1 := by - grind [segment_upper_bound h_mono h_zero n] - have h2 : segment segs.cumLen (n + 1) = segment segs.cumLen n + 1 := by - simp [← h1, segment_idem h_mono] - simp [h1, h2, h_seg0] - grind - · simp [h_len, flatten_def, segment_idem h_mono, h_seg0] + let ts := ωSequence.const (inl () : Unit ⊕ State) + have h_mtr (k : ℕ) : na.loop.MTr (ts k) (xls k) (ts (k + 1)) := by grind [loop_fin_run_mtr] + have h_pos (k : ℕ) : (xls k).length > 0 := by grind + obtain ⟨ss, _, _⟩ := LTS.ωTr.flatten h_mtr h_pos + use ss + grind [Run.mk, FinAcc.loop, cumLen_zero (ls := xls)] namespace Buchi @@ -148,8 +143,9 @@ theorem loop_language_eq [Inhabited Symbol] : apply le_antisymm · apply omegaPow_coind rintro xs ⟨ss, h_run, h_acc⟩ - have h1 : ∃ k > 0, (ss k).isLeft := by grind [FinAcc.loop, frequently_atTop'.mp h_acc 0] - obtain ⟨n, _⟩ := loop_run_one_iter h_run h1 + obtain ⟨k, h1, h2⟩ : ∃ k > 0, (ss k).isLeft := + by grind [FinAcc.loop, 
frequently_atTop'.mp h_acc 0] + obtain ⟨n, _⟩ := loop_run_one_iter h_run h1 h2 refine ⟨xs.take n, by grind, xs.drop n, ?_, by simp⟩ refine ⟨ss.drop n, by grind, ?_⟩ apply (drop_frequently_iff_frequently n).mpr @@ -160,8 +156,62 @@ theorem loop_language_eq [Inhabited Symbol] : use ss, h_run apply frequently_iff_strictMono.mpr use xls.cumLen, ?_, by grind - grind [cumLen_strictMono, eq_nil_iff_length_eq_zero] + grind [cumLen_strictMono, List.eq_nil_iff_length_eq_zero] end Buchi +namespace FinAcc + +open scoped Computability + +/-- `finLoop na` is the loop construction applied to the "totalized" version of `na`. -/ +def finLoop (na : FinAcc State Symbol) : NA (Unit ⊕ (State ⊕ Unit)) Symbol := + FinAcc.loop ⟨na.totalize, inl '' na.accept⟩ + +/-- `finLoop na` is total, assuming that `na` has at least one start state. -/ +instance [h : Nonempty na.start] : na.finLoop.Total where + total s x := match s with + | inl _ => ⟨inr (inr ()), by simpa [finLoop, loop, NA.totalize, LTS.totalize] using h⟩ + | inr _ => ⟨inr (inr ()), by grind [finLoop, loop, NA.totalize, LTS.totalize]⟩ + +/-- `finLoop na` accepts the Kleene star of the language of `na`, assuming that +the latter is nonempty. 
-/ +theorem loop_language_eq [Inhabited Symbol] (h : ¬ language na = 0) : + language (FinAcc.mk na.finLoop {inl ()}) = (language na)∗ := by + rw [Language.kstar_iff_mul_add] + ext xl; constructor + · rintro ⟨s, _, t, h_acc, h_mtr⟩ + by_cases h_xl : xl = [] + · grind [mem_add, mem_one] + · have : Nonempty na.start := by + obtain ⟨_, s0, _, _⟩ := nonempty_iff_ne_empty.mpr h + use s0 + obtain ⟨xs, ss, h_ωtr, rfl, rfl⟩ := LTS.Total.mTr_ωTr h_mtr + have h_run : na.finLoop.Run (xl ++ω xs) ss := by grind [Run] + obtain ⟨h1, h2⟩ : 0 < xl.length ∧ (ss xl.length).isLeft := by + simp only [mem_singleton_iff] at h_acc + grind + obtain ⟨n, h_n, _, _, h_ωtr'⟩ := loop_run_one_iter h_run h1 h2 + left; refine ⟨xl.take n, ?_, xl.drop n, ?_, ?_⟩ + · grind [totalize_language_eq, take_append_of_le_length] + · refine ⟨ss n, by grind, ss xl.length, by grind, ?_⟩ + have := LTS.ωTr_mTr h_ωtr' (show 0 ≤ xl.length - n by grind) + have : n + (xl.length - n) = xl.length := by grind + have : ((xl ++ω xs).drop n).extract 0 (xl.length - n) = xl.drop n := by + grind [extract_eq_take, drop_append_of_le_length, take_append_of_le_length] + grind [finLoop] + · exact xl.take_append_drop n + · rintro (h | h) + · obtain ⟨xl1, ⟨h_xl1, _⟩, xl2, h_xl2, rfl⟩ := h + rw [← totalize_language_eq] at h_xl1 + have := loop_fin_run_mtr h_xl1 + obtain ⟨s1, _, s2, _, _⟩ := h_xl2 + obtain ⟨rfl⟩ : s1 = inl () := by grind [finLoop, loop] + obtain ⟨rfl⟩ : s2 = inl () := by grind [finLoop, loop] + refine ⟨inl (), ?_, inl (), ?_, LTS.MTr.comp _ this ?_⟩ <;> assumption + · obtain ⟨rfl⟩ := (Language.mem_one xl).mp h + refine ⟨inl (), ?_, inl (), ?_, ?_⟩ <;> grind [finLoop, loop] + +end FinAcc + end Cslib.Automata.NA diff --git a/Cslib/Computability/Automata/NA/Pair.lean b/Cslib/Computability/Automata/NA/Pair.lean new file mode 100644 index 000000000..ce346dd35 --- /dev/null +++ b/Cslib/Computability/Automata/NA/Pair.lean @@ -0,0 +1,146 @@ +/- +Copyright (c) 2025 Ching-Tsun Chou. All rights reserved. 
+Released under Apache 2.0 license as described in the file LICENSE. +Authors: Ching-Tsun Chou +-/ + +module + +public import Cslib.Computability.Languages.RegularLanguage + +@[expose] public section + +/-! # Languages determined by pairs of states +-/ + +namespace Cslib + +open Language Automata Acceptor + +variable {Symbol : Type*} {State : Type} + +/-- `LTS.pairLang s t` is the language of finite words that can take the LTS +from state `s` to state `t`. -/ +def LTS.pairLang (lts : LTS State Symbol) (s t : State) : Language Symbol := + { xs | lts.MTr s xs t } + +@[simp, scoped grind =] +theorem LTS.mem_pairLang {lts : LTS State Symbol} {s t : State} {xs : List Symbol} : + xs ∈ lts.pairLang s t ↔ lts.MTr s xs t := Iff.rfl + +/-- `LTS.pairLang s t` is a regular language if there are only finitely many states. -/ +@[simp] +theorem LTS.pairLang_regular [Finite State] {lts : LTS State Symbol} {s t : State} : + (lts.pairLang s t).IsRegular := by + rw [IsRegular.iff_nfa] + use State, inferInstance, (NA.FinAcc.mk ⟨lts, {s}⟩ {t}) + ext + simp +instances + +/-- `LTS.pairViaLang via s t` is the language of finite words that can take the LTS +from state `s` to state `t` via a state in `via`. -/ +def LTS.pairViaLang (lts : LTS State Symbol) (via : Set State) (s t : State) : Language Symbol := + ⨆ r ∈ via, lts.pairLang s r * lts.pairLang r t + +@[simp, scoped grind =] +theorem LTS.mem_pairViaLang {lts : LTS State Symbol} {via : Set State} + {s t : State} {xs : List Symbol} : xs ∈ lts.pairViaLang via s t ↔ + ∃ r ∈ via, ∃ xs1 xs2, lts.MTr s xs1 r ∧ lts.MTr r xs2 t ∧ xs1 ++ xs2 = xs := by + simp [LTS.pairViaLang, Language.mem_mul] + +/-- `LTS.pairViaLang via s t` is a regular language if there are only finitely many states. 
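+
+For instance (a sketch; the `simp` call is illustrative), with a singleton
+`via = {r}` membership reduces to splitting the word at a visit to `r`:
+```lean
+example (lts : LTS State Symbol) (r s t : State) (xs : List Symbol) :
+    xs ∈ lts.pairViaLang {r} s t ↔
+      ∃ xs1 xs2, lts.MTr s xs1 r ∧ lts.MTr r xs2 t ∧ xs1 ++ xs2 = xs := by
+  simp
+```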
-/ +@[simp] +theorem LTS.pairViaLang_regular [Inhabited Symbol] [Finite State] {lts : LTS State Symbol} + {via : Set State} {s t : State} : (lts.pairViaLang via s t).IsRegular := by + apply IsRegular.iSup + grind [Language.IsRegular.mul, LTS.pairLang_regular] + +theorem LTS.pairLang_append {lts : LTS State Symbol} {s0 s1 s2 : State} {xs1 xs2 : List Symbol} + (h1 : xs1 ∈ lts.pairLang s0 s1) (h2 : xs2 ∈ lts.pairLang s1 s2) : + xs1 ++ xs2 ∈ lts.pairLang s0 s2 := by + grind [<= LTS.MTr.comp] + +theorem LTS.pairLang_split {lts : LTS State Symbol} {s0 s2 : State} {xs1 xs2 : List Symbol} + (h : xs1 ++ xs2 ∈ lts.pairLang s0 s2) : + ∃ s1, xs1 ∈ lts.pairLang s0 s1 ∧ xs2 ∈ lts.pairLang s1 s2 := by + obtain ⟨r, _, _⟩ := LTS.MTr.split <| LTS.mem_pairLang.mp h + use r + grind + +theorem LTS.pairViaLang_append_pairLang {lts : LTS State Symbol} + {s0 s1 s2 : State} {xs1 xs2 : List Symbol} {via : Set State} + (h1 : xs1 ∈ lts.pairViaLang via s0 s1) (h2 : xs2 ∈ lts.pairLang s1 s2) : + xs1 ++ xs2 ∈ lts.pairViaLang via s0 s2 := by + obtain ⟨r, _, _, _, _, _, rfl⟩ := LTS.mem_pairViaLang.mp h1 + apply LTS.mem_pairViaLang.mpr + use r + grind [<= LTS.MTr.comp] + +theorem LTS.pairLang_append_pairViaLang {lts : LTS State Symbol} + {s0 s1 s2 : State} {xs1 xs2 : List Symbol} {via : Set State} + (h1 : xs1 ∈ lts.pairLang s0 s1) (h2 : xs2 ∈ lts.pairViaLang via s1 s2) : + xs1 ++ xs2 ∈ lts.pairViaLang via s0 s2 := by + obtain ⟨r, _, _, _, _, _, rfl⟩ := LTS.mem_pairViaLang.mp h2 + apply LTS.mem_pairViaLang.mpr + use r + grind [<= LTS.MTr.comp] + +theorem LTS.pairViaLang_split {lts : LTS State Symbol} {s0 s2 : State} {xs1 xs2 : List Symbol} + {via : Set State} (h : xs1 ++ xs2 ∈ lts.pairViaLang via s0 s2) : + ∃ s1, xs1 ∈ lts.pairViaLang via s0 s1 ∧ xs2 ∈ lts.pairLang s1 s2 ∨ + xs1 ∈ lts.pairLang s0 s1 ∧ xs2 ∈ lts.pairViaLang via s1 s2 := by + obtain ⟨r, h_r, ys1, ys2, h_ys1, h_ys2, h_eq⟩ := LTS.mem_pairViaLang.mp h + obtain ⟨zs1, rfl, rfl⟩ | ⟨zs2, rfl, rfl⟩ := List.append_eq_append_iff.mp h_eq + · 
obtain ⟨s1, _, _⟩ := LTS.MTr.split h_ys2 + use s1 + grind + · obtain ⟨s1, _, _⟩ := LTS.MTr.split h_ys1 + use s1 + grind + +namespace Automata.NA.Buchi + +open Set Filter ωSequence ωLanguage ωAcceptor + +/-- The ω-language accepted by a finite-state Büchi automaton is the finite union of ω-languages +of the form `L * M^ω`, where all `L`s and `M`s are regular languages. -/ +theorem language_eq_fin_iSup_hmul_omegaPow + [Inhabited Symbol] [Finite State] (na : Buchi State Symbol) : + language na = ⨆ s ∈ na.start, ⨆ t ∈ na.accept, (na.pairLang s t) * (na.pairLang t t)^ω := by + ext xs + simp only [ωAcceptor.mem_language, ωLanguage.mem_iSup, ωLanguage.mem_hmul, LTS.mem_pairLang] + constructor + · rintro ⟨ss, h_run, h_inf⟩ + obtain ⟨t, h_acc, h_t⟩ := frequently_in_finite_type.mp h_inf + use ss 0, by grind [NA.Run], t, h_acc + obtain ⟨f, h_mono, h_f⟩ := frequently_iff_strictMono.mp h_t + refine ⟨xs.take (f 0), ?_, xs.drop (f 0), ?_, by grind⟩ + · have : na.MTr (ss 0) (xs.extract 0 (f 0)) (ss (f 0)) := by grind [LTS.ωTr_mTr, NA.Run] + grind [extract_eq_drop_take] + · simp only [omegaPow_seq_prop, LTS.mem_pairLang] + use (f · - f 0) + split_ands + · grind [Nat.base_zero_strictMono] + · simp + · intro n + have mono_f (k : ℕ) : f 0 ≤ f (n + k) := h_mono.monotone (by grind) + grind [extract_drop, mono_f 0, LTS.ωTr_mTr h_run.trans <| h_mono.monotone (?_ : n ≤ n + 1)] + · rintro ⟨s, _, t, _, yl, h_yl, zs, h_zs, rfl⟩ + obtain ⟨zls, rfl, h_zls⟩ := mem_omegaPow.mp h_zs + let ts := ωSequence.const t + have h_mtr (n : ℕ) : na.MTr (ts n) (zls n) (ts (n + 1)) := by + grind [Language.mem_sub_one, LTS.mem_pairLang] + have h_pos (n : ℕ) : (zls n).length > 0 := by + grind [Language.mem_sub_one, List.eq_nil_iff_length_eq_zero] + obtain ⟨zss, h_zss, _⟩ := LTS.ωTr.flatten h_mtr h_pos + have (n : ℕ) : zss (zls.cumLen n) = t := by grind + obtain ⟨xss, _, _, _, _⟩ := LTS.ωTr.append h_yl h_zss (by grind [cumLen_zero (ls := zls)]) + use xss, by grind [NA.Run] + apply (drop_frequently_iff_frequently 
yl.length).mp + apply frequently_iff_strictMono.mpr + use zls.cumLen + grind [cumLen_strictMono] + +end Automata.NA.Buchi + +end Cslib diff --git a/Cslib/Computability/Automata/NA/Sum.lean b/Cslib/Computability/Automata/NA/Sum.lean index 125464352..9fa33116d 100644 --- a/Cslib/Computability/Automata/NA/Sum.lean +++ b/Cslib/Computability/Automata/NA/Sum.lean @@ -58,6 +58,8 @@ namespace Buchi open ωAcceptor +-- TODO: fix proof to work with backward.isDefEq.respectTransparency +set_option backward.isDefEq.respectTransparency false in /-- The ω-language accepted by the Buchi sum automata is the union of the ω-languages accepted by its component automata. -/ @[simp] diff --git a/Cslib/Computability/Automata/NA/Total.lean b/Cslib/Computability/Automata/NA/Total.lean index d6207ef50..3462a82bc 100644 --- a/Cslib/Computability/Automata/NA/Total.lean +++ b/Cslib/Computability/Automata/NA/Total.lean @@ -54,7 +54,7 @@ as long as the accepting states are also lifted in the obvious way. -/ theorem totalize_language_eq {na : FinAcc State Symbol} : language (FinAcc.mk na.totalize (inl '' na.accept)) = language na := by ext xl - simp [totalize] + simp +instances [totalize] end FinAcc diff --git a/Cslib/Computability/Languages/Congruences/BuchiCongruence.lean b/Cslib/Computability/Languages/Congruences/BuchiCongruence.lean new file mode 100644 index 000000000..59c65226e --- /dev/null +++ b/Cslib/Computability/Languages/Congruences/BuchiCongruence.lean @@ -0,0 +1,75 @@ +/- +Copyright (c) 2026 Ching-Tsun Chou. All rights reserved. +Released under Apache 2.0 license as described in the file LICENSE. +Authors: Ching-Tsun Chou +-/ + +module + +public import Cslib.Computability.Automata.NA.Pair +public import Cslib.Computability.Languages.Congruences.RightCongruence + +@[expose] public section + +/-! +# Buchi Congruence + +A special type of right congruences used by J.R. Büchi to prove the closure +of ω-regular languages under complementation. 
+-/
+
+namespace Cslib.Automata.NA.Buchi
+
+open Function
+
+variable {Symbol : Type*} {State : Type}
+
+/-- Given a Buchi automaton `na`, two finite words `u` and `v` are Buchi-congruent
+according to `na` iff for every pair of states `s` and `t` of `na`, both of the
+following two conditions hold:
+(1) `u` can move `na` from `s` to `t` iff `v` can move `na` from `s` to `t`;
+(2) `u` can move `na` from `s` to `t` via an accepting state iff `v` can move `na`
+from `s` to `t` via an accepting state. -/
+def BuchiCongruence (na : Buchi State Symbol) : RightCongruence Symbol where
+  eq.r u v :=
+    ∀ {s t}, (u ∈ na.pairLang s t ↔ v ∈ na.pairLang s t) ∧
+      (u ∈ na.pairViaLang na.accept s t ↔ v ∈ na.pairViaLang na.accept s t)
+  eq.iseqv.refl := by grind
+  eq.iseqv.symm := by grind
+  eq.iseqv.trans := by grind
+  right_cov.elim := by
+    grind [Covariant, → LTS.pairLang_split, <= LTS.pairLang_append, → LTS.pairViaLang_split,
+      <= LTS.pairViaLang_append_pairLang, <= LTS.pairLang_append_pairViaLang]
+
+open scoped Classical in
+/-- `BuchiCongrParam` is a parameterization of the equivalence classes of `na.BuchiCongruence`
+using the type `State → State → Prop × Prop`, which is finite if `State` is. -/
+noncomputable def BuchiCongrParam (na : Buchi State Symbol)
+    (f : State → State → Prop × Prop) : Quotient na.BuchiCongruence.eq :=
+  if h : ∃ u, ∀ s t, ((f s t).1 ↔ u ∈ na.pairLang s t) ∧
+      ((f s t).2 ↔ u ∈ na.pairViaLang na.accept s t)
+  then ⟦ Classical.choose h ⟧
+  else ⟦ [] ⟧
+
+variable {na : Buchi State Symbol}
+
+/-- `BuchiCongrParam` is surjective.
-/
+lemma buchiCongrParam_surjective : Surjective na.BuchiCongrParam := by
+  intro q
+  let f s t := (q.out ∈ na.pairLang s t, q.out ∈ na.pairViaLang na.accept s t)
+  have h : ∃ u, ∀ s t, ((f s t).1 ↔ u ∈ na.pairLang s t) ∧
+      ((f s t).2 ↔ u ∈ na.pairViaLang na.accept s t) := by
+    use q.out
+    grind
+  use f
+  simp only [BuchiCongrParam, h, ↓reduceDIte]
+  rw [← Quotient.out_eq q]
+  apply Quotient.sound
+  intro s t
+  grind
+
+/-- `na.BuchiCongruence` is of finite index if `na` has only finitely many states. -/
+theorem buchiCongruence_fin_index [Finite State] : Finite (Quotient na.BuchiCongruence.eq) :=
+  Finite.of_surjective na.BuchiCongrParam buchiCongrParam_surjective
+
+end Cslib.Automata.NA.Buchi
diff --git a/Cslib/Computability/Languages/Congruences/RightCongruence.lean b/Cslib/Computability/Languages/Congruences/RightCongruence.lean
new file mode 100644
index 000000000..da8d5d6dc
--- /dev/null
+++ b/Cslib/Computability/Languages/Congruences/RightCongruence.lean
@@ -0,0 +1,41 @@
+/-
+Copyright (c) 2026 Ching-Tsun Chou. All rights reserved.
+Released under Apache 2.0 license as described in the file LICENSE.
+Authors: Ching-Tsun Chou
+-/
+
+module
+
+public import Cslib.Init
+public import Mathlib.Computability.Language
+
+@[expose] public section
+
+/-!
+# Right Congruence
+
+This file contains basic definitions about right congruences on finite sequences.
+
+NOTE: Left congruences and two-sided congruences can be defined similarly.
+They are left to future work because they are not needed for now.
+-/
+
+namespace Cslib
+
+/-- A right congruence is an equivalence relation on finite sequences (represented by lists)
+that is preserved by concatenation on the right. The equivalence relation is represented
+by a setoid to enable ready access to the quotient construction.
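+
+For orientation (an illustrative sketch, not compiled as part of this file), right
+covariance says that congruent words remain congruent after appending a common suffix:
+```lean
+example {α : Type*} [c : RightCongruence α] {u v : List α} (w : List α)
+    (h : c.eq u v) : c.eq (u ++ w) (v ++ w) :=
+  c.right_cov.elim w h
+```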
-/ +class RightCongruence (α : Type*) extends eq : Setoid (List α) where + right_cov : CovariantClass _ _ (fun x y => y ++ x) eq + +namespace RightCongruence + +variable {α : Type*} + +/-- The equivalence class (as a language) corresponding to an element of the quotient type. -/ +abbrev eqvCls [c : RightCongruence α] (a : Quotient c.eq) : Language α := + (Quotient.mk c.eq) ⁻¹' {a} + +end RightCongruence + +end Cslib diff --git a/Cslib/Computability/Languages/Language.lean b/Cslib/Computability/Languages/Language.lean index 2891329dd..e02b50aaf 100644 --- a/Cslib/Computability/Languages/Language.lean +++ b/Cslib/Computability/Languages/Language.lean @@ -54,10 +54,9 @@ theorem reverse_sub (l m : Language α) : (l - m).reverse = l.reverse - m.revers theorem sub_one_mul : (l - 1) * l = l * l - 1 := by ext x; constructor · rintro ⟨u, h_u, v, h_v, rfl⟩ - rw [mem_sub, mem_one] at h_u ⊢ constructor · refine ⟨u, ?_, v, ?_⟩ <;> grind - · grind [append_eq_nil_iff] + · grind [append_eq_nil_iff, mem_one] · rintro ⟨⟨u, h_u, v, h_v, rfl⟩, h_x⟩ rcases eq_or_ne u [] with (rfl | h_u') · refine ⟨v, ?_, [], ?_⟩ <;> grind [mem_sub, mem_one] @@ -86,4 +85,14 @@ theorem kstar_sub_one : l∗ - 1 = (l - 1) * l∗ := by exact ⟨y, h_y, z, h_z, rfl⟩ · grind [one_def, append_eq_nil_iff] +@[scoped grind =] +theorem sub_one_kstar : (l - 1)∗ = l∗ := by + ext x + grind [mem_kstar, mem_kstar_iff_exists_nonempty] + +@[scoped grind .] 
+theorem kstar_iff_mul_add : m = l∗ ↔ m = (l - 1) * m + 1 := by + rw [self_eq_mul_add_iff, mul_one, sub_one_kstar] + grind + end Language diff --git a/Cslib/Computability/Languages/OmegaLanguage.lean b/Cslib/Computability/Languages/OmegaLanguage.lean index 679a02972..a761e7182 100644 --- a/Cslib/Computability/Languages/OmegaLanguage.lean +++ b/Cslib/Computability/Languages/OmegaLanguage.lean @@ -322,7 +322,6 @@ theorem omegaPow_coind' [Inhabited α] (h_nn : [] ∉ l) (h_le : p ≤ l * p) : intro s h_s have h_nxt m (hm : s.drop m ∈ p) : ∃ n > m, s.extract m n ∈ l ∧ s.drop n ∈ p := by obtain ⟨k, _⟩ := hmul_seq_prop ▸ h_le hm - use m + k grind [extract_eq_drop_take] choose nxt_n nxt_p using h_nxt let f := iter_helper (fun n ↦ s.drop n ∈ p) nxt_n diff --git a/Cslib/Computability/Languages/OmegaRegularLanguage.lean b/Cslib/Computability/Languages/OmegaRegularLanguage.lean index 350728078..94f25f205 100644 --- a/Cslib/Computability/Languages/OmegaRegularLanguage.lean +++ b/Cslib/Computability/Languages/OmegaRegularLanguage.lean @@ -9,13 +9,13 @@ module public import Cslib.Computability.Automata.DA.Buchi public import Cslib.Computability.Automata.NA.BuchiEquiv public import Cslib.Computability.Automata.NA.BuchiInter -public import Cslib.Computability.Automata.NA.Concat -public import Cslib.Computability.Automata.NA.Loop +public import Cslib.Computability.Automata.NA.Pair public import Cslib.Computability.Automata.NA.Sum public import Cslib.Computability.Languages.ExampleEventuallyZero -public import Cslib.Computability.Languages.RegularLanguage +public import Cslib.Foundations.Data.Set.Saturation +public import Mathlib.Data.Finite.Card public import Mathlib.Data.Finite.Sigma -public import Mathlib.Data.Finite.Sum +public import Mathlib.Logic.Equiv.Fin.Basic @[expose] public section @@ -85,7 +85,7 @@ theorem IsRegular.bot : (⊥ : ωLanguage Symbol).IsRegular := by accept := ∅ } use Unit, inferInstance, na ext xs - simp [na] + simp +instances [na] /-- The language of all 
ω-sequences is ω-regular. -/ @[simp] @@ -96,11 +96,13 @@ theorem IsRegular.top : (⊤ : ωLanguage Symbol).IsRegular := by accept := univ } use Unit, inferInstance, na ext xs - simp only [na, NA.Buchi.instωAcceptor, mem_language, mem_univ, frequently_true_iff_neBot, - atTop_neBot, and_true, mem_top, iff_true] + simp +instances only [na, NA.Buchi.instωAcceptor, mem_language, mem_univ, + frequently_true_iff_neBot, atTop_neBot, and_true, mem_top, iff_true] use const () grind [NA.Run] +-- TODO: fix proof to work with backward.isDefEq.respectTransparency +set_option backward.isDefEq.respectTransparency false in /-- The union of two ω-regular languages is ω-regular. -/ @[simp] theorem IsRegular.sup {p1 p2 : ωLanguage Symbol} @@ -120,6 +122,8 @@ theorem IsRegular.sup {p1 p2 : ωLanguage Symbol} rw [mem_iUnion, Fin.exists_fin_two] grind +-- TODO: fix proof to work with backward.isDefEq.respectTransparency +set_option backward.isDefEq.respectTransparency false in open NA.Buchi in /-- The intersection of two ω-regular languages is ω-regular. -/ @[simp] @@ -185,6 +189,51 @@ theorem IsRegular.omegaPow [Inhabited Symbol] {l : Language Symbol} use Unit ⊕ State, inferInstance, ⟨na.loop, {inl ()}⟩ exact NA.Buchi.loop_language_eq +-- TODO: fix proof to work with backward.isDefEq.respectTransparency +set_option backward.isDefEq.respectTransparency false in +/-- An ω-language is regular iff it is the finite union of ω-languages of the form `L * M^ω`, +where all `L`s and `M`s are regular languages. 
-/ +theorem IsRegular.eq_fin_iSup_hmul_omegaPow [Inhabited Symbol] (p : ωLanguage Symbol) : + p.IsRegular ↔ ∃ n : ℕ, ∃ l m : Fin n → Language Symbol, + (∀ i, (l i).IsRegular ∧ (m i).IsRegular) ∧ p = ⨆ i, (l i) * (m i)^ω := by + constructor + · rintro ⟨State, _, na, rfl⟩ + rw [NA.Buchi.language_eq_fin_iSup_hmul_omegaPow na] + have eq_start := Finite.equivFin ↑na.start + have eq_accept := Finite.equivFin ↑na.accept + have eq_prod := eq_start.prodCongr eq_accept + have eq := (eq_prod.trans finProdFinEquiv).symm + refine ⟨Nat.card ↑na.start * Nat.card ↑na.accept, + fun i ↦ na.pairLang (eq i).1 (eq i).2, + fun i ↦ na.pairLang (eq i).2 (eq i).2, + by grind [LTS.pairLang_regular], ?_⟩ + ext xs + simp only [mem_iSup] + refine ⟨?_, by grind⟩ + rintro ⟨s, h_s, t, h_t, h_mem⟩ + use eq.invFun (⟨s, h_s⟩, ⟨t, h_t⟩) + simp [h_mem] + · rintro ⟨n, l, m, _, rfl⟩ + rw [← iSup_univ] + apply IsRegular.iSup + grind [IsRegular.hmul, IsRegular.omegaPow] + +/-- If an ω-language has a finite saturating cover made of ω-regular languages, +then it is an ω-regular language. -/ +theorem IsRegular.fin_cover_saturates {I : Type*} [Finite I] + {p : I → ωLanguage Symbol} {q : ωLanguage Symbol} + (hs : Saturates p q) (hc : ⨆ i, p i = ⊤) (hr : ∀ i, (p i).IsRegular) : q.IsRegular := by + rw [saturates_eq_biUnion hs hc] + apply IsRegular.iSup + grind + +/-- If an ω-language has a finite saturating cover made of ω-regular languages, +then its complement is an ω-regular language. -/ +theorem IsRegular.fin_cover_saturates_compl {I : Type*} [Finite I] + {p : I → ωLanguage Symbol} {q : ωLanguage Symbol} + (hs : Saturates p q) (hc : ⨆ i, p i = ⊤) (hr : ∀ i, (p i).IsRegular) : (qᶜ).IsRegular := + IsRegular.fin_cover_saturates (saturates_compl hs) hc hr + /-- McNaughton's Theorem. 
-/ proof_wanted IsRegular.iff_da_muller {p : ωLanguage Symbol} : p.IsRegular ↔ diff --git a/Cslib/Computability/Languages/RegularLanguage.lean b/Cslib/Computability/Languages/RegularLanguage.lean index b83097116..0ee26931a 100644 --- a/Cslib/Computability/Languages/RegularLanguage.lean +++ b/Cslib/Computability/Languages/RegularLanguage.lean @@ -6,9 +6,11 @@ Authors: Ching-Tsun Chou module +public import Cslib.Computability.Automata.DA.Congr public import Cslib.Computability.Automata.DA.Prod public import Cslib.Computability.Automata.DA.ToNA public import Cslib.Computability.Automata.NA.Concat +public import Cslib.Computability.Automata.NA.Loop public import Cslib.Computability.Automata.NA.ToDA public import Mathlib.Computability.DFA public import Mathlib.Data.Finite.Sum @@ -23,7 +25,7 @@ public import Mathlib.Tactic.Common namespace Cslib.Language -open Set List Prod Automata Acceptor +open Set List Prod Automata Acceptor RightCongruence open scoped Computability FLTS DA NA DA.FinAcc NA.FinAcc variable {Symbol : Type*} @@ -147,4 +149,26 @@ theorem IsRegular.mul [Inhabited Symbol] {l1 l2 : Language Symbol} ⟨finConcat nfa1 nfa2, inr '' (inl '' nfa2.accept)⟩ exact finConcat_language_eq +-- TODO: fix proof to work with backward.isDefEq.respectTransparency +set_option backward.isDefEq.respectTransparency false in +open NA.FinAcc Sum in +/-- The Kleene star of a regular language is regular. -/ +@[simp] +theorem IsRegular.kstar [Inhabited Symbol] {l : Language Symbol} + (h : l.IsRegular) : (l∗).IsRegular := by + by_cases h_l : l = 0 + · simp [h_l] + · rw [IsRegular.iff_nfa] at h ⊢ + obtain ⟨State, h_fin, nfa, rfl⟩ := h + use Unit ⊕ (State ⊕ Unit), inferInstance, ⟨finLoop nfa, {inl ()}⟩, loop_language_eq h_l + +/-- If a right congruence is of finite index, then each of its equivalence classes is regular. 
-/ +@[simp] +theorem IsRegular.congr_fin_index {Symbol : Type} + [c : RightCongruence Symbol] [Finite (Quotient c.eq)] + (a : Quotient c.eq) : (eqvCls a).IsRegular := by + rw [IsRegular.iff_dfa] + use Quotient c.eq, inferInstance, ⟨c.toDA, {a}⟩ + exact DA.FinAcc.congr_language_eq + end Cslib.Language diff --git a/Cslib/Computability/URM/Basic.lean b/Cslib/Computability/URM/Basic.lean new file mode 100644 index 000000000..04bea452e --- /dev/null +++ b/Cslib/Computability/URM/Basic.lean @@ -0,0 +1,183 @@ +/- +Copyright (c) 2026 Jesse Alama. All rights reserved. +Released under Apache 2.0 license as described in the file LICENSE. +Authors: Jesse Alama +-/ +module + +public import Cslib.Computability.URM.Defs + +/-! # URM Basic Lemmas + +This file contains basic lemmas and helper operations for URM types. + +## Main definitions + +- `Instr.IsJump`: predicate for jump instructions +- `Instr.JumpsBoundedBy`: checks if jump targets are bounded +- `Instr.capJump`: caps jump targets to a given length + +## Main results + +- `Regs.write_read_self`, `Regs.write_read_of_ne`: register read/write lemmas +- `State.isHalted_iff`, `State.ext`: state lemmas +- `JumpsBoundedBy.mono`: bounded jumps are monotonic in the bound +- `JumpsBoundedBy.shiftJumps`: shifting preserves bounded jumps +- `Program.mem_maxRegister`: instruction maxRegister bounded by program maxRegister +-/ + +@[expose] public section + +namespace Cslib.URM + +/-! ## Register Lemmas -/ + +namespace Regs + +@[simp, scoped grind =] +theorem write_read_self (σ : Regs) (n v : ℕ) : (σ.write n v).read n = v := by + simp only [write, read, Function.update_self] + +@[simp, scoped grind =] +theorem write_read_of_ne (σ : Regs) (m n v : ℕ) (h : m ≠ n) : + (σ.write n v).read m = σ.read m := by + simp only [write, read, Function.update_of_ne h] + +end Regs + +/-! 
## State Lemmas -/ + +namespace State + +@[simp] +theorem isHalted_iff (s : State) (p : Program) : s.isHalted p ↔ p.length ≤ s.pc := Iff.rfl + +/-- Extensionality for State: two states are equal iff their components are equal. -/ +@[ext] +theorem ext {s₁ s₂ : State} (hpc : s₁.pc = s₂.pc) (hregs : s₁.regs = s₂.regs) : s₁ = s₂ := by + cases s₁; cases s₂; simp only at hpc hregs; simp [hpc, hregs] + +end State + +/-! ## Instruction Lemmas -/ + +namespace Instr + +/-! ## Jump Instructions -/ + +/-- An instruction is a jump instruction. -/ +def IsJump : Instr → Prop + | J _ _ _ => True + | _ => False + +instance (instr : Instr) : Decidable instr.IsJump := by + cases instr <;> simp only [IsJump] <;> infer_instance + +/-- Z instruction is not a jump. -/ +@[simp] +theorem Z_nonJump (n : ℕ) : ¬(Z n).IsJump := not_false + +/-- S instruction is not a jump. -/ +@[simp] +theorem S_nonJump (n : ℕ) : ¬(S n).IsJump := not_false + +/-- T instruction is not a jump. -/ +@[simp] +theorem T_nonJump (m n : ℕ) : ¬(T m n).IsJump := not_false + +/-- J instruction is a jump. -/ +@[simp] +theorem J_IsJump (m n q : ℕ) : (J m n q).IsJump := trivial + +/-- shiftJumps is identity for non-jumping instructions. -/ +theorem shiftJumps_of_nonJump {instr : Instr} + (h : ¬instr.IsJump) (offset : ℕ) : instr.shiftJumps offset = instr := by + cases instr with + | Z _ | S _ | T _ _ => rfl + | J _ _ _ => exact absurd trivial h + +/-! ## Bounded Jump Targets -/ + +/-- An instruction's jump target is bounded by a given length. +Non-jump instructions trivially satisfy this. -/ +def JumpsBoundedBy (len : ℕ) : Instr → Prop + | J _ _ q => q ≤ len + | _ => True + +instance (len : ℕ) (instr : Instr) : Decidable (instr.JumpsBoundedBy len) := by + cases instr <;> simp only [JumpsBoundedBy] <;> infer_instance + +/-- Non-jumping instructions have bounded jumps for any length. 
-/ +theorem jumpsBoundedBy_of_nonJump {instr : Instr} (h : ¬instr.IsJump) + (len : ℕ) : instr.JumpsBoundedBy len := by + cases instr with + | J _ _ _ => exact absurd trivial h + | _ => trivial + +/-- JumpsBoundedBy is monotonic: if bounded for len1, then bounded for any len2 ≥ len1. -/ +theorem JumpsBoundedBy.mono {instr : Instr} {len1 len2 : ℕ} + (h : instr.JumpsBoundedBy len1) (hle : len1 ≤ len2) : + instr.JumpsBoundedBy len2 := by + grind [JumpsBoundedBy] + +/-- shiftJumps preserves bounded jumps with adjusted bound. -/ +theorem JumpsBoundedBy.shiftJumps {instr : Instr} {len offset : ℕ} + (h : instr.JumpsBoundedBy len) : + (instr.shiftJumps offset).JumpsBoundedBy (offset + len) := by + cases instr with + | J _ _ q => simp only [Instr.shiftJumps, JumpsBoundedBy] at h ⊢; omega + | _ => trivial + +/-! ## Jump Target Capping -/ + +/-- Cap a jump target to be at most `len`. Non-jump instructions are unchanged. -/ +def capJump (len : ℕ) : Instr → Instr + | Z n => Z n + | S n => S n + | T m n => T m n + | J m n q => J m n (min q len) + +@[simp] +theorem capJump_Z (len n : ℕ) : (Z n).capJump len = Z n := rfl + +@[simp] +theorem capJump_S (len n : ℕ) : (S n).capJump len = S n := rfl + +@[simp] +theorem capJump_T (len m n : ℕ) : (T m n).capJump len = T m n := rfl + +@[simp] +theorem capJump_J (len m n q : ℕ) : + (J m n q).capJump len = J m n (min q len) := rfl + +/-- capJump always produces an instruction with bounded jump. -/ +theorem JumpsBoundedBy.capJump (len : ℕ) (instr : Instr) : + (instr.capJump len).JumpsBoundedBy len := by + cases instr with + | J _ _ q => exact Nat.min_le_right q len + | _ => trivial + +/-- capJump is idempotent: capping twice is the same as capping once. 
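+
+For instance (an illustrative sketch; both equalities reduce definitionally, since
+`min 99 10 = 10`):
+```lean
+example : (Instr.J 1 2 99).capJump 10 = Instr.J 1 2 10 := rfl
+example : ((Instr.J 1 2 99).capJump 10).capJump 10 = Instr.J 1 2 10 := rfl
+```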
-/ +@[simp] +theorem capJump_idempotent (len : ℕ) (instr : Instr) : + (instr.capJump len).capJump len = instr.capJump len := by + cases instr with + | Z | S | T => rfl + | J m n q => simp [capJump] + +end Instr + +namespace Program + +/-- Any instruction in a program has maxRegister at most the program's maxRegister. -/ +theorem mem_maxRegister {p : Program} {instr : Instr} (h : instr ∈ p) : + instr.maxRegister ≤ p.maxRegister := by + unfold maxRegister + rw [List.foldl_map.symm, List.foldl_eq_foldr] + exact List.le_max_of_le' 0 (List.mem_map.mpr ⟨instr, h, rfl⟩) (le_refl _) + +end Program + +end Cslib.URM + +end diff --git a/Cslib/Computability/URM/Computable.lean b/Cslib/Computability/URM/Computable.lean new file mode 100644 index 000000000..6667e7690 --- /dev/null +++ b/Cslib/Computability/URM/Computable.lean @@ -0,0 +1,45 @@ +/- +Copyright (c) 2026 Jesse Alama. All rights reserved. +Released under Apache 2.0 license as described in the file LICENSE. +Authors: Jesse Alama +-/ +module + +public import Cslib.Computability.URM.Execution + +/-! # URM-Computable Functions + +This file defines the notion of URM-computability for partial functions on natural numbers. + +## Main definitions + +- `URM.Computes`: A program computes a given partial function. +- `URM.Computable`: A partial function is URM-computable if there exists a URM program that + computes it. + +## Implementation notes + +Inputs are provided in registers 0, 1, ..., n-1 and output is read from register 0. +-/ + +@[expose] public section + +namespace Cslib.URM + +/-- A program `p` computes a partial function `f : (Fin n → ℕ) → Part ℕ` if for any input, +`eval p inputs = f inputs` as partial values. This captures both: +- The program halts iff the function is defined on that input +- When both are defined, the program's output equals the function's value + +Note: Inputs are provided in registers 0, 1, ..., n-1 and output is read from register 0. 
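+
+For orientation (an illustrative sketch, assuming `eval` from `URM.Execution` and
+Mathlib's `List.ofFn` simp lemmas; not compiled as part of this file), a `Computes`
+fact specializes to concrete input lists:
+```lean
+example (p : Program) (f : (Fin 1 → ℕ) → Part ℕ) (h : Computes 1 p f) (x : ℕ) :
+    eval p [x] = f (fun _ => x) := by
+  simpa using h (fun _ => x)
+```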
-/ +def Computes (n : ℕ) (p : Program) (f : (Fin n → ℕ) → Part ℕ) : Prop := + ∀ inputs : Fin n → ℕ, eval p (List.ofFn inputs) = f inputs + +/-- A partial function `f : (Fin n → ℕ) → Part ℕ` is URM-computable if there exists a URM program +that computes it. -/ +def Computable (n : ℕ) (f : (Fin n → ℕ) → Part ℕ) : Prop := + ∃ p : Program, Computes n p f + +end Cslib.URM + +end diff --git a/Cslib/Computability/URM/Defs.lean b/Cslib/Computability/URM/Defs.lean new file mode 100644 index 000000000..80cec1063 --- /dev/null +++ b/Cslib/Computability/URM/Defs.lean @@ -0,0 +1,191 @@ +/- +Copyright (c) 2026 Jesse Alama. All rights reserved. +Released under Apache 2.0 license as described in the file LICENSE. +Authors: Jesse Alama +-/ +module + +public import Cslib.Init +public import Mathlib.Data.Finset.Insert +public import Mathlib.Logic.Function.Basic +public import Mathlib.Data.List.MinMax + +/-! # URM Core Definitions + +This file contains the core definitions for Unlimited Register Machines (URMs): +instructions, register state, programs, and machine configurations. + +## Main definitions + +- `URM.Instr`: The four URM instructions (Z, S, T, J) +- `URM.Regs`: Register contents as a function `ℕ → ℕ` +- `URM.Program`: A finite sequence of instructions +- `URM.State`: Machine state (program counter + registers) + +## References + +* [N.J. Cutland, *Computability: An Introduction to Recursive Function Theory*][Cutland1980] +* [J.C. Shepherdson and H.E. Sturgis, + *Computability of Recursive Functions*][ShepherdsonSturgis1963] +-/ + +@[expose] public section + +namespace Cslib.URM + +/-! ## Instructions -/ + +/-- URM instructions. 
+- `Z n`: Set register n to zero +- `S n`: Increment register n by one +- `T m n`: Transfer (copy) the contents of register m to register n +- `J m n q`: If registers m and n have equal contents, jump to instruction q; + otherwise proceed to the next instruction +-/ +@[grind] +inductive Instr : Type where + | Z : ℕ → Instr + | S : ℕ → Instr + | T : ℕ → ℕ → Instr + | J : ℕ → ℕ → ℕ → Instr +deriving DecidableEq, Repr + +namespace Instr + +/-- The registers read by an instruction. -/ +@[scoped grind =] +def readsFrom : Instr → Finset ℕ + | Z _ => ∅ + | S n => {n} + | T m _ => {m} + | J m n _ => {m, n} + +/-- The register written to by an instruction, if any. -/ +@[scoped grind =] +def writesTo : Instr → Option ℕ + | Z n => some n + | S n => some n + | T _ n => some n + | J _ _ _ => none + +/-- The maximum register index referenced by an instruction. -/ +@[scoped grind =] +def maxRegister : Instr → ℕ + | Z n => n + | S n => n + | T m n => max m n + | J m n _ => max m n + +/-- Shift all jump targets in an instruction by `offset`. +Used when concatenating programs to maintain correct jump destinations. -/ +@[scoped grind =] +def shiftJumps (offset : ℕ) : Instr → Instr + | Z n => Z n + | S n => S n + | T m n => T m n + | J m n q => J m n (q + offset) + +/-- Shift all register references in an instruction by `offset`. +Used to isolate register usage when composing programs. -/ +@[scoped grind =] +def shiftRegisters (offset : ℕ) : Instr → Instr + | Z n => Z (n + offset) + | S n => S (n + offset) + | T m n => T (m + offset) (n + offset) + | J m n q => J (m + offset) (n + offset) q + +end Instr + +/-! ## Register Contents -/ + +/-- Register contents: maps register indices to natural number contents. + +Uses the functional representation `ℕ → ℕ` for efficiency with rewrites, +following the advice from the `grind` tactic documentation. -/ +abbrev Regs := ℕ → ℕ + +namespace Regs + +/-- The zero registers where all registers contain 0. 
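+
+For example (illustrative; the equality holds by definitional unfolding of `read`,
+defined below):
+```lean
+example : Regs.zero.read 42 = 0 := rfl
+```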
-/ +@[scoped grind =] +def zero : Regs := fun _ => 0 + +/-- Read the contents of register n. -/ +@[scoped grind =] +def read (σ : Regs) (n : ℕ) : ℕ := σ n + +/-- Write value v to register n. -/ +@[scoped grind =] +def write (σ : Regs) (n : ℕ) (v : ℕ) : Regs := Function.update σ n v + +/-- Initialize registers with input values in registers 0, 1, ..., k-1. +Registers beyond the inputs are initialized to 0. -/ +@[scoped grind =] +def ofInputs (inputs : List ℕ) : Regs := fun n => inputs.getD n 0 + +/-- Extract output from register 0. -/ +@[scoped grind =] +def output (σ : Regs) : ℕ := σ 0 + +end Regs + +/-! ## Programs -/ + +/-- A URM program is a list of instructions. -/ +abbrev Program := List Instr + +namespace Program + +/-- The maximum register index referenced by any instruction in the program. -/ +@[scoped grind =] +def maxRegister (p : Program) : ℕ := + p.foldl (fun acc instr => max acc instr.maxRegister) 0 + +/-- Shift all jump targets in a program by `offset`. +Used when concatenating programs: the second program's jumps must be adjusted +by the length of the first program. -/ +@[scoped grind =] +def shiftJumps (p : Program) (offset : ℕ) : Program := + p.map (Instr.shiftJumps offset) + +/-- Shift all register references in a program by `offset`. +Used to isolate register usage when composing programs. -/ +@[scoped grind =] +def shiftRegisters (p : Program) (offset : ℕ) : Program := + p.map (Instr.shiftRegisters offset) + +end Program + +/-! ## Machine State -/ + +/-- Machine state: program counter (0-indexed) and register contents. -/ +structure State where + /-- Program counter (0-indexed). -/ + pc : ℕ + /-- Register contents. -/ + regs : Regs + +namespace State + +/-- Initial state for a program with given inputs. +The program counter starts at 0, and inputs are loaded into registers 0, 1, .... 
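+
+For example (illustrative; both sides reduce definitionally, with unlisted registers
+defaulting to `0`):
+```lean
+example : (State.init [3, 5]).regs.read 1 = 5 := rfl
+example : (State.init [3, 5]).regs.read 7 = 0 := rfl
+```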
-/ +@[scoped grind =] +def init (inputs : List ℕ) : State := ⟨0, Regs.ofInputs inputs⟩ + +/-- A state is halted if the program counter is at or beyond the program length. -/ +@[scoped grind =] +def isHalted (s : State) (p : Program) : Prop := p.length ≤ s.pc + +instance (s : State) (p : Program) : Decidable (s.isHalted p) := + inferInstanceAs (Decidable (p.length ≤ s.pc)) + +instance : Inhabited State := ⟨init []⟩ + +instance : Repr State where + reprPrec s _ := s!"State(pc={s.pc})" + +end State + +end Cslib.URM + +end diff --git a/Cslib/Computability/URM/Execution.lean b/Cslib/Computability/URM/Execution.lean new file mode 100644 index 000000000..d7ed02756 --- /dev/null +++ b/Cslib/Computability/URM/Execution.lean @@ -0,0 +1,266 @@ +/- +Copyright (c) 2026 Jesse Alama. All rights reserved. +Released under Apache 2.0 license as described in the file LICENSE. +Authors: Jesse Alama +-/ +module + +public import Cslib.Computability.URM.Basic +public import Cslib.Foundations.Data.Relation +public import Mathlib.Data.Part +public import Mathlib.Data.Setoid.Basic + +/-! # URM Execution Semantics + +Single-step and multi-step execution semantics for URMs. 
+ +## Main definitions + +- `URM.Step`: Single-step execution relation +- `URM.Steps`: Multi-step execution (reflexive-transitive closure of `Step`) +- `URM.Halts`: A program halts on given inputs +- `URM.Diverges`: A program diverges on given inputs +- `URM.HaltsWithResult`: A program halts on given inputs with a specific result + +Bridge lemmas: +- `isHalted_iff_normal`: `s.isHalted p ↔ Relation.Normal (Step p) s` +- `halts_iff_normalizable`: `Halts p inputs ↔ Relation.Normalizable (Step p) (State.init inputs)` +- `step_confluent`: The step relation is confluent (follows from determinism) + +## Notation (scoped to `URM` namespace) + +Standard computability theory notation: +- `p ↓ inputs` — program `p` halts on inputs +- `p ↑ inputs` — program `p` diverges on inputs +- `p ↓ inputs ≫ result` — program `p` halts on inputs with result in R[0] + +## Main results + +- `Step.deterministic`: The step relation is deterministic +- `step_confluent`: The step relation is confluent (from determinism) +- `haltsWithResult_iff_eval`: `p ↓ inputs ≫ result ↔ eval p inputs = Part.some result` +-/ + +@[expose] public section + +namespace Cslib.URM + +variable (p : Program) + +/-- Single-step execution relation for URMs. + +Each constructor corresponds to one of the four instruction types: +- `zero`: Execute `Z n` (set register n to 0) +- `succ`: Execute `S n` (increment register n) +- `transfer`: Execute `T m n` (copy register m to register n) +- `jump_eq`: Execute `J m n q` when registers m and n are equal (jump to q) +- `jump_ne`: Execute `J m n q` when registers m and n differ (proceed to next) +-/ +@[grind] +inductive Step : State → State → Prop where + | zero {s : State} {n : ℕ} + (h : p[s.pc]? = some (Instr.Z n)) : + Step s ⟨s.pc + 1, s.regs.write n 0⟩ + | succ {s : State} {n : ℕ} + (h : p[s.pc]? = some (Instr.S n)) : + Step s ⟨s.pc + 1, s.regs.write n (s.regs.read n + 1)⟩ + | transfer {s : State} {m n : ℕ} + (h : p[s.pc]? 
= some (Instr.T m n)) : + Step s ⟨s.pc + 1, s.regs.write n (s.regs.read m)⟩ + | jump_eq {s : State} {m n q : ℕ} + (h : p[s.pc]? = some (Instr.J m n q)) + (heq : s.regs.read m = s.regs.read n) : + Step s ⟨q, s.regs⟩ + | jump_ne {s : State} {m n q : ℕ} + (h : p[s.pc]? = some (Instr.J m n q)) + (hne : s.regs.read m ≠ s.regs.read n) : + Step s ⟨s.pc + 1, s.regs⟩ + +/-- Multi-step execution: the reflexive-transitive closure of `Step`. -/ +abbrev Steps : State → State → Prop := Relation.ReflTransGen (Step p) + +namespace Step + +variable {p : Program} + +/-- The step relation is deterministic: each state has at most one successor. -/ +theorem deterministic : Relator.RightUnique (Step p) := by grind [Relator.RightUnique] + +/-- A halted state has no successor in the step relation. -/ +theorem no_step_of_halted {s s' : State} (hhalted : s.isHalted p) : ¬Step p s s' := by + grind [State.isHalted] + +/-- A single step preserves registers not written to by the current instruction. + +This is a fundamental property of URM execution: each instruction modifies at most +one register (Z, S, T write to one register; J writes to none). -/ +theorem preserves_register {s s' : State} {r : ℕ} + (hstep : Step p s s') + (hr : ∀ instr, p[s.pc]? = some instr → instr.writesTo ≠ some r) : + s'.regs.read r = s.regs.read r := by + cases hstep with + | zero hinstr | succ hinstr | transfer hinstr => + have := hr _ hinstr + simp only [Instr.writesTo, ne_eq, Option.some.injEq] at this + exact Function.update_of_ne (Ne.symm this) _ _ + | jump_eq _ _ | jump_ne _ _ => rfl + +end Step + +/-! ### Step Properties -/ + +/-- A state is halted iff it is normal (has no successor) in the reduction system. -/ +theorem isHalted_iff_normal {p : Program} {s : State} : + s.isHalted p ↔ Relation.Normal (Step p) s := by + constructor + · intro hhalted ⟨s', hstep⟩ + exact Step.no_step_of_halted hhalted hstep + · intro hnormal + -- If not halted, then p[s.pc]? 
= some instr for some instruction + by_contra hnothalted + simp only [State.isHalted, not_le] at hnothalted + have hlt : s.pc < p.length := hnothalted + have hinstr : p[s.pc]? = some p[s.pc] := List.getElem?_eq_getElem hlt + -- Any instruction can step, contradicting hnormal + cases hp : p[s.pc] with + | Z n => exact hnormal ⟨_, Step.zero (hp ▸ hinstr)⟩ + | S n => exact hnormal ⟨_, Step.succ (hp ▸ hinstr)⟩ + | T m n => exact hnormal ⟨_, Step.transfer (hp ▸ hinstr)⟩ + | J m n q => + by_cases heq : s.regs.read m = s.regs.read n + · exact hnormal ⟨_, Step.jump_eq (hp ▸ hinstr) heq⟩ + · exact hnormal ⟨_, Step.jump_ne (hp ▸ hinstr) heq⟩ + +/-- The step relation is confluent. -/ +theorem step_confluent (p : Program) : Relation.Confluent (Step p) := by + apply Relation.RightUnique.toConfluent + exact Step.deterministic + +namespace Steps + +variable {p : Program} + +/-- Multi-step execution preserves registers not written by any executed instruction. -/ +theorem preserves_register {s s' : State} {r : ℕ} + (hsteps : Steps p s s') + (hr : ∀ instr, instr ∈ p → instr.writesTo ≠ some r) : + s'.regs.read r = s.regs.read r := by + induction hsteps using Relation.ReflTransGen.head_induction_on with + | refl => rfl + | head => grind [Step.preserves_register] + +/-- If two halted states are reachable from the same start, they are equal. + +This follows from confluence: since `Step p` is confluent and both `s₁` and `s₂` +are normal forms reachable from `init`, they must be equal. 
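+
+For illustration, a typical invocation looks like this (a sketch, not part of the
+verified development):
+
+```lean
+example {p : Program} {inputs : List ℕ} {s₁ s₂ : State}
+    (h1 : Steps p (State.init inputs) s₁) (hh1 : s₁.isHalted p)
+    (h2 : Steps p (State.init inputs) s₂) (hh2 : s₂.isHalted p) :
+    s₁.regs.output = s₂.regs.output := by
+  rw [Steps.eq_of_halts h1 hh1 h2 hh2]
+```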
-/ +theorem eq_of_halts {init s₁ s₂ : State} + (h1 : Steps p init s₁) (hh1 : s₁.isHalted p) + (h2 : Steps p init s₂) (hh2 : s₂.isHalted p) : s₁ = s₂ := by + -- Use confluence: both s₁ and s₂ are reachable from init, so they're joinable + have ⟨w, hw1, hw2⟩ := step_confluent p h1 h2 + -- But s₁ and s₂ are normal forms, so w must equal both + have hn1 := isHalted_iff_normal.mp hh1 + have hn2 := isHalted_iff_normal.mp hh2 + grind + +end Steps + +/-- A program halts on given inputs if execution reaches a halted state. + +This is equivalent to `(Step p).Normalizable (State.init inputs)` — +see `halts_iff_normalizable`. -/ +def Halts (inputs : List ℕ) : Prop := + ∃ s, Steps p (State.init inputs) s ∧ s.isHalted p + +/-- Halting is equivalent to normalizability in the reduction system. -/ +theorem halts_iff_normalizable {p : Program} {inputs : List ℕ} : + Halts p inputs ↔ Relation.Normalizable (Step p) (State.init inputs) := by + grind [Halts, isHalted_iff_normal] + +/-- A program diverges on given inputs if it does not halt. -/ +def Diverges (inputs : List ℕ) : Prop := ¬Halts p inputs + +/-- A program halts on given inputs with a specific result in R[0]. -/ +def HaltsWithResult (inputs : List ℕ) (result : ℕ) : Prop := + ∃ s, Steps p (State.init inputs) s ∧ s.isHalted p ∧ s.regs.output = result + +/-- Notation for halting: `p ↓ inputs` means program `p` halts on `inputs`. -/ +scoped notation:50 p " ↓ " inputs:51 => Halts p inputs +/-- Notation for divergence: `p ↑ inputs` means program `p` diverges on `inputs`. -/ +scoped notation:50 p " ↑ " inputs:51 => Diverges p inputs +/-- Notation for halting with result: `p ↓ inputs ≫ result` means program `p` halts on `inputs` +with `result` in R[0]. -/ +scoped notation:50 p " ↓ " inputs:51 " ≫ " result:51 => HaltsWithResult p inputs result + +namespace HaltsWithResult + +variable {p : Program} + +/-- If a program halts with a result, it halts. 
-/ +theorem toHalts {inputs : List ℕ} {result : ℕ} + (h : p ↓ inputs ≫ result) : p ↓ inputs := + let ⟨s, hsteps, hhalted, _⟩ := h + ⟨s, hsteps, hhalted⟩ + +end HaltsWithResult + +/-- Evaluation returning the full halting state. -/ +noncomputable def evalState (inputs : List ℕ) : Part State := + ⟨Halts p inputs, fun h => Classical.choose h⟩ + +/-- Specification: the state from evalState satisfies Steps and isHalted. -/ +theorem evalState_spec {inputs : List ℕ} (h : (evalState p inputs).Dom) : + let s := (evalState p inputs).get h + Steps p (State.init inputs) s ∧ s.isHalted p := + Classical.choose_spec h + +namespace Halts + +variable {p : Program} + +/-- If a program halts, it halts with the output of the final state. -/ +theorem toHaltsWithResult {inputs : List ℕ} (h : p ↓ inputs) : + p ↓ inputs ≫ ((evalState p inputs).get h).regs.output := + let s := (evalState p inputs).get h + let ⟨hsteps, hhalted⟩ := evalState_spec p h + ⟨s, hsteps, hhalted, rfl⟩ + +end Halts + +/-- Evaluation as a partial function using `Part`. +Defined when the program halts, returning the value in register 0. -/ +noncomputable def eval (inputs : List ℕ) : Part ℕ := + (evalState p inputs).map (·.regs.output) + +/-- A program halts with result `a` iff evaluation returns `Part.some a`. 
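+
+For example, the forward direction can be used as follows (an illustrative sketch):
+
+```lean
+example {p : Program} {inputs : List ℕ} {n : ℕ}
+    (h : p ↓ inputs ≫ n) : eval p inputs = Part.some n :=
+  haltsWithResult_iff_eval.mp h
+```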
-/ +theorem haltsWithResult_iff_eval {inputs : List ℕ} {result : ℕ} : + p ↓ inputs ≫ result ↔ eval p inputs = Part.some result := by + rw [Part.eq_some_iff, eval, Part.mem_map_iff] + constructor + · intro ⟨s, hsteps, hhalted, hresult⟩ + have hhalts : Halts p inputs := ⟨s, hsteps, hhalted⟩ + have heq : (evalState p inputs).get hhalts = s := + Steps.eq_of_halts (evalState_spec p hhalts).1 (evalState_spec p hhalts).2 hsteps hhalted + exact ⟨(evalState p inputs).get hhalts, Part.get_mem hhalts, heq ▸ hresult⟩ + · intro ⟨s, hs_mem, hresult⟩ + rw [Part.mem_eq] at hs_mem + obtain ⟨hdom, hget⟩ := hs_mem + have hspec := evalState_spec p hdom + exact ⟨s, hget ▸ hspec.1, hget ▸ hspec.2, hresult⟩ + +/-- Two programs are equivalent if they produce the same result for all inputs. -/ +def ProgramEquiv (p q : Program) : Prop := + ∀ inputs : List ℕ, eval p inputs = eval q inputs + +/-- Program equivalence is an equivalence relation. -/ +theorem ProgramEquiv.equivalence : Equivalence ProgramEquiv where + refl := fun _ _ => rfl + symm := fun h inputs => (h inputs).symm + trans := fun h₁ h₂ inputs => (h₁ inputs).trans (h₂ inputs) + +/-- Setoid instance for programs, enabling the ≈ notation. -/ +instance : Setoid Program := Setoid.mk _ ProgramEquiv.equivalence + +end Cslib.URM + +end diff --git a/Cslib/Computability/URM/StandardForm.lean b/Cslib/Computability/URM/StandardForm.lean new file mode 100644 index 000000000..ff373adc3 --- /dev/null +++ b/Cslib/Computability/URM/StandardForm.lean @@ -0,0 +1,251 @@ +/- +Copyright (c) 2026 Jesse Alama. All rights reserved. +Released under Apache 2.0 license as described in the file LICENSE. +Authors: Jesse Alama +-/ +module + +public import Cslib.Computability.URM.StraightLine + +/-! # Standard Form Programs + +This file defines standard-form programs (those with bounded jump targets) +and proves their execution properties. 
+ +## Main definitions + +- `Program.IsStandardForm`: all jump targets are bounded by program length +- `Program.toStandardForm`: convert a program to standard form + +## Main results + +- `straight_line_IsStandardForm`: straight-line programs are standard form +- `Halts.toStandardForm_iff`: halting equivalence with normalized programs +-/ + +@[expose] public section + +namespace Cslib.URM + +/-! ## Standard Form Definitions -/ + +namespace Program + +/-- A program is in standard form if all jump targets are bounded by the program length. +Jumps can target any instruction (0..length-1) or the "virtual halt" position (length). -/ +def IsStandardForm (p : Program) : Prop := + ∀ instr ∈ p, instr.JumpsBoundedBy p.length + +instance (p : Program) : Decidable p.IsStandardForm := + inferInstanceAs (Decidable (∀ instr ∈ p, instr.JumpsBoundedBy p.length)) + +/-- Convert a program to standard form by capping all jump targets at the program length. -/ +def toStandardForm (p : Program) : Program := + p.map (Instr.capJump p.length) + +/-- toStandardForm preserves program length. -/ +@[simp] +theorem toStandardForm_length (p : Program) : + p.toStandardForm.length = p.length := by + simp [toStandardForm] + +/-- toStandardForm produces a standard form program. -/ +theorem toStandardForm_isStandardForm (p : Program) : + p.toStandardForm.IsStandardForm := by + unfold IsStandardForm toStandardForm + rw [List.length_map] + intro instr hinstr + obtain ⟨orig, _, rfl⟩ := List.mem_map.mp hinstr + exact Instr.JumpsBoundedBy.capJump p.length orig + +/-- Accessing an instruction in toStandardForm gives the capJump'd instruction. -/ +theorem getElem?_toStandardForm (p : Program) (i : ℕ) : + p.toStandardForm[i]? = (p[i]?).map (Instr.capJump p.length) := by + simp only [toStandardForm, List.getElem?_map] + +/-- toStandardForm is idempotent: applying it twice equals applying it once. 
-/ +@[simp] +theorem toStandardForm_idempotent (p : Program) : + p.toStandardForm.toStandardForm = p.toStandardForm := by + simp only [toStandardForm, List.length_map, List.map_map] + congr 1 + funext instr + exact Instr.capJump_idempotent p.length instr + +end Program + +/-! ## Standard Form Properties -/ + +/-- Straight-line programs are in standard form. -/ +theorem straight_line_IsStandardForm {p : Program} (hsl : p.IsStraightLine) : + p.IsStandardForm := by + intro instr hinstr + exact Instr.jumpsBoundedBy_of_nonJump (hsl instr hinstr) p.length + +/-! ## Bisimulation + +`p` and `p.toStandardForm` are bisimilar: each step in one program corresponds to a +step in the other that either reaches the same state or reaches a halted state with the +same registers. This bisimulation implies functional equivalence (`eval_toStandardForm`). +The key insight is that jumps with target `q > p.length` land in halted states in both +programs. -/ + +/-- Forward step correspondence: if p steps from s to s', then either: + (1) p.toStandardForm steps from s to s' (same step), or + (2) s' is halted in p, and p.toStandardForm steps to a state that is also halted + with the same registers (this only happens for jumps with unbounded targets). -/ +theorem Step.toStandardForm {p : Program} {s s' : State} (hstep : Step p s s') : + Step p.toStandardForm s s' ∨ + (s'.isHalted p ∧ ∃ s₂, Step p.toStandardForm s s₂ ∧ + s₂.isHalted p.toStandardForm ∧ s'.regs = s₂.regs) := by + cases hstep with + | zero hinstr => + left + exact Step.zero (by simp [Program.getElem?_toStandardForm, hinstr]) + | succ hinstr => + left + exact Step.succ (by simp [Program.getElem?_toStandardForm, hinstr]) + | transfer hinstr => + left + exact Step.transfer (by simp [Program.getElem?_toStandardForm, hinstr]) + | @jump_ne m n q hinstr hne => + left + have hcap : p.toStandardForm[s.pc]? 
= some (Instr.J m n (min q p.length)) := by + simp [Program.getElem?_toStandardForm, hinstr] + exact Step.jump_ne hcap hne + | @jump_eq m n q hinstr heq => + have (x : ℕ) (h : min q p.length = x) : p.toStandardForm[s.pc]? = some (Instr.J m n x) := by + grind [Program.getElem?_toStandardForm, Instr.capJump] + by_cases q ≤ p.length + · grind [Step.jump_eq] + · right + split_ands + · grind [State.isHalted] + · use ⟨p.length, s.regs⟩ + grind [State.isHalted, Program.toStandardForm_length] + +/-- Forward halting: if p reaches a halted state, p.toStandardForm reaches a halted state + with the same registers. -/ +theorem Steps.toStandardForm_halts {p : Program} {s s' : State} + (hsteps : Steps p s s') (hhalted : s'.isHalted p) : + ∃ s₂, Steps p.toStandardForm s s₂ ∧ + s₂.isHalted p.toStandardForm ∧ s'.regs = s₂.regs := by + induction hsteps using Relation.ReflTransGen.head_induction_on with + | refl => + refine ⟨s', .refl, ?_, rfl⟩ + simp only [State.isHalted, Program.toStandardForm_length] + exact hhalted + | head hstep hrest ih => + rcases Step.toStandardForm hstep with + hsame | ⟨hhalted_mid, s_mid, hstep_mid, hhalted_mid', hregs_eq⟩ + · obtain ⟨s₂, hsteps₂, hhalted₂, hregs_eq⟩ := ih + exact ⟨s₂, .trans (.single hsame) hsteps₂, hhalted₂, hregs_eq⟩ + · grind [Steps.eq_of_halts .refl hhalted_mid hrest hhalted] + +/-- Forward halting theorem. -/ +theorem Halts.toStandardForm {p : Program} {inputs : List ℕ} (h : Halts p inputs) : + Halts p.toStandardForm inputs := by + obtain ⟨s, hsteps, hhalted⟩ := h + obtain ⟨s₂, hsteps₂, hhalted₂, _⟩ := Steps.toStandardForm_halts hsteps hhalted + exact ⟨s₂, hsteps₂, hhalted₂⟩ + +/-- Reverse step correspondence: if p.toStandardForm steps from s to s', then either: + (1) p steps from s to s' (same step), or + (2) s' is halted in p.toStandardForm, and p steps to a state that is also halted + with the same registers (this only happens for jumps with unbounded targets). 
-/ +theorem Step.from_toStandardForm {p : Program} {s s' : State} + (hstep : Step p.toStandardForm s s') : + Step p s s' ∨ + (s'.isHalted p.toStandardForm ∧ ∃ s₂, Step p s s₂ ∧ + s₂.isHalted p ∧ s'.regs = s₂.regs) := by + cases hstep with + | zero hinstr | succ hinstr | transfer hinstr | jump_ne hinstr _ => + left + rw [Program.getElem?_toStandardForm] at hinstr + simp only [Option.map_eq_some_iff] at hinstr + obtain ⟨instr, hinstr', hcap⟩ := hinstr + cases instr <;> simp only [Instr.capJump] at hcap + all_goals grind + | jump_eq hinstr heq => + rw [Program.getElem?_toStandardForm] at hinstr + simp only [Option.map_eq_some_iff] at hinstr + obtain ⟨instr, hinstr', hcap⟩ := hinstr + cases instr with + | Z _ | S _ | T _ _ => simp at hcap + | J m' n' q' => + simp only [Instr.capJump, Instr.J.injEq] at hcap + obtain ⟨rfl, rfl, htarget⟩ := hcap + by_cases hbounded : q' ≤ p.length + · simp only [Nat.min_eq_left hbounded] at htarget + subst htarget + left + grind + · simp only [Nat.min_eq_right (Nat.le_of_not_le hbounded)] at htarget + subst htarget + right + refine ⟨?_, ⟨q', s.regs⟩, Step.jump_eq hinstr' heq, ?_, rfl⟩ + · grind [State.isHalted, Program.toStandardForm_length] + · grind [State.isHalted] + +/-- Reverse halting: if p.toStandardForm reaches a halted state, p reaches a halted state + with the same registers. 
-/
+theorem Steps.from_toStandardForm_halts {p : Program} {s s' : State}
+    (hsteps : Steps p.toStandardForm s s') (hhalted : s'.isHalted p.toStandardForm) :
+    ∃ s₂, Steps p s s₂ ∧ s₂.isHalted p ∧ s'.regs = s₂.regs := by
+  induction hsteps using Relation.ReflTransGen.head_induction_on with
+  | refl =>
+    refine ⟨s', by rfl, ?_, rfl⟩
+    simp only [State.isHalted, Program.toStandardForm_length] at hhalted ⊢
+    exact hhalted
+  | head hstep hrest ih =>
+    rcases Step.from_toStandardForm hstep with
+      hsame | ⟨hhalted_mid, s_mid, hstep_mid, hhalted_mid', hregs_eq⟩
+    · obtain ⟨s₂, hsteps₂, hhalted₂, hregs_eq⟩ := ih
+      exact ⟨s₂, .trans (.single hsame) hsteps₂, hhalted₂, hregs_eq⟩
+    · rename_i s_next
+      have hrest_trivial : s_next = s' := Steps.eq_of_halts .refl hhalted_mid hrest hhalted
+      subst hrest_trivial
+      exact ⟨s_mid, .single hstep_mid, hhalted_mid', hregs_eq⟩
+
+/-- Reverse halting theorem. -/
+theorem Halts.of_toStandardForm {p : Program} {inputs : List ℕ}
+    (h : Halts p.toStandardForm inputs) : Halts p inputs := by
+  obtain ⟨s, hsteps, hhalted⟩ := h
+  obtain ⟨s₂, hsteps₂, hhalted₂, _⟩ := Steps.from_toStandardForm_halts hsteps hhalted
+  exact ⟨s₂, hsteps₂, hhalted₂⟩
+
+/-- Halting equivalence: original halts iff standard form halts. -/
+theorem Halts.toStandardForm_iff {p : Program} {inputs : List ℕ} :
+    Halts p inputs ↔ Halts p.toStandardForm inputs :=
+  ⟨Halts.toStandardForm, Halts.of_toStandardForm⟩
+
+/-! ### eval Preservation -/
+
+/-- Register preservation: `p` and `p.toStandardForm` halt in states with the same registers. 
-/ +theorem evalState_toStandardForm_regs {p : Program} {inputs : List ℕ} + (hp : (evalState p inputs).Dom) (hq : (evalState p.toStandardForm inputs).Dom) : + ((evalState p inputs).get hp).regs = + ((evalState p.toStandardForm inputs).get hq).regs := by + have ⟨hsteps, hhalted⟩ := evalState_spec p hp + have ⟨hsteps', hhalted'⟩ := evalState_spec p.toStandardForm hq + obtain ⟨s₂, hsteps₂, hhalted₂, hregs_eq⟩ := Steps.toStandardForm_halts hsteps hhalted + rw [Steps.eq_of_halts hsteps' hhalted' hsteps₂ hhalted₂, hregs_eq] + +/-- eval equality: both programs produce the same partial result. -/ +theorem eval_toStandardForm {p : Program} {inputs : List ℕ} : + eval p inputs = eval p.toStandardForm inputs := by + simp only [eval] + apply Part.ext' + · simp only [Part.map_Dom] + exact Halts.toStandardForm_iff + · intro hp hq + simp only [Part.map_get, Function.comp_apply, Regs.output, + evalState_toStandardForm_regs hp hq] + +/-- A program is equivalent to its standard form. -/ +theorem toStandardForm_equiv (p : Program) : p.toStandardForm ≈ p := + fun _ => eval_toStandardForm.symm + +end Cslib.URM + +end diff --git a/Cslib/Computability/URM/StraightLine.lean b/Cslib/Computability/URM/StraightLine.lean new file mode 100644 index 000000000..b6730a794 --- /dev/null +++ b/Cslib/Computability/URM/StraightLine.lean @@ -0,0 +1,135 @@ +/- +Copyright (c) 2026 Jesse Alama. All rights reserved. +Released under Apache 2.0 license as described in the file LICENSE. +Authors: Jesse Alama +-/ +module + +public import Cslib.Computability.URM.Execution + +/-! # Straight-Line Programs + +This file defines straight-line programs (those without jumps) and proves +they always halt exactly at their length. 
+ +## Main definitions + +- `Program.IsStraightLine`: a program contains no jump instructions + +## Main results + +- `straight_line_halts`: straight-line programs always halt +- `straightLine_finalState`: final state after running a straight-line program +-/ + +@[expose] public section + +namespace Cslib.URM + +/-! ## Straight-Line Programs -/ + +/-- A program is "straight-line" if it contains no jump instructions. -/ +def Program.IsStraightLine (p : Program) : Prop := + ∀ i ∈ p, ¬i.IsJump + +instance (p : Program) : Decidable p.IsStraightLine := + inferInstanceAs (Decidable (∀ i ∈ p, ¬i.IsJump)) + +/-- Append preserves straight-line property. -/ +theorem Program.IsStraightLine.append {p q : Program} + (hp : p.IsStraightLine) (hq : q.IsStraightLine) : + (p ++ q).IsStraightLine := by + intro i hi + simp only [List.mem_append] at hi + rcases hi with hi | hi <;> [exact hp i hi; exact hq i hi] + +/-- Cons of non-jumping instruction preserves straight-line. -/ +theorem Program.IsStraightLine.cons {instr : Instr} {p : Program} + (hinstr : ¬instr.IsJump) (hp : p.IsStraightLine) : + Program.IsStraightLine (instr :: p) := by + intro i hi + simp only [List.mem_cons] at hi + rcases hi with rfl | hi <;> [exact hinstr; exact hp i hi] + +/-- Singleton non-jumping instruction is straight-line. -/ +theorem Program.IsStraightLine.singleton {instr : Instr} + (h : ¬instr.IsJump) : Program.IsStraightLine [instr] := by + intro i hi + simp only [List.mem_singleton] at hi + exact hi ▸ h + +/-! ## Straight-Line Program Execution -/ + +/-- A non-jumping instruction produces a step that increments PC by 1. 
-/ +theorem Step.of_nonJump {p : Program} {s : State} (hlt : s.pc < p.length) + (hnonjump : ¬(p[s.pc]'hlt).IsJump) : + ∃ s', Step p s s' ∧ s'.pc = s.pc + 1 := by + cases hp : (p[s.pc]'hlt) with + | Z n => + use {pc := s.pc + 1, regs := s.regs.write n 0} + grind + | S n => + use { pc := s.pc + 1, regs := s.regs.write n (s.regs.read n + 1) } + grind + | T m n => + use { pc := s.pc + 1, regs := s.regs.write n (s.regs.read m) } + grind + | J _ _ _ => grind [Instr.IsJump] + +/-- Straight-line programs halt from any starting registers, not just State.init. +Useful for chaining: after running one program, we can run the next +straight-line segment from whatever registers we're in. -/ +theorem straight_line_halts_from_regs {p : Program} (hsl : p.IsStraightLine) (r : Regs) : + ∃ s, Steps p ⟨0, r⟩ s ∧ s.isHalted p ∧ s.pc = p.length := by + suffices h : ∀ s : State, s.pc ≤ p.length → ∃ s', Steps p s s' ∧ s'.pc = p.length by + obtain ⟨s', hsteps, hpc'⟩ := h ⟨0, r⟩ (Nat.zero_le _) + exact ⟨s', hsteps, Nat.le_of_eq hpc'.symm, hpc'⟩ + intro s hpc_le + generalize hrem : p.length - s.pc = remaining + induction remaining using Nat.strong_induction_on generalizing s + by_cases hhalted : s.pc ≥ p.length + · grind + · grind [Program.IsStraightLine, Step.of_nonJump, Relation.ReflTransGen.head] + +/-- A straight-line program halts on any input. -/ +theorem straight_line_halts {p : Program} (hsl : p.IsStraightLine) (inputs : List ℕ) : + Halts p inputs := by + obtain ⟨s, hsteps, hhalted, _⟩ := straight_line_halts_from_regs hsl (Regs.ofInputs inputs) + exact ⟨s, hsteps, hhalted⟩ + +/-- The halting state for a straight-line program starting from registers r. +Wraps Classical.choose to hide it from the API. -/ +noncomputable def straightLine_finalState {p : Program} + (hsl : p.IsStraightLine) (r : Regs) : State := + Classical.choose (straight_line_halts_from_regs hsl r) + +/-- Specification: the state from straightLine_finalState satisfies Steps, isHalted, +and has pc = p.length. 
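+
+For example, the final `pc` can be read off directly (an illustrative sketch):
+
+```lean
+example {p : Program} (hsl : p.IsStraightLine) (r : Regs) :
+    (straightLine_finalState hsl r).pc = p.length :=
+  (straightLine_finalState_spec hsl r).2.2
+```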
-/ +theorem straightLine_finalState_spec {p : Program} (hsl : p.IsStraightLine) (r : Regs) : + let s := straightLine_finalState hsl r + Steps p ⟨0, r⟩ s ∧ s.isHalted p ∧ s.pc = p.length := + Classical.choose_spec (straight_line_halts_from_regs hsl r) + +/-- The final registers after running a straight-line program from given starting registers. -/ +noncomputable def straightLine_finalRegs {p : Program} (hsl : p.IsStraightLine) (r : Regs) : Regs := + (straightLine_finalState hsl r).regs + +/-- For a straight-line program, s.regs equals straightLine_finalRegs if halted from r. -/ +theorem straightLine_finalRegs_eq_of_halted {p : Program} (hsl : p.IsStraightLine) + (r : Regs) (s : State) (hsteps : Steps p ⟨0, r⟩ s) (hhalted : s.isHalted p) : + s.regs = straightLine_finalRegs hsl r := + Steps.eq_of_halts hsteps hhalted (straightLine_finalState_spec hsl r).1 + (straightLine_finalState_spec hsl r).2.1 ▸ rfl + +/-- In a straight-line program, we can characterize the state at any intermediate pc. +This gives us the state after executing instructions 0..pc-1. -/ +theorem straight_line_state_at_pc {p : Program} (hsl : p.IsStraightLine) + (r : Regs) (targetPc : ℕ) (htarget : targetPc ≤ p.length) : + ∃ s, Steps p ⟨0, r⟩ s ∧ s.pc = targetPc := by + induction targetPc with + | zero => exact ⟨⟨0, r⟩, Relation.ReflTransGen.refl, rfl⟩ + | succ n ih => grind [Step.of_nonJump, Program.IsStraightLine] + +end Cslib.URM + +end diff --git a/Cslib/Foundations/Combinatorics/InfiniteGraphRamsey.lean b/Cslib/Foundations/Combinatorics/InfiniteGraphRamsey.lean new file mode 100644 index 000000000..f778a39c7 --- /dev/null +++ b/Cslib/Foundations/Combinatorics/InfiniteGraphRamsey.lean @@ -0,0 +1,149 @@ +/- +Copyright (c) 2026 Ching-Tsun Chou. All rights reserved. +Released under Apache 2.0 license as described in the file LICENSE. 
+Authors: Ching-Tsun Chou +-/ + +module + +public import Cslib.Init +public import Mathlib.Algebra.Order.Group.Nat +public import Mathlib.Data.Fintype.Pigeonhole +public import Mathlib.Data.Set.Finite.Basic +public import Mathlib.Data.Set.Lattice + +@[expose] public section + +/-! # Ramsey theorem for infinite graphs + +This result really should be in Mathlib, but currently it is not. We do expect +the Ramsey theorem for infinite hypergraphs to appear in Mathlib eventually and +this result to be derived as a corollary of the more general result. +-/ + +namespace Cslib + +open Function Set + +/-- An infinite pigeonhole principle. -/ +theorem infinite_pigeonhole_principle {X Y : Type*} [Finite Y] (f : X → Y) {s : Set X} + (h_inf : s.Infinite) : ∃ y, ∃ t, t.Infinite ∧ t ⊆ s ∧ ∀ x ∈ t, f x = y := by + have := h_inf.to_subtype + obtain ⟨y, h_inf'⟩ := Finite.exists_infinite_fiber (s.restrict f) + have h_inf_iff := Equiv.infinite_iff <| + Equiv.subtypeSubtypeEquivSubtypeInter (· ∈ s) (fun x ↦ f x = y) + simp only [coe_eq_subtype, mem_preimage, restrict_apply, mem_singleton_iff, h_inf_iff] at h_inf' + have h_inf'' := (infinite_coe_iff (s := { x | x ∈ s ∧ f x = y })).mp h_inf' + use y, {x | x ∈ s ∧ f x = y} + grind + +/-- An `InfVSet` consists of a set of vertices and a proof that the set is infinite. -/ +private structure InfVSet (Vertex : Type*) where + /-- A set of vertices. -/ + set : Set Vertex + /-- A proof that `set` is infinite. -/ + inf : set.Infinite + +/-- A `Selection` consists of an `InfVSet`, a vertex, and a color. -/ +private structure Selection (Vertex Color : Type*) where + /-- An infinite set of vertices. -/ + vs : InfVSet Vertex + /-- A vertex. -/ + v : Vertex + /-- A color. 
-/ + c : Color + +variable {Vertex Color : Type*} [Finite Color] (color : Finset Vertex → Color) + +open scoped Classical in +/-- A "good selection" `S` selects an infinite subset `S.vs` of an infinite vertex set `ivs` and +a distinguished vertex `S.v` in `ivs` but not in `S.vs`, and makes sure that the edges between +`S.v` and all vertices in `S.vs` have the same color `S.c`. -/ +private def GoodSelection (ivs : InfVSet Vertex) (S : Selection Vertex Color) : Prop := + S.vs.set ⊆ ivs.set ∧ S.v ∈ ivs.set \ S.vs.set ∧ ∀ u ∈ S.vs.set, color {S.v, u} = S.c + +/-- Given any infinite vertex set, a good selection from it always exists. -/ +private lemma goodSelection_exists (ivs : InfVSet Vertex) : + ∃ S : Selection Vertex Color, GoodSelection color ivs S := by + classical + obtain ⟨v, h_v⟩ := Set.Infinite.nonempty ivs.inf + let f u := color {v, u} + obtain ⟨c, vs, h_inf, h_vs, h_col⟩ := infinite_pigeonhole_principle f <| + Set.Infinite.diff ivs.inf (finite_singleton v) + simp only [subset_diff] at h_vs + let ivs' := InfVSet.mk vs h_inf + use {vs := ivs', v := v, c := c} + grind [GoodSelection] + +variable [Infinite Vertex] + +/-- Starting from the infinite set of all vertices, inductively make an infinite sequence +of good selections. -/ +private noncomputable def goodSelection_seq : ℕ → Selection Vertex Color + | 0 => Classical.choose (goodSelection_exists color (InfVSet.mk univ infinite_univ)) + | n + 1 => Classical.choose (goodSelection_exists color (goodSelection_seq n).vs) + +/-- At every step, the `goodSelection_seq` makes a good selection and there are always +infinitely many vertices remaining to be selected. 
-/
+private lemma goodSelection_seq_prop (n : ℕ) :
+    ∃ ivs : InfVSet Vertex, GoodSelection color ivs (goodSelection_seq color n) ∧
+    (ivs.set = ⋂ m < n, (goodSelection_seq color m).vs.set) := by
+  induction n
+  case zero =>
+    use (InfVSet.mk univ infinite_univ)
+    simp
+    grind [goodSelection_seq]
+  case succ n h_ind =>
+    obtain ⟨_, _, h_eq⟩ := h_ind
+    use (goodSelection_seq color n).vs
+    constructor
+    · grind [goodSelection_seq]
+    · have h1 (m : ℕ) : m < n + 1 ↔ m < n ∨ m = n := by grind
+      simp [h1, iInter_or, iInter_inter_distrib, ← h_eq]
+      grind [GoodSelection]
+
+open scoped Classical in
+/-- There exist an infinite sequence `vs` of vertex sets, an infinite sequence `v` of vertices,
+and an infinite sequence `c` of colors such that each `vs n` is a subset of the intersection of
+all previous `vs m`s, each `v n` belongs to the intersection of all previous `vs m`s but not
+to `vs n`, and the edges between `v n` and all vertices in `vs n` have the same color `c n`. -/
+private lemma good_selections_exist :
+    ∃ vs : ℕ → Set Vertex, ∃ v : ℕ → Vertex, ∃ c : ℕ → Color,
+    ∀ n, vs n ⊆ (⋂ m < n, vs m) ∧ v n ∈ (⋂ m < n, vs m) \ (vs n) ∧
+    ∀ u ∈ vs n, color {v n, u} = c n := by
+  use (fun k ↦ (goodSelection_seq color k).vs.set)
+  use (fun k ↦ (goodSelection_seq color k).v)
+  use (fun k ↦ (goodSelection_seq color k).c)
+  intro n
+  obtain ⟨ivs, h_ivs, h_eq⟩ := goodSelection_seq_prop color n
+  rw [← h_eq]
+  exact h_ivs
+
+/-- If the edges of an infinite complete graph are assigned a finite number of colors,
+then there must exist a color `c` and an infinite set `s` of vertices such that the edge
+between any two vertices of `s` is assigned the same color `c`. 
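+
+For instance, specialised to two colors on the natural numbers (an illustrative
+sketch, with `Bool` playing the role of the color type):
+
+```lean
+example (color : Finset ℕ → Bool) :
+    ∃ c : Bool, ∃ s : Set ℕ, s.Infinite ∧
+      ∀ e : Finset ℕ, e.card = 2 → ↑e ⊆ s → color e = c :=
+  infinite_graph_ramsey color
+```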
-/ +theorem infinite_graph_ramsey : + ∃ c : Color, ∃ s : Set Vertex, s.Infinite ∧ + ∀ e : Finset Vertex, e.card = 2 → ↑e ⊆ s → color e = c := by + classical + obtain ⟨vs, v, c, h_sel⟩ := good_selections_exist color + simp only [forall_and] at h_sel + obtain ⟨h_vs, h_v, h_c⟩ := h_sel + have : ∀ m n, m < n → v n ∈ vs m := by + intro m n h_mn + suffices h1 : (⋂ m < n, vs m) ⊆ vs m by grind + exact biInter_subset_of_mem h_mn + obtain ⟨c', s', h_s'_inf, h_s'_col⟩ : + ∃ c' : Color, ∃ s' : Set ℕ, s'.Infinite ∧ ∀ n ∈ s', c n = c' := by + obtain ⟨c', s', h_s'_inf, _, h_s'col⟩ := infinite_pigeonhole_principle c infinite_univ + use c', s' + use c', (v '' s') + have h_v_inj : Injective v := by + intro _ _ + grind + split_ands + · exact Infinite.image (injOn_of_injective h_v_inj (s := s')) h_s'_inf + · simp only [Finset.card_eq_two] + grind [Finset.pair_comm, Finset.coe_insert, Finset.coe_singleton] + +end Cslib diff --git a/Cslib/Foundations/Control/Monad/Free.lean b/Cslib/Foundations/Control/Monad/Free.lean index 9cf40c322..d27476d40 100644 --- a/Cslib/Foundations/Control/Monad/Free.lean +++ b/Cslib/Foundations/Control/Monad/Free.lean @@ -96,7 +96,7 @@ variable {F : Type u → Type v} {ι : Type u} {α : Type w} {β : Type w'} {γ instance : Pure (FreeM F) where pure := .pure -@[simp] +@[simp, grind =] theorem pure_eq_pure : (pure : α → FreeM F α) = FreeM.pure := rfl /-- Bind operation for the `FreeM` monad. -/ @@ -115,7 +115,7 @@ protected theorem bind_assoc (x : FreeM F α) (f : α → FreeM F β) (g : β instance : Bind (FreeM F) where bind := .bind -@[simp] +@[simp, grind =] theorem bind_eq_bind {α β : Type w} : Bind.bind = (FreeM.bind : FreeM F α → _ → FreeM F β) := rfl /-- Map a function over a `FreeM` monad. -/ @@ -154,14 +154,21 @@ lemma map_lift (f : ι → α) (op : F ι) : map f (lift op : FreeM F ι) = liftBind op (fun z => (.pure (f z) : FreeM F α)) := rfl /-- `.pure a` followed by `bind` collapses immediately. 
-/ -@[simp] +@[simp, grind =] lemma pure_bind (a : α) (f : α → FreeM F β) : (.pure a : FreeM F α).bind f = f a := rfl -@[simp] +@[simp, grind =] +lemma pure_bind' {α β} (a : α) (f : α → FreeM F β) : (.pure a : FreeM F α) >>= f = f a := + pure_bind a f + +@[simp, grind =] lemma bind_pure : ∀ x : FreeM F α, x.bind (.pure) = x | .pure a => rfl | liftBind op k => by simp [FreeM.bind, bind_pure] +@[simp, grind =] +lemma bind_pure' : ∀ x : FreeM F α, x >>= .pure = x := bind_pure + @[simp] lemma bind_pure_comp (f : α → β) : ∀ x : FreeM F α, x.bind (.pure ∘ f) = map f x | .pure a => rfl @@ -223,6 +230,9 @@ lemma liftM_bind [LawfulMonad m] rw [FreeM.bind, liftM_liftBind, liftM_liftBind, bind_assoc] simp_rw [ih] +instance {Q α} : CoeOut (Q α) (FreeM Q α) where + coe := FreeM.lift + /-- A predicate stating that `interp : FreeM F α → m α` is an interpreter for the effect handler `handler : ∀ {α}, F α → m α`. diff --git a/Cslib/Foundations/Control/Monad/Free/Effects.lean b/Cslib/Foundations/Control/Monad/Free/Effects.lean index b8bc13111..41b96905b 100644 --- a/Cslib/Foundations/Control/Monad/Free/Effects.lean +++ b/Cslib/Foundations/Control/Monad/Free/Effects.lean @@ -7,7 +7,9 @@ Authors: Tanner Duve module public import Cslib.Foundations.Control.Monad.Free +public import Mathlib.Algebra.Group.Hom.Defs public import Mathlib.Control.Monad.Cont +public import Mathlib.Control.Monad.Writer @[expose] public section @@ -18,14 +20,16 @@ This file defines several canonical instances on the free monad. ## Main definitions -- `FreeState`, `FreeWriter`, `FreeCont`: Specific effect monads +- `FreeState`, `FreeWriter`, `FreeCont`, `FreeReader`: Specific effect monads ## Implementation To execute or interpret these computations, we provide two approaches: -1. **Hand-written interpreters** (`FreeState.run`, `FreeWriter.run`, `FreeCont.run`) that directly +1. 
**Hand-written interpreters** (`FreeState.run`, `FreeWriter.run`, `FreeCont.run`, + `FreeReader.run`) that directly pattern-match on the tree structure -2. **Canonical interpreters** (`FreeState.toStateM`, `FreeWriter.toWriterT`, `FreeCont.toContT`) +2. **Canonical interpreters** (`FreeState.toStateM`, `FreeWriter.toWriterT`, `FreeCont.toContT`, + `FreeReader.toReaderM`) derived from the universal property via `liftM` We prove that these approaches are equivalent, demonstrating that the implementation aligns with @@ -58,9 +62,6 @@ abbrev FreeState (σ : Type u) := FreeM (StateF σ) namespace FreeState variable {σ : Type u} {α : Type v} -instance : Monad (FreeState σ) := inferInstance -instance : LawfulMonad (FreeState σ) := inferInstance - instance : MonadStateOf σ (FreeState σ) where get := .lift .get set newState := .liftBind (.set newState) (fun _ => .pure PUnit.unit) @@ -75,8 +76,6 @@ lemma get_def : (get : FreeState σ σ) = .lift .get := rfl @[simp] lemma set_def (s : σ) : (set s : FreeState σ PUnit) = .lift (.set s) := rfl -instance : MonadState σ (FreeState σ) := inferInstance - /-- Interpret `StateF` operations into `StateM`. -/ def stateInterp {α : Type u} : StateF σ α → StateM σ α | .get => MonadStateOf.get @@ -164,10 +163,7 @@ abbrev FreeWriter (ω : Type u) := FreeM (WriterF ω) namespace FreeWriter open WriterF -variable {ω : Type u} {α : Type v} - -instance : Monad (FreeWriter ω) := inferInstance -instance : LawfulMonad (FreeWriter ω) := inferInstance +variable {ω : Type u} {α : Type u} /-- Interpret `WriterF` operations into `WriterT`. -/ def writerInterp {α : Type u} : WriterF ω α → WriterT ω Id α @@ -284,12 +280,9 @@ abbrev FreeCont (r : Type u) := FreeM (ContF r) namespace FreeCont variable {r : Type u} {α : Type v} {β : Type w} -instance : Monad (FreeCont r) := inferInstance -instance : LawfulMonad (FreeCont r) := inferInstance - /-- Interpret `ContF r` operations into `ContT r Id`. 
-/ def contInterp : ContF r α → ContT r Id α - | .callCC g, k => pure (g fun a => (k a).run) + | .callCC g => g /-- Convert a `FreeCont` computation into a `ContT` computation. This is the canonical interpreter derived from `liftM`. -/ @@ -353,6 +346,81 @@ lemma run_callCC (f : MonadCont.Label α (FreeCont r) β → FreeCont r α) (k : end FreeCont +/-- Type constructor for reader operations. -/ +inductive ReaderF (σ : Type u) : Type u → Type u where + | read : ReaderF σ σ + +/-- Reader monad via the `FreeM` monad -/ +abbrev FreeReader (σ) := FreeM (ReaderF σ) + +namespace FreeReader + +variable {σ : Type u} {α : Type u} + +instance : MonadReaderOf σ (FreeReader σ) where + read := .lift .read + +@[simp] +lemma read_def : (read : FreeReader σ σ) = .lift .read := rfl + +instance : MonadReader σ (FreeReader σ) := inferInstance + +/-- Interpret `ReaderF` operations into `ReaderM`. -/ +def readInterp {α : Type u} : ReaderF σ α → ReaderM σ α + | .read => MonadReaderOf.read + +/-- Convert a `FreeReader` computation into a `ReaderM` computation. This is the canonical +interpreter derived from `liftM`. -/ +def toReaderM {α : Type u} (comp : FreeReader σ α) : ReaderM σ α := + comp.liftM readInterp + +/-- `toReaderM` is the unique interpreter extending `readInterp`. 
-/ +theorem toReaderM_unique {α : Type u} (g : FreeReader σ α → ReaderM σ α) + (h : Interprets readInterp g) : g = toReaderM := h.eq + +/-- Run a reader computation -/ +def run (comp : FreeReader σ α) (s₀ : σ) : α := + match comp with + | .pure a => a + | .liftBind ReaderF.read a => run (a s₀) s₀ + +/-- +The canonical interpreter `toReaderM` derived from `liftM` agrees with the hand-written +recursive interpreter `run` for `FreeReader` -/ +@[simp] +theorem run_toReaderM {α : Type u} (comp : FreeReader σ α) (s : σ) : + (toReaderM comp).run s = run comp s := by + induction comp generalizing s with + | pure a => rfl + | liftBind op cont ih => + cases op; apply ih + +@[simp] +lemma run_pure (a : α) (s₀ : σ) : + run (.pure a : FreeReader σ α) s₀ = a := rfl + +@[simp] +lemma run_read (k : σ → FreeReader σ α) (s₀ : σ) : + run (liftBind .read k) s₀ = run (k s₀) s₀ := rfl + +instance instMonadWithReaderOf : MonadWithReaderOf σ (FreeReader σ) where + withReader {α} f m := + let rec go : FreeReader σ α → FreeReader σ α + | .pure a => .pure a + | .liftBind .read cont => + .liftBind .read fun s => go (cont (f s)) + go m + +@[simp] theorem run_withReader (f : σ → σ) (m : FreeReader σ α) (s : σ) : + run (withTheReader σ f m) s = run m (f s) := by + induction m generalizing s with + | pure a => rfl + | liftBind op cont ih => + cases op + simpa [withTheReader, instMonadWithReaderOf, run] using (ih (f s) s) + +end FreeReader + end FreeM end Cslib diff --git a/Cslib/Foundations/Data/HasFresh.lean b/Cslib/Foundations/Data/HasFresh.lean index e9cb1e8e4..c49b340b7 100644 --- a/Cslib/Foundations/Data/HasFresh.lean +++ b/Cslib/Foundations/Data/HasFresh.lean @@ -132,14 +132,17 @@ end export HasFresh (fresh fresh_notMem fresh_exists) -lemma HasFresh.not_of_finite (α : Type u) [Fintype α] : IsEmpty (HasFresh α) := - ⟨fun f ↦ (f.fresh_notMem .univ).elim (Finset.mem_univ _)⟩ +/-- `HasFresh α` implies a computably infinite type. 
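+
+For example (a usage sketch): this instance makes `Infinite α` available to
+instance search whenever `[HasFresh α]` is in scope:
+```
+example {α : Type} [HasFresh α] : Infinite α := inferInstance
+```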
-/ +instance HasFresh.to_infinite (α : Type u) [HasFresh α] : Infinite α := by + apply Infinite.of_not_fintype + rintro ⟨elems, _⟩ + grind [fresh_notMem elems] /-- All infinite types have an associated (at least noncomputable) fresh function. -This, in conjunction with `HasFresh.not_of_finite`, characterizes `HasFresh`. -/ -noncomputable def HasFresh.of_infinite (α : Type u) [Infinite α] : HasFresh α where - fresh s := s.finite_toSet.infinite_compl.nonempty.choose - fresh_notMem s := s.finite_toSet.infinite_compl.nonempty.choose_spec +This, in conjunction with `HasFresh.to_infinite`, characterizes `HasFresh`. -/ +noncomputable instance HasFresh.of_infinite (α : Type u) [Infinite α] : HasFresh α where + fresh s := Infinite.exists_notMem_finset s |>.choose + fresh_notMem s := by grind open Finset in /-- Construct a fresh element from an embedding of `ℕ` using `Nat.find`. -/ diff --git a/Cslib/Foundations/Data/Nat/Segment.lean b/Cslib/Foundations/Data/Nat/Segment.lean index b6b7a49df..be061363b 100644 --- a/Cslib/Foundations/Data/Nat/Segment.lean +++ b/Cslib/Foundations/Data/Nat/Segment.lean @@ -188,7 +188,7 @@ private lemma base_zero_shift (f : ℕ → ℕ) : (f · - f 0) 0 = 0 := by simp -private lemma base_zero_strictMono (hm : StrictMono f) : +theorem base_zero_strictMono (hm : StrictMono f) : StrictMono (f · - f 0) := by intro m n h_m_n; simp have := hm h_m_n diff --git a/Cslib/Foundations/Data/OmegaSequence/InfOcc.lean b/Cslib/Foundations/Data/OmegaSequence/InfOcc.lean index 8dc76adf0..16cd8260d 100644 --- a/Cslib/Foundations/Data/OmegaSequence/InfOcc.lean +++ b/Cslib/Foundations/Data/OmegaSequence/InfOcc.lean @@ -7,6 +7,7 @@ Authors: Ching-Tsun Chou module public import Cslib.Foundations.Data.OmegaSequence.Defs +public import Mathlib.Data.Fintype.Pigeonhole public import Mathlib.Order.Filter.AtTopBot.Basic public import Mathlib.Order.Filter.Cofinite @@ -40,6 +41,24 @@ theorem frequently_iff_strictMono {p : ℕ → Prop} : have h_range : range f ⊆ {n | p n} := by grind 
grind [Infinite.mono, infinite_range_of_injective, StrictMono.injective]

+/-- In a finite type, a sequence visits a set infinitely often iff
+it visits some element of the set infinitely often. -/
+theorem frequently_in_finite_type [Finite α] {s : Set α} {xs : ωSequence α} :
+    (∃ᶠ k in atTop, xs k ∈ s) ↔ ∃ x ∈ s, ∃ᶠ k in atTop, xs k = x := by
+  constructor
+  · intro h_inf
+    rw [Nat.frequently_atTop_iff_infinite] at h_inf
+    have : Infinite (xs ⁻¹' s) := h_inf.to_subtype
+    let rf := Set.restrictPreimage s xs
+    obtain ⟨⟨x, h_x⟩, h_inf'⟩ := Finite.exists_infinite_fiber rf
+    rw [← Set.infinite_range_iff (Subtype.val_injective.comp Subtype.val_injective)] at h_inf'
+    simp only [range, comp_apply, Subtype.exists, mem_preimage, mem_singleton_iff,
+      restrictPreimage_mk, Subtype.mk.injEq, ← Nat.frequently_atTop_iff_infinite, rf] at h_inf'
+    grind
+  · rintro ⟨_, _, h_inf⟩
+    apply Frequently.mono h_inf
+    grind
+
 end ωSequence

 end Cslib
diff --git a/Cslib/Foundations/Data/RelatesInSteps.lean b/Cslib/Foundations/Data/RelatesInSteps.lean
new file mode 100644
index 000000000..07c66d0f8
--- /dev/null
+++ b/Cslib/Foundations/Data/RelatesInSteps.lean
@@ -0,0 +1,222 @@
+/-
+Copyright (c) 2025 Bolton Bailey. All rights reserved.
+Released under Apache 2.0 license as described in the file LICENSE.
+Authors: Bolton Bailey
+-/
+
+module
+
+public import Cslib.Init
+public import Mathlib.Logic.Relation
+
+@[expose] public section
+
+variable {α : Type*} {r : α → α → Prop} {a b c : α}
+
+/-! # Relations Across Steps
+
+This file defines `Relation.RelatesInSteps` (and `Relation.RelatesWithinSteps`).
+These are inductively defined propositions stating that a relation forms a
+chain of length `n` (or at most `n`) between two elements.
+-/
+
+namespace Relation
+
+/--
+A relation `r` relates two elements of `α` in `n` steps
+if there is a chain of `n` pairs `(t_i, t_{i+1})` such that `r t_i t_{i+1}` for each `i`,
+starting from the first element and ending at the second.
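+
+For example (a sketch, assuming the constructors below), the successor relation
+on `ℕ` relates `0` to `2` in exactly two steps:
+```
+example : RelatesInSteps (fun a b => b = a + 1) 0 2 2 :=
+  .tail 0 1 2 1 (.tail 0 0 1 0 (.refl 0) rfl) rfl
+```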
+-/ +inductive RelatesInSteps (r : α → α → Prop) : α → α → ℕ → Prop + | refl (a : α) : RelatesInSteps r a a 0 + | tail (t t' t'' : α) (n : ℕ) (h₁ : RelatesInSteps r t t' n) (h₂ : r t' t'') : + RelatesInSteps r t t'' (n + 1) + +theorem RelatesInSteps.reflTransGen (h : RelatesInSteps r a b n) : ReflTransGen r a b := by + induction h with + | refl => rfl + | tail _ _ _ _ h ih => exact .tail ih h + +theorem ReflTransGen.relatesInSteps (h : ReflTransGen r a b) : ∃ n, RelatesInSteps r a b n := by + induction h with + | refl => exact ⟨0, .refl a⟩ + | tail _ _ ih => + obtain ⟨n, _⟩ := ih + exact ⟨n + 1, by grind [RelatesInSteps]⟩ + +lemma RelatesInSteps.single {a b : α} (h : r a b) : RelatesInSteps r a b 1 := + tail a a b 0 (refl a) h + +theorem RelatesInSteps.head (t t' t'' : α) (n : ℕ) (h₁ : r t t') + (h₂ : RelatesInSteps r t' t'' n) : RelatesInSteps r t t'' (n+1) := by + induction h₂ with + | refl => + exact single h₁ + | tail _ _ n _ hcd hac => + exact tail _ _ _ (n + 1) hac hcd + +@[elab_as_elim] +theorem RelatesInSteps.head_induction_on {motive : ∀ (a : α) (n : ℕ), RelatesInSteps r a b n → Prop} + {a : α} {n : ℕ} (h : RelatesInSteps r a b n) (hrefl : motive b 0 (.refl b)) + (hhead : ∀ {a c n} (h' : r a c) (h : RelatesInSteps r c b n), + motive c n h → motive a (n + 1) (h.head a c b n h')) : + motive a n h := by + induction h with + | refl => exact hrefl + | tail t' b' m hstep hrt'b' hstep_ih => + apply hstep_ih + · exact hhead hrt'b' _ hrefl + · exact fun h1 h2 ↦ hhead h1 (.tail _ t' b' _ h2 hrt'b') + +lemma RelatesInSteps.zero {a b : α} (h : RelatesInSteps r a b 0) : a = b := by + cases h + rfl + +@[simp] +lemma RelatesInSteps.zero_iff {a b : α} : RelatesInSteps r a b 0 ↔ a = b := by + constructor + · exact RelatesInSteps.zero + · intro rfl + exact RelatesInSteps.refl a + +lemma RelatesInSteps.trans {a b c : α} {n m : ℕ} + (h₁ : RelatesInSteps r a b n) (h₂ : RelatesInSteps r b c m) : + RelatesInSteps r a c (n + m) := by + induction h₂ generalizing a n with + | refl 
=> simp [h₁] + | tail t' t'' k hsteps hstep ih => + rw [← Nat.add_assoc] + exact .tail _ t' t'' (n + k) (ih h₁) hstep + +lemma RelatesInSteps.succ {n : ℕ} (h : RelatesInSteps r a b (n + 1)) : + ∃ t', RelatesInSteps r a t' n ∧ r t' b := by + cases h with + | tail t' _ _ hsteps hstep => exact ⟨t', hsteps, hstep⟩ + +lemma RelatesInSteps.succ_iff {a b : α} {n : ℕ} : + RelatesInSteps r a b (n + 1) ↔ ∃ t', RelatesInSteps r a t' n ∧ r t' b := by + constructor + · exact RelatesInSteps.succ + · rintro ⟨t', h_steps, h_red⟩ + exact .tail _ t' b n h_steps h_red + +lemma RelatesInSteps.succ' {a b : α} : ∀ {n : ℕ}, RelatesInSteps r a b (n + 1) → + ∃ t', r a t' ∧ RelatesInSteps r t' b n := by + intro n h + obtain ⟨t', hsteps, hstep⟩ := succ h + cases n with + | zero => + rw [zero_iff] at hsteps + subst hsteps + exact ⟨b, hstep, .refl _⟩ + | succ k' => + obtain ⟨t''', h_red''', h_steps'''⟩ := succ' hsteps + exact ⟨t''', h_red''', .tail _ _ b k' h_steps''' hstep⟩ + +lemma RelatesInSteps.succ'_iff {a b : α} {n : ℕ} : + RelatesInSteps r a b (n + 1) ↔ ∃ t', r a t' ∧ RelatesInSteps r t' b n := by + constructor + · exact succ' + · rintro ⟨t', h_red, h_steps⟩ + exact h_steps.head a t' b n h_red + +/-- +If `h : α → ℕ` increases by at most 1 on each step of `r`, +then the value of `h` at the output is at most `h` at the input plus the number of steps. +-/ +lemma RelatesInSteps.apply_le_apply_add {a b : α} (h : α → ℕ) + (h_step : ∀ a b, r a b → h b ≤ h a + 1) + (m : ℕ) (hevals : RelatesInSteps r a b m) : + h b ≤ h a + m := by + induction hevals with + | refl => simp + | tail t' t'' k _ hstep ih => + have h_step' := h_step t' t'' hstep + lia + +/-- +If `g` is a homomorphism from `r` to `r'` (i.e., it preserves the reduction relation), +then `RelatesInSteps` is preserved under `g`. 
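+
+For example (a hypothetical sketch): doubling maps successor chains to
+step-two chains of the same length:
+```
+example {a b n : ℕ} (h : RelatesInSteps (fun a b => b = a + 1) a b n) :
+    RelatesInSteps (fun a b => b = a + 2) (2 * a) (2 * b) n :=
+  h.map (2 * ·) fun a b hab => by omega
+```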
+-/ +lemma RelatesInSteps.map {α α' : Type*} + {r : α → α → Prop} {r' : α' → α' → Prop} + (g : α → α') (hg : ∀ a b, r a b → r' (g a) (g b)) + {a b : α} {n : ℕ} (h : RelatesInSteps r a b n) : + RelatesInSteps r' (g a) (g b) n := by + induction h with + | refl => exact RelatesInSteps.refl (g _) + | tail t' t'' m _ hstep ih => + exact .tail (g _) (g t') (g t'') m ih (hg t' t'' hstep) + +/-- +`RelatesWithinSteps` is a variant of `RelatesInSteps` that allows for a loose bound. +It states that `a` relates to `b` in *at most* `n` steps. +-/ +def RelatesWithinSteps (r : α → α → Prop) (a b : α) (n : ℕ) : Prop := + ∃ m ≤ n, RelatesInSteps r a b m + +/-- `RelatesInSteps` implies `RelatesWithinSteps` with the same bound. -/ +lemma RelatesWithinSteps.of_relatesInSteps {a b : α} {n : ℕ} (h : RelatesInSteps r a b n) : + RelatesWithinSteps r a b n := + ⟨n, Nat.le_refl n, h⟩ + +lemma RelatesWithinSteps.refl (a : α) : RelatesWithinSteps r a a 0 := + RelatesWithinSteps.of_relatesInSteps (RelatesInSteps.refl a) + +lemma RelatesWithinSteps.single {a b : α} (h : r a b) : RelatesWithinSteps r a b 1 := + RelatesWithinSteps.of_relatesInSteps (RelatesInSteps.single h) + +lemma RelatesWithinSteps.zero {a b : α} (h : RelatesWithinSteps r a b 0) : a = b := by + obtain ⟨m, hm, hevals⟩ := h + have : m = 0 := Nat.le_zero.mp hm + subst this + exact RelatesInSteps.zero hevals + +@[simp] +lemma RelatesWithinSteps.zero_iff {a b : α} : RelatesWithinSteps r a b 0 ↔ a = b := by + constructor + · exact RelatesWithinSteps.zero + · intro h + subst h + exact RelatesWithinSteps.refl a + +/-- Transitivity of `RelatesWithinSteps` in the sum of the step bounds. 
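+
+For example, a computation within `2` steps followed by one within `3` steps
+stays within `5` steps:
+```
+example (h₁ : RelatesWithinSteps r a b 2) (h₂ : RelatesWithinSteps r b c 3) :
+    RelatesWithinSteps r a c 5 := h₁.trans h₂
+```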
-/ +@[trans] +lemma RelatesWithinSteps.trans {a b c : α} {n₁ n₂ : ℕ} + (h₁ : RelatesWithinSteps r a b n₁) (h₂ : RelatesWithinSteps r b c n₂) : + RelatesWithinSteps r a c (n₁ + n₂) := by + obtain ⟨m₁, hm₁, hevals₁⟩ := h₁ + obtain ⟨m₂, hm₂, hevals₂⟩ := h₂ + use m₁ + m₂ + constructor + · lia + · exact RelatesInSteps.trans hevals₁ hevals₂ + +lemma RelatesWithinSteps.of_le {a b : α} {n₁ n₂ : ℕ} + (h : RelatesWithinSteps r a b n₁) (hn : n₁ ≤ n₂) : + RelatesWithinSteps r a b n₂ := by + obtain ⟨m, hm, hevals⟩ := h + exact ⟨m, Nat.le_trans hm hn, hevals⟩ + +/-- If `h : α → ℕ` increases by at most 1 on each step of `r`, +then the value of `h` at the output is at most `h` at the input plus the step bound. -/ +lemma RelatesWithinSteps.apply_le_apply_add {a b : α} (h : α → ℕ) + (h_step : ∀ a b, r a b → h b ≤ h a + 1) + (n : ℕ) (hevals : RelatesWithinSteps r a b n) : + h b ≤ h a + n := by + obtain ⟨m, hm, hevals_m⟩ := hevals + have := RelatesInSteps.apply_le_apply_add h h_step m hevals_m + lia + +/-- +If `g` is a homomorphism from `r` to `r'` (i.e., it preserves the reduction relation), +then `RelatesWithinSteps` is preserved under `g`. 
+-/ +lemma RelatesWithinSteps.map {α α' : Type*} {r : α → α → Prop} {r' : α' → α' → Prop} + (g : α → α') (hg : ∀ a b, r a b → r' (g a) (g b)) + {a b : α} {n : ℕ} (h : RelatesWithinSteps r a b n) : + RelatesWithinSteps r' (g a) (g b) n := by + obtain ⟨m, hm, hevals⟩ := h + exact ⟨m, hm, RelatesInSteps.map g hg hevals⟩ + +end Relation diff --git a/Cslib/Foundations/Data/Relation.lean b/Cslib/Foundations/Data/Relation.lean index c6193e0b1..180f181ad 100644 --- a/Cslib/Foundations/Data/Relation.lean +++ b/Cslib/Foundations/Data/Relation.lean @@ -9,8 +9,10 @@ module public import Cslib.Init public import Mathlib.Logic.Relation public import Mathlib.Data.List.TFAE +public import Mathlib.Order.Comparable public import Mathlib.Order.WellFounded public import Mathlib.Order.BooleanAlgebra.Basic +public import Mathlib.Util.Notation3 @[expose] public section @@ -33,7 +35,7 @@ theorem WellFounded.iff_transGen : WellFounded (Relation.TransGen r) ↔ WellFou namespace Relation -attribute [scoped grind] ReflGen TransGen ReflTransGen EqvGen +attribute [scoped grind] ReflGen TransGen ReflTransGen EqvGen CompRel theorem ReflGen.to_eqvGen (h : ReflGen r a b) : EqvGen r a b := by induction h <;> grind @@ -44,7 +46,23 @@ theorem TransGen.to_eqvGen (h : TransGen r a b) : EqvGen r a b := by theorem ReflTransGen.to_eqvGen (h : ReflTransGen r a b) : EqvGen r a b := by induction h <;> grind +theorem SymmGen.to_eqvGen (h : SymmGen r a b) : EqvGen r a b := by + induction h <;> grind + attribute [scoped grind →] ReflGen.to_eqvGen TransGen.to_eqvGen ReflTransGen.to_eqvGen + SymmGen.to_eqvGen + +/-- The join of the reflexive transitive closure. 
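+Concretely, `MJoin r a b` holds iff `a` and `b` multi-reduce to a common element:
+```
+example {d : α} (h₁ : ReflTransGen r a d) (h₂ : ReflTransGen r b d) : MJoin r a b :=
+  ⟨d, h₁, h₂⟩
+```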
This is not named in Mathlib, but see + `#loogle Relation.Join (Relation.ReflTransGen ?r)` -/ +abbrev MJoin (r : α → α → Prop) := Join (ReflTransGen r) + +theorem MJoin.refl (a : α) : MJoin r a a := by + use a + +theorem MJoin.symm : Symmetric (MJoin r) := Relation.symmetric_join + +theorem MJoin.single (h : ReflTransGen r a b) : MJoin r a b := by + use b /-- The relation `r` 'up to' the relation `s`. -/ def UpTo (r s : α → α → Prop) : α → α → Prop := Comp s (Comp r s) @@ -121,12 +139,27 @@ theorem Confluent_iff_ChurchRosser : Confluent r ↔ ChurchRosser r := theorem Confluent_iff_SemiConfluent : Confluent r ↔ SemiConfluent r := List.TFAE.out confluent_equivalents 2 1 +theorem Confluent_of_unique_end {x : α} (h : ∀ y : α, ReflTransGen r y x) : Confluent r := by + intro a b c hab hac + exact ⟨x, h b, h c⟩ + /-- An element is reducible with respect to a relation if there is a value it is related to. -/ abbrev Reducible (r : α → α → Prop) (x : α) : Prop := ∃ y, r x y /-- An element is normal if it is not reducible. -/ abbrev Normal (r : α → α → Prop) (x : α) : Prop := ¬ Reducible r x +theorem Normal_iff (r : α → α → Prop) (x : α) : Normal r x ↔ ∀ y, ¬ r x y := by + rw [Normal, not_exists] + +/-- An element is normalizable if it is related to a normal element. -/ +abbrev Normalizable (r : α → α → Prop) (x : α) : Prop := + ∃ n, ReflTransGen r x n ∧ Normal r n + +/-- A relation is normalizing when every element is normalizable. -/ +abbrev Normalizing (r : α → α → Prop) : Prop := + ∀ x, Normalizable r x + /-- A multi-step from a normal form must be reflexive. 
-/ @[grind =>] theorem Normal.reflTransGen_eq (h : Normal r x) (xy : ReflTransGen r x y) : x = y := by @@ -169,18 +202,70 @@ theorem Confluent.equivalence_join_reflTransGen (h : Confluent r) : abbrev Terminating (r : α → α → Prop) := WellFounded (fun a b => r b a) theorem Terminating.toTransGen (ht : Terminating r) : Terminating (TransGen r) := by - simp only [Terminating] - convert WellFounded.transGen ht using 1 - grind [transGen_swap, WellFounded.transGen] + suffices _ : (fun a b => TransGen r b a) = TransGen (Function.swap r) by grind + grind [transGen_swap] theorem Terminating.ofTransGen : Terminating (TransGen r) → Terminating r := by - simp only [Terminating] - convert @WellFounded.ofTransGen α (Function.swap r) using 2 + suffices _ : (fun a b => TransGen r b a) = TransGen (Function.swap r) by grind grind [transGen_swap] theorem Terminating.iff_transGen : Terminating (TransGen r) ↔ Terminating r := ⟨ofTransGen, toTransGen⟩ +theorem Terminating.subrelation {r' : α → α → Prop} (hr : Terminating r) (h : Subrelation r' r) : + Terminating r' := by + rw [Terminating, wellFounded_iff_isEmpty_descending_chain] at hr ⊢ + rw [isEmpty_subtype] + intro f hf + exact hr.elim ⟨f, fun n ↦ by exact h (hf n)⟩ + +theorem Terminating.isNormalizing (h : Terminating r) : Normalizing r := by + unfold Terminating at h + intro t + apply WellFounded.induction h t + intro a ih + by_cases ha : Reducible r a + · obtain ⟨b, hab⟩ := ha + obtain ⟨n, hbn, hn⟩ := ih b hab + exact ⟨n, ReflTransGen.head hab hbn, hn⟩ + · use a + +theorem Terminating.isConfluent_iff_all_unique_Normal (ht : Terminating r) : + Confluent r ↔ ∀ a : α, ∃! 
n : α, ReflTransGen r a n ∧ Normal r n := by + have hn : Normalizing r := ht.isNormalizing + constructor + · intro hc a + apply existsUnique_of_exists_of_unique (hn a) + rintro n₁ n₂ ⟨hr₁, hn₁⟩ ⟨hr₂, hn₂⟩ + have hj : Join (ReflTransGen r) n₁ n₂ := hc hr₁ hr₂ + obtain ⟨m, h₁, h₂⟩ := hj + rw [Normal.reflTransGen_eq hn₁ h₁, Normal.reflTransGen_eq hn₂ h₂] + · intro h a b c hab hac + obtain ⟨na, ⟨han, hnnor⟩, H⟩ := h a + use na + obtain ⟨nb, hbnb, hnb⟩ := hn b + obtain ⟨nc, hcnc, hnc⟩ := hn c + have hanb : (ReflTransGen r) a nb := ReflTransGen.trans hab hbnb + have hanc : (ReflTransGen r) a nc := ReflTransGen.trans hac hcnc + have hnanb : nb = na := H nb ⟨hanb, hnb⟩ + have hnanc : nc = na := H nc ⟨hanc, hnc⟩ + rw [hnanb] at hbnb + rw [hnanc] at hcnc + exact ⟨hbnb, hcnc⟩ + +/-- A relation is convergent when it is both confluent and terminating. -/ +abbrev Convergent (r : α → α → Prop) := Confluent r ∧ Terminating r + +theorem Convergent.isTerminating (h : Convergent r) : Terminating r := h.right + +theorem Convergent.isConfluent (h : Convergent r) : Confluent r := h.left + +theorem Convergent.isNormalizing (h : Convergent r) : Normalizing r := h.isTerminating.isNormalizing + +theorem Convergent.unique_Normal (h : Convergent r) : + ∀ a : α, ∃! 
n : α, ReflTransGen r a n ∧ Normal r n := + h.isTerminating.isConfluent_iff_all_unique_Normal.mp h.isConfluent + /-- A relation is locally confluent when all reductions with a common origin are multi-joinable -/ abbrev LocallyConfluent (r : α → α → Prop) := ∀ {a b c : α}, r a b → r a c → Join (ReflTransGen r) b c @@ -307,4 +392,101 @@ theorem reflTransGen_mono_closed (h₁ : Subrelation r₁ r₂) (h₂ : Subrelat ext exact ⟨ReflTransGen.mono @h₁, reflTransGen_closed @h₂⟩ +lemma ReflGen.compRel_symm : ReflGen (SymmGen r) a b → ReflGen (SymmGen r) b a +| .refl => .refl +| .single (.inl h) => .single (.inr h) +| .single (.inr h) => .single (.inl h) + +@[simp, grind =] +theorem reflTransGen_compRel : ReflTransGen (SymmGen r) = EqvGen r := by + ext a b + constructor + · intro h + induction h with + | refl => exact .refl _ + | tail hab hbc ih => + cases hbc with + | inl h => exact ih.trans _ _ _ (.rel _ _ h) + | inr h => exact ih.trans _ _ _ (.symm _ _ (.rel _ _ h)) + · intro h + induction h with + | rel _ _ ih => exact .single (.inl ih) + | refl x => exact .refl + | symm x y eq ih => + rw [symmGen_swap] + exact reflTransGen_swap.mp ih + | trans _ _ _ _ _ ih₁ ih₂ => exact ih₁.trans ih₂ + +/-- `Relator.RightUnique` corresponds to deterministic reductions, which are confluent, as all +multi-reductions with a common origin start the same (this fact is +`Relation.ReflTransGen.total_of_right_unique`.) -/ +theorem RightUnique.toConfluent (hr : Relator.RightUnique r) : Confluent r := by + intro a b c ab ac + obtain (h | h) := ReflTransGen.total_of_right_unique hr ab ac + · use c + · use b + +public meta section + +open Lean Elab Meta Command Term + +/-- + This command adds notations for relations. This should not usually be called directly, but from + the `reduction_sys` attribute. + + As an example `reduction_notation foo "β"` will add the notations "⭢β" and "↠β". + + Note that the string used will afterwards be registered as a notation. 
This means that if you have + also used this as a constructor name, you will need quotes to access corresponding cases, e.g. «β» + in the above example. +-/ +syntax attrKind "reduction_notation" ident (str)? : command +macro_rules + | `($kind:attrKind reduction_notation $rel $sym) => + `( + @[nolint docBlame] + $kind:attrKind notation3 t:39 " ⭢" $sym:str t':39 => $rel t t' + @[nolint docBlame] + $kind:attrKind notation3 t:39 " ↠" $sym:str t':39 => Relation.ReflTransGen $rel t t' + ) + | `($kind:attrKind reduction_notation $rel) => + `( + @[nolint docBlame] + $kind:attrKind notation3 t:39 " ⭢ " t':39 => $rel t t' + @[nolint docBlame] + $kind:attrKind notation3 t:39 " ↠ " t':39 => Relation.ReflTransGen $rel t t' + ) + + +/-- + This attribute calls the `reduction_notation` command for the annotated declaration, such as in: + + ``` + @[reduction_sys "ₙ", simp] + def PredReduction (a b : ℕ) : Prop := a = b + 1 + ``` +-/ +syntax (name := reduction_sys) "reduction_sys" (ppSpace str)? : attr + +initialize Lean.registerBuiltinAttribute { + name := `reduction_sys + descr := "Register notation for a relation and its closures." 
+  add := fun decl stx _ => MetaM.run' do
+    match stx with
+    | `(attr | reduction_sys $sym) =>
+      let mut sym := sym
+      unless sym.getString.endsWith " " do
+        sym := Syntax.mkStrLit (sym.getString ++ " ")
+      liftCommandElabM <| do
+        modifyScope ({ · with currNamespace := decl.getPrefix })
+        elabCommand (← `(scoped reduction_notation $(mkIdent decl) $sym))
+    | `(attr | reduction_sys) =>
+      liftCommandElabM <| do
+        modifyScope ({ · with currNamespace := decl.getPrefix })
+        elabCommand (← `(scoped reduction_notation $(mkIdent decl)))
+    | _ => throwError "invalid syntax for 'reduction_sys' attribute"
+}
+
+end
+
 end Relation
diff --git a/Cslib/Foundations/Data/Set/Saturation.lean b/Cslib/Foundations/Data/Set/Saturation.lean
new file mode 100644
index 000000000..45cde7525
--- /dev/null
+++ b/Cslib/Foundations/Data/Set/Saturation.lean
@@ -0,0 +1,48 @@
+/-
+Copyright (c) 2026 Ching-Tsun Chou. All rights reserved.
+Released under Apache 2.0 license as described in the file LICENSE.
+Authors: Ching-Tsun Chou
+-/
+
+module
+
+public import Cslib.Init
+public import Mathlib.Data.Set.Lattice
+
+@[expose] public section
+
+/-!
+# Saturation
+-/
+
+namespace Set
+
+variable {ι α : Type*}
+
+/-- `f : ι → Set α` saturates `s : Set α` iff `f i` is a subset of `s`
+whenever `f i` and `s` intersect at all. -/
+def Saturates (f : ι → Set α) (s : Set α) : Prop :=
+  ∀ i : ι, (f i ∩ s).Nonempty → f i ⊆ s
+
+variable {f : ι → Set α} {s : Set α}
+
+/-- If `f` saturates `s`, then `f` saturates its complement `sᶜ` as well. -/
+@[simp, scoped grind .]
+theorem saturates_compl (hs : Saturates f s) : Saturates f sᶜ := by
+  rintro i ⟨_, _⟩ y _ _
+  have : (f i ∩ s).Nonempty := ⟨y, by grind⟩
+  grind [Saturates]
+
+/-- If `f` is a cover and saturates `s`, then `s` is the union of all `f i` that intersect `s`.
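+For intuition (a hypothetical sketch), the fibers of any map saturate each preimage:
+```
+example {β : Type*} (g : α → β) (t : Set β) :
+    Saturates (fun b => g ⁻¹' {b}) (g ⁻¹' t) := by
+  rintro b ⟨x, hxb, hxt⟩ y hyb
+  simp only [mem_preimage, mem_singleton_iff] at hxb hxt hyb ⊢
+  rw [hyb, ← hxb]; exact hxt
+```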
-/ +theorem saturates_eq_biUnion (hs : Saturates f s) (hc : ⋃ i, f i = univ) : + s = ⋃ i ∈ {i | (f i ∩ s).Nonempty}, f i := by + ext x + simp only [mem_setOf_eq, mem_iUnion, exists_prop] + constructor + · intro h_x + obtain ⟨i, _⟩ := mem_iUnion.mp <| univ_subset_iff.mpr hc <| mem_univ x + use i, ⟨x, by grind⟩, by grind + · rintro ⟨i, h_i, _⟩ + grind [hs i h_i] + +end Set diff --git a/Cslib/Foundations/Lint/Basic.lean b/Cslib/Foundations/Lint/Basic.lean index 8a1fb2bce..433ce2b90 100644 --- a/Cslib/Foundations/Lint/Basic.lean +++ b/Cslib/Foundations/Lint/Basic.lean @@ -7,7 +7,7 @@ Authors: Chris Henson module public import Batteries.Tactic.Lint.Basic -public meta import Lean.Meta.GlobalInstances +public meta import Lean.Meta.Instances namespace Cslib.Lint @@ -15,13 +15,13 @@ open Lean Meta Std Batteries.Tactic.Lint /-- A linter for checking that new declarations fall under some preexisting namespace. -/ @[env_linter] -meta def topNamespace : Batteries.Tactic.Lint.Linter where +public meta def topNamespace : Batteries.Tactic.Lint.Linter where noErrorsFound := "No declarations are outside a namespace." 
errorsFound := "TOP LEVEL DECLARATIONS:" test declName := do if ← isAutoDecl declName then return none let env ← getEnv - if isGlobalInstance env declName then return none + if ← isInstanceReducible declName then return none let nss := env.getNamespaceSet let top := nss.fold (init := (∅ : NameSet)) fun tot n => match n.components with diff --git a/Cslib/Foundations/Semantics/FLTS/Basic.lean b/Cslib/Foundations/Semantics/FLTS/Basic.lean index c06cb59f2..17f54aa1d 100644 --- a/Cslib/Foundations/Semantics/FLTS/Basic.lean +++ b/Cslib/Foundations/Semantics/FLTS/Basic.lean @@ -45,9 +45,14 @@ from the left (instead of the right), in order to match the way lists are constr @[scoped grind =] def mtr (flts : FLTS State Label) (s : State) (μs : List Label) := μs.foldl flts.tr s -@[scoped grind =] +@[simp, scoped grind =] theorem mtr_nil_eq {flts : FLTS State Label} {s : State} : flts.mtr s [] = s := rfl +@[simp, scoped grind =] +theorem mtr_concat_eq {flts : FLTS State Label} {s : State} {μs : List Label} {μ : Label} : + flts.mtr s (μs ++ [μ]) = flts.tr (flts.mtr s μs) μ := by + grind + end FLTS end Cslib diff --git a/Cslib/Foundations/Semantics/LTS/Basic.lean b/Cslib/Foundations/Semantics/LTS/Basic.lean index 481e8bd15..a65e3924f 100644 --- a/Cslib/Foundations/Semantics/LTS/Basic.lean +++ b/Cslib/Foundations/Semantics/LTS/Basic.lean @@ -7,7 +7,7 @@ Authors: Fabrizio Montesi module public import Cslib.Init -public import Cslib.Foundations.Data.OmegaSequence.Init +public import Cslib.Foundations.Data.OmegaSequence.Flatten public import Cslib.Foundations.Semantics.FLTS.Basic public import Mathlib.Data.Set.Finite.Basic public import Mathlib.Order.ConditionallyCompleteLattice.Basic @@ -73,7 +73,7 @@ def Relation.toLTS [DecidableEq Label] (r : State → State → Prop) (μ : Labe section MultiStep -/-! ## Multistep transitions with finite traces +/-! ## Multistep transitions and executions with finite traces This section treats executions with a finite number of steps. 
-/ @@ -104,7 +104,6 @@ theorem LTS.MTr.single {s1 : State} {μ : Label} {s2 : State} : · apply LTS.MTr.refl /-- Any multistep transition can be extended by adding a transition. -/ -@[scoped grind <=] theorem LTS.MTr.stepR {s1 : State} {μs : List Label} {s2 : State} {μ : Label} {s3 : State} : lts.MTr s1 μs s2 → lts.Tr s2 μ s3 → lts.MTr s1 (μs ++ [μ]) s3 := by intro h1 h2 @@ -144,20 +143,96 @@ theorem LTS.MTr.nil_eq (h : lts.MTr s1 [] s2) : s1 = s2 := by cases h rfl -/-- For every multistep transition, there exists a sequence of intermediate states -which satisfies the single-step transition at every step. -/ -theorem LTS.MTr.exists_states {lts : LTS State Label} {s1 s2 : State} {μs : List Label} - (h : lts.MTr s1 μs s2) : ∃ ss : List State, ∃ _ : ss.length = μs.length + 1, - ss[0] = s1 ∧ ss[μs.length] = s2 ∧ ∀ k, ∀ _ : k < μs.length, lts.Tr ss[k] μs[k] ss[k + 1] := by +/-- A finite execution, or sequence of transitions. -/ +@[scoped grind =] +def LTS.IsExecution (lts : LTS State Label) (s1 : State) (μs : List Label) (s2 : State) + (ss : List State) : Prop := + ∃ _ : ss.length = μs.length + 1, ss[0] = s1 ∧ ss[ss.length - 1] = s2 ∧ + ∀ k, {_ : k < μs.length} → lts.Tr ss[k] μs[k] ss[k + 1] + +/-- Every execution has a start state. -/ +@[scoped grind →] +theorem LTS.isExecution_nonEmpty_states (h : lts.IsExecution s1 μs s2 ss) : + ss ≠ [] := by grind + +/-- Every state has an execution of zero steps terminating in itself. -/ +@[scoped grind ⇒] +theorem LTS.IsExecution.refl (lts : LTS State Label) (s : State) : lts.IsExecution s [] s [s] := by + grind + +/-- Equivalent of `MTr.stepL` for executions. -/ +theorem LTS.IsExecution.stepL {lts : LTS State Label} (htr : lts.Tr s1 μ s2) + (hexec : lts.IsExecution s2 μs s3 ss) : lts.IsExecution s1 (μ :: μs) s3 (s1 :: ss) := by grind + +/-- Deconstruction of executions with `List.cons`. 
-/ +theorem LTS.isExecution_cons_invert (h : lts.IsExecution s1 (μ :: μs) s2 (s1 :: ss)) : + lts.IsExecution (ss[0]'(by grind)) μs s2 ss := by + obtain ⟨_, _, _, h4⟩ := h + exists (by grind) + constructorm* _∧_ + · rfl + · grind + · intro k valid + specialize h4 k <;> grind + +open scoped LTS.IsExecution in +/-- A multistep transition implies the existence of an execution. -/ +@[scoped grind →] +theorem LTS.mTr_isExecution {lts : LTS State Label} {s1 : State} {μs : List Label} {s2 : State} + (h : lts.MTr s1 μs s2) : ∃ ss : List State, lts.IsExecution s1 μs s2 ss := by induction h case refl t => use [t] grind - case stepL t1 μ t2 μs t3 h_tr h_mtr h_ind => - obtain ⟨ss', _, _, _, _⟩ := h_ind - use [t1] ++ ss' + case stepL t1 μ t2 μs t3 htr hmtr ih => + obtain ⟨ss', _⟩ := ih + use t1 :: ss' grind +/-- Converts an execution into a multistep transition. -/ +@[scoped grind →] +theorem LTS.isExecution_mTr (hexec : lts.IsExecution s1 μs s2 ss) : + lts.MTr s1 μs s2 := by + induction ss generalizing s1 μs + case nil => grind + case cons s1' ss ih => + let ⟨hlen, hstart, hfinal, hexec'⟩ := hexec + have : s1' = s1 := by grind + rw [this] at hexec' hexec + cases μs + · grind + case cons μ μs => + specialize ih (s1 := ss[0]'(by grind)) (μs := μs) + apply LTS.isExecution_cons_invert at hexec + apply LTS.MTr.stepL + · have : lts.Tr s1 μ (ss[0]'(by grind)) := by grind + apply this + · grind + +/-- Correspondence of multistep transitions and executions. -/ +@[scoped grind =] +theorem LTS.mTr_isExecution_iff : lts.MTr s1 μs s2 ↔ + ∃ ss : List State, lts.IsExecution s1 μs s2 ss := by + grind + +/-- An execution can be split at any intermediate state into two executions. 
-/ +theorem LTS.IsExecution.split + {lts : LTS State Label} {s t : State} {μs : List Label} {ss : List State} + (he : lts.IsExecution s μs t ss) (n : ℕ) (hn : n ≤ μs.length) : + lts.IsExecution s (μs.take n) (ss[n]'(by grind)) (ss.take (n + 1)) ∧ + lts.IsExecution (ss[n]'(by grind)) (μs.drop n) t (ss.drop n) := by + have : n + (ss.length - n - 1) = ss.length - 1 := by grind + simp [IsExecution] + grind + +/-- A multistep transition over a concatenation can be split into two multistep transitions. -/ +theorem LTS.MTr.split {lts : LTS State Label} {s0 : State} {μs1 μs2 : List Label} {s2 : State} + (h : lts.MTr s0 (μs1 ++ μs2) s2) : ∃ s1, lts.MTr s0 μs1 s1 ∧ lts.MTr s1 μs2 s2 := by + obtain ⟨ss, h_ss⟩ := LTS.mTr_isExecution h + obtain ⟨_, _⟩ := LTS.IsExecution.split h_ss μs1.length (by grind) + use ss[μs1.length]'(by grind) + grind [List.take_append] + /-- A state `s1` can reach a state `s2` if there exists a multistep transition from `s1` to `s2`. -/ @[scoped grind =] @@ -264,16 +339,71 @@ theorem LTS.ωTr.cons (htr : lts.Tr s μ t) (hωtr : lts.ωTr ss μs) (hm : ss 0 induction i <;> grind /-- Prepends an infinite execution with a finite execution. 
-/ -theorem LTS.ωTr.append (hmtr : lts.MTr s μl t) (hωtr : lts.ωTr ss μs) - (hm : ss 0 = t) : ∃ ss', lts.ωTr ss' (μl ++ω μs) ∧ ss' 0 = s ∧ ss' μl.length = t := by - obtain ⟨sl, _, _, _, _⟩ := LTS.MTr.exists_states hmtr - refine ⟨sl ++ω ss.drop 1, ?_, by grind [get_append_left], by grind [get_append_left]⟩ - intro n - by_cases n < μl.length - · grind [get_append_left] - · by_cases n = μl.length +theorem LTS.ωTr.append + (hmtr : lts.MTr s μl t) (hωtr : lts.ωTr ss μs) (hm : ss 0 = t) : + ∃ ss', lts.ωTr ss' (μl ++ω μs) ∧ ss' 0 = s ∧ ss' μl.length = t ∧ ss'.drop μl.length = ss := by + obtain ⟨sl, _, _, _, _⟩ := LTS.mTr_isExecution hmtr + use sl.take μl.length ++ω ss + split_ands + · intro n + by_cases n < μl.length · grind [get_append_left] - · grind [get_append_right', hωtr (n - μl.length - 1)] + · by_cases n = μl.length + · grind [get_append_left, get_append_right'] + · grind [get_append_right', hωtr (n - μl.length - 1)] + · grind [get_append_left] + · grind [get_append_left] + · grind [drop_append_of_ge_length] + +open Nat in +/-- Concatenating an infinite sequence of finite executions. -/ +theorem LTS.IsExecution.flatten [Inhabited Label] + {ts : ωSequence State} {μls : ωSequence (List Label)} {sls : ωSequence (List State)} + (hexec : ∀ k, lts.IsExecution (ts k) (μls k) (ts (k + 1)) (sls k)) + (hpos : ∀ k, (μls k).length > 0) : + ∃ ss, lts.ωTr ss μls.flatten ∧ + ∀ k, ss.extract (μls.cumLen k) (μls.cumLen (k + 1)) = (sls k).take (μls k).length := by + have : Inhabited State := by exact {default := ts 0} + let segs := ωSequence.mk fun k ↦ (sls k).take (μls k).length + have h_len : μls.cumLen = segs.cumLen := by ext k; induction k <;> grind + have h_pos (k : ℕ) : (segs k).length > 0 := by grind [List.eq_nil_iff_length_eq_zero] + have h_mono := cumLen_strictMono h_pos + have h_zero := cumLen_zero (ls := segs) + have h_seg0 (k : ℕ) : (segs k)[0]! 
= ts k := by grind + use segs.flatten + split_ands + · intro n + simp only [h_len, flatten_def] + simp only [LTS.IsExecution] at hexec + have := segment_lower_bound h_mono h_zero n + by_cases h_n : n + 1 < segs.cumLen (segment segs.cumLen n + 1) + · have := segment_range_val h_mono (by grind) h_n + have : n + 1 - segs.cumLen (segment segs.cumLen n) < (μls (segment segs.cumLen n)).length := + by grind + grind + · have h1 : segs.cumLen (segment segs.cumLen n + 1) = n + 1 := by + grind [segment_upper_bound h_mono h_zero n] + have h2 : segment segs.cumLen (n + 1) = segment segs.cumLen n + 1 := by + simp [← h1, segment_idem h_mono] + have : n + 1 - segs.cumLen (segment segs.cumLen n) = (μls (segment segs.cumLen n)).length := + by grind + have h3 : ts (segment segs.cumLen n + 1) = + (sls (segment segs.cumLen n))[n + 1 - segs.cumLen (segment segs.cumLen n)]! := by + grind + simp [h1, h2, h_seg0, h3] + grind + · simp [h_len, extract_flatten h_pos, segs] + +/-- Concatenating an infinite sequence of multistep transitions. 
-/ +theorem LTS.ωTr.flatten [Inhabited Label] {ts : ωSequence State} {μls : ωSequence (List Label)} + (hmtr : ∀ k, lts.MTr (ts k) (μls k) (ts (k + 1))) (hpos : ∀ k, (μls k).length > 0) : + ∃ ss, lts.ωTr ss μls.flatten ∧ ∀ k, ss (μls.cumLen k) = ts k := by + choose sls h_sls using fun k ↦ LTS.mTr_isExecution (hmtr k) + obtain ⟨ss, h_ss, h_seg⟩ := LTS.IsExecution.flatten h_sls hpos + use ss, h_ss + intro k + have h1 : 0 < (ss.extract (μls.cumLen k) (μls.cumLen (k + 1))).length := by grind + grind [List.getElem_of_eq (h_seg k) h1] end ωMultiStep @@ -319,7 +449,7 @@ theorem LTS.Total.mTr_ωTr [Inhabited Label] [ht : lts.Total] {μl : List Label} (hm : lts.MTr s μl t) : ∃ μs ss, lts.ωTr ss (μl ++ω μs) ∧ ss 0 = s ∧ ss μl.length = t := by let μs : ωSequence Label := .const default obtain ⟨ss', ho, h0⟩ := LTS.Total.ωTr_exists (h := ht) t μs - refine ⟨μs, LTS.ωTr.append hm ho h0⟩ + grind [LTS.ωTr.append hm ho h0] /-- `LTS.totalize` constructs a total LTS from any given LTS by adding a sink state. -/ def LTS.totalize (lts : LTS State Label) : LTS (State ⊕ Unit) Label where @@ -478,7 +608,6 @@ theorem LTS.mem_setImage {lts : LTS State Label} : simp only [setImage, Set.mem_iUnion, exists_prop] grind -@[scoped grind →] theorem LTS.tr_setImage {lts : LTS State Label} (hs : s ∈ S) (htr : lts.Tr s μ s') : s' ∈ lts.setImage S μ := by grind @@ -609,7 +738,6 @@ theorem LTS.saturate_tr_sTr [HasTau Label] {lts : LTS State Label} : lts.saturate.Tr = lts.STr := by rfl /-- Any transition is also a saturated transition. -/ -@[scoped grind →] theorem LTS.STr.single [HasTau Label] (lts : LTS State Label) : lts.Tr s μ s' → lts.STr s μ s' := by intro h diff --git a/Cslib/Foundations/Semantics/LTS/Bisimulation.lean b/Cslib/Foundations/Semantics/LTS/Bisimulation.lean index 412a1559f..bfe283f0a 100644 --- a/Cslib/Foundations/Semantics/LTS/Bisimulation.lean +++ b/Cslib/Foundations/Semantics/LTS/Bisimulation.lean @@ -41,14 +41,11 @@ bisimulation. related by some weak bisimulation on `lts`. 
- `lts.IsSWBisimulation` is a more convenient definition for establishing weak bisimulations, which we prove to be sound and complete. -- `SWBisimilarity lts` is the binary relation on the states of `lts` that relates any two states -related by some sw-bisimulation on `lts`. ## Notations - `s1 ~[lts] s2`: the states `s1` and `s2` are bisimilar in the LTS `lts`. - `s1 ≈[lts] s2`: the states `s1` and `s2` are weakly bisimilar in the LTS `lts`. -- `s1 ≈sw[lts] s2`: the states `s1` and `s2` are sw bisimilar in the LTS `lts`. ## Main statements @@ -62,10 +59,9 @@ related by some sw-bisimulation on `lts`. trace equivalent (see `TraceEq`). - `Bisimilarity.deterministic_bisim_eq_traceEq`: in a deterministic LTS, bisimilarity and trace equivalence coincide. -- `WeakBisimilarity.weakBisim_eq_swBisim`: weak bisimilarity and sw-bisimilarity coincide in all -LTSs. +- `Bisimilarity.symm_simulation`: bisimilarity can be characterized through symmetric simulations. +- `WeakBisimilarity.weakBisim_eq_swBisim`: weak bisimulation and sw-bisimulation coincide. - `WeakBisimilarity.eqv`: weak bisimilarity is an equivalence relation. -- `SWBisimilarity.eqv`: sw-bisimilarity is an equivalence relation. -/ @@ -156,6 +152,11 @@ theorem Bisimilarity.eqv : trans := Bisimilarity.trans } +instance : IsEquiv State (Bisimilarity lts) where + refl := Bisimilarity.refl + symm _ _ := Bisimilarity.symm + trans _ _ _ := Bisimilarity.trans + /-- The union of two bisimulations is a bisimulation. -/ @[scoped grind .] theorem Bisimulation.union (hrb : lts.IsBisimulation r) (hsb : lts.IsBisimulation s) : @@ -756,6 +757,26 @@ theorem Bisimulation.simulation_iff {lts : LTS State Label} {r : State → State have _ (s1 s2) : r s1 s2 → flip r s2 s1 := id grind [Simulation, flip] +open LTS in +/-- Bisimilarity can also be characterized through symmetric simulations. 
-/ +theorem Bisimilarity.symm_simulation {lts : LTS State Label} : + Bisimilarity lts = + fun s1 s2 => ∃ r, r s1 s2 ∧ Std.Symm r ∧ Simulation lts r := by + funext s1 s2 + apply Iff.eq + apply Iff.intro + · intro h + have bisim : Bisimilarity lts s1 s2 ∧ Std.Symm (Bisimilarity lts) + ∧ Simulation lts (Bisimilarity lts) := by + grind [Std.Symm, Bisimilarity.symm, Bisimulation.is_simulation] + grind + · intro ⟨r, hr, hsymm, hsim⟩ + have : r = (flip r) := by + grind [flip, Std.Symm] + have : lts.IsBisimulation r := by + grind [Bisimulation.simulation_iff] + grind + end Bisimulation section WeakBisimulation @@ -786,14 +807,6 @@ def LTS.IsSWBisimulation [HasTau Label] (lts : LTS State Label) (r : State → S (∀ s2', lts.Tr s2 μ s2' → ∃ s1', lts.STr s1 μ s1' ∧ r s1' s2') ) -/-- Two states are sw-bisimilar if they are related by some sw-bisimulation. -/ -def SWBisimilarity [HasTau Label] (lts : LTS State Label) : State → State → Prop := - fun s1 s2 => - ∃ r : State → State → Prop, r s1 s2 ∧ lts.IsSWBisimulation r - -/-- Notation for swbisimilarity. -/ -notation s:max " ≈sw[" lts "] " s':max => SWBisimilarity lts s s' - /-- Utility theorem for 'following' internal transitions using an `SWBisimulation` (first component, weighted version). -/ theorem SWBisimulation.follow_internal_fst_n @@ -929,112 +942,34 @@ theorem SWBisimulation.toWeakBisimulation [HasTau Label] {lts : LTS State Label} {r : State → State → Prop} (h : lts.IsSWBisimulation r) : lts.IsWeakBisimulation r := LTS.isWeakBisimulation_iff_isSWBisimulation.2 h -/-- If two states are related by an `SWBisimulation`, then they are weakly bisimilar. -/ -theorem WeakBisimilarity.by_swBisimulation [HasTau Label] - (lts : LTS State Label) (r : State → State → Prop) - (hb : lts.IsSWBisimulation r) (hr : r s1 s2) : s1 ≈[lts] s2 := by - exists r - constructor - · exact hr - apply LTS.isWeakBisimulation_iff_isSWBisimulation.2 hb - -/-- Weak bisimilarity and sw-bisimilarity coincide for all LTSs. 
-/ -@[scoped grind _=_] +/-- Weak bisimilarity can also be characterized through sw-bisimulations. -/ +@[scoped grind =] theorem WeakBisimilarity.weakBisim_eq_swBisim [HasTau Label] (lts : LTS State Label) : - WeakBisimilarity lts = SWBisimilarity lts := by - funext s1 s2 - simp only [eq_iff_iff] - constructor - case mp => - intro h - obtain ⟨r, hr, hrh⟩ := h - exists r - constructor - · exact hr - apply LTS.isWeakBisimulation_iff_isSWBisimulation.1 hrh - case mpr => - intro h - obtain ⟨r, hr, hrh⟩ := h - exists r - constructor - · exact hr - apply LTS.isWeakBisimulation_iff_isSWBisimulation.2 hrh - -/-- sw-bisimilarity is reflexive. -/ -theorem SWBisimilarity.refl [HasTau Label] (lts : LTS State Label) (s : State) : s ≈sw[lts] s := by - exists Eq - constructor - · rfl - simp only [LTS.IsSWBisimulation] - intro s1 s2 hr μ - cases hr - constructor - case left => - intro s1' htr - exists s1' - constructor - · apply LTS.STr.single _ htr - · constructor - case right => - intro s2' htr - exists s2' - constructor - · apply LTS.STr.single _ htr - · constructor + WeakBisimilarity lts = + fun s1 s2 => ∃ r : State → State → Prop, r s1 s2 ∧ lts.IsSWBisimulation r := by + grind [WeakBisimilarity, LTS.isWeakBisimulation_iff_isSWBisimulation.1, + LTS.isWeakBisimulation_iff_isSWBisimulation.2] /-- Weak bisimilarity is reflexive. -/ -theorem WeakBisimilarity.refl [HasTau Label] (lts : LTS State Label) (s : State) : s ≈[lts] s := by +theorem WeakBisimilarity.refl [HasTau Label] (lts : LTS State Label) (s : State) : + s ≈[lts] s := by rw [WeakBisimilarity.weakBisim_eq_swBisim lts] - apply SWBisimilarity.refl - -/-- The inverse of an sw-bisimulation is an sw-bisimulation. 
-/ -theorem SWBisimulation.inv [HasTau Label] (lts : LTS State Label) - (r : State → State → Prop) (h : lts.IsSWBisimulation r) : - lts.IsSWBisimulation (flip r) := by - simp only [LTS.IsSWBisimulation] at h - simp only [LTS.IsSWBisimulation] - intro s1 s2 hrinv μ - constructor - case left => - intro s1' htr - specialize h hrinv μ - have h' := h.2 s1' htr - obtain ⟨ s2', h' ⟩ := h' - exists s2' - case right => - intro s2' htr - specialize h hrinv μ - have h' := h.1 s2' htr - obtain ⟨ s1', h' ⟩ := h' - exists s1' + exists Eq + grind [LTS.IsSWBisimulation, LTS.STr.single] /-- The inverse of a weak bisimulation is a weak bisimulation. -/ theorem WeakBisimulation.inv [HasTau Label] (lts : LTS State Label) (r : State → State → Prop) (h : lts.IsWeakBisimulation r) : lts.IsWeakBisimulation (flip r) := by - apply WeakBisimulation.toSwBisimulation at h - have h' := SWBisimulation.inv lts r h - apply SWBisimulation.toWeakBisimulation at h' - exact h' - -/-- sw-bisimilarity is symmetric. -/ -theorem SWBisimilarity.symm [HasTau Label] (lts : LTS State Label) (h : s1 ≈sw[lts] s2) : - s2 ≈sw[lts] s1 := by - obtain ⟨r, hr, hrh⟩ := h - exists (flip r) - constructor - case left => - simp only [flip, flip] - exact hr - case right => - apply SWBisimulation.inv lts r hrh + grind [WeakBisimulation.toSwBisimulation, LTS.IsSWBisimulation, + flip, SWBisimulation.toWeakBisimulation] /-- Weak bisimilarity is symmetric. -/ theorem WeakBisimilarity.symm [HasTau Label] (lts : LTS State Label) (h : s1 ≈[lts] s2) : s2 ≈[lts] s1 := by - rw [WeakBisimilarity.weakBisim_eq_swBisim] - rw [WeakBisimilarity.weakBisim_eq_swBisim] at h - apply SWBisimilarity.symm lts h + obtain ⟨r, hr, hrh⟩ := h + exists (flip r) + grind [WeakBisimulation.inv, flip] /-- The composition of two weak bisimulations is a weak bisimulation. 
-/ theorem WeakBisimulation.comp @@ -1051,9 +986,7 @@ theorem SWBisimulation.comp (lts : LTS State Label) (r1 r2 : State → State → Prop) (h1 : lts.IsSWBisimulation r1) (h2 : lts.IsSWBisimulation r2) : lts.IsSWBisimulation (Relation.Comp r1 r2) := by - apply SWBisimulation.toWeakBisimulation at h1 - apply SWBisimulation.toWeakBisimulation at h2 - apply LTS.isWeakBisimulation_iff_isSWBisimulation.1 + simp_all only [LTS.isWeakBisimulation_iff_isSWBisimulation.symm] apply WeakBisimulation.comp lts r1 r2 h1 h2 /-- Weak bisimilarity is transitive. -/ @@ -1068,12 +1001,6 @@ theorem WeakBisimilarity.trans [HasTau Label] {s1 s2 s3 : State} case right => apply WeakBisimulation.comp lts r1 r2 hr1b hr2b -/-- sw-bisimilarity is transitive. -/ -theorem SWBisimilarity.trans [HasTau Label] {s1 s2 s3 : State} - (lts : LTS State Label) (h1 : s1 ≈sw[lts] s2) (h2 : s2 ≈sw[lts] s3) : s1 ≈sw[lts] s3 := by - rw [← (WeakBisimilarity.weakBisim_eq_swBisim lts)] at * - apply WeakBisimilarity.trans lts h1 h2 - /-- Weak bisimilarity is an equivalence relation. -/ theorem WeakBisimilarity.eqv [HasTau Label] {lts : LTS State Label} : Equivalence (WeakBisimilarity lts) := { @@ -1082,14 +1009,6 @@ theorem WeakBisimilarity.eqv [HasTau Label] {lts : LTS State Label} : trans := WeakBisimilarity.trans lts } -/-- SW-bisimilarity is an equivalence relation. -/ -theorem SWBisimilarity.eqv [HasTau Label] {lts : LTS State Label} : - Equivalence (SWBisimilarity lts) := { - refl := SWBisimilarity.refl lts - symm := SWBisimilarity.symm lts - trans := SWBisimilarity.trans lts - } - end WeakBisimulation end Cslib diff --git a/Cslib/Foundations/Semantics/ReductionSystem/Basic.lean b/Cslib/Foundations/Semantics/ReductionSystem/Basic.lean deleted file mode 100644 index 5aad8a38b..000000000 --- a/Cslib/Foundations/Semantics/ReductionSystem/Basic.lean +++ /dev/null @@ -1,148 +0,0 @@ -/- -Copyright (c) 2025 Fabrizio Montesi. All rights reserved. -Released under Apache 2.0 license as described in the file LICENSE. 
-Authors: Fabrizio Montesi, Thomas Waring --/ - -module - -public import Cslib.Init -public import Mathlib.Logic.Relation -public import Mathlib.Util.Notation3 - -@[expose] public section - -/-! -# Reduction System - -A reduction system is an operational semantics expressed as a relation between terms. --/ - -namespace Cslib - -universe u - -/-- -A reduction system is a relation between `Term`s. --/ -structure ReductionSystem (Term : Type u) where - /-- The reduction relation. -/ - Red : Term → Term → Prop - - -section MultiStep - -/-! ## Multi-step reductions -/ - -/-- Multi-step reduction relation. -/ -abbrev ReductionSystem.MRed (rs : ReductionSystem Term) := - Relation.ReflTransGen rs.Red - -/-- All multi-step reduction relations are reflexive. -/ -@[refl] -theorem ReductionSystem.MRed.refl (rs : ReductionSystem Term) (t : Term) : rs.MRed t t := - Relation.ReflTransGen.refl - -/-- Any reduction is a multi-step -/ -theorem ReductionSystem.MRed.single (rs : ReductionSystem Term) (h : rs.Red a b) : - rs.MRed a b := - Relation.ReflTransGen.single h - -end MultiStep - -open Lean Elab Meta Command Term - --- thank you to Kyle Miller for this: --- https://leanprover.zulipchat.com/#narrow/channel/239415-metaprogramming-.2F-tactics/topic/Working.20with.20variables.20in.20a.20command/near/529324084 - -/-- A command to create a `ReductionSystem` from a relation, robust to use of `variable `-/ -elab "create_reduction_sys" rel:ident name:ident : command => do - liftTermElabM do - let rel ← realizeGlobalConstNoOverloadWithInfo rel - let ci ← getConstInfo rel - forallTelescope ci.type fun args ty => do - let throwNotRelation := throwError m!"type{indentExpr ci.type}\nis not a relation" - unless args.size ≥ 2 do - throwNotRelation - unless ← isDefEq (← inferType args[args.size - 2]!) (← inferType args[args.size - 1]!) 
do - throwNotRelation - unless (← whnf ty).isProp do - throwError m!"expecting Prop, not{indentExpr ty}" - let params := ci.levelParams.map .param - let rel := mkAppN (.const rel params) args[0:args.size-2] - let bundle ← mkAppM ``ReductionSystem.mk #[rel] - let value ← mkLambdaFVars args[0:args.size-2] bundle - let type ← inferType value - addAndCompile <| .defnDecl { - name := name.getId - levelParams := ci.levelParams - type - value - safety := .safe - hints := Lean.ReducibilityHints.abbrev - } - addTermInfo' name (.const name.getId params) (isBinder := true) - addDeclarationRangesFromSyntax name.getId name - -/-- - This command adds notations for a `ReductionSystem.Red` and `ReductionSystem.MRed`. This should - not usually be called directly, but from the `reduction_sys` attribute. - - As an example `reduction_notation foo "β"` will add the notations "⭢β" and "↠β". - - Note that the string used will afterwards be registered as a notation. This means that if you have - also used this as a constructor name, you will need quotes to access corresponding cases, e.g. «β» - in the above example. --/ -syntax attrKind "reduction_notation" ident (str)? : command -macro_rules - | `($kind:attrKind reduction_notation $rs $sym) => - `( - @[nolint docBlame] - $kind:attrKind notation3 t:39 " ⭢" $sym:str t':39 => (ReductionSystem.Red $rs) t t' - @[nolint docBlame] - $kind:attrKind notation3 t:39 " ↠" $sym:str t':39 => (ReductionSystem.MRed $rs) t t' - ) - | `($kind:attrKind reduction_notation $rs) => - `( - @[nolint docBlame] - $kind:attrKind notation3 t:39 " ⭢ " t':39 => (ReductionSystem.Red $rs) t t' - @[nolint docBlame] - $kind:attrKind notation3 t:39 " ↠ " t':39 => (ReductionSystem.MRed $rs) t t' - ) - - -/-- - This attribute calls the `reduction_notation` command for the annotated declaration, such as in: - - ``` - @[reduction rs "ₙ", simp] - def PredReduction (a b : ℕ) : Prop := a = b + 1 - ``` --/ -syntax (name := reduction_sys) "reduction_sys" ident (ppSpace str)? 
: attr - -initialize Lean.registerBuiltinAttribute { - name := `reduction_sys - descr := "Register notation for a relation and its closures." - add := fun decl stx _ => MetaM.run' do - match stx with - | `(attr | reduction_sys $rs $sym) => - let mut sym := sym - unless sym.getString.endsWith " " do - sym := Syntax.mkStrLit (sym.getString ++ " ") - let rs := rs.getId.updatePrefix decl.getPrefix |> Lean.mkIdent - liftCommandElabM <| Command.elabCommand (← `(create_reduction_sys $(mkIdent decl) $rs)) - liftCommandElabM <| (do - modifyScope ({ · with currNamespace := decl.getPrefix }) - Command.elabCommand (← `(scoped reduction_notation $rs $sym))) - | `(attr | reduction_sys $rs) => - let rs := rs.getId.updatePrefix decl.getPrefix |> Lean.mkIdent - liftCommandElabM <| Command.elabCommand (← `(create_reduction_sys $(mkIdent decl) $rs)) - liftCommandElabM <| (do - modifyScope ({ · with currNamespace := decl.getPrefix }) - Command.elabCommand (← `(scoped reduction_notation $rs))) - | _ => throwError "invalid syntax for 'reduction_sys' attribute" -} - -end Cslib diff --git a/Cslib/Foundations/Syntax/Congruence.lean b/Cslib/Foundations/Syntax/Congruence.lean new file mode 100644 index 000000000..9d3774f97 --- /dev/null +++ b/Cslib/Foundations/Syntax/Congruence.lean @@ -0,0 +1,20 @@ +/- +Copyright (c) 2026 Fabrizio Montesi. All rights reserved. +Released under Apache 2.0 license as described in the file LICENSE. +Authors: Fabrizio Montesi +-/ + +module + +public import Cslib.Foundations.Syntax.Context +public import Mathlib.Algebra.Order.Monoid.Unbundled.Defs + +@[expose] public section + +namespace Cslib + +/-- An equivalence relation preserved by all contexts. 
-/
+class Congruence (α : Type*) [HasContext α] (r : α → α → Prop) extends
+  IsEquiv α r, covariant : CovariantClass (HasContext.Context α) α (·[·]) r
+
+end Cslib
diff --git a/Cslib/Foundations/Syntax/Context.lean b/Cslib/Foundations/Syntax/Context.lean
new file mode 100644
index 000000000..017abadbe
--- /dev/null
+++ b/Cslib/Foundations/Syntax/Context.lean
@@ -0,0 +1,24 @@
+/-
+Copyright (c) 2026 Fabrizio Montesi. All rights reserved.
+Released under Apache 2.0 license as described in the file LICENSE.
+Authors: Fabrizio Montesi
+-/
+
+module
+
+public import Cslib.Init
+
+@[expose] public section
+
+namespace Cslib
+
+/-- Class for types (`Term`) that have a notion of (single-hole) contexts (`Context`). -/
+class HasContext (Term : Sort*) where
+  /-- The type of contexts. -/
+  Context : Sort*
+  /-- Replaces the hole in the context with a term. -/
+  fill (c : Context) (t : Term) : Term
+
+@[inherit_doc] notation:max c "[" t "]" => HasContext.fill c t
+
+end Cslib
diff --git a/Cslib/Languages/CCS/Basic.lean b/Cslib/Languages/CCS/Basic.lean
index 776a4d863..ca40c9950 100644
--- a/Cslib/Languages/CCS/Basic.lean
+++ b/Cslib/Languages/CCS/Basic.lean
@@ -6,7 +6,8 @@ Authors: Fabrizio Montesi
 
 module
 
-public import Cslib.Init
+public import Cslib.Foundations.Syntax.Context
+public import Cslib.Foundations.Syntax.Congruence
 
 @[expose] public section
 
@@ -29,9 +30,7 @@ option of constant definitions (K = P).
 
 * [D. Sangiorgi, *Introduction to Bisimulation and Coinduction*][Sangiorgi2011]
 -/
 
-namespace Cslib
-
-namespace CCS
+namespace Cslib.CCS
 
 universe u v
 
@@ -55,32 +54,32 @@ deriving DecidableEq
 namespace Act
 
 /-- An action is visible if it is a name or a coname. -/
-@[grind]
+@[scoped grind]
 inductive IsVisible : Act Name → Prop where
 | name : IsVisible (Act.name a)
 | coname : IsVisible (Act.coname a)
 
 /-- If an action is visible, it is not `τ`.
-/ -@[grind →, simp] +@[scoped grind →, simp] theorem isVisible_neq_τ {μ : Act Name} (h : μ.IsVisible) : μ ≠ Act.τ := by cases μ <;> grind /-- Checks that an action is the coaction of another. -/ -@[grind] +@[scoped grind] inductive Co {Name : Type u} : Act Name → Act Name → Prop where | nc : Co (name a) (coname a) | cn : Co (coname a) (name a) /-- `Act.Co` is symmetric. -/ -@[grind →, symm] +@[scoped grind →, symm] theorem Co.symm (h : Act.Co μ μ') : Act.Co μ' μ := by grind /-- If two actions are one the coaction of the other, then they are both visible. -/ -@[grind →] +@[scoped grind →] theorem co_isVisible (h : Act.Co μ μ') : μ.IsVisible ∧ μ'.IsVisible := by grind /-- Checks that an action is the coaction of another. This is the Boolean version of `Act.Co`. -/ -@[grind =] +@[scoped grind =] def isCo [DecidableEq Name] (μ μ' : Act Name) : Bool := match μ, μ' with | name a, coname b | coname a, name b => a = b @@ -96,7 +95,7 @@ instance [DecidableEq Name] {μ μ' : Act Name} : Decidable (Co μ μ') := end Act /-- Contexts. -/ -@[grind] +@[scoped grind] inductive Context (Name : Type u) (Constant : Type v) : Type (max u v) where | hole | pre (μ : Act Name) (c : Context Name Constant) @@ -108,7 +107,7 @@ inductive Context (Name : Type u) (Constant : Type v) : Type (max u v) where deriving DecidableEq /-- Replaces the hole in a `Context` with a `Process`. -/ -@[grind] +@[scoped grind =] def Context.fill (c : Context Name Constant) (p : Process Name Constant) : Process Name Constant := match c with | hole => p @@ -119,11 +118,18 @@ def Context.fill (c : Context Name Constant) (p : Process Name Constant) : Proce | choiceR r c => Process.choice r (c.fill p) | res a c => Process.res a (c.fill p) +instance : HasContext (Process Name Constant) := ⟨Context Name Constant, Context.fill⟩ + +/-- Definition of context filling. 
-/ +@[scoped grind =] +theorem isContext_def (c : Context Name Constant) (p : Process Name Constant) : + c[p] = c.fill p := rfl + /-- Any `Process` can be obtained by filling a `Context` with an atom. This proves that `Context` is a complete formalisation of syntactic contexts for CCS. -/ theorem Context.complete (p : Process Name Constant) : - ∃ c : Context Name Constant, p = (c.fill Process.nil) ∨ - ∃ k : Constant, p = c.fill (Process.const k) := by + ∃ c : Context Name Constant, p = c[Process.nil] ∨ + ∃ k : Constant, p = c[Process.const k] := by induction p case nil => exists hole @@ -148,6 +154,4 @@ theorem Context.complete (p : Process Name Constant) : exists hole grind -end CCS - -end Cslib +end Cslib.CCS diff --git a/Cslib/Languages/CCS/BehaviouralTheory.lean b/Cslib/Languages/CCS/BehaviouralTheory.lean index 3cbdf1f7c..ce34edcee 100644 --- a/Cslib/Languages/CCS/BehaviouralTheory.lean +++ b/Cslib/Languages/CCS/BehaviouralTheory.lean @@ -15,7 +15,7 @@ public import Cslib.Languages.CCS.Semantics ## Main results -- `CCS.bisimilarity_congr`: bisimilarity is a congruence in CCS +- `CCS.bisimilarityCongruence`: bisimilarity is a congruence in CCS. Additionally, some standard laws of bisimilarity for CCS, including: - `CCS.bisimilarity_par_nil`: P | 𝟎 ~ P. @@ -29,10 +29,10 @@ section CCS.BehaviouralTheory variable {Name : Type u} {Constant : Type v} {defs : Constant → CCS.Process Name Constant → Prop} -open CCS CCS.Process CCS.Act - namespace CCS +open Process Act Act.Co Context + attribute [local grind] Tr @[local grind] @@ -414,8 +414,8 @@ theorem bisimilarity_congr_par : grind /-- Bisimilarity is a congruence in CCS. 
-/
-theorem bisimilarity_congr
-  (c : Context Name Constant) (p q : Process Name Constant) (h : p ~[lts (defs := defs)] q) :
+theorem bisimilarity_is_congruence
+  (p q : Process Name Constant) (c : Context Name Constant) (h : p ~[lts (defs := defs)] q) :
    (c.fill p) ~[lts (defs := defs)] (c.fill q) := by
  induction c with
  | parR r c _ =>
@@ -431,6 +431,11 @@ theorem bisimilarity_congr
  | _ => grind [bisimilarity_congr_pre, bisimilarity_congr_par, bisimilarity_congr_choice,
    bisimilarity_congr_res]
 
+/-- Bisimilarity is a congruence in CCS. -/
+instance bisimilarityCongruence :
+    Congruence (Process Name Constant) (Bisimilarity (lts (defs := defs))) where
+  covariant := ⟨by grind [Covariant, bisimilarity_is_congruence]⟩
+
 end CCS
 
 end CCS.BehaviouralTheory
diff --git a/Cslib/Languages/CombinatoryLogic/Basic.lean b/Cslib/Languages/CombinatoryLogic/Basic.lean
index 09897c5e5..e8743062a 100644
--- a/Cslib/Languages/CombinatoryLogic/Basic.lean
+++ b/Cslib/Languages/CombinatoryLogic/Basic.lean
@@ -38,7 +38,7 @@ namespace Cslib
 
 namespace SKI
 
-open Red MRed
+open Red MRed Relation
 
 /-! ### Polynomials and the bracket abstraction algorithm -/
 
@@ -60,9 +60,9 @@ instance CoeTermPolynomial (n : Nat) : Coe SKI (SKI.Polynomial n) := ⟨SKI.Poly
 def Polynomial.eval {n : Nat} (Γ : SKI.Polynomial n) (l : List SKI) (hl : List.length l = n) :
    SKI :=
  match Γ with
-  | SKI.Polynomial.term x => x
-  | SKI.Polynomial.var i => l[i]
-  | SKI.Polynomial.app Γ Δ => (Γ.eval l hl) ⬝ (Δ.eval l hl)
+  | .term x => x
+  | .var i => l[i]
+  | .app Γ Δ => (Γ.eval l hl) ⬝ (Δ.eval l hl)
 
 /-- A polynomial with no free variables is a term -/
 def Polynomial.varFreeToSKI (Γ : SKI.Polynomial 0) : SKI := Γ.eval [] (by trivial)
 
@@ -71,15 +71,15 @@ def Polynomial.varFreeToSKI (Γ : SKI.Polynomial 0) : SKI := Γ.eval [] (by triv
 defined reduction on polynomials) `Γ' ⬝ t ↠ Γ[xₙ ← t]`.
-/ def Polynomial.elimVar {n : Nat} : SKI.Polynomial (n+1) → SKI.Polynomial n /- The K-combinator leaves plain terms unchanged by substitution `K ⬝ x ⬝ t ⇒ x` -/ - | SKI.Polynomial.term x => K ⬝' x + | .term x => K ⬝' x /- Variables other than `xₙ` use the K-combinator as above, for `xₙ` we use `I`. -/ - | SKI.Polynomial.var i => by + | .var i => by by_cases i - exact K ⬝' (SKI.Polynomial.var <| @Fin.ofNat n ⟨Nat.ne_zero_of_lt h⟩ i) + exact K ⬝' (.var <| @Fin.ofNat n ⟨Nat.ne_zero_of_lt h⟩ i) case neg h => exact ↑I /- The S-combinator inductively applies the substitution to the subterms of an application. -/ - | SKI.Polynomial.app Γ Δ => S ⬝' Γ.elimVar ⬝' Δ.elimVar + | .app Γ Δ => S ⬝' Γ.elimVar ⬝' Δ.elimVar /-- @@ -94,50 +94,42 @@ theorem Polynomial.elimVar_correct {n : Nat} (Γ : SKI.Polynomial (n + 1)) {ys : (by rw [List.length_append, hys, List.length_singleton]) := by match n, Γ with - | _, SKI.Polynomial.term x => + | _, .term x => rw [SKI.Polynomial.elimVar, SKI.Polynomial.eval] exact MRed.K _ _ - | _, SKI.Polynomial.app Γ Δ => + | _, .app Γ Δ => rw [SKI.Polynomial.elimVar, SKI.Polynomial.eval] trans Γ.elimVar.eval ys hys ⬝ z ⬝ (Δ.elimVar.eval ys hys ⬝ z) · exact MRed.S _ _ _ · apply parallel_mRed · exact elimVar_correct Γ hys z · exact elimVar_correct Δ hys z - | n, SKI.Polynomial.var i => + | n, .var i => rw [SKI.Polynomial.elimVar] split_ifs with hi - /- This part is quite messy because of the list indexing: possibly it could be cleaned up. 
-/ - · simp_rw [SKI.Polynomial.eval] - have h : (ys ++ [z])[i]'(by simp [hys]) = ys[↑i] := by - simp only [Fin.getElem_fin] - rw [List.getElem_append_left] - rw [h] - simp_rw [Fin.getElem_fin, Fin.val_ofNat, Nat.mod_eq_of_lt hi] + · have h : (ys ++ [z])[i]'(by simp [hys]) = ys[↑i] := by grind + simp_rw [SKI.Polynomial.eval, h, Fin.getElem_fin, Fin.val_ofNat, Nat.mod_eq_of_lt hi] exact MRed.K _ _ - · simp_rw [SKI.Polynomial.eval] - replace hi := Nat.eq_of_lt_succ_of_not_lt i.isLt hi - simp_rw [Fin.getElem_fin, hi] - have app_len : (ys ++ [z]).length = n+1 := by simpa + · replace hi := Nat.eq_of_lt_succ_of_not_lt i.isLt hi + have app_len : (ys ++ [z]).length = n + 1 := by simpa have : (ys ++ [z])[n]'(by rw [app_len]; exact Nat.lt_add_one n) = z := by rw [List.getElem_append_right] <;> simp [hys] - rw [this] + simp_rw [SKI.Polynomial.eval, Fin.getElem_fin, hi, this] exact MRed.I _ /-- Bracket abstraction, by induction using `SKI.Polynomial.elimVar` -/ def Polynomial.toSKI {n : Nat} (Γ : SKI.Polynomial n) : SKI := match n with | 0 => Γ.varFreeToSKI - | _+1 => Γ.elimVar.toSKI + | _ + 1 => Γ.elimVar.toSKI /-- Correctness for the toSKI (bracket abstraction) algorithm. 
-/ theorem Polynomial.toSKI_correct {n : Nat} (Γ : SKI.Polynomial n) (xs : List SKI) (hxs : xs.length = n) : Γ.toSKI.applyList xs ↠ Γ.eval xs hxs := by match n with | 0 => - unfold toSKI varFreeToSKI applyList rw [List.length_eq_zero_iff] at hxs - simp_rw [hxs, List.foldl_nil] + simp_rw [hxs, applyList, List.foldl_nil] rfl | n+1 => -- show that xs = ys + [z] @@ -147,18 +139,16 @@ theorem Polynomial.toSKI_correct {n : Nat} (Γ : SKI.Polynomial n) (xs : List SK simp_rw [this, false_or, List.concat_eq_append] at h replace ⟨ys, z, h⟩ := h -- apply inductive step, using elimVar_correct - unfold toSKI have : ys.length = n := by - replace h := congr_arg List.length <| h + replace h := congr_arg List.length h simp_rw [List.length_append, List.length_singleton, hxs] at h - exact Nat.succ_inj.mp (id (Eq.symm h)) + exact Nat.succ_inj.mp h.symm simp_rw [h, applyList_concat] trans Γ.elimVar.eval ys this ⬝ z · apply MRed.head exact SKI.Polynomial.toSKI_correct Γ.elimVar ys this · exact SKI.Polynomial.elimVar_correct Γ this z - /-! ### Basic auxiliary combinators. @@ -175,7 +165,6 @@ def R : SKI := RPoly.toSKI theorem R_def (x y : SKI) : (R ⬝ x ⬝ y) ↠ y ⬝ x := RPoly.toSKI_correct [x, y] (by simp) - /-- Composition: B := λ f g x. f (g x) -/ def BPoly : SKI.Polynomial 3 := &0 ⬝' (&1 ⬝' &2) /-- A SKI term representing B -/ @@ -183,6 +172,9 @@ def B : SKI := BPoly.toSKI theorem B_def (f g x : SKI) : (B ⬝ f ⬝ g ⬝ x) ↠ f ⬝ (g ⬝ x) := BPoly.toSKI_correct [f, g, x] (by simp) +/-- B followed by tail reduction -/ +lemma B_tail_mred (f g x y : SKI) (h : (g ⬝ x) ↠ y) : (B ⬝ f ⬝ g ⬝ x) ↠ f ⬝ y := + Trans.trans (B_def f g x) (MRed.tail f h) /-- C := λ f x y. 
f y x -/
 def CPoly : SKI.Polynomial 3 := &0 ⬝' &2 ⬝' &1
@@ -191,6 +183,9 @@ def C : SKI := CPoly.toSKI
 theorem C_def (f x y : SKI) : (C ⬝ f ⬝ x ⬝ y) ↠ f ⬝ y ⬝ x := CPoly.toSKI_correct [f, x, y] (by simp)
 
+/-- C followed by head reduction -/
+lemma C_head_mred (f x y z : SKI) (h : (f ⬝ y) ↠ z) : (C ⬝ f ⬝ x ⬝ y) ↠ z ⬝ x :=
+  Trans.trans (C_def f x y) (MRed.head x h)
 
 /-- Rotate right: RotR := λ x y z. z x y -/
 def RotRPoly : SKI.Polynomial 3 := &2 ⬝' &0 ⬝' &1
@@ -199,7 +194,6 @@ def RotR : SKI := RotRPoly.toSKI
 theorem rotR_def (x y z : SKI) : (RotR ⬝ x ⬝ y ⬝ z) ↠ z ⬝ x ⬝ y := RotRPoly.toSKI_correct [x, y, z] (by simp)
 
-
 /-- Rotate left: RotL := λ x y z. y z x -/
 def RotLPoly : SKI.Polynomial 3 := &1 ⬝' &2 ⬝' &0
 /-- A SKI term representing RotL -/
@@ -207,7 +201,6 @@ def RotL : SKI := RotLPoly.toSKI
 theorem rotL_def (x y z : SKI) : (RotL ⬝ x ⬝ y ⬝ z) ↠ y ⬝ z ⬝ x := RotLPoly.toSKI_correct [x, y, z] (by simp)
 
-
 /-- Self application: δ := λ x. x x -/
 def DelPoly : SKI.Polynomial 1 := &0 ⬝' &0
 /-- A SKI term representing δ -/
@@ -215,7 +208,6 @@ def Del : SKI := DelPoly.toSKI
 theorem del_def (x : SKI) : (Del ⬝ x) ↠ x ⬝ x := DelPoly.toSKI_correct [x] (by simp)
 
-
 /-- H := λ f x. f (x x) -/
 def HPoly : SKI.Polynomial 2 := &0 ⬝' (&1 ⬝' &1)
 /-- A SKI term representing H -/
@@ -223,7 +215,6 @@ def H : SKI := HPoly.toSKI
 theorem H_def (f x : SKI) : (H ⬝ f ⬝ x) ↠ f ⬝ (x ⬝ x) := HPoly.toSKI_correct [f, x] (by simp)
 
-
 /-- Curry's fixed-point combinator: Y := λ f.
H f (H f) -/ def YPoly : SKI.Polynomial 1 := H ⬝' &0 ⬝' (H ⬝' &0) /-- A SKI term representing Y -/ @@ -231,9 +222,8 @@ def Y : SKI := YPoly.toSKI theorem Y_def (f : SKI) : (Y ⬝ f) ↠ H ⬝ f ⬝ (H ⬝ f) := YPoly.toSKI_correct [f] (by simp) - /-- The fixed-point property of the Y-combinator -/ -theorem Y_correct (f : SKI) : CommonReduct (Y ⬝ f) (f ⬝ (Y ⬝ f)) := by +theorem Y_correct (f : SKI) : MJoin Red (Y ⬝ f) (f ⬝ (Y ⬝ f)) := by use f ⬝ (H ⬝ f ⬝ (H ⬝ f)) constructor · exact Trans.trans (Y_def f) (H_def f (H ⬝ f)) @@ -255,13 +245,11 @@ def ThAux : SKI := ThAuxPoly.toSKI theorem ThAux_def (x y : SKI) : (ThAux ⬝ x ⬝ y) ↠ y ⬝ (x ⬝ x ⬝ y) := ThAuxPoly.toSKI_correct [x, y] (by simp) - /-- Turing's fixed-point combinator: Θ := (λ x y. y (x x y)) (λ x y. y (x x y)) -/ def Th : SKI := ThAux ⬝ ThAux /-- The fixed-point property of Turing's combinator Θ -/ theorem Th_correct (f : SKI) : (Th ⬝ f) ↠ f ⬝ (Th ⬝ f) := ThAux_def ThAux f - /-! ### Church Booleans -/ /-- A term a represents the boolean value u if it is βη-equivalent to a standard Church boolean. -/ @@ -279,10 +267,12 @@ theorem isBool_trans (u : Bool) (a a' : SKI) (h : a ↠ a') (ha' : IsBool u a') /-- Standard true: TT := λ x y. x -/ def TT : SKI := K +@[scoped grind .] theorem TT_correct : IsBool true TT := fun x y ↦ MRed.K x y /-- Standard false: FF := λ x y. y -/ def FF : SKI := K ⬝ I +@[scoped grind .] theorem FF_correct : IsBool false FF := fun x y ↦ calc (FF ⬝ x ⬝ y) ↠ I ⬝ y := by apply Relation.ReflTransGen.single; apply red_head; exact red_K I x @@ -324,6 +314,7 @@ theorem and_correct (a b : SKI) (ua ub : Bool) (ha : IsBool ua a) (hb : IsBool u cases ub · simp [FF_correct] · simp [TT_correct] + /-- Or := λ a b.
Cond TT (Cond TT FF b) a -/ def OrPoly : SKI.Polynomial 2 := SKI.Cond ⬝' TT ⬝' (SKI.Cond ⬝ TT ⬝ FF ⬝' &1) ⬝' &0 /-- A SKI term representing Or -/ @@ -344,10 +335,8 @@ theorem or_correct (a b : SKI) (ua ub : Bool) (ha : IsBool ua a) (hb : IsBool ub · apply isBool_trans (a' := TT) (h := cond_correct a _ _ true ha) simp [TT_correct] - /- TODO?: other boolean connectives -/ - /-! ### Pairs -/ /-- MkPair := λ a b. ⟨a,b⟩ -/ @@ -357,10 +346,12 @@ def Fst : SKI := R ⬝ TT /-- Second projection -/ def Snd : SKI := R ⬝ FF +@[scoped grind .] theorem fst_correct (a b : SKI) : (Fst ⬝ (MkPair ⬝ a ⬝ b)) ↠ a := by calc _ ↠ SKI.Cond ⬝ a ⬝ b ⬝ TT := R_def _ _ _ ↠ a := cond_correct TT a b true TT_correct +@[scoped grind .] theorem snd_correct (a b : SKI) : (Snd ⬝ (MkPair ⬝ a ⬝ b)) ↠ b := by calc _ ↠ SKI.Cond ⬝ a ⬝ b ⬝ FF := R_def _ _ _ ↠ b := cond_correct FF a b false FF_correct diff --git a/Cslib/Languages/CombinatoryLogic/Confluence.lean b/Cslib/Languages/CombinatoryLogic/Confluence.lean index 03a790300..0ac995363 100644 --- a/Cslib/Languages/CombinatoryLogic/Confluence.lean +++ b/Cslib/Languages/CombinatoryLogic/Confluence.lean @@ -18,8 +18,8 @@ This file proves the **Church-Rosser** theorem for the SKI calculus, that is, if `a ↠ c`, `b ↠ d` and `c ↠ d` for some term `d`. More strongly (though equivalently), we show that the relation of having a common reduct is transitive — in the above situation, `a` and `b`, and `a` and `c` have common reducts, so the result implies the same of `b` and `c`. Note that -`CommonReduct` is symmetric (trivially) and reflexive (since `↠` is), so we in fact show that -`CommonReduct` is an equivalence. +`MJoin Red` is symmetric (trivially) and reflexive (since `↠` is), so we in fact show that +`MJoin Red` is an equivalence.
Our proof follows the method of Tait and Martin-Löf for the lambda calculus, as presented for instance in @@ -28,14 +28,14 @@ Chapter 4 of Peter Selinger's notes: ## Main definitions -- `ParallelReduction` : a relation `⇒ₚ` on terms such that `⇒ ⊆ ⇒ₚ ⊆ ↠`, allowing simultaneous +- `ParallelReduction` : a relation `⭢ₚ` on terms such that `⭢ ⊆ ⭢ₚ ⊆ ↠`, allowing simultaneous reduction on the head and tail of a term. ## Main results - `parallelReduction_diamond` : parallel reduction satisfies the diamond property, that is, it is confluent in a single step. -- `commonReduct_equivalence` : by a general result, the diamond property for `⇒ₚ` implies the same +- `mJoin_red_equivalence` : by a general result, the diamond property for `⭢ₚ` implies the same for its reflexive-transitive closure. This closure is exactly `↠`, which implies the **Church-Rosser** theorem as sketched above. -/ @@ -44,9 +44,10 @@ namespace Cslib namespace SKI -open Red MRed ReductionSystem Relation +open Red MRed Relation /-- A reduction step allowing simultaneous reduction of disjoint redexes -/ +@[reduction_sys "ₚ"] inductive ParallelReduction : SKI → SKI → Prop /-- Parallel reduction is reflexive, -/ | refl (a : SKI) : ParallelReduction a a @@ -58,12 +59,8 @@ inductive ParallelReduction : SKI → SKI → Prop | par ⦃a a' b b' : SKI⦄ : ParallelReduction a a' → ParallelReduction b b' → ParallelReduction (a ⬝ b) (a' ⬝ b') - -/-- Notation for parallel reduction -/ -scoped infix:90 " ⇒ₚ " => ParallelReduction - -/-- The inclusion `⇒ₚ ⊆ ↠` -/ -theorem mRed_of_parallelReduction {a a' : SKI} (h : a ⇒ₚ a') : a ↠ a' := by +/-- The inclusion `⭢ₚ ⊆ ↠` -/ +theorem mRed_of_parallelReduction {a a' : SKI} (h : a ⭢ₚ a') : a ↠ a' := by cases h case refl => exact Relation.ReflTransGen.refl case par a a' b b' ha hb => @@ -74,8 +71,8 @@ theorem mRed_of_parallelReduction {a a' : SKI} (h : a ⇒ₚ a') : a ↠ a' := b case red_K b => exact Relation.ReflTransGen.single (red_K a' b) case red_S a b c => exact 
Relation.ReflTransGen.single (red_S a b c) -/-- The inclusion `⇒ ⊆ ⇒ₚ` -/ -theorem parallelReduction_of_red {a a' : SKI} (h : a ⭢ a') : a ⇒ₚ a' := by +/-- The inclusion `⭢ ⊆ ⭢ₚ` -/ +theorem parallelReduction_of_red {a a' : SKI} (h : a ⭢ a') : a ⭢ₚ a' := by cases h case red_S => apply ParallelReduction.red_S case red_K => apply ParallelReduction.red_K @@ -90,17 +87,17 @@ theorem parallelReduction_of_red {a a' : SKI} (h : a ⭢ a') : a ⇒ₚ a' := by · exact parallelReduction_of_red h /-- The inclusions of `mRed_of_parallelReduction` and -`parallelReduction_of_red` imply that `⇒` and `⇒ₚ` have the same reflexive-transitive +`parallelReduction_of_red` imply that `⭢` and `⭢ₚ` have the same reflexive-transitive closure. -/ theorem reflTransGen_parallelReduction_mRed : - Relation.ReflTransGen ParallelReduction = RedSKI.MRed := by + ReflTransGen ParallelReduction = ReflTransGen Red := by ext a b constructor - · apply Relation.reflTransGen_minimal + · apply Relation.reflTransGen_of_transitive_reflexive · exact fun _ => by rfl · exact Relation.transitive_reflTransGen · exact @mRed_of_parallelReduction - · apply Relation.reflTransGen_minimal + · apply Relation.reflTransGen_of_transitive_reflexive · exact Relation.reflexive_reflTransGen · exact Relation.transitive_reflTransGen · exact fun a a' h => Relation.ReflTransGen.single (parallelReduction_of_red h) @@ -111,139 +108,122 @@ Irreducibility for the (partially applied) primitive combinators. TODO: possibly these should be proven more generally (in another file) for `↠`. 
-/ -lemma I_irreducible (a : SKI) (h : I ⇒ₚ a) : a = I := by +lemma I_irreducible (a : SKI) (h : I ⭢ₚ a) : a = I := by cases h rfl -lemma K_irreducible (a : SKI) (h : K ⇒ₚ a) : a = K := by +lemma K_irreducible (a : SKI) (h : K ⭢ₚ a) : a = K := by cases h rfl -lemma Ka_irreducible (a c : SKI) (h : K ⬝ a ⇒ₚ c) : ∃ a', a ⇒ₚ a' ∧ c = K ⬝ a' := by +lemma Ka_irreducible (a c : SKI) (h : (K ⬝ a) ⭢ₚ c) : ∃ a', a ⭢ₚ a' ∧ c = K ⬝ a' := by cases h - case refl => - use a - exact ⟨ParallelReduction.refl a, rfl⟩ - case par b a' h h' => - use a' - rw [K_irreducible b h] - exact ⟨h', rfl⟩ - -lemma S_irreducible (a : SKI) (h : S ⇒ₚ a) : a = S := by + case refl => use a, .refl a + case par b a' h h' => rw [K_irreducible b h]; use a' + +lemma S_irreducible (a : SKI) (h : S ⭢ₚ a) : a = S := by cases h rfl -lemma Sa_irreducible (a c : SKI) (h : S ⬝ a ⇒ₚ c) : ∃ a', a ⇒ₚ a' ∧ c = S ⬝ a' := by +lemma Sa_irreducible (a c : SKI) (h : (S ⬝ a) ⭢ₚ c) : ∃ a', a ⭢ₚ a' ∧ c = S ⬝ a' := by cases h case refl => exact ⟨a, ParallelReduction.refl a, rfl⟩ - case par b a' h h' => - use a' - rw [S_irreducible b h] - exact ⟨h', rfl⟩ + case par b a' h h' => rw [S_irreducible b h]; use a' -lemma Sab_irreducible (a b c : SKI) (h : S ⬝ a ⬝ b ⇒ₚ c) : - ∃ a' b', a ⇒ₚ a' ∧ b ⇒ₚ b' ∧ c = S ⬝ a' ⬝ b' := by +lemma Sab_irreducible (a b c : SKI) (h : (S ⬝ a ⬝ b) ⭢ₚ c) : + ∃ a' b', a ⭢ₚ a' ∧ b ⭢ₚ b' ∧ c = S ⬝ a' ⬝ b' := by cases h - case refl => - use a; use b - exact ⟨ParallelReduction.refl a, ParallelReduction.refl b, rfl⟩ + case refl => use a, b, .refl a, .refl b case par c b' hc hb => let ⟨d, hd⟩ := Sa_irreducible a c hc rw [hd.2] - use d; use b' - exact ⟨hd.1, hb, rfl⟩ - + use d, b', hd.1 /-- -The key result: the Church-Rosser property holds for `⇒ₚ`. The proof is a lengthy case analysis -on the reductions `a ⇒ₚ a₁` and `a ⇒ₚ a₂`, but is entirely mechanical. +The key result: the Church-Rosser property holds for `⭢ₚ`. 
The proof is a lengthy case analysis +on the reductions `a ⭢ₚ a₁` and `a ⭢ₚ a₂`, but is entirely mechanical. -/ -theorem parallelReduction_diamond (a a₁ a₂ : SKI) (h₁ : a ⇒ₚ a₁) (h₂ : a ⇒ₚ a₂) : - Relation.Join ParallelReduction a₁ a₂ := by +theorem parallelReduction_diamond : Diamond ParallelReduction := by + intro a a₁ a₂ h₁ h₂ cases h₁ - case refl => exact ⟨a₂, h₂, ParallelReduction.refl a₂⟩ + case refl => exact ⟨a₂, h₂, .refl a₂⟩ case par a a' b b' ha' hb' => cases h₂ case refl => use a' ⬝ b' - exact ⟨ParallelReduction.refl (a' ⬝ b'), ParallelReduction.par ha' hb'⟩ + exact ⟨.refl (a' ⬝ b'), .par ha' hb'⟩ case par a'' b'' ha'' hb'' => - let ⟨a₃, ha⟩ := parallelReduction_diamond a a' a'' ha' ha'' - let ⟨b₃, hb⟩ := parallelReduction_diamond b b' b'' hb' hb'' + let ⟨a₃, ha⟩ := parallelReduction_diamond ha' ha'' + let ⟨b₃, hb⟩ := parallelReduction_diamond hb' hb'' use a₃ ⬝ b₃ constructor - · exact ParallelReduction.par ha.1 hb.1 - · exact ParallelReduction.par ha.2 hb.2 + · exact .par ha.1 hb.1 + · exact .par ha.2 hb.2 case red_I => rw [I_irreducible a' ha'] - use b' - exact ⟨ParallelReduction.red_I b', hb'⟩ + use b', .red_I b' case red_K => let ⟨a₂', ha₂'⟩ := Ka_irreducible a₂ a' ha' rw [ha₂'.2] use a₂' - exact ⟨ParallelReduction.red_K a₂' b', ha₂'.1⟩ + exact ⟨.red_K a₂' b', ha₂'.1⟩ case red_S a c => let ⟨a'', c', h⟩ := Sab_irreducible a c a' ha' rw [h.2.2] - use a'' ⬝ b' ⬝ (c' ⬝ b'), ParallelReduction.red_S a'' c' b' - apply ParallelReduction.par - · apply ParallelReduction.par - · exact h.1 - · exact hb' - · apply ParallelReduction.par - · exact h.2.1 - · exact hb' + use a'' ⬝ b' ⬝ (c' ⬝ b'), .red_S a'' c' b' + apply ParallelReduction.par <;> + apply ParallelReduction.par <;> + grind case red_I => cases h₂ - case refl => use a₁; exact ⟨ParallelReduction.refl a₁, ParallelReduction.red_I a₁⟩ + case refl => use a₁; exact ⟨.refl a₁, .red_I a₁⟩ case par c a₁' hc ha => rw [I_irreducible c hc] use a₁' - exact ⟨ha, ParallelReduction.red_I a₁'⟩ - case red_I => use a₁; 
exact ⟨ParallelReduction.refl a₁, ParallelReduction.refl a₁⟩ + exact ⟨ha, .red_I a₁'⟩ + case red_I => use a₁; exact ⟨.refl a₁, .refl a₁⟩ case red_K c => cases h₂ - case refl => use a₁; exact ⟨ParallelReduction.refl a₁, ParallelReduction.red_K a₁ c⟩ + case refl => use a₁; exact ⟨.refl a₁, .red_K a₁ c⟩ case par a' c' ha hc => let ⟨a₁', h'⟩ := Ka_irreducible a₁ a' ha rw [h'.2] use a₁' - exact ⟨h'.1, ParallelReduction.red_K a₁' c'⟩ + exact ⟨h'.1, .red_K a₁' c'⟩ case red_K => - use a₁; exact ⟨ParallelReduction.refl a₁, ParallelReduction.refl a₁⟩ + use a₁; exact ⟨.refl a₁, .refl a₁⟩ case red_S a b c => cases h₂ case refl => use a ⬝ c ⬝ (b ⬝ c) - exact ⟨ParallelReduction.refl _, ParallelReduction.red_S _ _ _⟩ + exact ⟨.refl _, .red_S ..⟩ case par d c' hd hc => let ⟨a', b', h⟩ := Sab_irreducible a b d hd rw [h.2.2] use a' ⬝ c' ⬝ (b' ⬝ c') constructor · apply ParallelReduction.par - · exact ParallelReduction.par h.left hc - · exact ParallelReduction.par h.2.1 hc - · exact ParallelReduction.red_S _ _ _ - case red_S => exact ⟨a ⬝ c ⬝ (b ⬝ c), ParallelReduction.refl _, ParallelReduction.refl _,⟩ + · exact .par h.left hc + · exact .par h.2.1 hc + · exact .red_S .. + case red_S => exact ⟨a ⬝ c ⬝ (b ⬝ c), .refl _, .refl _,⟩ theorem join_parallelReduction_equivalence : - Equivalence (Relation.Join (Relation.ReflTransGen ParallelReduction)) := - Confluent.equivalence_join_reflTransGen <| Diamond.toConfluent (parallelReduction_diamond _ _ _) + Equivalence (MJoin ParallelReduction) := + Confluent.equivalence_join_reflTransGen <| Diamond.toConfluent parallelReduction_diamond /-- The **Church-Rosser** theorem in its general form. -/ -theorem commonReduct_equivalence : Equivalence CommonReduct := by - unfold CommonReduct - rw [←reflTransGen_parallelReduction_mRed] +theorem mJoin_red_equivalence : Equivalence (MJoin Red) := by + rw [MJoin, ←reflTransGen_parallelReduction_mRed] exact join_parallelReduction_equivalence /-- The **Church-Rosser** theorem in the form it is usually stated. 
-/ -theorem MRed.diamond (a b c : SKI) (hab : a ↠ b) (hac : a ↠ c) : CommonReduct b c := by - apply commonReduct_equivalence.trans (y := a) - · exact commonReduct_equivalence.symm (commonReduct_of_single hab) - · exact commonReduct_of_single hac +theorem MRed.diamond : Confluent Red := by + intro a b c hab hac + apply mJoin_red_equivalence.trans (y := a) + · exact mJoin_red_equivalence.symm (MJoin.single hab) + · exact MJoin.single hac end SKI diff --git a/Cslib/Languages/CombinatoryLogic/Defs.lean b/Cslib/Languages/CombinatoryLogic/Defs.lean index b03f4fd56..2612d7451 100644 --- a/Cslib/Languages/CombinatoryLogic/Defs.lean +++ b/Cslib/Languages/CombinatoryLogic/Defs.lean @@ -6,7 +6,7 @@ Authors: Thomas Waring module -public meta import Cslib.Foundations.Semantics.ReductionSystem.Basic +public meta import Cslib.Foundations.Data.Relation @[expose] public section @@ -20,13 +20,12 @@ using the SKI basis. - `SKI`: the type of expressions in the SKI calculus, - `Red`: single-step reduction of SKI expressions, -- `MRed`: multi-step reduction of SKI expressions, -- `CommonReduct`: the relation between terms having a common reduct, +- `MRed`: multi-step reduction of SKI expressions. ## Notation - `⬝` : application between SKI terms, -- `⇒` : single-step reduction, +- `⭢` : single-step reduction, - `↠` : multi-step reduction, ## References @@ -72,7 +71,7 @@ def size : SKI → Nat /-! 
### Reduction relations between SKI terms -/ /-- Single-step reduction of SKI terms -/ -@[reduction_sys RedSKI] +@[scoped grind, reduction_sys] inductive Red : SKI → SKI → Prop where /-- The operational semantics of the `S`, -/ | red_S (x y z : SKI) : Red (S ⬝ x ⬝ y ⬝ z) (x ⬝ z ⬝ (y ⬝ z)) @@ -86,7 +85,7 @@ inductive Red : SKI → SKI → Prop where | red_tail (x y y' : SKI) (_ : Red y y') : Red (x ⬝ y) (x ⬝ y') -open Red ReductionSystem +open Red Relation lemma Red.ne {x y : SKI} : (x ⭢ y) → x ≠ y | red_S _ _ _, h => by cases h @@ -95,49 +94,27 @@ lemma Red.ne {x y : SKI} : (x ⭢ y) → x ≠ y | red_head _ _ _ h', h => Red.ne h' (SKI.app.inj h).1 | red_tail _ _ _ h', h => Red.ne h' (SKI.app.inj h).2 -theorem MRed.S (x y z : SKI) : (S ⬝ x ⬝ y ⬝ z) ↠ (x ⬝ z ⬝ (y ⬝ z)) := MRed.single RedSKI <| red_S .. -theorem MRed.K (x y : SKI) : (K ⬝ x ⬝ y) ↠ x := MRed.single RedSKI <| red_K .. -theorem MRed.I (x : SKI) : (I ⬝ x) ↠ x := MRed.single RedSKI <| red_I .. +theorem MRed.S (x y z : SKI) : (S ⬝ x ⬝ y ⬝ z) ↠ (x ⬝ z ⬝ (y ⬝ z)) := .single <| red_S .. +theorem MRed.K (x y : SKI) : (K ⬝ x ⬝ y) ↠ x := .single <| red_K .. +theorem MRed.I (x : SKI) : (I ⬝ x) ↠ x := .single <| red_I .. 
theorem MRed.head {a a' : SKI} (b : SKI) (h : a ↠ a') : (a ⬝ b) ↠ (a' ⬝ b) := by - induction h with - | refl => apply MRed.refl - | @tail a' a'' _ ha'' ih => - apply Relation.ReflTransGen.tail (b := a' ⬝ b) ih - exact Red.red_head a' a'' b ha'' + induction h <;> grind theorem MRed.tail (a : SKI) {b b' : SKI} (h : b ↠ b') : (a ⬝ b) ↠ (a ⬝ b') := by - induction h with - | refl => apply MRed.refl - | @tail b' b'' _ hb'' ih => - apply Relation.ReflTransGen.tail (b := a ⬝ b') ih - exact Red.red_tail a b' b'' hb'' + induction h <;> grind lemma parallel_mRed {a a' b b' : SKI} (ha : a ↠ a') (hb : b ↠ b') : (a ⬝ b) ↠ (a' ⬝ b') := Trans.simple (MRed.head b ha) (MRed.tail a' hb) lemma parallel_red {a a' b b' : SKI} (ha : a ⭢ a') (hb : b ⭢ b') : (a ⬝ b) ↠ (a' ⬝ b') := by - trans a' ⬝ b - all_goals apply MRed.single - · exact Red.red_head a a' b ha - · exact Red.red_tail a' b b' hb + trans a' ⬝ b <;> grind -/-- Express that two terms have a reduce to a common term. -/ -def CommonReduct : SKI → SKI → Prop := Relation.Join RedSKI.MRed - -lemma commonReduct_of_single {a b : SKI} (h : a ↠ b) : CommonReduct a b := ⟨b, h, by rfl⟩ - -theorem symmetric_commonReduct : Symmetric CommonReduct := Relation.symmetric_join - -theorem reflexive_commonReduct : Reflexive CommonReduct := by - intro x - use x - -theorem commonReduct_head {x x' : SKI} (y : SKI) : CommonReduct x x' → CommonReduct (x ⬝ y) (x' ⬝ y) +theorem mJoin_red_head {x x' : SKI} (y : SKI) : MJoin Red x x' → MJoin Red (x ⬝ y) (x' ⬝ y) | ⟨z, hz, hz'⟩ => ⟨z ⬝ y, MRed.head y hz, MRed.head y hz'⟩ -theorem commonReduct_tail (x : SKI) {y y' : SKI} : CommonReduct y y' → CommonReduct (x ⬝ y) (x ⬝ y') +theorem mJoin_red_tail (x : SKI) {y y' : SKI} : MJoin Red y y' → MJoin Red (x ⬝ y) (x ⬝ y') | ⟨z, hz, hz'⟩ => ⟨x ⬝ z, MRed.tail x hz, MRed.tail x hz'⟩ end SKI diff --git a/Cslib/Languages/CombinatoryLogic/Evaluation.lean b/Cslib/Languages/CombinatoryLogic/Evaluation.lean index cbc48402b..299c748d4 100644 --- 
a/Cslib/Languages/CombinatoryLogic/Evaluation.lean +++ b/Cslib/Languages/CombinatoryLogic/Evaluation.lean @@ -39,7 +39,7 @@ namespace Cslib namespace SKI -open Red +open Red Relation /-- The predicate that a term has no reducible sub-terms. -/ def RedexFree : SKI → Prop @@ -131,22 +131,25 @@ theorem evalStep_right_correct : (x y : SKI) → (x.evalStep = Sum.inr y) → x rw [←h] exact red_head _ _ _ <| evalStep_right_correct _ _ habcd -theorem redexFree_of_no_red {x : SKI} (h : ∀ y, ¬ (x ⭢ y)) : x.RedexFree := by +theorem redexFree_of_normal_red {x : SKI} (h : Normal Red x) : x.RedexFree := by match hx : x.evalStep with | Sum.inl h' => exact h'.down - | Sum.inr y => cases h _ (evalStep_right_correct x y hx) - -theorem RedexFree.no_red : {x : SKI} → x.RedexFree → ∀ y, ¬ (x ⭢ y) -| S ⬝ x, hx, S ⬝ y, red_tail _ _ _ hx' => by rw [RedexFree] at hx; exact hx.no_red y hx' -| K ⬝ x, hx, K ⬝ y, red_tail _ _ _ hx' => by rw [RedexFree] at hx; exact hx.no_red y hx' -| S ⬝ _ ⬝ _, ⟨hx, _⟩, S ⬝ _ ⬝ _, red_head _ _ _ (red_tail _ _ _ h3) => hx.no_red _ h3 -| S ⬝ _ ⬝ _, ⟨_, hy⟩, S ⬝ _ ⬝ _, red_tail _ _ _ h3 => hy.no_red _ h3 -| _ ⬝ _ ⬝ _ ⬝ _ ⬝ _, ⟨hx, _⟩, _ ⬝ _, red_head _ _ _ hq => hx.no_red _ hq -| _ ⬝ _ ⬝ _ ⬝ _ ⬝ _, ⟨_, hy⟩, _ ⬝ _, red_tail _ _ _ he => hy.no_red _ he + | Sum.inr y => rw [Normal_iff] at h; cases h _ (evalStep_right_correct x y hx) + +theorem RedexFree.normal_red {x : SKI} (hx : x.RedexFree) : Normal Red x := by + simp_rw [Normal_iff] + intro y hy + match x, hx, y, hy with + | S ⬝ x, hx, S ⬝ y, red_tail _ _ _ hx' => rw [RedexFree] at hx; exact hx.normal_red ⟨_, hx'⟩ + | K ⬝ x, hx, K ⬝ y, red_tail _ _ _ hx' => rw [RedexFree] at hx; exact hx.normal_red ⟨_, hx'⟩ + | S ⬝ _ ⬝ _, ⟨hx, _⟩, S ⬝ _ ⬝ _, red_head _ _ _ (red_tail _ _ _ h3) => exact hx.normal_red ⟨_, h3⟩ + | S ⬝ _ ⬝ _, ⟨_, hy⟩, S ⬝ _ ⬝ _, red_tail _ _ _ h3 => exact hy.normal_red ⟨_, h3⟩ + | _ ⬝ _ ⬝ _ ⬝ _ ⬝ _, ⟨hx, _⟩, _ ⬝ _, red_head _ _ _ hq => exact hx.normal_red ⟨_, hq⟩ + | _ ⬝ _ ⬝ _ ⬝ _ ⬝ _, ⟨_, hy⟩, _ ⬝ _, 
red_tail _ _ _ he => exact hy.normal_red ⟨_, he⟩ /-- A term is redex free iff it has no one-step reductions. -/ -theorem redexFree_iff {x : SKI} : x.RedexFree ↔ ∀ y, ¬ (x ⭢ y) := - ⟨RedexFree.no_red, redexFree_of_no_red⟩ +theorem redexFree_iff {x : SKI} : x.RedexFree ↔ Normal Red x := + ⟨RedexFree.normal_red, redexFree_of_normal_red⟩ theorem redexFree_iff_evalStep {x : SKI} : x.RedexFree ↔ (x.evalStep).isLeft = true := by constructor @@ -154,7 +157,7 @@ theorem redexFree_iff_evalStep {x : SKI} : x.RedexFree ↔ (x.evalStep).isLeft = intro h match hx : x.evalStep with | Sum.inl h' => exact rfl - | Sum.inr y => cases h.no_red _ (evalStep_right_correct _ _ hx) + | Sum.inr y => cases h.normal_red ⟨_, (evalStep_right_correct _ _ hx)⟩ case mpr => intro h match hx : x.evalStep with @@ -166,34 +169,22 @@ instance : DecidablePred RedexFree := fun _ => decidable_of_iff' _ redexFree_iff /-- A term is redex free iff its only many-step reduction is itself. -/ theorem redexFree_iff_mred_eq {x : SKI} : x.RedexFree ↔ ∀ y, (x ↠ y) ↔ x = y := by constructor - case mp => - intro h y - constructor - case mp => - intro h' - cases h'.cases_head - case inl => assumption - case inr h' => - obtain ⟨z, hz, _⟩ := h' - cases h.no_red _ hz - case mpr => - intro h - rw [h] + case mp => grind [RedexFree.normal_red] case mpr => intro h rw [redexFree_iff] - intro y hy + intro ⟨y, hy⟩ specialize h y exact Red.ne hy (h.1 (Relation.ReflTransGen.single hy)) /-- If a term has a common reduct with a normal term, it in fact reduces to that term. -/ -theorem commonReduct_redexFree {x y : SKI} (hy : y.RedexFree) (h : CommonReduct x y) : x ↠ y := +theorem mJoin_red_redexFree {x y : SKI} (hy : y.RedexFree) (h : MJoin Red x y) : x ↠ y := let ⟨w, hyw, hzw⟩ := h (redexFree_iff_mred_eq.1 hy _ |>.1 hzw : y = w) ▸ hyw /-- If `x` reduces to both `y` and `z`, and `z` is not reducible, then `y` reduces to `z`. 
-/ lemma confluent_redexFree {x y z : SKI} (hxy : x ↠ y) (hxz : x ↠ z) (hz : RedexFree z) : y ↠ z := - let ⟨w, hyw, hzw⟩ := MRed.diamond x y z hxy hxz + let ⟨w, hyw, hzw⟩ := MRed.diamond hxy hxz (redexFree_iff_mred_eq.1 hz _ |>.1 hzw : z = w) ▸ hyw /-- @@ -204,13 +195,13 @@ lemma unique_normal_form {x y z : SKI} (redexFree_iff_mred_eq.1 hy _).1 (confluent_redexFree hxy hxz hz) /-- If `x` and `y` are normal and have a common reduct, then they are equal. -/ -lemma eq_of_commonReduct_redexFree {x y : SKI} (h : CommonReduct x y) +lemma eq_of_mJoin_red_redexFree {x y : SKI} (h : MJoin Red x y) (hx : x.RedexFree) (hy : y.RedexFree) : x = y := - (redexFree_iff_mred_eq.1 hx _).1 (commonReduct_redexFree hy h) + (redexFree_iff_mred_eq.1 hx _).1 (mJoin_red_redexFree hy h) /-! ### Injectivity for datatypes -/ -lemma sk_nequiv : ¬ CommonReduct S K := by +lemma sk_nequiv : ¬ MJoin Red S K := by intro ⟨z, hsz, hkz⟩ have hS : RedexFree S := by simp [RedexFree] have hK : RedexFree K := by simp [RedexFree] @@ -219,33 +210,19 @@ lemma sk_nequiv : ¬ CommonReduct S K := by /-- Injectivity for booleans. 
-/ theorem isBool_injective (x y : SKI) (u v : Bool) (hx : IsBool u x) (hy : IsBool v y) - (hxy : CommonReduct x y) : u = v := by - have h : CommonReduct (if u then S else K) (if v then S else K) := by - apply commonReduct_equivalence.trans (y := x ⬝ S ⬝ K) - · apply commonReduct_equivalence.symm - apply commonReduct_of_single + (hxy : MJoin Red x y) : u = v := by + have h : MJoin Red (if u then S else K) (if v then S else K) := by + apply mJoin_red_equivalence.trans (y := x ⬝ S ⬝ K) + · apply mJoin_red_equivalence.symm + apply Relation.MJoin.single exact hx S K - · apply commonReduct_equivalence.trans (y := y ⬝ S ⬝ K) - · exact commonReduct_head K <| commonReduct_head S hxy - · apply commonReduct_of_single + · apply mJoin_red_equivalence.trans (y := y ⬝ S ⬝ K) + · exact mJoin_red_head K <| mJoin_red_head S hxy + · apply Relation.MJoin.single exact hy S K - by_cases u - case pos hu => - by_cases v - case pos hv => - rw [hu, hv] - case neg hv => - simp_rw [hu, hv, Bool.false_eq_true, reduceIte] at h - exact False.elim <| sk_nequiv h - case neg hu => - by_cases v - case pos hv => - simp_rw [hu, hv, Bool.false_eq_true, reduceIte] at h - exact False.elim <| sk_nequiv (commonReduct_equivalence.symm h) - case neg hv => - simp_rw [hu, hv] - -lemma TF_nequiv : ¬ CommonReduct TT FF := fun h => + grind [sk_nequiv, mJoin_red_equivalence.symm h] + +lemma TF_nequiv : ¬ MJoin Red TT FF := fun h => (Bool.eq_not_self true).mp <| isBool_injective TT FF true false TT_correct FF_correct h /-- A specialisation of `Church : Nat → SKI`. 
-/ @@ -271,17 +248,17 @@ lemma churchK_injective : Function.Injective churchK := /-- Injectivity for Church numerals -/ theorem isChurch_injective (x y : SKI) (n m : Nat) (hx : IsChurch n x) (hy : IsChurch m y) - (hxy : CommonReduct x y) : n = m := by - suffices CommonReduct (churchK n) (churchK m) by + (hxy : MJoin Red x y) : n = m := by + suffices MJoin Red (churchK n) (churchK m) by apply churchK_injective - exact eq_of_commonReduct_redexFree this (churchK_redexFree n) (churchK_redexFree m) - apply commonReduct_equivalence.trans (y := x ⬝ K ⬝ K) + exact eq_of_mJoin_red_redexFree this (churchK_redexFree n) (churchK_redexFree m) + apply mJoin_red_equivalence.trans (y := x ⬝ K ⬝ K) · simp_rw [churchK_church] - exact commonReduct_equivalence.symm <| commonReduct_of_single (hx K K) - · apply commonReduct_equivalence.trans (y := y ⬝ K ⬝ K) - · apply commonReduct_head; apply commonReduct_head; assumption + exact mJoin_red_equivalence.symm <| Relation.MJoin.single (hx K K) + · apply mJoin_red_equivalence.trans (y := y ⬝ K ⬝ K) + · apply mJoin_red_head; apply mJoin_red_head; assumption · simp_rw [churchK_church] - exact commonReduct_of_single (hy K K) + exact Relation.MJoin.single (hy K K) /-- **Rice's theorem**: no SKI term is a non-trivial predicate. 
@@ -308,7 +285,7 @@ theorem rice {P : SKI} (hP : ∀ x : SKI, ((P ⬝ x) ↠ TT) ∨ (P ⬝ x) ↠ F _ ↠ P ⬝ (TT ⬝ b ⬝ a) := by apply MRed.tail; apply MRed.head; apply MRed.head; exact h _ ↠ P ⬝ b := by apply MRed.tail; apply TT_correct _ ↠ FF := hb - exact TF_nequiv <| MRed.diamond _ _ _ h this + exact TF_nequiv <| MRed.diamond h this case inr h => have : (P ⬝ Abs) ↠ TT := calc _ ↠ P ⬝ (Neg ⬝ Abs) := by apply MRed.tail; apply fixedPoint_correct @@ -316,7 +293,7 @@ theorem rice {P : SKI} (hP : ∀ x : SKI, ((P ⬝ x) ↠ TT) ∨ (P ⬝ x) ↠ F _ ↠ P ⬝ (FF ⬝ b ⬝ a) := by apply MRed.tail; apply MRed.head; apply MRed.head; exact h _ ↠ P ⬝ a := by apply MRed.tail; apply FF_correct _ ↠ TT := ha - exact TF_nequiv <| MRed.diamond _ _ _ this h + exact TF_nequiv <| MRed.diamond this h /-- **Rice's theorem**: any SKI predicate is trivial. diff --git a/Cslib/Languages/CombinatoryLogic/List.lean b/Cslib/Languages/CombinatoryLogic/List.lean new file mode 100644 index 000000000..c71721da9 --- /dev/null +++ b/Cslib/Languages/CombinatoryLogic/List.lean @@ -0,0 +1,279 @@ +/- +Copyright (c) 2026 Jesse Alama. All rights reserved. +Released under Apache 2.0 license as described in the file LICENSE. +Authors: Jesse Alama +-/ + +module + +public import Cslib.Languages.CombinatoryLogic.Recursion + +@[expose] public section + +/-! +# Church-Encoded Lists in SKI Combinatory Logic + +Church-encoded lists for proving SKI ≃ TM equivalence. A list is encoded as +`λ c n. c a₀ (c a₁ (... (c aₖ n)...))` where each `aᵢ` is a Church numeral. +-/ + +namespace Cslib + +namespace SKI + +open Red MRed + +/-! ### Church List Representation -/ + +/-- A term correctly Church-encodes a list of natural numbers. -/ +def IsChurchList : List ℕ → SKI → Prop + | [], cns => ∀ c n : SKI, (cns ⬝ c ⬝ n) ↠ n + | x :: xs, cns => ∀ c n : SKI, + ∃ cx cxs : SKI, IsChurch x cx ∧ IsChurchList xs cxs ∧ + (cns ⬝ c ⬝ n) ↠ c ⬝ cx ⬝ (cxs ⬝ c ⬝ n) + +/-- `IsChurchList` is preserved under multi-step reduction of the term. 
-/ +theorem isChurchList_trans {ns : List ℕ} {cns cns' : SKI} (h : cns ↠ cns') + (hcns' : IsChurchList ns cns') : IsChurchList ns cns := by + match ns with + | [] => + intro c n + exact Trans.trans (parallel_mRed (MRed.head c h) .refl) (hcns' c n) + | x :: xs => + intro c n + obtain ⟨cx, cxs, hcx, hcxs, hred⟩ := hcns' c n + exact ⟨cx, cxs, hcx, hcxs, Trans.trans (parallel_mRed (MRed.head c h) .refl) hred⟩ + +/-- Both components of a pair are Church lists. -/ +structure IsChurchListPair (prev curr : List ℕ) (p : SKI) : Prop where + fst : IsChurchList prev (Fst ⬝ p) + snd : IsChurchList curr (Snd ⬝ p) + +/-- IsChurchListPair is preserved under reduction. -/ +@[scoped grind →] +theorem isChurchListPair_trans {prev curr : List ℕ} {p p' : SKI} (hp : p ↠ p') + (hp' : IsChurchListPair prev curr p') : IsChurchListPair prev curr p := by + constructor + · apply isChurchList_trans (MRed.tail Fst hp) + exact hp'.1 + · apply isChurchList_trans (MRed.tail Snd hp) + exact hp'.2 + +namespace List + +/-! ### Nil: The empty list -/ + +/-- nil = λ c n. n -/ +def NilPoly : SKI.Polynomial 2 := &1 + +/-- The SKI term for the empty list. -/ +def Nil : SKI := NilPoly.toSKI + +/-- Reduction: `Nil ⬝ c ⬝ n ↠ n`. -/ +theorem nil_def (c n : SKI) : (Nil ⬝ c ⬝ n) ↠ n := + NilPoly.toSKI_correct [c, n] (by simp) + +/-- The empty list term correctly represents `[]`. -/ +theorem nil_correct : IsChurchList [] Nil := nil_def + +/-! ### Cons: Consing an element onto a list -/ + +/-- cons = λ x xs c n. c x (xs c n) -/ +def ConsPoly : SKI.Polynomial 4 := &2 ⬝' &0 ⬝' (&1 ⬝' &2 ⬝' &3) + +/-- The SKI term for list cons. -/ +def Cons : SKI := ConsPoly.toSKI + +/-- Reduction: `Cons ⬝ x ⬝ xs ⬝ c ⬝ n ↠ c ⬝ x ⬝ (xs ⬝ c ⬝ n)`. -/ +theorem cons_def (x xs c n : SKI) : + (Cons ⬝ x ⬝ xs ⬝ c ⬝ n) ↠ c ⬝ x ⬝ (xs ⬝ c ⬝ n) := + ConsPoly.toSKI_correct [x, xs, c, n] (by simp) + +/-- Cons preserves Church list representation. 
-/ +theorem cons_correct {x : ℕ} {xs : List ℕ} {cx cxs : SKI} + (hcx : IsChurch x cx) (hcxs : IsChurchList xs cxs) : + IsChurchList (x :: xs) (Cons ⬝ cx ⬝ cxs) := by + intro c n + use cx, cxs, hcx, hcxs + exact cons_def cx cxs c n + +/-- Singleton list correctness. -/ +theorem singleton_correct {x : ℕ} {cx : SKI} (hcx : IsChurch x cx) : + IsChurchList [x] (Cons ⬝ cx ⬝ Nil) := + cons_correct hcx nil_correct + +/-- The canonical SKI term for a Church-encoded list. -/ +def toChurch : List ℕ → SKI + | [] => Nil + | x :: xs => Cons ⬝ (SKI.toChurch x) ⬝ (toChurch xs) + +/-- `toChurch [] = Nil`. -/ +@[simp] +lemma toChurch_nil : toChurch [] = Nil := rfl + +/-- `toChurch (x :: xs) = Cons ⬝ SKI.toChurch x ⬝ toChurch xs`. -/ +@[simp] +lemma toChurch_cons (x : ℕ) (xs : List ℕ) : + toChurch (x :: xs) = Cons ⬝ (SKI.toChurch x) ⬝ (toChurch xs) := rfl + +/-- `toChurch ns` correctly represents `ns`. -/ +theorem toChurch_correct (ns : List ℕ) : IsChurchList ns (toChurch ns) := by + induction ns with + | nil => exact nil_correct + | cons x xs ih => exact cons_correct (SKI.toChurch_correct x) ih + +/-! ### Head: Extract the head of a list -/ + +/-- headD d xs = xs K d (returns d for empty list) -/ +def HeadDPoly : SKI.Polynomial 2 := &1 ⬝' K ⬝' &0 + +/-- The SKI term for list head with default. -/ +def HeadD : SKI := HeadDPoly.toSKI + +/-- Reduction: `HeadD ⬝ d ⬝ xs ↠ xs ⬝ K ⬝ d`. -/ +theorem headD_def (d xs : SKI) : (HeadD ⬝ d ⬝ xs) ↠ xs ⬝ K ⬝ d := + HeadDPoly.toSKI_correct [d, xs] (by simp) + +/-- General head-with-default correctness. 
-/ +theorem headD_correct {d : ℕ} {cd : SKI} (hcd : IsChurch d cd) + {ns : List ℕ} {cns : SKI} (hcns : IsChurchList ns cns) : + IsChurch (ns.headD d) (HeadD ⬝ cd ⬝ cns) := by + match ns with + | [] => + simp only [List.headD_nil] + apply isChurch_trans d (headD_def cd cns) + apply isChurch_trans d (hcns K cd) + exact hcd + | x :: xs => + simp only [List.headD_cons] + apply isChurch_trans x (headD_def cd cns) + obtain ⟨cx, cxs, hcx, _, hred⟩ := hcns K cd + exact isChurch_trans x hred (isChurch_trans x (MRed.K cx _) hcx) + +/-- The SKI term for list head (default 0). -/ +def Head : SKI := HeadD ⬝ SKI.Zero + +/-- Reduction: `Head ⬝ xs ↠ xs ⬝ K ⬝ Zero`. -/ +theorem head_def (xs : SKI) : (Head ⬝ xs) ↠ xs ⬝ K ⬝ SKI.Zero := + headD_def SKI.Zero xs + +/-- Head correctness (default 0). -/ +theorem head_correct (ns : List ℕ) (cns : SKI) (hcns : IsChurchList ns cns) : + IsChurch (ns.headD 0) (Head ⬝ cns) := + headD_correct zero_correct hcns + +/-! ### Tail: Extract the tail of a list -/ + +/-- Step function for tail: (prev, curr) → (curr, cons h curr) -/ +def TailStepPoly : SKI.Polynomial 2 := + MkPair ⬝' (Snd ⬝' &1) ⬝' (Cons ⬝' &0 ⬝' (Snd ⬝' &1)) + +/-- The step function for computing list tail. -/ +def TailStep : SKI := TailStepPoly.toSKI + +/-- Reduction of the tail step function. -/ +theorem tailStep_def (h p : SKI) : + (TailStep ⬝ h ⬝ p) ↠ MkPair ⬝ (Snd ⬝ p) ⬝ (Cons ⬝ h ⬝ (Snd ⬝ p)) := + TailStepPoly.toSKI_correct [h, p] (by simp) + +/-- tail xs = Fst (xs TailStep (MkPair Nil Nil)) -/ +def TailPoly : SKI.Polynomial 1 := + Fst ⬝' (&0 ⬝' TailStep ⬝' (MkPair ⬝ Nil ⬝ Nil)) + +/-- The tail of a Church-encoded list. -/ +def Tail : SKI := TailPoly.toSKI + +/-- Reduction: `Tail ⬝ xs ↠ Fst ⬝ (xs ⬝ TailStep ⬝ (MkPair ⬝ Nil ⬝ Nil))`. -/ +theorem tail_def (xs : SKI) : + (Tail ⬝ xs) ↠ Fst ⬝ (xs ⬝ TailStep ⬝ (MkPair ⬝ Nil ⬝ Nil)) := + TailPoly.toSKI_correct [xs] (by simp) + +/-- The initial pair (nil, nil) satisfies the invariant. 
-/ +@[simp] +theorem tail_init : IsChurchListPair [] [] (MkPair ⬝ Nil ⬝ Nil) := by + constructor + · apply isChurchList_trans (fst_correct _ _); exact nil_correct + · apply isChurchList_trans (snd_correct _ _); exact nil_correct + +/-- The step function preserves the tail-computing invariant. -/ +theorem tailStep_correct {x : ℕ} {xs : List ℕ} {cx p : SKI} + (hcx : IsChurch x cx) (hp : IsChurchListPair xs.tail xs p) : + IsChurchListPair xs (x :: xs) (TailStep ⬝ cx ⬝ p) := by + apply isChurchListPair_trans (tailStep_def cx p) + exact ⟨isChurchList_trans (fst_correct _ _) hp.2, + isChurchList_trans (snd_correct _ _) (cons_correct hcx hp.2)⟩ + +theorem tailFold_correct (ns : List ℕ) (cns : SKI) (hcns : IsChurchList ns cns) : + ∃ p, (cns ⬝ TailStep ⬝ (MkPair ⬝ Nil ⬝ Nil)) ↠ p ∧ + IsChurchListPair ns.tail ns p := by + induction ns generalizing cns with + | nil => + -- For empty list, the fold returns the initial pair + use MkPair ⬝ Nil ⬝ Nil + constructor + · exact hcns TailStep (MkPair ⬝ Nil ⬝ Nil) + · exact tail_init + | cons x xs ih => + -- For x :: xs, first fold xs, then apply step + -- cns ⬝ TailStep ⬝ init ↠ TailStep ⬝ cx ⬝ (cxs ⬝ TailStep ⬝ init) + -- Get the Church representations for x and xs + obtain ⟨cx, cxs, hcx, hcxs, hred⟩ := hcns TailStep (MkPair ⬝ Nil ⬝ Nil) + -- By IH, folding xs gives a pair representing (xs.tail, xs) + obtain ⟨p_xs, hp_xs_red, hp_xs_pair⟩ := ih cxs hcxs + -- After step, we get a pair representing (xs, x :: xs) + have hstep := tailStep_correct hcx hp_xs_pair + -- The full fold: cns ⬝ TailStep ⬝ init ↠ TailStep ⬝ cx ⬝ (cxs ⬝ TailStep ⬝ init) + -- ↠ TailStep ⬝ cx ⬝ p_xs + use TailStep ⬝ cx ⬝ p_xs + constructor + · exact Trans.trans hred (MRed.tail _ hp_xs_red) + · exact hstep + +/-- Tail correctness. 
-/ +theorem tail_correct (ns : List ℕ) (cns : SKI) (hcns : IsChurchList ns cns) : + IsChurchList ns.tail (Tail ⬝ cns) := by + -- Tail ⬝ cns ↠ Fst ⬝ (cns ⬝ TailStep ⬝ (MkPair ⬝ Nil ⬝ Nil)) + apply isChurchList_trans (tail_def cns) + -- Get the fold result + obtain ⟨p, hp_red, hp_pair⟩ := tailFold_correct ns cns hcns + -- Fst ⬝ (cns ⬝ TailStep ⬝ init) ↠ Fst ⬝ p + apply isChurchList_trans (MRed.tail Fst hp_red) + -- Fst ⬝ p represents ns.tail (from hp_pair) + exact hp_pair.1 + +/-! ### Prepending zero to a list (for Code.zero') -/ + +/-- PrependZero xs = cons 0 xs = Cons ⬝ Zero ⬝ xs -/ +def PrependZeroPoly : SKI.Polynomial 1 := Cons ⬝' SKI.Zero ⬝' &0 + +/-- Prepend zero to a Church-encoded list. -/ +def PrependZero : SKI := PrependZeroPoly.toSKI + +/-- Reduction: `PrependZero ⬝ xs ↠ Cons ⬝ Zero ⬝ xs`. -/ +theorem prependZero_def (xs : SKI) : (PrependZero ⬝ xs) ↠ Cons ⬝ SKI.Zero ⬝ xs := + PrependZeroPoly.toSKI_correct [xs] (by simp) + +/-- Prepending zero preserves Church list representation. -/ +theorem prependZero_correct {ns : List ℕ} {cns : SKI} (hcns : IsChurchList ns cns) : + IsChurchList (0 :: ns) (PrependZero ⬝ cns) := by + apply isChurchList_trans (prependZero_def cns) + exact cons_correct zero_correct hcns + +/-! ### Successor on list head (for Code.succ) -/ + +/-- SuccHead xs = cons (succ (head xs)) nil -/ +def SuccHead : SKI := B ⬝ (C ⬝ Cons ⬝ Nil) ⬝ (B ⬝ SKI.Succ ⬝ Head) + +/-- `SuccHead` correctly computes a singleton containing `succ(head ns)`. 
-/ +theorem succHead_correct (ns : List ℕ) (cns : SKI) (hcns : IsChurchList ns cns) : + IsChurchList [ns.headD 0 + 1] (SuccHead ⬝ cns) := by + have hhead := head_correct ns cns hcns + have hsucc := succ_correct (ns.headD 0) (Head ⬝ cns) hhead + apply isChurchList_trans (.trans (B_tail_mred _ _ _ _ (B_def .Succ Head cns)) (C_def Cons Nil _)) + exact cons_correct hsucc nil_correct + +end List + +end SKI + +end Cslib diff --git a/Cslib/Languages/CombinatoryLogic/Recursion.lean b/Cslib/Languages/CombinatoryLogic/Recursion.lean index 1b563ba0a..b9128f09f 100644 --- a/Cslib/Languages/CombinatoryLogic/Recursion.lean +++ b/Cslib/Languages/CombinatoryLogic/Recursion.lean @@ -55,7 +55,7 @@ namespace Cslib namespace SKI -open Red MRed ReductionSystem +open Red MRed /-- Function form of the church numerals. -/ def Church (n : Nat) (f x : SKI) : SKI := @@ -63,6 +63,9 @@ match n with | 0 => x | n+1 => f ⬝ (Church n f x) +@[simp] lemma Church_zero (f x : SKI) : Church 0 f x = x := rfl +@[simp] lemma Church_succ (n : Nat) (f x : SKI) : Church (n+1) f x = f ⬝ Church n f x := rfl + /-- `church` commutes with reduction. -/ lemma church_red (n : Nat) (f f' x x' : SKI) (hf : f ↠ f') (hx : x ↠ x') : Church n f x ↠ Church n f' x' := by @@ -75,7 +78,8 @@ def IsChurch (n : Nat) (a : SKI) : Prop := ∀ f x :SKI, (a ⬝ f ⬝ x) ↠ (Church n f x) /-- To show `IsChurch n a` it suffices to show the same for a reduct of `a`. -/ -theorem isChurch_trans (n : Nat) {a a' : SKI} (h : a ↠ a') : IsChurch n a' → IsChurch n a := by +theorem isChurch_trans (n : Nat) {a a' : SKI} (h : a ↠ a') : + IsChurch n a' → IsChurch n a := by simp_rw [IsChurch] intro ha' f x calc @@ -87,6 +91,7 @@ theorem isChurch_trans (n : Nat) {a a' : SKI} (h : a ↠ a') : IsChurch n a' → /-- Church zero := λ f x. x -/ protected def Zero : SKI := K ⬝ I +@[scoped grind .] 
theorem zero_correct : IsChurch 0 SKI.Zero := by unfold IsChurch SKI.Zero Church intro f x @@ -96,20 +101,40 @@ theorem zero_correct : IsChurch 0 SKI.Zero := by /-- Church one := λ f x. f x -/ protected def One : SKI := I +@[scoped grind .] theorem one_correct : IsChurch 1 SKI.One := by intro f x apply head - exact MRed.single RedSKI (red_I f) + exact .single (red_I f) /-- Church succ := λ a f x. f (a f x) ~ λ a f. B f (a f) ~ λ a. S B a ~ S B -/ protected def Succ : SKI := S ⬝ B -theorem succ_correct (n : Nat) (a : SKI) (h : IsChurch n a) : IsChurch (n+1) (SKI.Succ ⬝ a) := by +@[scoped grind →] +theorem succ_correct (n : Nat) (a : SKI) (h : IsChurch n a) : + IsChurch (n+1) (SKI.Succ ⬝ a) := by intro f x calc _ ⭢ B ⬝ f ⬝ (a ⬝ f) ⬝ x := by apply red_head; apply red_S _ ↠ f ⬝ (a ⬝ f ⬝ x) := by apply B_def _ ↠ f ⬝ (Church n f x) := by apply MRed.tail; exact h f x +/-- Build the canonical SKI Church numeral for `n`. -/ +def toChurch : ℕ → SKI + | 0 => SKI.Zero + | n + 1 => SKI.Succ ⬝ (toChurch n) + +/-- `toChurch 0 = Zero`. -/ +@[simp] lemma toChurch_zero : toChurch 0 = SKI.Zero := rfl +/-- `toChurch (n + 1) = Succ ⬝ toChurch n`. -/ +@[simp] lemma toChurch_succ (n : ℕ) : toChurch (n + 1) = SKI.Succ ⬝ (toChurch n) := rfl + +/-- `toChurch n` correctly represents `n`. -/ +@[scoped grind .] +theorem toChurch_correct (n : ℕ) : IsChurch n (toChurch n) := by + induction n with + | zero => exact zero_correct + | succ n ih => exact succ_correct n (toChurch n) ih + /-- To define the predecessor, iterate the function `PredAux` ⟨i, j⟩ ↦ ⟨j, j+1⟩ on ⟨0,0⟩, then take the first component. @@ -155,7 +180,7 @@ theorem predAux_correct' (n : Nat) : · exact fst_correct _ _ · exact snd_correct _ _ | succ n ih => - simp_rw [Church] + simp_rw [Church_succ] apply predAux_correct (ns := ⟨n.pred, n⟩) (h := ih) /-- Predecessor := λ n. 
Fst ⬝ (n ⬝ PredAux ⬝ (MkPair ⬝ Zero ⬝ Zero)) -/ @@ -279,23 +304,17 @@ theorem RFindAbove_correct (fNat : Nat → Nat) (f x : SKI) induction n generalizing m x all_goals apply isChurch_trans (a' := RFindAboveAux ⬝ RFindAbove ⬝ x ⬝ f) case zero.a => - apply isChurch_trans (a' := x) - · have : IsChurch (fNat m) (f ⬝ x) := hf m x hx - rw [Nat.add_zero] at hroot - simp_rw [hroot] at this - apply rfindAboveAux_base - assumption - · assumption + apply isChurch_trans (a' := x) <;> + grind [rfindAboveAux_base] case succ.a n ih => - unfold RFindAbove apply isChurch_trans (a' := RFindAbove ⬝ (SKI.Succ ⬝ x) ⬝ f) · let y := (fNat m).pred - have : IsChurch (y+1) (f ⬝ x) := by + have : IsChurch (y + 1) (f ⬝ x) := by subst y exact Nat.succ_pred_eq_of_ne_zero (hpos 0 (by simp)) ▸ hf m x hx apply rfindAboveAux_step assumption - · replace ih := ih (SKI.Succ ⬝ x) (m+1) (succ_correct _ x hx) + · replace ih := ih (SKI.Succ ⬝ x) (m + 1) (succ_correct _ x hx) grind -- close the `h` goals of the above `apply isChurch_trans` all_goals {apply MRed.head; apply MRed.head; exact fixedPoint_correct _} @@ -322,8 +341,8 @@ theorem add_def (a b : SKI) : (SKI.Add ⬝ a ⬝ b) ↠ a ⬝ SKI.Succ ⬝ b := AddPoly.toSKI_correct [a, b] (by simp) theorem add_correct (n m : Nat) (a b : SKI) (ha : IsChurch n a) (hb : IsChurch m b) : - IsChurch (n+m) (SKI.Add ⬝ a ⬝ b) := by - refine isChurch_trans (n+m) (a' := Church n SKI.Succ b) ?_ ?_ + IsChurch (n + m) (SKI.Add ⬝ a ⬝ b) := by + refine isChurch_trans (n + m) (a' := Church n SKI.Succ b) ?_ ?_ · calc _ ↠ a ⬝ SKI.Succ ⬝ b := add_def a b _ ↠ Church n SKI.Succ b := ha SKI.Succ b @@ -342,15 +361,15 @@ theorem mul_def (a b : SKI) : (SKI.Mul ⬝ a ⬝ b) ↠ a ⬝ (SKI.Add ⬝ b) MulPoly.toSKI_correct [a, b] (by simp) theorem mul_correct {n m : Nat} {a b : SKI} (ha : IsChurch n a) (hb : IsChurch m b) : - IsChurch (n*m) (SKI.Mul ⬝ a ⬝ b) := by - refine isChurch_trans (n*m) (a' := Church n (SKI.Add ⬝ b) SKI.Zero) ?_ ?_ + IsChurch (n * m) (SKI.Mul ⬝ a ⬝ b) := by + refine isChurch_trans 
(n * m) (a' := Church n (SKI.Add ⬝ b) SKI.Zero) ?_ ?_ · exact Trans.trans (mul_def a b) (ha (SKI.Add ⬝ b) SKI.Zero) · clear ha induction n with | zero => simp_rw [Nat.zero_mul, Church]; exact zero_correct | succ n ih => simp_rw [Nat.add_mul, Nat.one_mul, Nat.add_comm, Church] - exact add_correct m (n*m) b (Church n (SKI.Add ⬝ b) SKI.Zero) hb ih + exact add_correct m (n * m) b (Church n (SKI.Add ⬝ b) SKI.Zero) hb ih /-- Subtraction: λ n m. n Pred m -/ def SubPoly : SKI.Polynomial 2 := &1 ⬝' Pred ⬝' &0 @@ -360,8 +379,8 @@ theorem sub_def (a b : SKI) : (SKI.Sub ⬝ a ⬝ b) ↠ b ⬝ Pred ⬝ a := SubPoly.toSKI_correct [a, b] (by simp) theorem sub_correct (n m : Nat) (a b : SKI) (ha : IsChurch n a) (hb : IsChurch m b) : - IsChurch (n-m) (SKI.Sub ⬝ a ⬝ b) := by - refine isChurch_trans (n-m) (a' := Church m Pred a) ?_ ?_ + IsChurch (n - m) (SKI.Sub ⬝ a ⬝ b) := by + refine isChurch_trans (n - m) (a' := Church m Pred a) ?_ ?_ · calc _ ↠ b ⬝ Pred ⬝ a := sub_def a b _ ↠ Church m Pred a := hb Pred a diff --git a/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/Opening.lean b/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/Opening.lean index 0e80f1bfb..b8a94a2a7 100644 --- a/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/Opening.lean +++ b/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/Opening.lean @@ -87,7 +87,7 @@ lemma openRec_neq_eq {σ τ γ : Ty Var} (neq : X ≠ Y) (h : σ⟦Y ↝ τ⟧ /-- A locally closed type is unchanged by opening. -/ lemma openRec_lc {σ τ : Ty Var} (lc : σ.LC) : σ = σ⟦X ↝ τ⟧ᵞ := by induction lc generalizing X with - | all => have := fresh_exists <| free_union Var; grind [openRec_neq_eq] + | all => grind [fresh_exists <| free_union Var, openRec_neq_eq] | _ => grind omit [HasFresh Var] in @@ -102,8 +102,7 @@ lemma subst_fresh (nmem : X ∉ γ.fv) (δ : Ty Var) : γ = γ[X := δ] := by /-- Substitution of a locally closed type distributes with opening. 
-/ lemma openRec_subst (Y : ℕ) (σ τ : Ty Var) (lc : δ.LC) (X : Var) : (σ⟦Y ↝ τ⟧ᵞ)[X := δ] = σ[X := δ]⟦Y ↝ τ[X := δ]⟧ᵞ := by - induction σ generalizing Y - all_goals grind [openRec_lc] + induction σ generalizing Y <;> grind [openRec_lc] /-- Specialize `Ty.openRec_subst` to the first opening. -/ lemma open_subst (σ τ : Ty Var) (lc : δ.LC) (X : Var) : (σ ^ᵞ τ)[X := δ] = σ[X := δ] ^ᵞ τ[X := δ] @@ -125,9 +124,9 @@ lemma open_subst_intro (δ : Ty Var) (nmem : X ∉ γ.fv) : γ ^ᵞ δ = (γ ^ openRec_subst_intro _ _ nmem lemma subst_lc (σ_lc : σ.LC) (τ_lc : τ.LC) (X : Var) : σ[X := τ].LC := by - induction σ_lc - case all => apply LC.all (free_union Var) <;> grind [openRec_subst] - all_goals grind [openRec_subst] + induction σ_lc with + | all => grind [LC.all (free_union Var), openRec_subst] + | _ => grind [openRec_subst] omit [HasFresh Var] in lemma nmem_fv_openRec (nmem : X ∉ (σ⟦k ↝ γ⟧ᵞ).fv) : X ∉ σ.fv := by @@ -226,11 +225,10 @@ lemma openRec_tm_ty_eq (eq : t⟦x ↝ s⟧ᵗᵗ = t⟦x ↝ s⟧ᵗᵗ⟦y ↝ /-- A locally closed term is unchanged by type opening. -/ @[scoped grind =_] lemma openRec_ty_lc {t : Term Var} (lc : t.LC) : t = t⟦X ↝ σ⟧ᵗᵞ := by - induction lc generalizing X - case let' | case | tabs | abs => - have := fresh_exists <| free_union Var - congr <;> grind [Ty.openRec_lc, openRec_ty_neq_eq] - all_goals grind [Ty.openRec_lc] + induction lc generalizing X with + | let' | case | tabs | abs => + grind [fresh_exists <| free_union Var, Ty.openRec_lc, openRec_ty_neq_eq] + | _ => grind [Ty.openRec_lc] /-- Substitution of a type within a term. -/ @[scoped grind =] @@ -318,11 +316,10 @@ variable [HasFresh Var] /-- A locally closed term is unchanged by term opening. 
-/ @[scoped grind =_] lemma openRec_tm_lc (lc : t.LC) : t = t⟦x ↝ s⟧ᵗᵗ := by - induction lc generalizing x - case let' | case | tabs | abs => - have := fresh_exists <| free_union Var - congr <;> grind [openRec_tm_neq_eq, openRec_ty_tm_eq] - all_goals grind + induction lc generalizing x with + | let' | case | tabs | abs => + grind [fresh_exists <| free_union Var, openRec_tm_neq_eq, openRec_ty_tm_eq] + | _ => grind variable {t s : Term Var} {δ : Ty Var} {x : Var} diff --git a/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/Reduction.lean b/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/Reduction.lean index 04264bdaf..eb83e4178 100644 --- a/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/Reduction.lean +++ b/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/Reduction.lean @@ -6,7 +6,7 @@ Authors: Chris Henson module -public meta import Cslib.Foundations.Semantics.ReductionSystem.Basic +public meta import Cslib.Foundations.Data.Relation public import Cslib.Languages.LambdaCalculus.LocallyNameless.Fsub.Opening @[expose] public section @@ -48,27 +48,19 @@ variable [DecidableEq Var] @[scoped grind _=_] lemma body_let : (let' t₁ t₂).LC ↔ t₁.LC ∧ t₂.body := by constructor <;> intro h <;> cases h - case mp.let' L _ _ => - split_ands - · grind - · exists L + case mp.let' L t₁_lc h => exact ⟨t₁_lc, L, h⟩ case mpr.intro body => obtain ⟨_, _⟩ := body - apply LC.let' (free_union Var) <;> grind + grind [LC.let' <| free_union Var] /-- Locally closed case bindings have locally closed bodies. 
-/ @[scoped grind _=_] lemma body_case : (case t₁ t₂ t₃).LC ↔ t₁.LC ∧ t₂.body ∧ t₃.body := by constructor <;> intro h - case mp => - cases h with | case L => - split_ands - · grind - · exists L - · exists L + case mp => cases h with | case L t₁_lc h₂ h₃ => exact ⟨t₁_lc, ⟨L, h₂⟩, ⟨L, h₃⟩⟩ case mpr => obtain ⟨_, ⟨_, _⟩, ⟨_, _⟩⟩ := h - apply LC.case (free_union Var) <;> grind + grind [LC.case <| free_union Var] variable [HasFresh Var] @@ -76,8 +68,7 @@ variable [HasFresh Var] @[scoped grind <=] lemma open_tm_body (body : t₁.body) (lc : t₂.LC) : (t₁ ^ᵗᵗ t₂).LC := by cases body - have := fresh_exists <| free_union [fv_tm] Var - grind [subst_tm_lc, open_tm_subst_tm_intro] + grind [fresh_exists <| free_union [fv_tm] Var, subst_tm_lc, open_tm_subst_tm_intro] end @@ -94,7 +85,7 @@ lemma Value.lc {t : Term Var} (val : t.Value) : t.LC := by induction val <;> grind /-- The call-by-value reduction relation. -/ -@[grind, reduction_sys rs "βᵛ"] +@[grind, reduction_sys "βᵛ"] inductive Red : Term Var → Term Var → Prop | appₗ : LC t₂ → Red t₁ t₁' → Red (app t₁ t₂) (app t₁' t₂) | appᵣ : Value t₁ → Red t₂ t₂' → Red (app t₁ t₂) (app t₁ t₂') @@ -109,11 +100,6 @@ inductive Red : Term Var → Term Var → Prop | case_inl : Value t₁ → t₂.body → t₃.body → Red (case (inl t₁) t₂ t₃) (t₂ ^ᵗᵗ t₁) | case_inr : Value t₁ → t₂.body → t₃.body → Red (case (inr t₁) t₂ t₃) (t₃ ^ᵗᵗ t₁) -@[grind _=_] -lemma rs_eq {t t' : Term Var} : t ⭢βᵛ t' ↔ Red t t' := by - have : (@rs Var).Red = Red := by rfl - simp_all - variable [HasFresh Var] [DecidableEq Var] in /-- Terms of a reduction are locally closed. 
-/ lemma Red.lc {t t' : Term Var} (red : t ⭢βᵛ t') : t.LC ∧ t'.LC := by @@ -122,8 +108,9 @@ lemma Red.lc {t t' : Term Var} (red : t ⭢βᵛ t') : t.LC ∧ t'.LC := by split_ands · grind · cases lc - have := fresh_exists <| free_union [fv_tm, fv_ty] Var - grind [subst_tm_lc, subst_ty_lc, open_tm_subst_tm_intro, open_ty_subst_ty_intro] + grind [ + fresh_exists <| free_union [fv_tm, fv_ty] Var, subst_tm_lc, + subst_ty_lc, open_tm_subst_tm_intro, open_ty_subst_ty_intro] all_goals grind end Term diff --git a/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/Safety.lean b/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/Safety.lean index e6baf9aff..e42665c88 100644 --- a/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/Safety.lean +++ b/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/Safety.lean @@ -42,23 +42,22 @@ lemma Typing.preservation (der : Typing Γ t τ) (step : t ⭢βᵛ t') : Typing case abs der _ _ => have sub : Sub Γ (σ.arrow τ) (σ.arrow τ) := by grind [Sub.refl] have ⟨_, _, ⟨_, _⟩⟩ := der.abs_inv sub - have ⟨_, _⟩ := fresh_exists <| free_union [fv_tm] Var - grind [open_tm_subst_tm_intro, subst_tm, Sub.weaken] + grind [fresh_exists <| free_union [fv_tm] Var, open_tm_subst_tm_intro, subst_tm, Sub.weaken] case tapp Γ _ σ τ σ' _ _ _ => cases step case tabs der _ _ => have sub : Sub Γ (σ.all τ) (σ.all τ) := by grind [Sub.refl] have ⟨_, _, ⟨_, _⟩⟩ := der.tabs_inv sub - have ⟨X, _⟩ := fresh_exists <| free_union [Ty.fv, fv_ty] Var - have : Γ = (Context.map_val (·[X:=σ']) []) ++ Γ := by simp + have ⟨X, mem⟩ := fresh_exists <| free_union [Ty.fv, fv_ty] Var + simp at mem + have : Γ = (Context.map_val (·[X:=σ']) []) ++ Γ := by grind rw [open_ty_subst_ty_intro (X := X), open_subst_intro (X := X)] <;> grind [subst_ty] case tapp => grind case let' Γ _ _ _ _ L der _ ih₁ _ => cases step case let_bind red₁ _ => apply Typing.let' L (ih₁ red₁); grind case let_body => - have ⟨x, _⟩ := fresh_exists <| free_union [fv_tm] Var - grind [open_tm_subst_tm_intro, subst_tm] + grind 
[fresh_exists <| free_union [fv_tm] Var, open_tm_subst_tm_intro, subst_tm] case case Γ _ σ τ _ _ _ L _ _ _ ih₁ _ _ => have sub : Sub Γ (σ.sum τ) (σ.sum τ) := by grind [Sub.refl] have : Γ = [] ++ Γ := by rfl @@ -66,12 +65,10 @@ lemma Typing.preservation (der : Typing Γ t τ) (step : t ⭢βᵛ t') : Typing case «case» red₁ _ _ => apply Typing.case L (ih₁ red₁) <;> grind case case_inl der _ _ => have ⟨_, ⟨_, _⟩⟩ := der.inl_inv sub - have ⟨x, _⟩ := fresh_exists <| free_union [fv_tm] Var - grind [open_tm_subst_tm_intro, subst_tm] + grind [fresh_exists <| free_union [fv_tm] Var, open_tm_subst_tm_intro, subst_tm] case case_inr der _ _ => have ⟨_, ⟨_, _⟩⟩ := der.inr_inv sub - have ⟨x, _⟩ := fresh_exists <| free_union [fv_tm] Var - grind [open_tm_subst_tm_intro, subst_tm] + grind [fresh_exists <| free_union [fv_tm] Var, open_tm_subst_tm_intro, subst_tm] all_goals grind [cases Red] /-- Any typable term either has a reduction step or is a value. -/ diff --git a/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/Subtype.lean b/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/Subtype.lean index 0081c8160..e27d7d34b 100644 --- a/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/Subtype.lean +++ b/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/Subtype.lean @@ -55,15 +55,13 @@ variable {Γ Δ Θ : Env Var} {σ τ δ : Ty Var} @[grind →] lemma wf (Γ : Env Var) (σ σ' : Ty Var) (sub : Sub Γ σ σ') : Γ.Wf ∧ σ.Wf Γ ∧ σ'.Wf Γ := by induction sub with - | all => - refine ⟨by grind, ?_, ?_⟩ <;> - apply Wf.all (free_union Var) <;> grind [Wf.narrow_cons, cases Env.Wf, cases LC] + | all => grind [Wf.all (free_union Var), Wf.narrow_cons, cases Env.Wf, cases LC] | _ => grind /-- Subtypes are reflexive when well-formed. -/ lemma refl (wf_Γ : Γ.Wf) (wf_σ : σ.Wf Γ) : Sub Γ σ σ := by induction wf_σ with - | all => apply all (free_union [Context.dom] Var) <;> grind + | all => grind [all (free_union [Context.dom] Var)] | _ => grind /-- Weakening of subtypes. 
-/ @@ -125,7 +123,7 @@ lemma trans : Sub Γ σ δ → Sub Γ δ τ → Sub Γ σ τ := by cases eq cases sub₂ case refl.top Γ σ'' τ'' _ _ _ _ _ _ _ => - have : Sub Γ (σ''.all τ'') (σ'.all τ') := by apply all (free_union Var) <;> grind + have : Sub Γ (σ''.all τ'') (σ'.all τ') := by grind [all <| free_union Var] grind case refl.all Γ _ _ _ _ _ σ _ _ _ _ _ _ => apply all (free_union Var) @@ -141,7 +139,7 @@ instance (Γ : Env Var) : Trans (Sub Γ) (Sub Γ) (Sub Γ) := /-- Narrowing of subtypes. -/ lemma narrow (sub_δ : Sub Δ δ δ') (sub_narrow : Sub (Γ ++ ⟨X, Binding.sub δ'⟩ :: Δ) σ τ) : Sub (Γ ++ ⟨X, Binding.sub δ⟩ :: Δ) σ τ := by - apply narrow_aux (δ := δ') <;> grind + grind [narrow_aux (δ := δ')] variable [HasFresh Var] in /-- Subtyping of substitutions. -/ @@ -164,9 +162,9 @@ lemma map_subst (sub₁ : Sub (Γ ++ ⟨X, Binding.sub δ'⟩ :: Δ) σ τ) (sub /-- Strengthening of subtypes. -/ lemma strengthen (sub : Sub (Γ ++ ⟨X, Binding.ty δ⟩ :: Δ) σ τ) : Sub (Γ ++ Δ) σ τ := by generalize eq : Γ ++ ⟨X, Binding.ty δ⟩ :: Δ = Θ at sub - induction sub generalizing Γ - case all => apply Sub.all (free_union Var) <;> grind - all_goals grind [to_ok, Wf.strengthen, Env.Wf.strengthen] + induction sub generalizing Γ with + | all => grind [Sub.all (free_union Var)] + | _ => grind [to_ok, Wf.strengthen, Env.Wf.strengthen] end Sub diff --git a/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/Typing.lean b/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/Typing.lean index 085027f28..9743f2ab4 100644 --- a/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/Typing.lean +++ b/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/Typing.lean @@ -65,12 +65,16 @@ attribute [grind .] Typing.var Typing.app Typing.tapp Typing.sub Typing.inl Typi /-- Typings have well-formed contexts and types. 
-/ @[grind →] lemma wf {Γ : Env Var} {t : Term Var} {τ : Ty Var} (der : Typing Γ t τ) : Γ.Wf ∧ t.LC ∧ τ.Wf Γ := by - induction der <;> let L := free_union Var <;> have := fresh_exists L - case tabs => refine ⟨?_, LC.tabs L ?_ ?_, Ty.Wf.all L ?_ ?_⟩ <;> grind [cases Env.Wf] - case abs => refine ⟨?_, LC.abs L ?_ ?_, ?_⟩ <;> grind [Wf.strengthen, cases Env.Wf] - case let' => refine ⟨?_, LC.let' L ?_ ?_, ?_⟩ <;> grind [Ty.Wf.strengthen] + induction der <;> let L := free_union Var <;> have ⟨x, nmem⟩ := fresh_exists L + case tabs ih => + cases (ih x (by grind)).left + grind [LC.tabs L, Ty.Wf.all L] + case abs ih => + cases (ih x (by grind)).left + grind [LC.abs L, Wf.strengthen] + case let' => grind [LC.let' L, Ty.Wf.strengthen] case case => refine ⟨?_, LC.case L ?_ ?_ ?_, ?_⟩ <;> grind [Ty.Wf.strengthen] - all_goals grind [of_bind_ty, open_lc, cases Env.Wf, cases Ty.Wf] + all_goals grind [of_bind_ty, open_lc, cases Ty.Wf] /-- Weakening of typings. -/ lemma weaken (der : Typing (Γ ++ Δ) t τ) (wf : (Γ ++ Θ ++ Δ).Wf) : @@ -89,8 +93,7 @@ lemma weaken_head (der : Typing Δ t τ) (wf : (Γ ++ Δ).Wf) : Typing (Γ ++ Δ) t τ := by have eq : Δ = [] ++ Δ := by rfl rw [eq] at der - have := Typing.weaken der wf - grind + grind [Typing.weaken der wf] /-- Narrowing of typings. 
-/ lemma narrow (sub : Sub Δ δ δ') (der : Typing (Γ ++ ⟨X, Binding.sub δ'⟩ :: Δ) t τ) : @@ -121,15 +124,9 @@ lemma subst_tm (der : Typing (Γ ++ ⟨X, .ty σ⟩ :: Δ) t τ) (der_sub : Typi -/ grind [→ List.mem_dlookup, weaken_head, Env.Wf.strengthen, -append_assoc] · grind [Env.Wf.strengthen, => List.perm_dlookup] - case abs => - apply abs (free_union Var) - grind [open_tm_subst_tm_var] - case tabs => - apply tabs (free_union Var) - grind [open_ty_subst_tm_var] - case let' der _ => - apply let' (free_union Var) (der eq) - grind [open_tm_subst_tm_var] + case abs => grind [abs (free_union Var), open_tm_subst_tm_var] + case tabs => grind [tabs (free_union Var), open_ty_subst_tm_var] + case let' der _ => grind [let' (free_union Var) (der eq), open_tm_subst_tm_var] case case der _ _ => apply case (free_union Var) (der eq) <;> grind [open_tm_subst_tm_var] all_goals grind [Env.Wf.strengthen, Ty.Wf.strengthen, Sub.strengthen] @@ -141,16 +138,10 @@ lemma subst_ty (der : Typing (Γ ++ ⟨X, Binding.sub δ'⟩ :: Δ) t τ) (sub : induction der generalizing Γ X case var σ _ X' _ mem => have := map_subst_nmem Δ X δ - have : Γ ++ ⟨X, .sub δ'⟩ :: Δ ~ ⟨X, .sub δ'⟩ :: (Γ ++ Δ) := perm_middle - have : .ty σ ∈ dlookup X' (⟨X, .sub δ'⟩ :: (Γ ++ Δ)) := by grind [perm_dlookup] have := @map_val_mem Var (f := ((·[X:=δ]) : Binding Var → Binding Var)) grind [Env.Wf.map_subst, → notMem_keys_of_nodupKeys_cons] - case abs => - apply abs (free_union [Ty.fv] Var) - grind [Ty.subst_fresh, open_tm_subst_ty_var] - case tabs => - apply tabs (free_union Var) - grind [open_ty_subst_ty_var, open_subst_var] + case abs => grind [abs (free_union [Ty.fv] Var), Ty.subst_fresh, open_tm_subst_ty_var] + case tabs => grind [tabs (free_union Var), open_ty_subst_ty_var, open_subst_var] case let' der _ => apply let' (free_union Var) (der eq) grind [open_tm_subst_ty_var] diff --git a/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/WellFormed.lean b/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/WellFormed.lean index 
6d0157fd0..34c5172d0 100644 --- a/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/WellFormed.lean +++ b/Cslib/Languages/LambdaCalculus/LocallyNameless/Fsub/WellFormed.lean @@ -68,13 +68,13 @@ open scoped Env.Wf @[grind →] theorem lc (wf : σ.Wf Γ) : σ.LC := by induction wf with - | all => apply LC.all (free_union Var) <;> grind + | all => grind [LC.all (free_union Var)] | _ => grind /-- A type remains well-formed under context permutation. -/ theorem perm_env (wf : σ.Wf Γ) (perm : Γ ~ Δ) (ok_Γ : Γ✓) (ok_Δ : Δ✓) : σ.Wf Δ := by induction wf generalizing Δ with - | all => apply all (free_union [dom] Var) <;> grind [Perm.cons, nodupKeys_cons] + | all => grind [all <| free_union [dom] Var, Perm.cons, nodupKeys_cons] | _ => grind [perm_dlookup] /-- A type remains well-formed under context weakening (in the middle). -/ @@ -109,14 +109,13 @@ lemma narrow (wf : σ.Wf (Γ ++ ⟨X, Binding.sub τ⟩ :: Δ)) (ok : (Γ ++ ⟨ lemma narrow_cons (wf : σ.Wf (⟨X, Binding.sub τ⟩ :: Δ)) (ok : (⟨X, Binding.sub τ'⟩ :: Δ)✓) : σ.Wf (⟨X, Binding.sub τ'⟩ :: Δ) := by - rw [←List.nil_append (⟨X, sub τ'⟩ :: Δ)] - grind [narrow] + grind [List.nil_append (⟨X, sub τ'⟩ :: Δ), narrow] /-- A type remains well-formed under context strengthening. 
-/ lemma strengthen (wf : σ.Wf (Γ ++ ⟨X, Binding.ty τ⟩ :: Δ)) : σ.Wf (Γ ++ Δ) := by generalize eq : Γ ++ ⟨X, Binding.ty τ⟩ :: Δ = Θ at wf induction wf generalizing Γ with - | all => apply all (free_union [Context.dom] Var) <;> grind + | all => grind [all <| free_union [Context.dom] Var] | _ => grind variable [HasFresh Var] in @@ -125,13 +124,9 @@ lemma map_subst (wf_σ : σ.Wf (Γ ++ ⟨X, Binding.sub τ⟩ :: Δ)) (wf_τ' : (ok : (Γ.map_val (·[X:=τ']) ++ Δ)✓) : σ[X:=τ'].Wf <| Γ.map_val (·[X:=τ']) ++ Δ := by have := @map_val_mem Var (Binding Var) generalize eq : Γ ++ ⟨X, Binding.sub τ⟩ :: Δ = Θ at wf_σ - induction wf_σ generalizing Γ τ' - case all γ _ _ _ _ _ _ => - subst eq - apply all (free_union [dom] Var) - · grind - · grind [open_subst_var] - all_goals grind [weaken_head] + induction wf_σ generalizing Γ τ' with + | all => apply all (free_union [dom] Var) <;> grind [open_subst_var] + | _ => grind [weaken_head] variable [HasFresh Var] in /-- A type remains well-formed under opening (to a well-formed type). -/ @@ -159,7 +154,7 @@ variable [HasFresh Var] in /-- A variable not appearing in a context does not appear in its well-formed types. -/ lemma nmem_fv {σ : Ty Var} (wf : σ.Wf Γ) (nmem : X ∉ Γ.dom) : X ∉ σ.fv := by induction wf with - | all => have := fresh_exists <| free_union [dom] Var; grind [nmem_fv_open, openRec_lc] + | all => grind [fresh_exists <| free_union [dom] Var, nmem_fv_open, openRec_lc] | _ => grind [dlookup_isSome] end Ty.Wf diff --git a/Cslib/Languages/LambdaCalculus/LocallyNameless/Stlc/Basic.lean b/Cslib/Languages/LambdaCalculus/LocallyNameless/Stlc/Basic.lean index 563d05749..d1e880f37 100644 --- a/Cslib/Languages/LambdaCalculus/LocallyNameless/Stlc/Basic.lean +++ b/Cslib/Languages/LambdaCalculus/LocallyNameless/Stlc/Basic.lean @@ -78,19 +78,13 @@ theorem perm (ht : Γ ⊢ t ∶ τ) (hperm : Γ.Perm Δ) : Δ ⊢ t ∶ τ := by /-- Weakening of a typing derivation with an appended context. 
-/ lemma weaken_aux (der : Γ ++ Δ ⊢ t ∶ τ) : (Γ ++ Θ ++ Δ)✓ → (Γ ++ Θ ++ Δ) ⊢ t ∶ τ := by generalize eq : Γ ++ Δ = Γ_Δ at der - induction der generalizing Γ Δ Θ <;> intros ok_Γ_Θ_Δ - case abs σ Γ' τ t xs ext ih => - apply Typing.abs (xs ∪ (Γ ++ Θ ++ Δ).dom) - intros x _ - have h : ⟨x, σ⟩ :: Γ ++ Δ = ⟨x, σ⟩ :: Γ' := by grind - refine @ih x (by grind) _ _ Θ h ?_ - simp_all [HasWellFormed.wf] - all_goals grind + induction der generalizing Γ Δ Θ with + | abs xs => grind [Typing.abs (xs ∪ (Γ ++ Θ ++ Δ).dom), List.nodupKeys_cons] + | _ => grind /-- Weakening of a typing derivation by an additional context. -/ lemma weaken (der : Γ ⊢ t ∶ τ) (ok : (Γ ++ Δ)✓) : Γ ++ Δ ⊢ t ∶ τ := by - rw [←List.append_nil (Γ ++ Δ)] at * - exact weaken_aux (by simp_all) ok + grind [List.append_nil (Γ ++ Δ), weaken_aux] omit [DecidableEq Var] in /-- Typing derivations exist only for locally closed terms. -/ @@ -109,46 +103,28 @@ lemma subst_aux (h : Δ ++ ⟨x, σ⟩ :: Γ ⊢ t ∶ τ) (der : Γ ⊢ s ∶ generalize eq : Δ ++ ⟨x, σ⟩ :: Γ = Θ at h induction h generalizing Γ Δ der case app => grind - case var x' τ ok mem => - simp only [subst_fvar] + case var x' _ ok _ => subst eq cases ((List.perm_nodupKeys (by simp)).mp ok : (⟨x, σ⟩ :: Δ ++ Γ)✓) - case cons ok_weak _ => + case cons => observe perm : (Γ ++ Δ).Perm (Δ ++ Γ) - by_cases h : x = x' <;> simp only [h] + by_cases h : x = x' case neg => grind - case pos nmem => - subst h - have nmem_Γ : ∀ γ, ⟨x, γ⟩ ∉ Γ := by - intros γ _ - exact nmem x (List.mem_keys.mpr ⟨γ, by simp_all⟩) rfl - have nmem_Δ : ∀ γ, ⟨x, γ⟩ ∉ Δ := by - intros γ _ - exact nmem x (List.mem_keys.mpr ⟨γ, by simp_all⟩) rfl - have eq' : τ = σ := by - simp only [List.mem_append, List.mem_cons, Sigma.mk.injEq, heq_eq_eq] at mem - match mem with | _ => simp_all - rw [eq'] - refine (weaken der ?_).perm perm - grind - case abs σ Γ' t T2 xs ih' ih => - apply Typing.abs (free_union Var) - intros - rw [subst_def, ←subst_open_var _ _ _ _ ?_ der.lc] <;> grind + case pos => grind [(weaken der ?_).perm 
perm] + case abs => + grind [Typing.abs <| free_union Var, subst_open_var _ _ _ _ ?_ der.lc] /-- Substitution for a context weakened by a single type. -/ lemma typing_subst_head (weak : ⟨x, σ⟩ :: Γ ⊢ t ∶ τ) (der : Γ ⊢ s ∶ σ) : Γ ⊢ (t [x := s]) ∶ τ := by - rw [←List.nil_append Γ] - exact subst_aux weak der + grind [subst_aux] /-- Typing preservation for opening. -/ theorem preservation_open {xs : Finset Var} (cofin : ∀ x ∉ xs, ⟨x, σ⟩ :: Γ ⊢ m ^ fvar x ∶ τ) (der : Γ ⊢ n ∶ σ) : Γ ⊢ m ^ n ∶ τ := by have ⟨fresh, _⟩ := fresh_exists <| free_union [Term.fv] Var - rw [subst_intro fresh n m (by grind) der.lc] - exact typing_subst_head (by grind) der + grind [subst_intro fresh _ _ ?_ der.lc, typing_subst_head] end LambdaCalculus.LocallyNameless.Stlc.Typing diff --git a/Cslib/Languages/LambdaCalculus/LocallyNameless/Untyped/FullBeta.lean b/Cslib/Languages/LambdaCalculus/LocallyNameless/Untyped/FullBeta.lean index ebda02b36..bb0a866c7 100644 --- a/Cslib/Languages/LambdaCalculus/LocallyNameless/Untyped/FullBeta.lean +++ b/Cslib/Languages/LambdaCalculus/LocallyNameless/Untyped/FullBeta.lean @@ -6,7 +6,7 @@ Authors: Chris Henson module -public meta import Cslib.Foundations.Semantics.ReductionSystem.Basic +public meta import Cslib.Foundations.Data.Relation public import Cslib.Languages.LambdaCalculus.LocallyNameless.Untyped.Properties public section @@ -32,7 +32,7 @@ variable {Var : Type u} namespace LambdaCalculus.LocallyNameless.Untyped.Term /-- A single β-reduction step. -/ -@[reduction_sys fullBetaRs "βᶠ"] +@[reduction_sys "βᶠ"] inductive FullBeta : Term Var → Term Var → Prop /-- Reduce an application to a lambda term. -/ | beta : LC (abs M)→ LC N → FullBeta (app (abs M) N) (M ^ N) @@ -49,12 +49,6 @@ attribute [scoped grind .] 
appL appR variable {M M' N N' : Term Var} ---- TODO: I think this could be generated along with the ReductionSystem -@[scoped grind _=_] -lemma fullBetaRs_Red_eq : M ⭢βᶠ N ↔ FullBeta M N := by - have : (@fullBetaRs Var).Red = FullBeta := by rfl - simp_all - /-- The left side of a reduction is locally closed. -/ @[scoped grind →] lemma step_lc_l (step : M ⭢βᶠ M') : LC M := by @@ -64,16 +58,12 @@ lemma step_lc_l (step : M ⭢βᶠ M') : LC M := by /-- Left congruence rule for application in multiple reduction. -/ @[scoped grind ←] theorem redex_app_l_cong (redex : M ↠βᶠ M') (lc_N : LC N) : app M N ↠βᶠ app M' N := by - induction redex - case refl => rfl - case tail ih r => exact Relation.ReflTransGen.tail r (appR lc_N ih) + induction redex <;> grind /-- Right congruence rule for application in multiple reduction. -/ @[scoped grind ←] theorem redex_app_r_cong (redex : M ↠βᶠ M') (lc_N : LC N) : app N M ↠βᶠ app N M' := by - induction redex - case refl => rfl - case tail ih r => exact Relation.ReflTransGen.tail r (appL lc_N ih) + induction redex <;> grind variable [HasFresh Var] [DecidableEq Var] @@ -93,22 +83,19 @@ lemma redex_subst_cong (s s' : Term Var) (x y : Var) (step : s ⭢βᶠ s') : rw [subst_open x (fvar y) n m (by grind)] refine beta ?_ (by grind) exact subst_lc (LC.abs xs m mem) (LC.fvar y) - case abs m' m xs mem ih => - apply abs (free_union Var) - grind + case abs => grind [abs <| free_union Var] all_goals grind /-- Abstracting then closing preserves a single reduction. -/ lemma step_abs_close {x : Var} (step : M ⭢βᶠ M') : M⟦0 ↜ x⟧.abs ⭢βᶠ M'⟦0 ↜ x⟧.abs := by - apply abs ∅ - grind [redex_subst_cong] + grind [abs ∅, redex_subst_cong] /-- Abstracting then closing preserves multiple reductions. 
-/ lemma redex_abs_close {x : Var} (step : M ↠βᶠ M') : (M⟦0 ↜ x⟧.abs ↠βᶠ M'⟦0 ↜ x⟧.abs) := by induction step using Relation.ReflTransGen.trans_induction_on case refl => rfl case single ih => exact Relation.ReflTransGen.single (step_abs_close ih) - case trans l r => exact .trans l r + case trans l r => exact Relation.ReflTransGen.trans l r /-- Multiple reduction of opening implies multiple reduction of abstraction. -/ theorem redex_abs_cong (xs : Finset Var) (cofin : ∀ x ∉ xs, (M ^ fvar x) ↠βᶠ (M' ^ fvar x)) : diff --git a/Cslib/Languages/LambdaCalculus/LocallyNameless/Untyped/FullBetaConfluence.lean b/Cslib/Languages/LambdaCalculus/LocallyNameless/Untyped/FullBetaConfluence.lean index 0eaff53fd..37056404f 100644 --- a/Cslib/Languages/LambdaCalculus/LocallyNameless/Untyped/FullBetaConfluence.lean +++ b/Cslib/Languages/LambdaCalculus/LocallyNameless/Untyped/FullBetaConfluence.lean @@ -26,7 +26,7 @@ namespace LambdaCalculus.LocallyNameless.Untyped.Term open Relation /-- A parallel β-reduction step. -/ -@[reduction_sys paraRs "ₚ"] +@[reduction_sys "ₚ"] inductive Parallel : Term Var → Term Var → Prop /-- Free variables parallel step to themselves. -/ | fvar (x : Var) : Parallel (fvar x) (fvar x) @@ -48,12 +48,6 @@ attribute [scoped grind cases] Parallel variable {M M' N N' : Term Var} ---- TODO: I think this could be generated along with the ReductionSystem -@[scoped grind _=_] -private lemma para_rs_Red_eq : M ⭢ₚ N ↔ Parallel M N := by - have : (@paraRs Var).Red = Parallel := by rfl - simp_all - /-- The left side of a parallel reduction is locally closed. 
-/ @[scoped grind →] lemma para_lc_l (step : M ⭢ₚ N) : LC M := by @@ -92,7 +86,8 @@ lemma para_to_redex (para : M ⭢ₚ N) : M ↠βᶠ N := by induction para case fvar => constructor case app L L' R R' l_para m_para redex_l redex_m => - refine .trans (?_ : L.app R ↠βᶠ L'.app R) (?_ : L'.app R ↠βᶠ L'.app R') <;> grind + have : L.app R ↠βᶠ L'.app R := by grind + grind [ReflTransGen.trans] case abs t t' xs _ ih => apply redex_abs_cong xs grind @@ -110,21 +105,19 @@ lemma para_to_redex (para : M ⭢ₚ N) : M ↠βᶠ N := by /-- Multiple parallel reduction is equivalent to multiple β-reduction. -/ theorem parachain_iff_redex : M ↠ₚ N ↔ M ↠βᶠ N := by refine Iff.intro ?chain_redex ?redex_chain <;> intros h <;> induction h <;> try rfl - case redex_chain.tail redex chain => exact ReflTransGen.tail chain (step_to_para redex) - case chain_redex.tail para redex => exact ReflTransGen.trans redex (para_to_redex para) + case redex_chain redex chain => exact ReflTransGen.tail chain (step_to_para redex) + case chain_redex para redex => exact ReflTransGen.trans redex (para_to_redex para) /-- Parallel reduction respects substitution. -/ @[scoped grind .] lemma para_subst (x : Var) (pm : M ⭢ₚ M') (pn : N ⭢ₚ N') : M[x := N] ⭢ₚ M'[x := N'] := by - induction pm - case fvar => grind - case beta => + induction pm with + | beta => rw [subst_open _ _ _ _ (by grind)] refine Parallel.beta (free_union Var) ?_ ?_ <;> grind - case app => constructor <;> assumption - case abs u u' xs mem ih => - apply Parallel.abs (free_union Var) - grind + | app => constructor <;> assumption + | abs => grind [Parallel.abs (free_union Var)] + | _ => grind /-- Parallel substitution respects closing and opening. -/ lemma para_open_close (x y z) (para : M ⭢ₚ M') : M⟦z ↜ x⟧⟦z ↝ fvar y⟧ ⭢ₚ M'⟦z ↜ x⟧⟦z ↝ fvar y⟧ := @@ -133,8 +126,7 @@ lemma para_open_close (x y z) (para : M ⭢ₚ M') : M⟦z ↜ x⟧⟦z ↝ fvar /-- Parallel substitution respects fresh opening. 
-/
lemma para_open_out (L : Finset Var) (mem : ∀ x, x ∉ L → (M ^ fvar x) ⭢ₚ N ^ fvar x)
    (para : M' ⭢ₚ N') : (M ^ M') ⭢ₚ (N ^ N') := by
-  let ⟨x, _⟩ := fresh_exists <| free_union [fv] Var
-  grind
+  grind [fresh_exists <| free_union [fv] Var]

-- TODO: the Takahashi translation would be a much nicer and shorter proof, but I had difficulty
-- writing it for locally nameless terms.

@@ -213,7 +205,7 @@ theorem confluence_beta : Confluent (@FullBeta Var) := by
    ext
    exact parachain_iff_redex
  rw [Confluent, ←eq]
-  exact @para_confluence Var _ _
+  exact para_confluence

end LambdaCalculus.LocallyNameless.Untyped.Term
diff --git a/Cslib/Languages/LambdaCalculus/LocallyNameless/Untyped/LcAt.lean b/Cslib/Languages/LambdaCalculus/LocallyNameless/Untyped/LcAt.lean
new file mode 100644
index 000000000..5fead74fa
--- /dev/null
+++ b/Cslib/Languages/LambdaCalculus/LocallyNameless/Untyped/LcAt.lean
@@ -0,0 +1,89 @@
+/-
+Copyright (c) 2026 Elimia (Sehun Kim). All rights reserved.
+Released under Apache 2.0 license as described in the file LICENSE.
+Authors: Elimia (Sehun Kim)
+-/
+
+module
+
+public import Cslib.Languages.LambdaCalculus.LocallyNameless.Untyped.Properties
+
+@[expose] public section
+
+/-!
+
+Alternative Definitions for LC:
+
+This module defines `LcAt k M`, a more general definition of local closure. When k = 0, this is
+equivalent to `LC`, as shown in `lcAt_iff_LC`.
+
+-/
+
+namespace Cslib.LambdaCalculus.LocallyNameless.Untyped.Term
+
+universe u
+
+variable {Var : Type u}
+
+/-- `LcAt k M` is satisfied when all bound indices of M are smaller than `k`. -/
+@[simp, scoped grind =]
+def LcAt (k : ℕ) : Term Var → Prop
+| bvar i => i < k
+| fvar _ => True
+| app t₁ t₂ => LcAt k t₁ ∧ LcAt k t₂
+| abs t => LcAt (k + 1) t
+
+/-- `depth` counts the maximum number of nested lambdas enclosing a variable.
-/
+@[simp, scoped grind =]
+def depth : Term Var → ℕ
+| bvar _ => 0
+| fvar _ => 0
+| app t₁ t₂ => max (depth t₁) (depth t₂)
+| abs t => depth t + 1
+
+@[elab_as_elim]
+protected lemma ind_on_depth (P : Term Var → Prop) (bvar : ∀ i, P (bvar i)) (fvar : ∀ x, P (fvar x))
+    (app : ∀ M N, P M → P N → P (app M N)) (abs : ∀ M, P M → (∀ N, N.depth ≤ M.depth → P N) → P M.abs)
+    (M : Term Var) : P M := by
+  have h {d : ℕ} {M : Term Var} (p : M.depth ≤ d) : P M := by
+    induction d generalizing M with
+    | zero => induction M <;> grind
+    | succ =>
+      induction M with
+      | abs M' => apply abs M' <;> grind
+      | _ => grind [sup_le_iff]
+  exact h M.depth.le_refl
+
+/-- Opening the i-th bound variable with a free variable does not change the depth of a term. -/
+@[simp, scoped grind =]
+lemma depth_openRec_fvar_eq_depth (M : Term Var) (x : Var) (i : ℕ) :
+    (M⟦i ↝ fvar x⟧).depth = M.depth := by
+  induction M generalizing i <;> grind
+
+/-- Opening with a free variable does not change the depth of a term. -/
+theorem depth_open_fvar_eq_depth (M : Term Var) (x : Var) : depth (M ^ fvar x) = depth M :=
+  depth_openRec_fvar_eq_depth M x 0
+
+/-- Opening the i-th bound variable with a free variable is `LcAt i` iff `M` is `LcAt (i + 1)`. -/
+@[simp, scoped grind =]
+theorem lcAt_openRec_fvar_iff_lcAt (M : Term Var) (x : Var) (i : ℕ) :
+    LcAt i (M⟦i ↝ fvar x⟧) ↔ LcAt (i + 1) M := by
+  induction M generalizing i <;> grind
+
+/-- The opening of `M` with a free variable is `LcAt 0` iff `M` is `LcAt 1`. -/
+theorem lcAt_open_fvar_iff_lcAt (M : Term Var) (x : Var) : LcAt 0 (M ^ fvar x) ↔ LcAt 1 M :=
+  lcAt_openRec_fvar_iff_lcAt M x 0
+
+/-- `M` is `LcAt 0` if and only if `M` is locally closed.
-/ +theorem lcAt_iff_LC (M : Term Var) [HasFresh Var] : LcAt 0 M ↔ M.LC := by + induction M using LambdaCalculus.LocallyNameless.Untyped.Term.ind_on_depth with + | abs => + constructor + · grind [LC.abs ∅] + · intros h2 + rcases h2 with ⟨⟩|⟨L,_,_⟩ + grind [fresh_exists L] + | _ => grind [cases LC] + +end Cslib.LambdaCalculus.LocallyNameless.Untyped.Term diff --git a/Cslib/Languages/LambdaCalculus/LocallyNameless/Untyped/Properties.lean b/Cslib/Languages/LambdaCalculus/LocallyNameless/Untyped/Properties.lean index ca3b3c449..f72a8a985 100644 --- a/Cslib/Languages/LambdaCalculus/LocallyNameless/Untyped/Properties.lean +++ b/Cslib/Languages/LambdaCalculus/LocallyNameless/Untyped/Properties.lean @@ -43,8 +43,7 @@ lemma open_close (x : Var) (t : Term Var) (k : ℕ) (nmem : x ∉ t.fv) : t = t /-- Opening is injective. -/ lemma open_injective (x : Var) (M M') (free_M : x ∉ M.fv) (free_M' : x ∉ M'.fv) (eq : M ^ fvar x = M' ^ fvar x) : M = M' := by - rw [open_close x M 0 free_M, open_close x M' 0 free_M'] - exact congrArg (closeRec 0 x) eq + grind [open_close x M 0 free_M, open_close x M' 0 free_M'] /-- Opening and closing are associative for nonclashing free variables. -/ lemma swap_open_fvar_close (k n : ℕ) (x y : Var) (m : Term Var) (neq₁ : k ≠ n) (neq₂ : x ≠ y) : @@ -79,11 +78,9 @@ omit [DecidableEq Var] in /-- A locally closed term is unchanged by opening. -/ @[scoped grind =_] lemma open_lc (k t) (e : Term Var) (e_lc : e.LC) : e = e⟦k ↝ t⟧ := by - induction e_lc generalizing k - case abs xs e _ ih => - simp only [openRec_abs, abs.injEq] - apply open_lc_aux e 0 (fvar (fresh xs)) (k+1) t <;> grind - all_goals grind + induction e_lc generalizing k with + | abs xs e _ _ => grind [open_lc_aux e 0 (fvar (fresh xs)) (k+1) t] + | _ => grind /-- Substitution of a locally closed term distributes with opening. -/ @[scoped grind =] @@ -116,38 +113,30 @@ set_option linter.unusedDecidableInType false in /-- Opening of locally closed terms is locally closed. 
-/ @[scoped grind ←] theorem beta_lc {M N : Term Var} (m_lc : M.abs.LC) (n_lc : LC N) : LC (M ^ N) := by - cases m_lc - case abs xs mem => - have ⟨y, _⟩ := fresh_exists <| free_union [fv] Var - grind + cases m_lc with + | abs => grind [fresh_exists <| free_union [fv] Var] /-- Opening then closing is equivalent to substitution. -/ @[scoped grind =] lemma open_close_to_subst (m : Term Var) (x y : Var) (k : ℕ) (m_lc : LC m) : m ⟦k ↜ x⟧⟦k ↝ fvar y⟧ = m [x := fvar y] := by - revert k - induction m_lc - case abs xs t x_mem ih => - intros k + induction m_lc generalizing k with + | abs xs t => have ⟨x', _⟩ := fresh_exists <| free_union [fv] Var - simp only [closeRec_abs, openRec_abs, subst_abs] - rw [open_close x' (t⟦k+1 ↜ x⟧⟦k+1 ↝ fvar y⟧) 0 ?f₁, open_close x' (t[x := fvar y]) 0 ?f₂] - rw [swap_open_fvars, ←swap_open_fvar_close] <;> grind - case f₁ => grind [open_fresh_preserve_not_fvar, close_preserve_not_fvar] - case f₂ => grind [subst_preserve_not_fvar] - all_goals grind + grind [ + swap_open_fvars, =_ swap_open_fvar_close, + open_close x' (t⟦k+1 ↜ x⟧⟦k+1 ↝ fvar y⟧) 0, open_close x' (t[x := fvar y]) 0, + open_fresh_preserve_not_fvar, close_preserve_not_fvar, subst_preserve_not_fvar] + | _ => grind /-- Closing and opening are inverses. 
-/
lemma close_open (x : Var) (t : Term Var) (k : ℕ) (t_lc : LC t) : t⟦k ↜ x⟧⟦k ↝ fvar x⟧ = t := by
-  induction t_lc generalizing k
-  case abs xs t t_open_lc ih =>
-    simp only [closeRec_abs, openRec_abs, abs.injEq]
+  induction t_lc generalizing k with
+  | abs _ t _ ih =>
    let z := t⟦k + 1 ↜ x⟧⟦k + 1 ↝ fvar x⟧
    have ⟨y, _⟩ := fresh_exists <| free_union [fv] Var
-    refine open_injective y _ _ ?_ ?_ ?f
-    case f => rw [←ih y ?_ (k+1)] <;> grind [swap_open_fvar_close, swap_open_fvars]
-    all_goals grind
-  all_goals grind
+    grind [ih y ?_ (k+1), open_injective, swap_open_fvar_close, swap_open_fvars]
+  | _ => grind

end LambdaCalculus.LocallyNameless.Untyped.Term
diff --git a/Cslib/Logics/HML/Basic.lean b/Cslib/Logics/HML/Basic.lean
new file mode 100644
index 000000000..62c0a605f
--- /dev/null
+++ b/Cslib/Logics/HML/Basic.lean
@@ -0,0 +1,266 @@
+/-
+Copyright (c) 2026 Fabrizio Montesi. All rights reserved.
+Released under Apache 2.0 license as described in the file LICENSE.
+Authors: Fabrizio Montesi, Marco Peressotti, Alexandre Rademaker
+-/
+
+module
+
+public import Cslib.Foundations.Semantics.LTS.Bisimulation
+
+@[expose] public section
+
+/-! # Hennessy-Milner Logic (HML)
+
+Hennessy-Milner Logic (HML) is a logic for reasoning about the behaviour of nondeterministic and
+concurrent systems.
+
+## Implementation notes
+There are two main versions of HML: the original [Hennessy1985], which includes a negation
+connective, and a variation without negation, for example as in [Aceto1999].
+We follow the latter, which is used in many recent papers. Negation is recovered as usual, by having
+a `false` atomic proposition and a function that, given any proposition, returns its negated form
+(see `Proposition.neg`).
+
+## Main definitions
+
+- `Proposition`: the language of propositions.
+- `Satisfies lts s a`: in the LTS `lts`, the state `s` satisfies the proposition `a`.
+- `denotation a`: the denotation of a proposition `a`, defined as the set of states that
+satisfy `a`.
+- `theory lts s`: the set of all propositions satisfied by state `s` in the LTS `lts`. + +## Main statements + +- `satisfies_mem_denotation`: the denotational semantics of HML is correct, in the sense that it +coincides with the notion of satisfiability. +- `not_theoryEq_satisfies`: if two states have different theories, then there exists a +distinguishing proposition that one state satisfies and the other does not. +- `theoryEq_eq_bisimilarity`: two states have the same theory iff they are bisimilar +(see `Bisimilarity`). + +## References + +* [M. Hennessy, R. Milner, *Algebraic Laws for Nondeterminism and Concurrency*][Hennessy1985] +* [L. Aceto, A. Ingólfsdóttir, *Testing Hennessy-Milner Logic with Recursion*][Aceto1999] + +-/ + +namespace Cslib.Logic.HML + +/-- Propositions. -/ +inductive Proposition (Label : Type u) : Type u where + | true + | false + | and (φ₁ φ₂ : Proposition Label) + | or (φ₁ φ₂ : Proposition Label) + | diamond (μ : Label) (φ : Proposition Label) + | box (μ : Label) (φ : Proposition Label) + +/-- Negation of a proposition. -/ +@[simp, scoped grind =] +def Proposition.neg (a : Proposition Label) : Proposition Label := + match a with + | .true => .false + | .false => .true + | and a b => or a.neg b.neg + | or a b => and a.neg b.neg + | diamond μ a => box μ a.neg + | box μ a => diamond μ a.neg + +/-- Finite conjunction of propositions. -/ +@[simp, scoped grind =] +def Proposition.finiteAnd (as : List (Proposition Label)) : Proposition Label := + List.foldr .and .true as + +/-- Finite disjunction of propositions. -/ +@[simp, scoped grind =] +def Proposition.finiteOr (as : List (Proposition Label)) : Proposition Label := + List.foldr .or .false as + +/-- Satisfaction relation. `Satisfies lts s a` means that, in the LTS `lts`, the state `s` satisfies +the proposition `a`. 
-/
+@[scoped grind]
+inductive Satisfies (lts : LTS State Label) : State → Proposition Label → Prop where
+  | true {s : State} : Satisfies lts s .true
+  | and {s : State} {a b : Proposition Label} :
+    Satisfies lts s a → Satisfies lts s b →
+    Satisfies lts s (.and a b)
+  | or₁ {s : State} {a b : Proposition Label} :
+    Satisfies lts s a → Satisfies lts s (.or a b)
+  | or₂ {s : State} {a b : Proposition Label} :
+    Satisfies lts s b → Satisfies lts s (.or a b)
+  | diamond {s s' : State} {μ : Label} {a : Proposition Label}
+    (htr : lts.Tr s μ s') (hs : Satisfies lts s' a) : Satisfies lts s (.diamond μ a)
+  | box {s : State} {μ : Label} {a : Proposition Label}
+    (h : ∀ s', lts.Tr s μ s' → Satisfies lts s' a) :
+    Satisfies lts s (.box μ a)
+
+/-- Denotation of a proposition. -/
+@[simp, scoped grind =]
+def Proposition.denotation (a : Proposition Label) (lts : LTS State Label)
+    : Set State :=
+  match a with
+  | .true => Set.univ
+  | .false => ∅
+  | .and a b => a.denotation lts ∩ b.denotation lts
+  | .or a b => a.denotation lts ∪ b.denotation lts
+  | .diamond μ a => {s | ∃ s', lts.Tr s μ s' ∧ s' ∈ a.denotation lts}
+  | .box μ a => {s | ∀ s', lts.Tr s μ s' → s' ∈ a.denotation lts}
+
+/-- The theory of a state is the set of all propositions that it satisfies. -/
+abbrev theory (lts : LTS State Label) (s : State) : Set (Proposition Label) :=
+  {a | Satisfies lts s a}
+
+/-- Two states are theory-equivalent (for a specific LTS) if they have the same theory. -/
+abbrev TheoryEq (lts : LTS State Label) (s1 s2 : State) :=
+  theory lts s1 = theory lts s2
+
+open Proposition LTS Bisimulation Simulation
+
+/-- Characterisation theorem for the denotational semantics. -/
+@[scoped grind =]
+theorem satisfies_mem_denotation {lts : LTS State Label} :
+    Satisfies lts s a ↔ s ∈ a.denotation lts := by
+  induction a generalizing s <;> grind
+
+/-- A state satisfies a proposition iff it does not satisfy the negation of the proposition.
-/ +@[simp, scoped grind =] +theorem neg_satisfies {lts : LTS State Label} : + ¬Satisfies lts s a.neg ↔ Satisfies lts s a := by + induction a generalizing s <;> grind + +/-- A state is in the denotation of a proposition iff it is not in the denotation of the negation +of the proposition. -/ +@[scoped grind =] +theorem neg_denotation {lts : LTS State Label} (a : Proposition Label) : + s ∉ a.neg.denotation lts ↔ s ∈ a.denotation lts := by + grind [_=_ satisfies_mem_denotation] + +/-- A state satisfies a finite conjunction iff it satisfies all conjuncts. -/ +@[scoped grind =] +theorem satisfies_finiteAnd {lts : LTS State Label} {s : State} + {as : List (Proposition Label)} : + Satisfies lts s (Proposition.finiteAnd as) ↔ ∀ a ∈ as, Satisfies lts s a := by + induction as <;> grind + +/-- A state satisfies a finite disjunction iff it satisfies some disjunct. -/ +@[scoped grind =] +theorem satisfies_finiteOr {lts : LTS State Label} {s : State} + {as : List (Proposition Label)} : + Satisfies lts s (Proposition.finiteOr as) ↔ ∃ a ∈ as, Satisfies lts s a := by + induction as <;> grind + +@[scoped grind →] +theorem satisfies_theory (h : Satisfies lts s a) : a ∈ theory lts s := by + grind + +/-- Two states are theory-equivalent iff they are denotationally equivalent. -/ +theorem theoryEq_denotation_eq {lts : LTS State Label} : + TheoryEq lts s1 s2 ↔ + (∀ a : Proposition Label, s1 ∈ a.denotation lts ↔ s2 ∈ a.denotation lts) := by + grind [_=_ satisfies_mem_denotation] + +/-- If two states are not theory equivalent, there exists a distinguishing proposition. -/ +lemma not_theoryEq_satisfies (h : ¬ TheoryEq lts s1 s2) : + ∃ a, (Satisfies lts s1 a ∧ ¬Satisfies lts s2 a) := by + grind [=_ neg_satisfies] + +/-- If two states are theory equivalent and the former satisfies a proposition, the latter does as +well. 
-/ +theorem theoryEq_satisfies {lts : LTS State Label} (h : TheoryEq lts s1 s2) + (hs : Satisfies lts s1 a) : Satisfies lts s2 a := by + unfold TheoryEq theory at h + rw [Set.ext_iff] at h + exact (h a).mp hs + +section ImageToPropositions + +variable {lts : LTS State Label} (stateMap : lts.image s μ → Proposition Label) +variable [finImage : Fintype (lts.image s μ)] + +/-- The list of propositions over finite μ-derivatives. -/ +noncomputable def propositions : List (Proposition Label) := + finImage.elems.toList.map stateMap + +theorem propositions_complete (s' : lts.image s μ) : stateMap s' ∈ propositions stateMap := by + apply List.mem_map.mpr + use s', Finset.mem_toList.mpr (Fintype.complete s') + +theorem propositions_satisfies_conjunction (htr : lts.Tr s1 μ s1') + (hdist_spec : ∀ s2', Satisfies lts s1' (stateMap s2')) : + Satisfies lts s1 (.diamond μ <| Proposition.finiteAnd (propositions stateMap)) := by + apply Satisfies.diamond htr + rw [satisfies_finiteAnd] + intro a ha_mem + grind [List.mem_map.mp ha_mem] + +end ImageToPropositions + +/-- Theory equivalence is a bisimulation. 
-/ +@[scoped grind ⇒] +theorem theoryEq_isBisimulation (lts : LTS State Label) + [image_finite : ∀ s μ, Finite (lts.image s μ)] : + lts.IsBisimulation (TheoryEq lts) := by + intro s1 s2 h μ + let (s : State) := @Fintype.ofFinite (lts.image s μ) (image_finite s μ) + constructor + case left => + intro s1' htr + by_contra + have hdist : ∀ s2' : lts.image s2 μ, ∃ a, Satisfies lts s1' a ∧ ¬Satisfies lts s2'.val a := by + intro ⟨s2', hs2'⟩ + apply not_theoryEq_satisfies + grind + choose dist_formula hdist_spec using hdist + let conjunction := Proposition.finiteAnd (propositions dist_formula) + have hs1_diamond : Satisfies lts s1 (.diamond μ conjunction) := by + grind [propositions_satisfies_conjunction] + cases (theoryEq_satisfies h hs1_diamond) with | @diamond _ s2'' _ _ htr2 hsat => + grind [propositions_complete dist_formula ⟨s2'', htr2⟩] + case right => + -- Symmetric to left case + intro s2' htr + by_contra + have hdist : ∀ s1' : lts.image s1 μ, ∃ a, Satisfies lts s2' a ∧ ¬Satisfies lts s1'.val a := by + intro ⟨s1', hs1'⟩ + apply not_theoryEq_satisfies + grind + choose dist_formula hdist_spec using hdist + let conjunction := Proposition.finiteAnd (propositions dist_formula) + have hs2_diamond : Satisfies lts s2 (.diamond μ conjunction) := by + grind [propositions_satisfies_conjunction] + cases (theoryEq_satisfies h.symm hs2_diamond) with | @diamond _ s1'' _ _ htr1 hsat => + grind [propositions_complete dist_formula ⟨s1'', htr1⟩] + +/-- If two states are in a bisimulation and the former satisfies a proposition, the latter does as +well. 
-/ +@[scoped grind ⇒] +lemma bisimulation_satisfies {lts : LTS State Label} + {hrb : lts.IsBisimulation r} + (hr : r s1 s2) (a : Proposition Label) (hs : Satisfies lts s1 a) : + Satisfies lts s2 a := by + induction a generalizing s1 s2 with + | diamond => cases hs with | diamond htr _ => grind [hrb.follow_fst hr htr] + | _ => grind + +lemma bisimulation_TheoryEq {lts : LTS State Label} + {hrb : lts.IsBisimulation r} + (hr : r s1 s2) : + TheoryEq lts s1 s2 := by + have : s2 ~[lts] s1 := by grind [Bisimilarity.symm] + grind + +/-- Theory equivalence and bisimilarity coincide for image-finite LTSs. -/ +theorem theoryEq_eq_bisimilarity (lts : LTS State Label) + [image_finite : ∀ s μ, Finite (lts.image s μ)] : + TheoryEq lts = Bisimilarity lts := by + ext s1 s2 + apply Iff.intro <;> intro h + · exists TheoryEq lts + grind + · obtain ⟨r, hr, hrb⟩ := h + apply bisimulation_TheoryEq hr + exact hrb + +end Cslib.Logic.HML diff --git a/Cslib/Logics/LinearLogic/CLL/Basic.lean b/Cslib/Logics/LinearLogic/CLL/Basic.lean index 9f720bdbc..0cf7e1985 100644 --- a/Cslib/Logics/LinearLogic/CLL/Basic.lean +++ b/Cslib/Logics/LinearLogic/CLL/Basic.lean @@ -391,8 +391,7 @@ def tensor_assoc {a b c : Proposition Atom} : a ⊗ (b ⊗ c) ≡⇓ (a ⊗ b) show a⫠ ::ₘ b⫠ ::ₘ c⫠ ::ₘ {a ⊗ (b ⊗ c)} = ((a ⊗ (b ⊗ c)) ::ₘ {a⫠} + ({b⫠} + {c⫠})) by grind ▸ (.tensor .ax <| .tensor .ax .ax)⟩ -instance {Γ : Sequent Atom} : - IsSymm (Proposition Atom) (fun a b => Sequent.Provable ((a ⊗ b) ::ₘ Γ)) where +instance {Γ : Sequent Atom} : Std.Symm (fun a b => Sequent.Provable ((a ⊗ b) ::ₘ Γ)) where symm _ _ h := Sequent.Provable.fromProof (subst_eqv_head tensor_symm h.toProof) /-- ⊕ is idempotent. 
-/ diff --git a/Cslib/Logics/LinearLogic/CLL/PhaseSemantics/Basic.lean b/Cslib/Logics/LinearLogic/CLL/PhaseSemantics/Basic.lean index 971c43535..de64d08bd 100644 --- a/Cslib/Logics/LinearLogic/CLL/PhaseSemantics/Basic.lean +++ b/Cslib/Logics/LinearLogic/CLL/PhaseSemantics/Basic.lean @@ -11,6 +11,7 @@ public import Mathlib.Algebra.Group.Pointwise.Set.Basic public import Mathlib.Algebra.Group.Idempotent public import Mathlib.Data.Set.Basic public import Mathlib.Order.Closure +public import Mathlib.Order.Defs.PartialOrder public import Cslib.Logics.LinearLogic.CLL.Basic @[expose] public section @@ -157,6 +158,8 @@ instance : SetLike (Fact P) P where coe := Fact.carrier coe_injective' _ _ _ := by grind [cases Fact] +instance : PartialOrder (Fact P) := PartialOrder.ofSetLike (Fact P) P + instance : HasSubset (Fact P) := ⟨fun A B => (A : Set P) ⊆ (B : Set P)⟩ diff --git a/CslibTests.lean b/CslibTests.lean index e1aaeff0e..c1c44021a 100644 --- a/CslibTests.lean +++ b/CslibTests.lean @@ -1,12 +1,16 @@ -module +module -- shake: keep-all public import CslibTests.Bisimulation public import CslibTests.CCS +public import CslibTests.CLL public import CslibTests.DFA public import CslibTests.FreeMonad public import CslibTests.GrindLint +public import CslibTests.HML public import CslibTests.HasFresh public import CslibTests.ImportWithMathlib public import CslibTests.LTS public import CslibTests.LambdaCalculus -public import CslibTests.ReductionSystem +public import CslibTests.QueryModel.ProgExamples +public import CslibTests.QueryModel.QueryExamples +public import CslibTests.Reduction diff --git a/CslibTests/CLL.lean b/CslibTests/CLL.lean new file mode 100644 index 000000000..4864a9299 --- /dev/null +++ b/CslibTests/CLL.lean @@ -0,0 +1,218 @@ +/- +Copyright (c) 2025 Alexandre Rademaker. All rights reserved. +Released under Apache 2.0 license as described in the file LICENSE. 
+Authors: Alexandre Rademaker +-/ + +import Cslib.Logics.LinearLogic.CLL.Basic + +namespace CslibTests + +/-! # Tests for Classical Linear Logic + +I use `Proposition Nat` as the concrete instantiation for atoms. +-/ + +open Cslib.CLL + +/-! ## Proposition construction tests -/ + +-- Define some atomic propositions for testing +abbrev P := Proposition Nat + +def a : P := .atom 0 +def a' : P := .atomDual 0 +def b : P := .atom 1 +def c : P := .atom 2 + +-- Test that notations work correctly +example : P := a ⊗ b -- tensor +example : P := a ⅋ b -- parr +example : P := a ⊕ b -- oplus +example : P := a & b -- with +example : P := !a -- bang +example : P := ʔa -- quest +example : P := (1 : P) -- one +example : P := (0 : P) -- zero +example : P := (⊤ : P) -- top +example : P := (⊥ : P) -- bot +example : P := a ⊸ b -- linear implication + +-- Test nested propositions +example : P := (a ⊗ b) ⅋ c +example : P := !(a ⊗ b) +example : P := a ⊕ (b & c) + +/-! ## Duality tests -/ + +-- dual_involution: a⫠⫠ = a +example : (a⫠⫠ : P) = a := Proposition.dual_involution a +example : (a ⊗ b)⫠⫠ = (a ⊗ b : P) := Proposition.dual_involution (a ⊗ b) +example : (!a)⫠⫠ = (!a : P) := Proposition.dual_involution (!a) + +-- Dual of specific propositions +example : (a⫠ : P) = a' := rfl +example : ((1 : P)⫠) = ⊥ := rfl +example : ((0 : P)⫠) = ⊤ := rfl +example : (a ⊗ b)⫠ = (a⫠ ⅋ b⫠ : P) := rfl +example : (a ⅋ b)⫠ = (a⫠ ⊗ b⫠ : P) := rfl +example : (a ⊕ b)⫠ = (a⫠ & b⫠ : P) := rfl +example : (a & b)⫠ = (a⫠ ⊕ b⫠ : P) := rfl +example : (!a)⫠ = (ʔ(a⫠) : P) := rfl +example : (ʔa)⫠ = (!(a⫠) : P) := rfl + +-- dual_neq: no proposition equals its dual +example : a ≠ a⫠ := Proposition.dual_neq a +example : (1 : P) ≠ (1 : P)⫠ := Proposition.dual_neq 1 + +-- dual_inj: duality is injective +example : (a⫠ = b⫠) ↔ (a = b) := Proposition.dual_inj a b + + +/-! 
## Basic proof tests -/ + +-- Axiom: ⊢ a, a⫠ +example : ⇓({a, a⫠} : Sequent Nat) := Proof.ax +example : ⇓({a⫠, a} : Sequent Nat) := Proof.ax' + +-- One: ⊢ 1 +example : ⇓({1} : Sequent Nat) := Proof.one + +-- Top: ⊢ ⊤, Γ +example : ⇓({⊤} : Sequent Nat) := Proof.top +example : ⇓(⊤ ::ₘ {a} : Sequent Nat) := Proof.top + +-- Bot: from ⊢ Γ derive ⊢ ⊥, Γ +example : ⇓(⊥ ::ₘ {1} : Sequent Nat) := Proof.bot Proof.one + +-- Parr: from ⊢ a, b, Γ derive ⊢ a ⅋ b, Γ +example : ⇓({a ⅋ a⫠} : Sequent Nat) := Proof.parr Proof.ax + +-- Tensor: from ⊢ a, Γ and ⊢ b, Δ derive ⊢ a ⊗ b, Γ, Δ +example : ⇓({a ⊗ b, a⫠, b⫠} : Sequent Nat) := Proof.tensor Proof.ax Proof.ax + +-- Oplus: from ⊢ a, Γ derive ⊢ a ⊕ b, Γ +example : ⇓({a ⊕ b, a⫠} : Sequent Nat) := Proof.oplus₁ Proof.ax +example : ⇓({a ⊕ b, b⫠} : Sequent Nat) := Proof.oplus₂ Proof.ax + +-- With: from ⊢ a, Γ and ⊢ b, Γ derive ⊢ a & b, Γ +example : ⇓({a & a, a⫠} : Sequent Nat) := Proof.with Proof.ax Proof.ax + +-- Quest: from ⊢ a, Γ derive ⊢ ʔa, Γ +example : ⇓({ʔa, a⫠} : Sequent Nat) := Proof.quest Proof.ax + +-- Weaken: from ⊢ Γ derive ⊢ ʔa, Γ +example : ⇓({ʔa, 1} : Sequent Nat) := Proof.weaken Proof.one + + +/-! 
## Logical equivalence tests (proof-irrelevant) -/ + +-- Reflexivity +example : (a : P) ≡ a := Proposition.Equiv.refl a + +-- Symmetry +example (h : (a : P) ≡ b) : b ≡ a := Proposition.Equiv.symm h + +-- Transitivity +example (hab : (a : P) ≡ b) (hbc : b ≡ c) : a ≡ c := Proposition.Equiv.trans hab hbc + +-- Coercion from proof-relevant to proof-irrelevant (via .toProp) +example : (!⊤ : P) ≡ 1 := Proposition.bang_top_eqv_one.toProp +example : (ʔ0 : P) ≡ ⊥ := Proposition.quest_zero_eqv_bot.toProp +example : (a ⊗ 0 : P) ≡ 0 := (Proposition.tensor_zero_eqv_zero a).toProp +example : (a ⅋ ⊤ : P) ≡ ⊤ := (Proposition.parr_top_eqv_top a).toProp +example : (a ⊗ b : P) ≡ b ⊗ a := Proposition.tensor_symm.toProp +example : (a ⊗ (b ⊗ c) : P) ≡ (a ⊗ b) ⊗ c := Proposition.tensor_assoc.toProp +example : (a ⊗ (b ⊕ c) : P) ≡ (a ⊗ b) ⊕ (a ⊗ c) := (Proposition.tensor_distrib_oplus a b c).toProp +example : (a ⊕ a : P) ≡ a := Proposition.oplus_idem.toProp +example : (a & a : P) ≡ a := Proposition.with_idem.toProp + + +/-! ## Proof-relevant equivalence tests -/ + +-- equiv.refl +example : (a : P) ≡⇓ a := Proposition.equiv.refl a + +-- equiv.symm +example (h : (a : P) ≡⇓ b) : b ≡⇓ a := Proposition.equiv.symm a h + +-- equiv.trans +example (hab : (a : P) ≡⇓ b) (hbc : (b : P) ≡⇓ c) : a ≡⇓ c := Proposition.equiv.trans hab hbc + +-- Proof-relevant versions of logical equivalences +example : (!⊤ : P) ≡⇓ 1 := Proposition.bang_top_eqv_one +example : (ʔ0 : P) ≡⇓ ⊥ := Proposition.quest_zero_eqv_bot +example : (a ⊗ b : P) ≡⇓ b ⊗ a := Proposition.tensor_symm +example : (a ⊗ (b ⊗ c) : P) ≡⇓ (a ⊗ b) ⊗ c := Proposition.tensor_assoc +example : (a ⊕ a : P) ≡⇓ a := Proposition.oplus_idem +example : (a & a : P) ≡⇓ a := Proposition.with_idem + + +/-! 
## Inversion tests -/
+
+-- parr_inversion
+example (h : ⇓({a ⅋ b} : Sequent Nat)) : ⇓({a, b} : Sequent Nat) := Proof.parr_inversion h
+
+-- bot_inversion
+example (h : ⇓({⊥, 1} : Sequent Nat)) : ⇓({1} : Sequent Nat) := Proof.bot_inversion h
+
+-- with_inversion
+example (h : ⇓({a & b} : Sequent Nat)) : ⇓({a} : Sequent Nat) := Proof.with_inversion₁ h
+example (h : ⇓({a & b} : Sequent Nat)) : ⇓({b} : Sequent Nat) := Proof.with_inversion₂ h
+
+
+/-! ## Positive/Negative classification tests -/
+
+-- Positive propositions
+example : Proposition.positive a = true := rfl
+example : Proposition.positive (1 : P) = true := rfl
+example : Proposition.positive (0 : P) = true := rfl
+example : Proposition.positive (a ⊗ b) = true := rfl
+example : Proposition.positive (a ⊕ b) = true := rfl
+example : Proposition.positive (!a) = true := rfl
+
+-- Negative propositions
+example : Proposition.negative (Proposition.atomDual 0 : P) = true := rfl
+example : Proposition.negative (⊥ : P) = true := rfl
+example : Proposition.negative (⊤ : P) = true := rfl
+example : Proposition.negative (a ⅋ b) = true := rfl
+example : Proposition.negative (a & b) = true := rfl
+example : Proposition.negative (ʔa) = true := rfl
+
+
+/-! ## Linear logic proof tests -/
+
+/-- Example 37, Figure 5 from https://arxiv.org/abs/1904.06850
+
+B ⊢ (!(A ⊸ B) ⊸ B) ⊗ (B ⊸ (!A ⊸ B))
+
+This translates to the sequent:
+
+⊢ B⫠, (!(A ⊸ B) ⊸ B) ⊗ (B ⊸ (!A ⊸ B))
+
+Breaking down the formula:
+
+  A ⊸ B = A⫠ ⅋ B (linear implication)
+  !(A ⊸ B) = !(A⫠ ⅋ B)
+  (!(A ⊸ B))⫠ = ʔ((A⫠ ⅋ B)⫠) = ʔ(A ⊗ B⫠)
+  !(A ⊸ B) ⊸ B = (!(A ⊸ B))⫠ ⅋ B = ʔ(A ⊗ B⫠) ⅋ B
+  !A ⊸ B = (!A)⫠ ⅋ B = ʔA⫠ ⅋ B
+  B ⊸ (!A ⊸ B) = B⫠ ⅋ (ʔA⫠ ⅋ B)
+-/
+example : ⇓({b⫠, (!(a ⊸ b) ⊸ b) ⊗ (b ⊸ (!a ⊸ b))} : Sequent Nat) := by
+  apply Proof.rwConclusion (Multiset.pair_comm ..)
+  -- tensor rule: we need Γ + Δ = {b⫠}, so Γ = {b⫠} and Δ = {}
+  apply Proof.tensor (Γ := {b⫠}) (Δ := {})
+  · -- !(a ⊸ b) ⊸ b = ʔ(a ⊗ b⫠) ⅋ b
+    apply Proof.parr -- Apply parr to get: ⊢ ʔ(a ⊗ b⫠), b, b⫠
+    apply Proof.weaken
+    apply Proof.ax
+  · -- b ⊸ (!a ⊸ b) = b⫠ ⅋ (ʔa⫠ ⅋ b)
+    apply Proof.parr -- Apply parr to get: ⊢ b⫠, ʔa⫠ ⅋ b
+    apply Proof.rwConclusion (Multiset.pair_comm ..)
+    apply Proof.parr -- Apply parr to get: ⊢ b⫠, ʔa⫠, b
+    apply Proof.weaken
+    exact Proof.ax
+
+
+end CslibTests
diff --git a/CslibTests/GrindLint.lean b/CslibTests/GrindLint.lean
index 6eb56e31f..67673a40b 100644
--- a/CslibTests/GrindLint.lean
+++ b/CslibTests/GrindLint.lean
@@ -35,6 +35,7 @@ open_scoped_all Cslib
#grind_lint skip Cslib.FinFun.fromFun_inter
#grind_lint skip Cslib.LTS.deterministic_not_lto
#grind_lint skip Cslib.LTS.deterministic_tr_image_singleton
+#grind_lint skip Cslib.LTS.IsExecution.refl
#grind_lint skip Cslib.LTS.mem_saturate_image_τ
#grind_lint skip Cslib.ωSequence.drop_const
#grind_lint skip Cslib.ωSequence.get_cons_append_zero
@@ -68,9 +69,13 @@ open_scoped_all Cslib
#grind_lint skip Cslib.LambdaCalculus.LocallyNameless.Fsub.Typing.sub
#grind_lint skip Cslib.LambdaCalculus.LocallyNameless.Fsub.Typing.tapp
#grind_lint skip Cslib.LambdaCalculus.LocallyNameless.Untyped.Term.para_subst
+#grind_lint skip Cslib.LambdaCalculus.LocallyNameless.Untyped.Term.FullBeta.redex_app_l_cong
+#grind_lint skip Cslib.LambdaCalculus.LocallyNameless.Untyped.Term.FullBeta.redex_app_r_cong
#grind_lint skip Cslib.LambdaCalculus.LocallyNameless.Untyped.Term.subst_intro
#grind_lint skip Cslib.LambdaCalculus.LocallyNameless.Fsub.Env.Wf.sub
#grind_lint skip Cslib.LambdaCalculus.LocallyNameless.Fsub.Env.Wf.ty
+#grind_lint skip Cslib.Logic.HML.bisimulation_satisfies
+#grind_lint skip Cslib.Logic.HML.Satisfies.diamond

#guard_msgs in
#grind_lint check (min := 20) in Cslib
diff --git a/CslibTests/HML.lean b/CslibTests/HML.lean
new file mode 100644
index 000000000..01b09159b
--- /dev/null
+++ 
b/CslibTests/HML.lean @@ -0,0 +1,19 @@ +/- +Copyright (c) 2026 Fabrizio Montesi. All rights reserved. +Released under Apache 2.0 license as described in the file LICENSE. +Authors: Fabrizio Montesi +-/ + +import Cslib.Logics.HML.Basic +import Cslib.Languages.CCS.Semantics + +namespace CslibTests + +open Cslib +open CCS Logic.HML + +example [∀ p μ, Finite ((CCS.lts (defs := defs)).image p μ)] : + TheoryEq (CCS.lts (defs := defs)) = Bisimilarity (CCS.lts (defs := defs)) := + theoryEq_eq_bisimilarity .. + +end CslibTests diff --git a/CslibTests/LambdaCalculus.lean b/CslibTests/LambdaCalculus.lean index 4affacf00..1a331ea58 100644 --- a/CslibTests/LambdaCalculus.lean +++ b/CslibTests/LambdaCalculus.lean @@ -21,7 +21,7 @@ example : (abs 0 (var 0)) =α (abs 1 (var 1)) := by simp [Term.fv] example : (abs 1 (var 0)).subst 0 (app (var 1) (var 2)) = (abs 3 (app (var 1) (var 2))) := by - simp [subst, fv, bv, vars, rename, instHasFreshNat, HasFresh.ofSucc] + simp +instances [subst, fv, bv, vars, rename, instHasFreshNat, HasFresh.ofSucc] def x := 0 def y := 1 @@ -34,7 +34,8 @@ local instance coeNatTerm : Coe ℕ (Term ℕ) := ⟨Term.var⟩ -- section 5.3.4 of TAPL example : (abs y (app x y))[x := (app y z : Term ℕ)] = (abs w (app (app y z) w)) := by - simp [subst, fv, bv, vars, rename, instHasFreshNat, HasFresh.ofSucc, instHasSubstitutionTerm] + simp +instances [subst, fv, bv, vars, rename, instHasFreshNat, HasFresh.ofSucc, + instHasSubstitutionTerm] -- example : (abs 0 (abs 1 (app (var 0) (var 1)))) =α (abs 1 (abs 0 (app (var 1) (var 0)))) := by diff --git a/CslibTests/QueryModel/ProgExamples.lean b/CslibTests/QueryModel/ProgExamples.lean new file mode 100644 index 000000000..18c70e985 --- /dev/null +++ b/CslibTests/QueryModel/ProgExamples.lean @@ -0,0 +1,122 @@ +/- +Copyright (c) 2025 Shreyas Srinivas. All rights reserved. +Released under Apache 2.0 license as described in the file LICENSE. 
+Authors: Shreyas Srinivas +-/ + +module + +public import Cslib.AlgorithmsTheory.QueryModel + +@[expose] public section + +namespace Cslib + +namespace Algorithms + +namespace Prog + +section ProgExamples + +inductive Arith (α : Type u) : Type u → Type _ where + | add (x y : α) : Arith α α + | mul (x y : α) : Arith α α + | neg (x : α) : Arith α α + | zero : Arith α α + | one : Arith α α + +def Arith.natCost [Ring α] : Model (Arith α) ℕ where + evalQuery + | .add x y => x + y + | .mul x y => x * y + | .neg x => -x + | .zero => 0 + | .one => 1 + cost _ := 1 + +open Arith in +def ex1 : Prog (Arith α) α := do + let mut x : α ← @zero α + let mut y ← @one α + let z ← (add x y) + let w ← @neg α (← add z y) + add w z + +/-- The array version of the sort operations. -/ +inductive VecSortOps.{u} (α : Type u) : Type u → Type _ where + | swap (a : Vector α n) (i j : Fin n) : VecSortOps α (Vector α n) + -- Note that we have to ULift the result to fit this in the same universe as the other types. + -- We can avoid this only by forcing everything to be in `Type 0`. + | cmp (a : Vector α n) (i j : Fin n) : VecSortOps α (ULift Bool) + | write (a : Vector α n) (i : Fin n) (x : α) : VecSortOps α (Vector α n) + | read (a : Vector α n) (i : Fin n) : VecSortOps α α + | push (a : Vector α n) (elem : α) : VecSortOps α (Vector α (n + 1)) + +/-- The typical means of evaluating a `VecSortOps`. 
-/ +@[simp] +def VecSortOps.eval [BEq α] : VecSortOps α β → β + | .write v i x => v.set i x + | .cmp l i j => .up <| l[i] == l[j] + | .read l i => l[i] + | .swap l i j => l.swap i j + | .push a elem => a.push elem + +@[simps] +def VecSortOps.worstCase [DecidableEq α] : Model (VecSortOps α) ℕ where + evalQuery := VecSortOps.eval + cost + | .write _ _ _ => 1 + | .read _ _ => 1 + | .cmp _ _ _ => 1 + | .swap _ _ _ => 1 + | .push _ _ => 2 -- amortized over array insertion and resizing by doubling + +@[simps] +def VecSortOps.cmpSwap [DecidableEq α] : Model (VecSortOps α) ℕ where + evalQuery := VecSortOps.eval + cost + | .cmp _ _ _ => 1 + | .swap _ _ _ => 1 + | _ => 0 + +open VecSortOps in +def simpleExample (v : Vector ℤ n) (i k : Fin n) : + Prog (VecSortOps ℤ) (Vector ℤ (n + 1)) := do + let b : Vector ℤ n ← write v i 10 + let mut c : Vector ℤ n ← swap b i k + let elem ← read c i + push c elem + +inductive VecSearch (α : Type u) : Type → Type _ where + | compare (a : Vector α n) (i : ℕ) (val : α) : VecSearch α Bool + +@[simps] +def VecSearch.nat [DecidableEq α] : Model (VecSearch α) ℕ where + evalQuery + | .compare l i x => l[i]? == some x + cost + | .compare _ _ _ => 1 + +open VecSearch in +def linearSearchAux (v : Vector α n) + (x : α) (acc : Bool) (index : ℕ) : Prog (VecSearch α) Bool := do + if h : index ≥ n then + return acc + else + let cmp_res : Bool ← compare v index x + if cmp_res then + return true + else + linearSearchAux v x false (index + 1) + +open VecSearch in +def linearSearch (v : Vector α n) (x : α) : Prog (VecSearch α) Bool := + linearSearchAux v x false 0 + +end ProgExamples + +end Prog + +end Algorithms + +end Cslib diff --git a/CslibTests/QueryModel/QueryExamples.lean b/CslibTests/QueryModel/QueryExamples.lean new file mode 100644 index 000000000..6d9b11c41 --- /dev/null +++ b/CslibTests/QueryModel/QueryExamples.lean @@ -0,0 +1,77 @@ +/- +Copyright (c) 2025 Shreyas Srinivas. All rights reserved.
+Released under Apache 2.0 license as described in the file LICENSE. +Authors: Shreyas Srinivas +-/ + +module + +public import Cslib.AlgorithmsTheory.QueryModel + + +@[expose] public section + +namespace Cslib + +namespace Algorithms + +section Examples + +/-- +`ListOps` provides an example of a list query type equipped with a `find` query. +The complexity of this query depends on the search algorithm used. This means +we can define two separate models, one for situations where linear search +is used and one for situations where binary search is used. +-/ +inductive ListOps (α : Type u) : Type u → Type _ where + | get (l : List α) (i : Fin l.length) : ListOps α α + | find (l : List α) (elem : α) : ListOps α (ULift ℕ) + | write (l : List α) (i : Fin l.length) (x : α) : ListOps α (List α) + +/-- The typical means of evaluating a `ListOps`. -/ +@[simp] +def ListOps.eval [BEq α] : ListOps α ι → ι + | .write l i x => l.set i x + | .find l elem => .up <| l.findIdx (· == elem) + | .get l i => l[i] + +@[simps] +def ListOps.linSearchWorstCase [DecidableEq α] : Model (ListOps α) ℕ where + evalQuery := ListOps.eval + cost + | .write l _ _ => l.length + | .find l _ => l.length + | .get l _ => l.length + +def ListOps.binSearchWorstCase [BEq α] : Model (ListOps α) ℕ where + evalQuery := ListOps.eval + cost + | .find l _ => 1 + Nat.log 2 (l.length) + | .write l _ _ => l.length + | .get l _ => l.length + +inductive ArrayOps (α : Type u) : Type u → Type _ where + | get (l : Array α) (i : Fin l.size) : ArrayOps α α + | find (l : Array α) (x : α) : ArrayOps α (ULift ℕ) + | write (l : Array α) (i : Fin l.size) (x : α) : ArrayOps α (Array α) + +/-- The typical means of evaluating an `ArrayOps`.
-/ +@[simp] +def ArrayOps.eval [BEq α] : ArrayOps α ι → ι + | .write l i x => l.set i x + | .find l elem => .up <| l.findIdx (· == elem) + | .get l i => l[i] + +@[simps] +def ArrayOps.binSearchWorstCase [BEq α] : Model (ArrayOps α) ℕ where + evalQuery := ArrayOps.eval + cost + | .find l _ => 1 + Nat.log 2 (l.size) + | .write _ _ _ => 1 + | .get _ _ => 1 + +end Examples + +end Algorithms + +end Cslib diff --git a/CslibTests/ReductionSystem.lean b/CslibTests/Reduction.lean similarity index 69% rename from CslibTests/ReductionSystem.lean rename to CslibTests/Reduction.lean index 4f2a71d9c..46755bf87 100644 --- a/CslibTests/ReductionSystem.lean +++ b/CslibTests/Reduction.lean @@ -1,45 +1,42 @@ -import Cslib.Foundations.Semantics.ReductionSystem.Basic +import Cslib.Foundations.Data.Relation namespace CslibTests open Cslib -@[reduction_sys rs "ₙ", simp] +@[reduction_sys "ₙ", grind] def PredReduction (a b : ℕ) : Prop := a = b + 1 lemma single_step : 5 ⭢ₙ 4 := by - change PredReduction _ _ - simp + grind -- `Trans` instances allow us to switch between single and multistep reductions in a `calc` block lemma multiple_step : 5 ↠ₙ 1 := by -- TODO: can/should this be a `simp` attribute somewhere?
- have h : rs.Red = PredReduction := by rfl calc - 5 ⭢ₙ 4 := by simp [h] + 5 ⭢ₙ 4 := by grind _ ↠ₙ 2 := by calc - 4 ↠ₙ 3 := by apply ReductionSystem.MRed.single; simp [h] - _ ⭢ₙ 2 := by simp [h] - _ ⭢ₙ 1 := by simp [h] + 4 ↠ₙ 3 := by grind + _ ⭢ₙ 2 := by grind + _ ⭢ₙ 1 := by grind -- ensure that this still works when there are variables inductive Term (Var : Type) variable {Var : Type} -@[reduction_sys rs' "β", simp] +@[reduction_sys "β", simp] def term_rel : Term Var → Term Var → Prop := fun _ _ ↦ True example (a b : Term Var) : a ⭢β b := by - change (@term_rel Var) a b simp -- check that a "cannonical" notation also works -attribute [reduction_sys cannonical_rs] PredReduction +@[reduction_sys, grind] +def PredReduction' (a b : ℕ) : Prop := a = b + 1 example : 5 ⭢ 4 := by - change PredReduction _ _ - simp + grind --check that namespaces are respected @@ -47,10 +44,6 @@ example : 5 ⭢ 4 := by #guard_msgs in #check CslibTests.PredReduction -/-- info: CslibTests.rs : ReductionSystem ℕ -/ -#guard_msgs in -#check CslibTests.rs - -- check that delaborators work, including with variables /-- info: ∀ (a b : ℕ), a ⭢ₙ b : Prop -/ diff --git a/ORGANISATION.md b/ORGANISATION.md index cbf226802..a2c4edfca 100644 --- a/ORGANISATION.md +++ b/ORGANISATION.md @@ -16,7 +16,6 @@ This document gives an overview of how the codebase is structured, in terms of d - Free. Free monads. - Semantics. Operational semantics (reduction and transition systems), program equivalences, etc. - Lts. - - ReductionSystem. - Bisimilarity. - TraceEq. 
- … diff --git a/lake-manifest.json b/lake-manifest.json index 8820e5427..fda569a0c 100644 --- a/lake-manifest.json +++ b/lake-manifest.json @@ -5,17 +5,17 @@ "type": "git", "subDir": null, "scope": "leanprover-community", - "rev": "32d24245c7a12ded17325299fd41d412022cd3fe", + "rev": "33a7291a345d718bca41f8ae8a9bae365d8f2a3e", "name": "mathlib", "manifestFile": "lake-manifest.json", - "inputRev": "v4.27.0-rc1", + "inputRev": "master", "inherited": false, "configFile": "lakefile.lean"}, {"url": "https://github.com/leanprover-community/plausible", "type": "git", "subDir": null, "scope": "leanprover-community", - "rev": "8d3713f36dda48467eb61f8c1c4db89c49a6251a", + "rev": "a8191ec244102f576a8cd93399cdd8cc489e47cd", "name": "plausible", "manifestFile": "lake-manifest.json", "inputRev": "main", @@ -25,7 +25,7 @@ "type": "git", "subDir": null, "scope": "leanprover-community", - "rev": "19e5f5cc9c21199be466ef99489e3acab370f079", + "rev": "c5d5b8fe6e5158def25cd28eb94e4141ad97c843", "name": "LeanSearchClient", "manifestFile": "lake-manifest.json", "inputRev": "main", @@ -35,7 +35,7 @@ "type": "git", "subDir": null, "scope": "leanprover-community", - "rev": "4eb26e1a4806b200ddfe5179d0c2a0fae56c54a7", + "rev": "7ccd5e026eb2b3581915ff3f0d1cd918e18c2ab9", "name": "importGraph", "manifestFile": "lake-manifest.json", "inputRev": "main", @@ -45,17 +45,17 @@ "type": "git", "subDir": null, "scope": "leanprover-community", - "rev": "ef8377f31b5535430b6753a974d685b0019d0681", + "rev": "06c949a3f4a3b2eb0bd8601e31269b9f4f820aa6", "name": "proofwidgets", "manifestFile": "lake-manifest.json", - "inputRev": "v0.0.84", + "inputRev": "v0.0.88", "inherited": true, "configFile": "lakefile.lean"}, {"url": "https://github.com/leanprover-community/aesop", "type": "git", "subDir": null, "scope": "leanprover-community", - "rev": "fb12f5535c80e40119286d9575c9393562252d21", + "rev": "eb165126bfb2988738792c9ae37e09d58e2fec83", "name": "aesop", "manifestFile": "lake-manifest.json", "inputRev": 
"master", @@ -65,7 +65,7 @@ "type": "git", "subDir": null, "scope": "leanprover-community", - "rev": "523ec6fc8062d2f470fdc8de6f822fe89552b5e6", + "rev": "90217e10b2db6c6c445d30faae7fea453d9782c0", "name": "Qq", "manifestFile": "lake-manifest.json", "inputRev": "master", @@ -75,7 +75,7 @@ "type": "git", "subDir": null, "scope": "leanprover-community", - "rev": "6254bed25866358ce4f841fa5a13b77de04ffbc8", + "rev": "965500ff9171556698ec2714b936739d5da438c2", "name": "batteries", "manifestFile": "lake-manifest.json", "inputRev": "main", @@ -85,10 +85,10 @@ "type": "git", "subDir": null, "scope": "leanprover", - "rev": "726b98c53e2da249c1de768fbbbb5e67bc9cef60", + "rev": "474983579ecce1ca7d8a63e65c7ae0b1a22db6a3", "name": "Cli", "manifestFile": "lake-manifest.json", - "inputRev": "v4.27.0-rc1", + "inputRev": "v4.29.0-rc1", "inherited": true, "configFile": "lakefile.toml"}], "name": "cslib", diff --git a/lakefile.toml b/lakefile.toml index 377c3de5a..81ea3e7d8 100644 --- a/lakefile.toml +++ b/lakefile.toml @@ -16,7 +16,6 @@ weak.linter.allScriptsDocumented = false [[require]] name = "mathlib" scope = "leanprover-community" -rev = "v4.27.0-rc1" [[lean_lib]] name = "Cslib" diff --git a/lean-toolchain b/lean-toolchain index bd19bde0c..c7ad81a70 100644 --- a/lean-toolchain +++ b/lean-toolchain @@ -1 +1 @@ -leanprover/lean4:v4.27.0-rc1 +leanprover/lean4:v4.29.0-rc1 diff --git a/references.bib b/references.bib index 551d6819f..3ad80fe85 100644 --- a/references.bib +++ b/references.bib @@ -1,3 +1,24 @@ +@inproceedings{Aceto1999, + author = {Luca Aceto and + Anna Ing{\'{o}}lfsd{\'{o}}ttir}, + editor = {Wolfgang Thomas}, + title = {Testing Hennessy-Milner Logic with Recursion}, + booktitle = {Foundations of Software Science and Computation Structure, Second + International Conference, FoSSaCS'99, Held as Part of the European + Joint Conferences on the Theory and Practice of Software, ETAPS'99, + Amsterdam, The Netherlands, March 22-28, 1999, Proceedings}, + series = {Lecture 
Notes in Computer Science}, + volume = {1578}, + pages = {41--55}, + publisher = {Springer}, + year = {1999}, + url = {https://doi.org/10.1007/3-540-49019-1\_4}, + doi = {10.1007/3-540-49019-1\_4}, + timestamp = {Tue, 14 May 2019 10:00:55 +0200}, + biburl = {https://dblp.org/rec/conf/fossacs/AcetoI99.bib}, + bibsource = {dblp computer science bibliography, https://dblp.org} +} + @book{Baader1998, author = {Baader, Franz and Nipkow, Tobias}, title = {Term rewriting and all that}, @@ -86,6 +107,22 @@ @inbook{ Girard1995 collection={London Mathematical Society Lecture Note Series} } +@article{ Hennessy1985, + author = {Matthew Hennessy and + Robin Milner}, + title = {Algebraic Laws for Nondeterminism and Concurrency}, + journal = {J. {ACM}}, + volume = {32}, + number = {1}, + pages = {137--161}, + year = {1985}, + url = {https://doi.org/10.1145/2455.2460}, + doi = {10.1145/2455.2460}, + timestamp = {Tue, 06 Nov 2018 12:51:45 +0100}, + biburl = {https://dblp.org/rec/journals/jacm/HennessyM85.bib}, + bibsource = {dblp computer science bibliography, https://dblp.org} +} + @inproceedings{ Kiselyov2015, author = {Kiselyov, Oleg and Ishii, Hiromi}, title = {Freer Monads, More Extensible Effects}, @@ -195,3 +232,25 @@ @incollection{ Thomas1990 publisher = {Elsevier and {MIT} Press}, year = {1990} } + +@book{ Cutland1980, + author = {Cutland, Nigel J.}, + title = {Computability: An Introduction to Recursive Function Theory}, + year = {1980}, + publisher = {Cambridge University Press}, + address = {Cambridge}, + isbn = {978-0-521-29465-2} +} + +@article{ ShepherdsonSturgis1963, + author = {Shepherdson, J. C. and Sturgis, H. 
E.}, + title = {Computability of Recursive Functions}, + journal = {Journal of the ACM}, + volume = {10}, + number = {2}, + year = {1963}, + pages = {217--255}, + doi = {10.1145/321160.321170}, + publisher = {Association for Computing Machinery}, + address = {New York, NY, USA} +} diff --git a/scripts/bench/README.md b/scripts/bench/README.md new file mode 100644 index 000000000..2eca796a4 --- /dev/null +++ b/scripts/bench/README.md @@ -0,0 +1,25 @@ +# Cslib benchmark suite + +This directory contains the cslib benchmark suite. +It is built around [radar](https://github.com/leanprover/radar) +and benchmark results can be viewed +on the [Lean FRO radar instance](https://radar.lean-lang.org/repos/cslib). + +To execute the entire suite, run `scripts/bench/run` in the repo root. +To execute an individual benchmark, run `scripts/bench/<benchmark>/run` in the repo root. +All scripts output their measurements into the file `measurements.jsonl`. + +Radar sums any duplicated measurements with matching metrics. +To post-process the `measurements.jsonl` file this way in-place, +run `scripts/bench/combine.py` in the repo root after executing the benchmark suite. + +The `*.py` symlinks exist only so the Python files are a bit nicer to edit +in text editors that rely on the file extension. + +## Adding a benchmark + +To add a benchmark to the suite, follow these steps: + +1. Create a new folder containing a `run` script and a `README.md` file describing the benchmark, + as well as any other files required for the benchmark. +2. Edit `scripts/bench/run` to call the `run` script of your new benchmark. diff --git a/scripts/bench/build/README.md b/scripts/bench/build/README.md new file mode 100644 index 000000000..5292e925d --- /dev/null +++ b/scripts/bench/build/README.md @@ -0,0 +1,27 @@ +# The `build` benchmark + +This benchmark executes a complete build of cslib and collects global and per-module metrics.
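The radar-style summing described in the benchmark-suite README above can be sketched in a few lines of Python. This is an illustrative sketch, not the repository's actual `combine.py`; the helper name `combine` and the sample metric names are made up here, following the `topic//category` shape the suite emits into `measurements.jsonl`.

```python
import json

def combine(lines):
    """Sum duplicated measurements with matching metrics, the way radar does.

    Each input line is a JSON object with "metric", "value", and an
    optional "unit" key; the first unit seen for a metric is kept.
    """
    values: dict[str, float] = {}
    units: dict[str, str | None] = {}
    for line in lines:
        data = json.loads(line)
        metric = data["metric"]
        values[metric] = values.get(metric, 0) + data["value"]
        if metric not in units:
            units[metric] = data.get("unit")
    out = []
    for metric, value in values.items():
        entry = {"metric": metric, "value": value}
        if units[metric] is not None:
            entry["unit"] = units[metric]
        out.append(json.dumps(entry))
    return out

# Two measurements of the same metric collapse into one summed line.
sample = [
    '{"metric": "build//instructions", "value": 100.0}',
    '{"metric": "build//instructions", "value": 50.0}',
    '{"metric": "build//wall-clock", "value": 1.5, "unit": "s"}',
]
for line in combine(sample):
    print(line)
```

Running `scripts/bench/combine.py` in the repo root applies the same idea to `measurements.jsonl` in place.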
+ +The following metrics are collected by a wrapper around the entire build process: + +- `build//instructions` +- `build//maxrss` +- `build//task-clock` +- `build//wall-clock` + +The following metrics are collected from `lean --profile` and summed across all modules: + +- `build/profile/<name>//wall-clock` + +The following metrics are collected from `lakeprof report`: + +- `build/lakeprof/longest build path//wall-clock` +- `build/lakeprof/longest rebuild path//wall-clock` + +The following metrics are collected individually for each module: + +- `build/module/<module>//lines` +- `build/module/<module>//instructions` + +If the file `build_upload_lakeprof_report` is present in the repo root, +the lakeprof report will be uploaded once the benchmark run concludes. diff --git a/scripts/bench/build/fake-root/bin/lean b/scripts/bench/build/fake-root/bin/lean new file mode 100755 index 000000000..8d2b778e6 --- /dev/null +++ b/scripts/bench/build/fake-root/bin/lean @@ -0,0 +1,94 @@ +#!/usr/bin/env python3 + +import argparse +import json +import re +import subprocess +import sys +from pathlib import Path + +NAME = "build" +REPO = Path() +BENCH = REPO / "scripts" / "bench" +OUTFILE = REPO / "measurements.jsonl" + + +def save_result(metric: str, value: float, unit: str | None = None) -> None: + data = {"metric": metric, "value": value} + if unit is not None: + data["unit"] = unit + with open(OUTFILE, "a+") as f: + f.write(f"{json.dumps(data)}\n") + + +def run(*command: str) -> None: + result = subprocess.run(command) + if result.returncode != 0: + sys.exit(result.returncode) + + +def run_stderr(*command: str) -> str: + result = subprocess.run(command, capture_output=True, encoding="utf-8") + if result.returncode != 0: + print(result.stdout, end="", file=sys.stdout) + print(result.stderr, end="", file=sys.stderr) + sys.exit(result.returncode) + return result.stderr + + +def get_module(setup: Path) -> str: + with open(setup) as f: + return json.load(f)["name"] + + +def count_lines(module: str, path:
Path) -> None: + with open(path) as f: + lines = sum(1 for _ in f) + save_result(f"{NAME}/module/{module}//lines", lines) + + +def run_lean(module: str) -> None: + stderr = run_stderr( + f"{BENCH}/measure.py", + *("-t", f"{NAME}/module/{module}"), + *("-m", "instructions"), + "--", + *("lean", "--profile", "-Dprofiler.threshold=9999999"), + *sys.argv[1:], + ) + + for line in stderr.splitlines(): + # Output of `lean --profile` + # See timeit.cpp for the time format + if match := re.fullmatch(r"\t(.*) ([\d.]+)(m?s)", line): + name = match.group(1) + seconds = float(match.group(2)) + if match.group(3) == "ms": + seconds = seconds / 1000 + save_result(f"{NAME}/profile/{name}//wall-clock", seconds, "s") + + +def main() -> None: + if sys.argv[1:] == ["--print-prefix"]: + print(Path(__file__).resolve().parent.parent) + return + + if sys.argv[1:] == ["--githash"]: + run("lean", "--githash") + return + + parser = argparse.ArgumentParser() + parser.add_argument("lean", type=Path) + parser.add_argument("--setup", type=Path) + args, _ = parser.parse_known_args() + + lean: Path = args.lean + setup: Path = args.setup + + module = get_module(setup) + count_lines(module, lean) + run_lean(module) + + +if __name__ == "__main__": + main() diff --git a/scripts/bench/build/fake-root/bin/lean.py b/scripts/bench/build/fake-root/bin/lean.py new file mode 120000 index 000000000..819298943 --- /dev/null +++ b/scripts/bench/build/fake-root/bin/lean.py @@ -0,0 +1 @@ +lean \ No newline at end of file diff --git a/scripts/bench/build/lakeprof_report_template.html b/scripts/bench/build/lakeprof_report_template.html new file mode 100644 index 000000000..740a655ec --- /dev/null +++ b/scripts/bench/build/lakeprof_report_template.html @@ -0,0 +1,74 @@ + + + + + Lakeprof Report + + +

+<h1>Lakeprof Report</h1>
+<pre>
+__LAKEPROF_REPORT__
+</pre>
+ + + + diff --git a/scripts/bench/build/lakeprof_report_upload.py b/scripts/bench/build/lakeprof_report_upload.py new file mode 100644 index 000000000..e49627b62 --- /dev/null +++ b/scripts/bench/build/lakeprof_report_upload.py @@ -0,0 +1,44 @@ +#!/usr/bin/env python3 + +import json +import subprocess +import sys +from pathlib import Path + + +def run(*args: str) -> None: + subprocess.run(args, check=True) + + +def run_stdout(*command: str, cwd: str | None = None) -> str: + result = subprocess.run(command, capture_output=True, encoding="utf-8", cwd=cwd) + if result.returncode != 0: + print(result.stdout, end="", file=sys.stdout) + print(result.stderr, end="", file=sys.stderr) + sys.exit(result.returncode) + return result.stdout + + +def main() -> None: + script_file = Path(__file__) + template_file = script_file.parent / "lakeprof_report_template.html" + + sha = run_stdout("git", "rev-parse", "@").strip() + base_url = f"https://speed.lean-lang.org/cslib-out/{sha}" + report = run_stdout("lakeprof", "report", "-prc") + with open(template_file) as f: + template = f.read() + + template = template.replace("__BASE_URL__", json.dumps(base_url)) + template = template.replace("__LAKEPROF_REPORT__", report) + + with open("index.html", "w") as f: + f.write(template) + + run("curl", "-T", "index.html", f"{base_url}/index.html") + run("curl", "-T", "lakeprof.log", f"{base_url}/lakeprof.log") + run("curl", "-T", "lakeprof.trace_event", f"{base_url}/lakeprof.trace_event") + + +if __name__ == "__main__": + main() diff --git a/scripts/bench/build/run b/scripts/bench/build/run new file mode 100755 index 000000000..39d34b247 --- /dev/null +++ b/scripts/bench/build/run @@ -0,0 +1,23 @@ +#!/usr/bin/env bash +set -euxo pipefail + +BENCH="scripts/bench" + +# Prepare build +lake exe cache get + +# Run build +LAKE_OVERRIDE_LEAN=true LEAN=$(realpath "$BENCH/build/fake-root/bin/lean") \ + "$BENCH/measure.py" -t build \ + -m instructions -m maxrss -m task-clock -m wall-clock -- \ + lakeprof 
record lake build --no-cache + +# Analyze lakeprof data +lakeprof report -pj | jq -c '{metric: "build/lakeprof/longest build path//wall-clock", value: .[-1][2], unit: "s"}' >> measurements.jsonl +lakeprof report -rj | jq -c '{metric: "build/lakeprof/longest rebuild path//wall-clock", value: .[-1][2], unit: "s"}' >> measurements.jsonl + +# Upload lakeprof report +# Guarded to prevent accidental uploads (which wouldn't work anyways) during local runs. +if [ -f build_upload_lakeprof_report ]; then + python3 "$BENCH/build/lakeprof_report_upload.py" +fi diff --git a/scripts/bench/combine.py b/scripts/bench/combine.py new file mode 100755 index 000000000..2a71f31b9 --- /dev/null +++ b/scripts/bench/combine.py @@ -0,0 +1,31 @@ +#!/usr/bin/env python3 + +import argparse +import json +from pathlib import Path + +OUTFILE = Path() / "measurements.jsonl" + +if __name__ == "__main__": + parser = argparse.ArgumentParser( + description=f"Combine duplicated measurements in {OUTFILE.name} the way radar does, by summing their values." 
+ ) + args = parser.parse_args() + + values: dict[str, float] = {} + units: dict[str, str | None] = {} + + with open(OUTFILE, "r") as f: + for line in f: + data = json.loads(line) + metric = data["metric"] + values[metric] = values.get(metric, 0) + data["value"] + units[metric] = data.get("unit") + + with open(OUTFILE, "w") as f: + for metric, value in values.items(): + unit = units.get(metric) + data = {"metric": metric, "value": value} + if unit is not None: + data["unit"] = unit + f.write(f"{json.dumps(data)}\n") diff --git a/scripts/bench/measure.py b/scripts/bench/measure.py new file mode 100755 index 000000000..072f4cdde --- /dev/null +++ b/scripts/bench/measure.py @@ -0,0 +1,160 @@ +#!/usr/bin/env python3 + +import argparse +import json +import os +import resource +import subprocess +import sys +import tempfile +from dataclasses import dataclass +from pathlib import Path + +OUTFILE = Path() / "measurements.jsonl" + + +@dataclass +class PerfMetric: + event: str + factor: float = 1 + unit: str | None = None + + +@dataclass +class RusageMetric: + name: str + factor: float = 1 + unit: str | None = None + + +PERF_METRICS = { + "task-clock": PerfMetric("task-clock", factor=1e-9, unit="s"), + "wall-clock": PerfMetric("duration_time", factor=1e-9, unit="s"), + "instructions": PerfMetric("instructions"), +} + +PERF_UNITS = { + "msec": 1e-3, + "ns": 1e-9, +} + +RUSAGE_METRICS = { + "maxrss": RusageMetric("ru_maxrss", factor=1000, unit="B"), # KiB on linux +} + +ALL_METRICS = {**PERF_METRICS, **RUSAGE_METRICS} + + +def measure_perf(cmd: list[str], events: list[str]) -> dict[str, tuple[float, str]]: + with tempfile.NamedTemporaryFile() as tmp: + cmd = [ + *["perf", "stat", "-j", "-o", tmp.name], + *[arg for event in events for arg in ["-e", event]], + *["--", *cmd], + ] + + # Execute command + env = os.environ.copy() + env["LC_ALL"] = "C" # or else perf may output syntactically invalid json + result = subprocess.run(cmd, env=env) + if result.returncode != 0: + 
sys.exit(result.returncode) + + # Collect results + perf = {} + for line in tmp: + data = json.loads(line) + if "event" in data and "counter-value" in data: + perf[data["event"]] = float(data["counter-value"]), data["unit"] + + return perf + + +@dataclass +class Result: + category: str + value: float + unit: str | None + + def fmt(self, topic: str) -> str: + metric = f"{topic}//{self.category}" + if self.unit is None: + return json.dumps({"metric": metric, "value": self.value}) + return json.dumps({"metric": metric, "value": self.value, "unit": self.unit}) + + +def measure(cmd: list[str], metrics: list[str]) -> list[Result]: + # Check args + unknown_metrics = [] + for metric in metrics: + if metric not in RUSAGE_METRICS and metric not in PERF_METRICS: + unknown_metrics.append(metric) + if unknown_metrics: + raise Exception(f"unknown metrics: {', '.join(unknown_metrics)}") + + # Prepare perf events + events: list[str] = [] + for metric in metrics: + if info := PERF_METRICS.get(metric): + events.append(info.event) + + # Measure + perf = measure_perf(cmd, events) + rusage = resource.getrusage(resource.RUSAGE_CHILDREN) + + # Extract results + results = [] + for metric in metrics: + if info := PERF_METRICS.get(metric): + if info.event in perf: + value, unit = perf[info.event] + else: + # Without the corresponding permissions, + # we only get access to the userspace versions of the counters. + value, unit = perf[f"{info.event}:u"] + + value *= PERF_UNITS.get(unit, info.factor) + results.append(Result(metric, value, info.unit)) + + if info := RUSAGE_METRICS.get(metric): + value = getattr(rusage, info.name) * info.factor + results.append(Result(metric, value, info.unit)) + + return results + + +if __name__ == "__main__": + parser = argparse.ArgumentParser( + description=f"Measure resource usage of a command using perf and rusage. 
The results are appended to {OUTFILE.name}.", + ) + parser.add_argument( + "-t", + "--topic", + action="append", + default=[], + help="topic prefix for the metrics", + ) + parser.add_argument( + "-m", + "--metric", + action="append", + default=[], + help=f"metrics to measure. Can be specified multiple times. Available metrics: {', '.join(sorted(ALL_METRICS))}", + ) + parser.add_argument( + "cmd", + nargs="*", + help="command to measure the resource usage of", + ) + args = parser.parse_args() + + topics: list[str] = args.topic + metrics: list[str] = args.metric + cmd: list[str] = args.cmd + + results = measure(cmd, metrics) + + with open(OUTFILE, "a+") as f: + for result in results: + for topic in topics: + f.write(f"{result.fmt(topic)}\n") diff --git a/scripts/bench/run b/scripts/bench/run new file mode 100755 index 000000000..71af3550c --- /dev/null +++ b/scripts/bench/run @@ -0,0 +1,10 @@ +#!/usr/bin/env bash +set -euo pipefail + +BENCH="scripts/bench" + +echo "Running benchmark: build" +"$BENCH/build/run" + +echo "Running benchmark: size" +"$BENCH/size/run" diff --git a/scripts/bench/size/README.md b/scripts/bench/size/README.md new file mode 100644 index 000000000..40ace6f14 --- /dev/null +++ b/scripts/bench/size/README.md @@ -0,0 +1,8 @@ +# The `size` benchmark + +This benchmark measures a few deterministic values. 
+
+- `size/.lean//files`
+- `size/.lean//lines`
+- `size/.olean//files`
+- `size/.olean//bytes`
diff --git a/scripts/bench/size/run b/scripts/bench/size/run
new file mode 100755
index 000000000..437671149
--- /dev/null
+++ b/scripts/bench/size/run
@@ -0,0 +1,40 @@
+#!/usr/bin/env python3
+
+import json
+from pathlib import Path
+
+OUTFILE = Path() / "measurements.jsonl"
+
+
+def output_result(metric: str, value: float, unit: str | None = None) -> None:
+    data = {"metric": metric, "value": value}
+    if unit is not None:
+        data["unit"] = unit
+    with open(OUTFILE, "a") as f:
+        f.write(f"{json.dumps(data)}\n")
+
+
+def measure_leans() -> None:
+    lean_files = 0
+    lean_lines = 0
+    for path in Path().glob("Cslib/**/*.lean"):
+        lean_files += 1
+        with open(path) as f:
+            lean_lines += sum(1 for _ in f)
+    output_result("size/.lean//files", lean_files)
+    output_result("size/.lean//lines", lean_lines)
+
+
+def measure_oleans() -> None:
+    olean_files = 0
+    olean_bytes = 0
+    for path in Path().glob(".lake/build/**/*.olean"):
+        olean_files += 1
+        olean_bytes += path.stat().st_size
+    output_result("size/.olean//files", olean_files)
+    output_result("size/.olean//bytes", olean_bytes, "B")
+
+
+if __name__ == "__main__":
+    measure_leans()
+    measure_oleans()
diff --git a/scripts/bench/size/run.py b/scripts/bench/size/run.py
new file mode 120000
index 000000000..e5224d533
--- /dev/null
+++ b/scripts/bench/size/run.py
@@ -0,0 +1 @@
+run
\ No newline at end of file
diff --git a/scripts/create-adaptation-pr.sh b/scripts/create-adaptation-pr.sh
index 11ffa1399..01587009c 100755
--- a/scripts/create-adaptation-pr.sh
+++ b/scripts/create-adaptation-pr.sh
@@ -104,7 +104,7 @@ if git diff --name-only --diff-filter=U | grep -q .; then
   echo "### [auto] Conflict resolution"
   echo "### Automatically choosing 'lean-toolchain' and 'lake-manifest.json' from the newer branch"
   echo "### In this case, the newer branch is 'bump/$BUMPVERSION'"
-  git checkout bump/$BUMPVERSION -- lean-toolchain lake-manifest.json
+  git checkout bump/"$BUMPVERSION" -- lean-toolchain lake-manifest.json
   git add lean-toolchain lake-manifest.json

   # Check if there are more merge conflicts after auto-resolution
@@ -128,9 +128,9 @@ while git diff --name-only --diff-filter=U | grep -q . || ! git diff-index --qui
   echo "We are merging the latest changes from 'origin/main' into 'bump/$BUMPVERSION'"
   echo "There seem to be conflicts or uncommitted files"
   echo ""
-  echo " 1) Open `pwd` in a new terminal and run 'git status'"
+  echo " 1) Open $(pwd) in a new terminal and run 'git status'"
   echo " 2) Make sure to commit the resolved conflicts, but do not push them"
-  read -p " 3) Press enter to continue, when you are done"
+  read -rp " 3) Press enter to continue, when you are done"
 done

 echo "All conflicts resolved and committed."
@@ -141,14 +141,14 @@
 echo
 echo "### [auto] create a new branch 'bump/nightly-$NIGHTLYDATE' and merge the latest changes from 'origin/nightly-testing'"
 git checkout -b "bump/nightly-$NIGHTLYDATE" || git checkout "bump/nightly-$NIGHTLYDATE"
-git merge --no-edit $NIGHTLYSHA || true # ignore error if there are conflicts
+git merge --no-edit "$NIGHTLYSHA" || true # ignore error if there are conflicts

 # Check if there are merge conflicts
 if git diff --name-only --diff-filter=U | grep -q .; then
   echo
   echo "### [auto] Conflict resolution"
   echo "### Automatically choosing 'lean-toolchain', 'lake-manifest.json', and 'lakefile.toml' from 'nightly-testing'"
-  git checkout $NIGHTLYSHA -- lean-toolchain lake-manifest.json lakefile.toml
+  git checkout "$NIGHTLYSHA" -- lean-toolchain lake-manifest.json lakefile.toml
   git add lean-toolchain lake-manifest.json lakefile.toml
 fi
@@ -168,9 +168,9 @@ if git diff --name-only --diff-filter=U | grep -q .; then
   echo "$NIGHTLYSHA"
   echo "There seem to be conflicts: please resolve them"
   echo ""
-  echo " 1) Open `pwd` in a new terminal and run 'git status'"
+  echo " 1) Open $(pwd) in a new terminal and run 'git status'"
   echo " 2) Run 'git add' on the resolved files, but do not commit"
-  read -p " 3) Press enter to continue, when you are done"
+  read -rp " 3) Press enter to continue, when you are done"
 fi

 echo
@@ -185,7 +185,7 @@ git commit --allow-empty -m "$pr_title"
 git push --set-upstream origin "bump/nightly-$NIGHTLYDATE"

 # Check if there is a diff between bump/nightly-$NIGHTLYDATE and bump/$BUMPVERSION
-if git diff --name-only bump/$BUMPVERSION bump/nightly-$NIGHTLYDATE | grep -q .; then
+if git diff --name-only bump/"$BUMPVERSION" bump/nightly-"$NIGHTLYDATE" | grep -q .; then

   echo
   echo "### [auto] create a PR for the new branch"
@@ -193,9 +193,9 @@ if git diff --name-only bump/$BUMPVERSION bump/nightly-$NIGHTLYDATE | grep -q .;
   echo "Running the following 'gh' command to do this:"
   gh_command="gh pr create -t \"$pr_title\" -b '' -B bump/$BUMPVERSION"
   echo "> $gh_command"
-  gh_output=$(eval $gh_command)
+  gh_output=$(eval "$gh_command")
   # Extract the PR number from the output
-  pr_number=$(echo $gh_output | sed 's/.*\/pull\/\([0-9]*\).*/\1/')
+  pr_number=$(echo "$gh_output" | sed 's/.*\/pull\/\([0-9]*\).*/\1/')

   echo
   echo "### [auto] post a link to the PR on Zulip"
@@ -212,14 +212,14 @@ if git diff --name-only bump/$BUMPVERSION bump/nightly-$NIGHTLYDATE | grep -q .;
     zulip_command="zulip-send --stream nightly-testing --subject \"$zulip_title\" --message \"$zulip_body\""
     echo "Running the following 'zulip-send' command to do this:"
     echo "> $zulip_command"
-    eval $zulip_command
+    eval "$zulip_command"
   else
     echo "Zulip CLI is not installed. Install it to send messages automatically."
     if [ "$AUTO" = "yes" ]; then
       exit 1
     else
       echo "Please send the message manually."
-      read -p "Press enter to continue"
+      read -rp "Press enter to continue"
     fi
   fi
@@ -245,7 +245,7 @@ if git diff --name-only --diff-filter=U | grep -q .; then
   echo "### [auto] Conflict resolution"
   echo "### Automatically choosing lean-toolchain, lake-manifest.json, and lakefile.toml from the newer branch"
   echo "### In this case, the newer branch is 'bump/nightly-$NIGHTLYDATE'"
-  git checkout bump/nightly-$NIGHTLYDATE -- lean-toolchain lake-manifest.json lakefile.toml
+  git checkout bump/nightly-"$NIGHTLYDATE" -- lean-toolchain lake-manifest.json lakefile.toml
   git add lean-toolchain lake-manifest.json lakefile.toml

   # Check if there are more merge conflicts after auto-resolution
@@ -268,12 +268,12 @@ fi
 while git diff --name-only --diff-filter=U | grep -q . || ! git diff-index --quiet HEAD --; do
   echo
   echo "### [user] Conflict resolution"
-  echo "We are merging the new PR "bump/nightly-$NIGHTLYDATE" into 'nightly-testing'"
+  echo "We are merging the new PR bump/nightly-$NIGHTLYDATE into 'nightly-testing'"
   echo "There seem to be conflicts or uncommitted files"
   echo ""
-  echo " 1) Open `pwd` in a new terminal and run 'git status'"
+  echo " 1) Open $(pwd) in a new terminal and run 'git status'"
   echo " 2) Make sure to commit the resolved conflicts, but do not push them"
-  read -p " 3) Press enter to continue, when you are done"
+  read -rp " 3) Press enter to continue, when you are done"
 done

 echo "All conflicts resolved and committed."
@@ -283,7 +283,7 @@ git push

 echo
 echo "### [auto] finished: checkout the original branch"
-git checkout $usr_branch
+git checkout "$usr_branch"

 # These last two lines are needed to make the script robust against changes on disk
 # that might have happened during the script execution, e.g. from switching branches
diff --git a/scripts/noshake.json b/scripts/noshake.json
index 50b475be7..64211232d 100644
--- a/scripts/noshake.json
+++ b/scripts/noshake.json
@@ -2,7 +2,6 @@
   "ignoreImport" : ["Cslib.Init", "Init", "Mathlib.Tactic.Bound"],
   "ignore":{
     "Cslib.Languages.LambdaCalculus.LocallyNameless.Fsub.Basic": ["Cslib.Foundations.Data.HasFresh"],
-    "Cslib.Languages.LambdaCalculus.LocallyNameless.Untyped.Basic": ["Cslib.Foundations.Data.HasFresh"],
-    "Cslib.Foundations.Semantics.ReductionSystem.Basic": ["Mathlib.Util.Notation3"]
+    "Cslib.Languages.LambdaCalculus.LocallyNameless.Untyped.Basic": ["Cslib.Foundations.Data.HasFresh"]
   }
 }