← Back to Usage | Workflow → | Examples →
- Overview
- Output Formats
- Saving Output
- Output Directory Structure
- Format Selection Guide
- Practical Examples
- Troubleshooting
## Overview

The ModelChecker provides flexible output options to suit different workflows, from quick terminal checks to comprehensive documentation generation. Output can be displayed in the terminal or saved in multiple formats for further analysis, reporting, or interactive exploration.

**Architecture Context**: For the complete output generation system design, see Output Architecture. For how outputs fit into the overall pipeline, see Pipeline Architecture.
## Output Formats

### Terminal Output

Default output is displayed in the console with color-coding and formatting:

```bash
# Basic terminal output
model-checker examples/test.py
```

**Terminal Output Features:**

- Color-coded results (VALID/INVALID indicators)
- Formatted model displays with truth tables
- Progress indicators for long-running operations
- Inline countermodel presentation
### JSON Format

Machine-readable format for programmatic processing:

```bash
# Save as JSON
model-checker examples/test.py --save json
```

**JSON Structure:**

```json
{
  "example": "MODUS_PONENS",
  "theory": "logos",
  "has_model": false,
  "evaluation_world": "w0",
  "states": [
    {"state": "s0", "possible": true, "world": true}
  ],
  "relations": {
    "part_of": [["s0", "s1"]],
    "fusion": [["s0", "s1", "s2"]]
  },
  "propositions": {
    "A": {"verifiers": ["s0"], "falsifiers": []},
    "B": {"verifiers": ["s1"], "falsifiers": []}
  },
  "output": "Example MODUS_PONENS: there is no countermodel"
}
```

**Use Cases:**

- Automated testing and CI/CD pipelines
- Data analysis and visualization
- Integration with other tools
- Archival and version control
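Because the JSON structure above is stable, saved results can be checked programmatically, for example in a CI step. A minimal sketch (the helper name and file path are illustrative, not part of ModelChecker):

```python
import json
from pathlib import Path

def inference_is_valid(path: str) -> bool:
    """Read a saved JSON result and report whether the inference is
    valid, i.e. whether no countermodel was found (has_model false)."""
    data = json.loads(Path(path).read_text())
    return not data["has_model"]

# Example: fail a CI run as soon as a countermodel appears
# assert inference_is_valid("output/logos_examples/MODUS_PONENS/result.json")
```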
### Markdown Format

Human-readable documentation format:

```bash
# Save as Markdown
model-checker examples/test.py --save markdown
```

**Markdown Output Example:**

```markdown
# Example: MODUS_PONENS

## Settings
- Theory: logos
- N: 3
- Max Time: 30s

## Premises
1. A
2. A → B

## Conclusions
1. B

## Result: VALID

The inference is valid. No countermodel exists.
```

**Use Cases:**

- Documentation and reports
- GitHub/GitLab integration
- Academic papers and presentations
- Team communication
### Jupyter Notebook

Interactive notebook for exploration and teaching:

```bash
# Generate Jupyter notebook
model-checker examples/test.py --save notebook
```

**Notebook Features:**

- Re-runnable code cells
- Inline explanations and markdown
- Interactive model exploration
- Visualization capabilities

**Architecture Reference**: For the complete Jupyter integration design, see Jupyter Architecture.

**Generated Notebook Structure:**

- Setup and imports
- Theory configuration
- Example definitions
- Execution and results
- Analysis and exploration cells
## Saving Output

### Command-Line Options

```bash
# Core save options
--save              # Interactive mode - prompts for format
--save json         # Save as JSON
--save markdown     # Save as Markdown
--save notebook     # Generate Jupyter notebook
--save              # Save in all formats

# Additional options available through example file settings
# See SETTINGS.md for configuration options
```

### Interactive Save Mode

The Interactive Save Mode provides fine-grained control over which model checking results are saved, allowing users to selectively save models and iterate through additional solutions interactively.
```bash
# Sequential save mode - prompts after each model
model-checker --save --sequential examples/my_logic.py
model-checker --save -q examples/my_logic.py  # Short form

# Save specific formats sequentially
model-checker --save markdown json --sequential examples/my_logic.py
```

**Interactive Prompts:**

1. **Mode Selection** (if `-q` not specified):
   ```
   Select save mode:
   1) batch - Save all at end
   2) sequential - Prompt after each model
   ```
2. **Model Save Prompt** (after each model):
   ```
   Save model for EXAMPLE_NAME? (Y/n):
   ```
3. **Iteration Prompt** (after saving decision):
   ```
   Find additional models? (y/N):
   ```
4. **Directory Navigation** (at completion):
   ```
   All models saved to: /path/to/output_20250804_123456
   Change to output directory? (y/N):
   ```
In interactive mode, outputs are organized by example:

```
output_YYYYMMDD_HHMMSS/
├── EXAMPLE_1/
│   ├── MODEL_1.md       # Formatted model with colors
│   ├── MODEL_1.json     # Structured data
│   ├── MODEL_2.md       # Second model (if saved)
│   └── MODEL_2.json
├── EXAMPLE_2/
│   ├── MODEL_1.md
│   └── MODEL_1.json
├── summary.json         # Interactive session summary
└── MODELS.json          # All models in single file
```
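A session's output directory can be inspected programmatically. This sketch (a hypothetical helper, assuming the layout above) counts how many models were saved per example:

```python
from pathlib import Path

def count_saved_models(output_dir: str) -> dict:
    """Count MODEL_*.json files in each example subdirectory of an
    interactive-mode output directory (layout shown above)."""
    counts = {}
    for example_dir in sorted(Path(output_dir).iterdir()):
        if example_dir.is_dir():
            counts[example_dir.name] = len(list(example_dir.glob("MODEL_*.json")))
    return counts
```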
### Batch Processing

Save results from multiple examples efficiently:

```bash
# Save all theory examples
model-checker theory_lib/logos/examples.py --save markdown

# Process multiple files
for file in examples/*.py; do
    model-checker "$file" --save json
done
```
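The same loop can be driven from Python when you need the command lines for logging or a job queue. `batch_save_commands` is a hypothetical helper, not part of ModelChecker:

```python
import glob

def batch_save_commands(pattern, formats):
    """Build a model-checker invocation (as an argument list suitable
    for subprocess.run) for every file matching the glob pattern."""
    return [["model-checker", path, "--save", *formats]
            for path in sorted(glob.glob(pattern))]

# Dispatch with e.g. subprocess.run(cmd, check=True) for each cmd in:
# batch_save_commands("examples/*.py", ["json"])
```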
```bash
# Save with pattern matching - all formats
for file in examples/*_test.py; do
    model-checker "$file" --save
done
```

## Output Directory Structure

The ModelChecker organizes saved output hierarchically:
```
output/                          # Default output directory
├── [theory]_[module]/           # Theory and module name
│   ├── [EXAMPLE_NAME]/          # Individual example directory
│   │   ├── summary.md           # Markdown summary
│   │   ├── result.json          # JSON data
│   │   ├── MODEL_1.json         # First model (if multiple)
│   │   ├── MODEL_2.json         # Second model
│   │   └── countermodel.json    # Countermodel (if invalid)
│   └── README.md                # Summary of all examples
├── notebooks/                   # Generated notebooks
│   └── [module]_notebook.ipynb  # Interactive notebook
└── reports/                     # Custom reports
    └── comparison_report.md     # Theory comparison results
```
See Interactive Save Mode section above for the specialized directory structure used in sequential save mode.
### Custom Output Settings

```python
# In your example file, define custom output settings
OUTPUT_SETTINGS = {
    "output_dir": "my_results",
    "organize_by": "date",       # or "theory", "validity"
    "include_metadata": True,
    "compress": True,            # Create .zip archive
}
```

## Format Selection Guide

Choose the appropriate format based on your use case:
| Use Case | Recommended Format | Command |
|---|---|---|
| Quick validation check | Terminal | `model-checker file.py` |
| Automated testing | JSON | `--save json` |
| Documentation | Markdown | `--save markdown` |
| Interactive exploration | Notebook | `--save notebook` |
| Complete archive | All | `--save` |
| CI/CD pipeline | JSON | `--save json` |
| Academic paper | Markdown + JSON | `--save markdown json` |
## Practical Examples

### Counterfactual Logic Output

Here's an example of counterfactual logic evaluation output:

```
========================================
EXAMPLE CF_CM_7: there is a countermodel.

Atomic States: 4
Semantic Theory: Brast-McKie

Premise:
1. (A \boxright B)

Conclusion:
2. (\neg B \boxright \neg A)

Z3 Run Time: 0.0131 seconds

========================================
State Space:
  #b0111 = a.b.c (world)
  #b1011 = a.b.d (world)
  #b1101 = a.c.d (world)
  #b1110 = b.c.d (world)

The evaluation world is: a.c.d

INTERPRETED PREMISE:
1. |(A \boxright B)| = < {a.c.d}, {a.b.c, a.b.d, b.c.d} >  (True in a.c.d)
   |A| = < {c.d}, {a.b} >  (True in a.c.d)
   |A|-alternatives to a.c.d = {a.c.d}
   |B| = < {a.c.d}, {a.b, a.b.c, a.b.c.d, a.b.d, b.d} >  (True in a.c.d)

INTERPRETED CONCLUSION:
2. |(\neg B \boxright \neg A)| = < {a.b.d}, {a.b.c, a.c.d, b.c.d} >  (False in a.c.d)
   |\neg B| = < {a.b, a.b.c, a.b.c.d, a.b.d, b.d}, {a.c.d} >  (False in a.c.d)
   |\neg B|-alternatives to a.c.d = {a.b.c, a.b.d, b.c.d}
   |\neg A| = < {a.b}, {c.d} >  (False in b.c.d)

Total Run Time: 0.4047 seconds
========================================
```
Key output components for counterfactual logic:

- **State Space**: Possible worlds in the model
- **Evaluation World**: Where the argument is evaluated
- **Counterfactual Alternatives**: Nearest possible worlds for counterfactual conditionals
- **Verifiers/Falsifiers**: States that make propositions true or false
- **Truth Values**: Color-coded indicators (Green=True, Red=False)
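In the sample above, each `#b…` state is an SMT-LIB binary literal: reading the bits right to left, bit *i* marks whether the *i*-th atomic state (a, b, c, d) is part of the fusion. A hypothetical decoder illustrating the notation (not a ModelChecker function):

```python
import string

def decode_state(bitvec):
    """Turn a bitvector like '#b0111' into a fusion such as 'a.b.c'.
    Bit i (counting from the right) includes the i-th atomic state."""
    bits = bitvec[2:]  # strip the '#b' prefix
    atoms = [string.ascii_lowercase[i]
             for i, bit in enumerate(reversed(bits)) if bit == "1"]
    return ".".join(atoms)

print(decode_state("#b1101"))  # a.c.d, the evaluation world above
```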
### Documentation Generation

```bash
# Generate comprehensive documentation
model-checker theory_lib/logos/examples.py \
    --save markdown \
    --verbose \
    --output-dir docs/validation/

# Results in:
# docs/validation/logos_examples/
# ├── EXT_TH_1/summary.md
# ├── MODAL_TH_1/summary.md
# └── README.md
```

### Automated Test Pipeline

```bash
#!/bin/bash
# test_pipeline.sh

OUTPUT_DIR="test_results/$(date +%Y%m%d_%H%M%S)"
mkdir -p "$OUTPUT_DIR"

# Run tests and save JSON results
for theory in logos exclusion imposition; do
    model-checker "theory_lib/$theory/examples.py" \
        --save json \
        --output-dir "$OUTPUT_DIR/$theory"
done

# Generate summary report
python analyze_results.py "$OUTPUT_DIR" > "$OUTPUT_DIR/summary.md"
```
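The `analyze_results.py` script invoked above is not shipped with ModelChecker; a minimal sketch of such a script, tallying the saved JSON results (it assumes the `example` and `has_model` keys from the JSON format described earlier):

```python
import json
import sys
from pathlib import Path

def summarize(results_dir):
    """Render a small Markdown summary of every saved *.json result
    found under results_dir."""
    lines = ["# Test Summary", ""]
    for path in sorted(Path(results_dir).rglob("*.json")):
        data = json.loads(path.read_text())
        if "example" not in data:
            continue  # skip aggregate files such as summary.json
        verdict = "countermodel found" if data.get("has_model") else "valid"
        lines.append(f"- {data['example']}: {verdict}")
    return "\n".join(lines)

if __name__ == "__main__" and len(sys.argv) > 1:
    print(summarize(sys.argv[1]))
```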
### Teaching Notebooks

```bash
# Generate notebook for teaching
model-checker examples/logic_basics.py \
    --save notebook \
    --output-dir workshop_materials/

# Launch Jupyter
jupyter notebook workshop_materials/notebooks/logic_basics_notebook.ipynb
```

### Debugging Session

```bash
# Save everything for detailed analysis
model-checker problematic_example.py \
    --save \
    --print_constraints \
    --print_z3

# Examine results
ls -la output/problematic_example/
cat output/problematic_example/combined_output.md
```

## Troubleshooting

**Issue**: Output directory permission denied
```bash
# Solution: Use a different directory or fix permissions
model-checker file.py --save json --output-dir ~/Documents/results/
# OR
sudo chmod 755 output/
```

**Issue**: Large output files

```bash
# Solution: Use JSON for large datasets, compress output
model-checker large_test.py --save json
gzip output/theory_module/*.json
```

**Issue**: Overwriting existing files

```bash
# Solution: Remove existing output or use different filenames
rm -rf output/
model-checker file.py --save
# OR use different example names to avoid conflicts
```

**Issue**: Notebook generation fails

```bash
# Solution: Ensure Jupyter is installed
pip install jupyter ipywidgets
model-checker file.py --save notebook
```

### Performance Tips

- **Use JSON for large datasets** - Most efficient format
- **Batch similar operations** - Process multiple files together
- **Disable terminal output when saving** - Use `--no-terminal`
- **Compress archived results** - Use `gzip` or `zip` for storage
## See Also

- Workflow Guide - Complete ModelChecker workflows
- Examples Guide - Writing example files
- Architecture Documentation - Output system design
- API Reference - Output module API