Commit e50643a
1 parent 48494b6

feat: rename project to OcotilloAPI and update CLI command references

3 files changed
Lines changed: 53 additions & 13 deletions

README.md
Lines changed: 39 additions & 3 deletions

@@ -1,4 +1,4 @@
-# NMSampleLocations
+# NMSampleLocations aka OcotilloAPI
 
 [![Code Format](https://github.com/DataIntegrationGroup/NMSampleLocations/actions/workflows/format_code.yml/badge.svg)](https://github.com/DataIntegrationGroup/NMSampleLocations/actions/workflows/format_code.yml)
 [![Dependabot Updates](https://github.com/DataIntegrationGroup/NMSampleLocations/actions/workflows/dependabot/dependabot-updates/badge.svg)](https://github.com/DataIntegrationGroup/NMSampleLocations/actions/workflows/dependabot/dependabot-updates)
@@ -9,7 +9,8 @@
 **Geospatial Sample Data Management System**
 _New Mexico Bureau of Geology and Mineral Resources_
 
-NMSampleLocations is a FastAPI-based backend service designed to manage geospatial sample location data across New Mexico. It supports research, field operations, and public data delivery for the Bureau of Geology and Mineral Resources.
+OcotilloAPI is a FastAPI-based backend service designed to manage geospatial sample location data across New Mexico. It
+supports research, field operations, and public data delivery for the Bureau of Geology and Mineral Resources.
 
 ---
 
@@ -197,4 +198,39 @@ Notes:
 - All `Update` schema fields are optional and default to `None`
 - All `Response` schema fields are defined as `<type>` if non-nullable and `<type> | None` if nullable
 - All raised exceptions should use the `PydanticStyleException` as defined in `services/exceptions_helper.py`
-- Errors handled by the database should be enumerated and handled in a database_error_handler in each router's file
+- Errors handled by the database should be enumerated and handled in a database_error_handler in each router's file---
+
+## 📦 Ocotillo CLI
+
+The `oco` command exposes project automation and bulk data utilities.
+
+```bash
+# Display available commands
+oco --help
+
+# Bulk import water level data from a CSV
+oco water-levels bulk-upload --file water_levels.csv --output json
+```
+
+The bulk upload command parses and validates each row, creates the corresponding field events/samples/observations, and prints a JSON summary (matching the API response shape) so the workflow can be automated or scripted.
+## 🧪 Testing
+
+```bash
+# Run unit tests
+pytest
+
+# Run Behave BDD specs
+behave tests/features
+```
+
+> Tests require a local Postgres/PostGIS instance. Set `POSTGRES_*` values in `.env`, run migrations, and ensure the database is reachable before running the suites.
+
+## 🔄 Data Transfers
+
+Legacy or staging datasets can be imported using the transfer utilities:
+
+```bash
+python -m transfers.transfer
+```
+
+Configure the `.env` file with the appropriate credentials before running transfers.
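The bulk-upload workflow added to the README can be driven from a script. A minimal sketch of building an input CSV, assuming the column names used in `tests/features/water-level-csv.feature` (the authoritative schema lives in the CLI itself, not in this diff):

```python
# Hypothetical input CSV for `oco water-levels bulk-upload`.
# Column names are taken from tests/features/water-level-csv.feature;
# the CLI's real schema may include additional or different fields.
import csv
import io

ROWS = [
    {
        "well_name_point_id": "WL-0001",                 # required identifier
        "measurement_date_time": "2024-06-01T09:30:00",  # ISO 8601 timestamp
        "depth_to_water_ft": "42.7",                     # numeric
        "mp_height": "1.5",                              # numeric
        "water_level_notes": "routine reading",          # free text
    },
]


def build_csv(rows):
    """Serialize rows to CSV text with a header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()


print(build_csv(ROWS))
```

Writing the result to `water_levels.csv` and passing it via `--file` would match the invocation shown in the README.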

pyproject.toml
Lines changed: 6 additions & 2 deletions

@@ -1,7 +1,7 @@
 [project]
-name = "nmsamplelocations"
+name = "OcotilloAPI"
 version = "0.1.0"
-description = "Add your description here"
+description = "FastAPI backend and CLI for managing Ocotillo groundwater locations, wells, assets, and bulk water-level data transfers."
 readme = "README.md"
 requires-python = ">=3.13"
 dependencies = [
@@ -99,6 +99,10 @@ dependencies = [
     "uvicorn==0.38.0",
     "yarl==1.20.1",
 ]
+
+[project.scripts]
+oco = "cli.cli:cli"
+
 [tool.alembic]
 
 # path to migration scripts.
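The new `[project.scripts]` entry resolves the `oco` command to a callable named `cli` in `cli/cli.py`. That module is not part of this diff; a stdlib-only sketch of a compatible command tree (the real project may well use Click or Typer instead of argparse) could look like:

```python
# Hypothetical stand-in for cli/cli.py, matching the entry point
# `oco = "cli.cli:cli"` and the `oco water-levels bulk-upload` usage
# shown in the README. Illustrative only; not the project's actual code.
import argparse


def build_parser():
    parser = argparse.ArgumentParser(prog="oco")
    groups = parser.add_subparsers(dest="group", required=True)

    # `oco water-levels ...` command group
    water = groups.add_parser("water-levels", help="Water level utilities")
    commands = water.add_subparsers(dest="command", required=True)

    # `oco water-levels bulk-upload --file ... --output ...`
    upload = commands.add_parser("bulk-upload", help="Bulk import from CSV")
    upload.add_argument("--file", required=True, help="Path to the input CSV")
    upload.add_argument("--output", default="text", choices=["text", "json"])
    return parser


def cli(argv=None):
    """Entry point referenced by [project.scripts] as cli.cli:cli."""
    args = build_parser().parse_args(argv)
    return args
```

After `pip install .`, the console script machinery generates an `oco` executable that imports and calls this function.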

tests/features/water-level-csv.feature
Lines changed: 8 additions & 8 deletions

@@ -44,7 +44,7 @@ Feature: Bulk upload water level entries from CSV via CLI
       | water_level_notes |
     When I run the CLI command:
       """
-      bdms water-levels bulk-upload --file ./water_levels.csv --output json
+      oco water-levels bulk-upload --file ./water_levels.csv --output json
       """
     Then the command exits with code 0
     And stdout should be valid JSON
@@ -71,7 +71,7 @@ Feature: Bulk upload water level entries from CSV via CLI
       | data_quality |
     When I run the CLI command:
       """
-      bdms water-levels bulk-upload --file ./water_levels.csv
+      oco water-levels bulk-upload --file ./water_levels.csv
       """
     Then the command exits with code 0
     And all water level entries are imported
@@ -82,7 +82,7 @@ Feature: Bulk upload water level entries from CSV via CLI
     Given my CSV file contains extra columns but is otherwise valid
     When I run the CLI command:
       """
-      bdms water-levels bulk-upload --file ./water_levels.csv
+      oco water-levels bulk-upload --file ./water_levels.csv
       """
     Then the command exits with code 0
     And all water level entries are imported
@@ -97,7 +97,7 @@ Feature: Bulk upload water level entries from CSV via CLI
     Given my CSV file contains 3 rows of data with 2 valid rows and 1 row missing the required "well_name_point_id"
     When I run the CLI command:
       """
-      bdms water-levels bulk-upload --file ./water_levels.csv
+      oco water-levels bulk-upload --file ./water_levels.csv
       """
     Then the command exits with a non-zero exit code
     And stderr should contain a validation error for the row missing "well_name_point_id"
@@ -108,7 +108,7 @@ Feature: Bulk upload water level entries from CSV via CLI
     Given my CSV file contains a row missing the required "<required_field>" field
     When I run the CLI command:
      """
-      bdms water-levels bulk-upload --file ./water_levels.csv
+      oco water-levels bulk-upload --file ./water_levels.csv
       """
     Then the command exits with a non-zero exit code
     And stderr should contain a validation error for the "<required_field>" field
@@ -130,7 +130,7 @@ Feature: Bulk upload water level entries from CSV via CLI
     Given my CSV file contains invalid ISO 8601 date values in the "measurement_date_time" field
     When I run the CLI command:
       """
-      bdms water-levels bulk-upload --file ./water_levels.csv
+      oco water-levels bulk-upload --file ./water_levels.csv
       """
     Then the command exits with a non-zero exit code
     And stderr should contain validation errors identifying the invalid field and row
@@ -141,7 +141,7 @@ Feature: Bulk upload water level entries from CSV via CLI
     Given my CSV file contains values that cannot be parsed as numeric in numeric-required fields such as "mp_height" or "depth_to_water_ft"
     When I run the CLI command:
       """
-      bdms water-levels bulk-upload --file ./water_levels.csv
+      oco water-levels bulk-upload --file ./water_levels.csv
      """
     Then the command exits with a non-zero exit code
     And stderr should contain validation errors identifying the invalid field and row
@@ -152,7 +152,7 @@ Feature: Bulk upload water level entries from CSV via CLI
     Given my CSV file contains invalid lexicon values for "sampler", "sample_method", "level_status", or "data_quality"
     When I run the CLI command:
       """
-      bdms water-levels bulk-upload --file ./water_levels.csv
+      oco water-levels bulk-upload --file ./water_levels.csv
       """
     Then the command exits with a non-zero exit code
    And stderr should contain validation errors identifying the invalid field and row
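The scenarios above exercise three classes of row validation: missing required fields, invalid ISO 8601 dates, and non-numeric values. A rough stdlib sketch of per-row checks along those lines (field names come from the feature file; the CLI's actual validator and its lexicon checks are not shown in this diff):

```python
# Illustrative per-row validation matching the feature's scenarios.
# Field lists are assumptions drawn from the feature file, not the
# project's real schema.
from datetime import datetime

REQUIRED_FIELDS = ("well_name_point_id", "measurement_date_time", "depth_to_water_ft")
NUMERIC_FIELDS = ("mp_height", "depth_to_water_ft")


def validate_row(row, line_no):
    """Return a list of validation error strings for one CSV row."""
    errors = []
    # Required fields must be present and non-empty.
    for field in REQUIRED_FIELDS:
        if not row.get(field):
            errors.append(f"row {line_no}: missing required field '{field}'")
    # Timestamps must parse as ISO 8601.
    value = row.get("measurement_date_time")
    if value:
        try:
            datetime.fromisoformat(value)
        except ValueError:
            errors.append(f"row {line_no}: invalid ISO 8601 value in 'measurement_date_time'")
    # Numeric fields must parse as floats.
    for field in NUMERIC_FIELDS:
        value = row.get(field)
        if value:
            try:
                float(value)
            except ValueError:
                errors.append(f"row {line_no}: non-numeric value in '{field}'")
    return errors
```

A bulk uploader would call this for each parsed CSV row, exiting non-zero and writing the collected errors to stderr when any row fails, which is the behavior the scenarios assert.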
