This service provides waste collection dates for properties in Swindon, UK. It scrapes the Swindon Borough Council website to get the latest collection information.
The API endpoint is `/[UPRN]/[format]`.

- `UPRN`: The Unique Property Reference Number for your address.
- `format`: The output format. Can be `json` (default) or `ics` (for iCalendar).

Optional parameters:

- `?debug=yes`: Enable debug logging.
- `?icons=yes`: Include icon data in the JSON output.
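As an illustration, a request URL for this endpoint can be assembled like so. This is a sketch: the base URL and the UPRN value are placeholders, not addresses from this project.

```go
package main

import (
	"fmt"
	"net/url"
)

// buildRequestURL assembles a URL for the /[UPRN]/[format] endpoint,
// appending the optional debug and icons query parameters.
// The base URL passed in is a hypothetical placeholder.
func buildRequestURL(base, uprn, format string, debug, icons bool) string {
	u, _ := url.Parse(base)
	u.Path = "/" + uprn + "/" + format
	q := url.Values{}
	if debug {
		q.Set("debug", "yes")
	}
	if icons {
		q.Set("icons", "yes")
	}
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	// "12345" is a placeholder UPRN for illustration only.
	fmt.Println(buildRequestURL("https://example.com", "12345", "json", false, true))
	// → https://example.com/12345/json?icons=yes
}
```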
To improve performance and reduce the load on the Swindon Borough Council website, this service implements a caching mechanism.
When running locally (`APP_ENV=development`), the service uses a local SQLite database (`sbcwaste.db`) for caching. The database file is created automatically in the root of the project directory.
When deployed to the Google Cloud environment, the service uses Firestore for caching. The application will automatically create and use a collection named `sbcwaste_cache` in your project's default Firestore database.
The cache expiry time is configurable. By default, it is set to 3 days (259200 seconds). You can change this value by modifying the `CACHE_EXPIRY_SECONDS` input in the GitHub Actions workflow when running it manually.
Local (SQLite):
To clear the local cache, simply delete the `sbcwaste.db` file from the project directory.
Cloud (Firestore):
To clear the Firestore cache in the cloud environment, delete the documents in the `sbcwaste_cache` collection using the Google Cloud Console:
- Navigate to the Firestore page in the Google Cloud Console.
- Select your project; you should see the `sbcwaste_cache` collection.
- You can manually delete individual documents (cached items), or delete the entire collection by clicking the three dots next to the collection name and selecting "Delete collection".
- Install dependencies: `go mod tidy`
- Run the application: `go run ./src`
- Run tests: `go test ./...`
- The application uses a local SQLite database (`sbcwaste.db`) for caching in development. This file is git-ignored.
- The compiled binary (`sbcwaste`) is also git-ignored.
- The application uses `chromedp` for web scraping. You may need to have Chromium installed. On Ubuntu, you can use `sudo apt install -y chromium-browser`.
- The application can be containerised using the provided `Dockerfile`.
- The CI/CD pipeline is defined in `.github/workflows/google-cloudrun-docker.yml`.
To add a new council, you will need to:
- Create a new scraper function for the council's website.
- Implement the `Council` interface for the new council.
- Add the new council to the `CouncilFactory` so that it can be selected based on postcode or another identifier.
- Add tests for the new council's scraper.
- Update the documentation to include the new council.
- Consider using a headless browser library like `chromedp` for interactive web scraping.
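The steps above can be sketched in code. The method set on `Council` and the shape of `CouncilFactory` below are assumptions for illustration; the repository's actual definitions may differ, and the Swindon implementation here is a stub rather than the real scraper.

```go
package main

import "fmt"

// Council abstracts a single council's scraper. The method set is an
// assumption for illustration; the real interface may differ.
type Council interface {
	Name() string
	Collections(uprn string) ([]string, error)
}

// swindon is a stub standing in for the real chromedp-based scraper.
type swindon struct{}

func (swindon) Name() string { return "Swindon Borough Council" }

func (swindon) Collections(uprn string) ([]string, error) {
	// A real implementation would scrape the council website and
	// return parsed collection dates; this stub returns nothing.
	return []string{}, nil
}

// CouncilFactory returns the Council registered under the given key,
// e.g. a postcode prefix or council slug. A new council is supported
// by adding its implementation to this registry.
func CouncilFactory(key string) (Council, error) {
	registry := map[string]Council{
		"swindon": swindon{},
	}
	c, ok := registry[key]
	if !ok {
		return nil, fmt.Errorf("no council registered for %q", key)
	}
	return c, nil
}

func main() {
	c, err := CouncilFactory("swindon")
	if err != nil {
		panic(err)
	}
	fmt.Println(c.Name()) // → Swindon Borough Council
}
```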
Remember to add tests for any new functionality to prevent regressions.