```shell
pip install -r requirements.txt
crawl4ai-setup
npm install git+https://github.com/arnav-exe/amazon-product-api.git#7a2d602
```

- create a `.env` file in the root directory with these two keys: `BESTBUY_API_KEY`, `NTFY_TOPIC_URL` (see `.env.example` for formats)

```shell
sudo apt update
sudo apt-get install xvfb
```
- for each product:
  - for each identifier inside a product:
    - if we have a matching data source for that particular identifier, run `fetch_product()`, which returns data as a `Product` object
    - check the in-stock and sale keys against the user's specification
    - if any condition is met, fire the appropriate ntfy notification
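The loop above can be sketched as follows. This is an illustrative assumption, not the actual implementation: the `data_sources` registry, `user_prefs` keys, and the `notify` callback are hypothetical names, and `Product` is trimmed to the fields the loop needs.

```python
# Illustrative sketch of the monitoring loop described above.
# The registry/callback names are assumptions, not the real code.
from dataclasses import dataclass


@dataclass
class Product:  # trimmed version of the internal representation
    identifier: str
    product_name: str
    in_stock: bool
    on_sale: bool


def check_products(products, data_sources, user_prefs, notify):
    for product in products:
        for identifier in product["identifiers"]:
            source = data_sources.get(identifier["retailer"])
            if source is None:
                continue  # no matching data source for this identifier
            result = source.fetch_product(identifier["value"])
            # compare in_stock / on_sale against the user's specification
            if user_prefs.get("notify_in_stock") and result.in_stock:
                notify(f"{result.product_name} is in stock")
            if user_prefs.get("notify_on_sale") and result.on_sale:
                notify(f"{result.product_name} is on sale")
```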
- for each identifier, the matching data source:
  - microcenter - crawl4ai
  - costco - crawl4ai
- strategy pattern - main calls a generic `execute()` function to fetch data regardless of the data source.
- adapter pattern - each data source is implemented separately, following a 3-stage flow:
  - fetching data from the source (via api)
  - conforming the data to the internal representation (shown below)
  - consuming the normalized data
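A minimal sketch of how the strategy and adapter patterns described above could fit together. The class and method names (`DataSource`, `fetch`, `normalize`, `BestBuySource`) are illustrative assumptions; only `execute()` and the 3-stage flow come from this document.

```python
# Sketch of the strategy + adapter design described above.
# Each adapter implements the same 3-stage flow behind a generic
# execute() entry point; names here are illustrative assumptions.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Product:  # trimmed version of the internal representation
    identifier: str
    product_name: str
    in_stock: bool


class DataSource(ABC):
    def execute(self, identifier: str) -> Product:
        # stage 1: fetch raw data from the source
        raw = self.fetch(identifier)
        # stage 2: conform it to the internal Product representation;
        # stage 3 (consuming the normalized data) happens in the caller
        return self.normalize(raw)

    @abstractmethod
    def fetch(self, identifier: str) -> dict: ...

    @abstractmethod
    def normalize(self, raw: dict) -> Product: ...


class BestBuySource(DataSource):
    def fetch(self, identifier: str) -> dict:
        # a real adapter would call the Best Buy API here;
        # this stub returns a hypothetical raw payload shape
        return {"sku": identifier, "name": "Example", "available": True}

    def normalize(self, raw: dict) -> Product:
        return Product(raw["sku"], raw["name"], raw["available"])
```

With this shape, main only ever calls `source.execute(identifier)` and never needs to know which retailer (or which scraping library) is behind it.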
From the `Product` dataclass in `schema.py`:

```python
@dataclass
class Product:
    identifier: str      # e.g. SKU, ASIN code, etc.
    product_name: str
    in_stock: bool
    on_sale: bool
    sale_price: float
    regular_price: float
    product_url: str
    retailer_name: str
    retailer_logo: str   # retailer logo URL
```

- Persistently store a hash of all products (both current and historical) and a mapping to a uuid4 str, which is its NTFY topic URL
- Have a master NTFY topic that the user is subscribed to
- For every new item, generate a new NTFY URL, save the item mapping, and send a notification via the master topic with a link attachment that looks like this:
  `ntfy://ntfy.sh/{ntfy_topic_url}?display={item_name}`
- when the user clicks the link, it opens the topic and automatically subscribes them
- all notifications for that particular item are sent through that topic
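The per-item topic flow could look roughly like this. The storage layer (a plain dict here; a real version would persist to disk) and the helper names are assumptions; only the `ntfy://ntfy.sh/...?display=...` link format comes from this document.

```python
# Sketch of per-item NTFY topic management described above.
# item_topics maps a product hash to its uuid4 topic id; persistence
# and helper names are illustrative assumptions.
import hashlib
import uuid

item_topics: dict[str, str] = {}  # product hash -> uuid4 topic id


def topic_for(product_key: str) -> tuple[str, bool]:
    """Return (topic_id, is_new) for a product, creating one if needed."""
    h = hashlib.sha256(product_key.encode()).hexdigest()
    if h in item_topics:
        return item_topics[h], False
    topic = str(uuid.uuid4())
    item_topics[h] = topic
    return topic, True


def master_announcement(item_name: str, topic: str) -> str:
    # link attachment sent via the master topic; clicking it opens the
    # per-item topic and subscribes the user
    return f"ntfy://ntfy.sh/{topic}?display={item_name}"
```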