
Target Scrapers - Selenium (Python)

A collection of production-ready Python scrapers for extracting data from target.com using Selenium. These scrapers drive a real browser through Selenium WebDriver, making them well suited to pages that render their content with JavaScript.

Overview

This directory contains Python scrapers built with Selenium.

Available Scrapers

1. Product Category Scraper

Documentation: product_category/README.md

Quick Start:

cd product_category
pip install selenium beautifulsoup4 requests
python scraper/target_scraper_product_category_v1.py

2. Product Data Scraper

Documentation: product_data/README.md

Quick Start:

cd product_data
pip install selenium beautifulsoup4 requests
python scraper/target_scraper_product_data_v1.py

3. Product Search Scraper

Documentation: product_search/README.md

Quick Start:

cd product_search
pip install selenium beautifulsoup4 requests
python scraper/target_scraper_product_search_v1.py

Why Selenium?

Selenium is a strong choice when:

  • ✅ Pages require JavaScript rendering
  • ✅ You prefer a mature, widely-used framework
  • ✅ You need WebDriver protocol support
  • ✅ You want extensive community resources

Consider static parsing (BeautifulSoup) when:

  • ❌ Pages don't require JavaScript
  • ❌ You need maximum speed
  • ❌ You want minimal dependencies

Prerequisites

  • Python: Python 3.7 or higher
  • pip: Python package manager
  • ScrapeOps API Key: For anti-bot protection (free tier available)

Installation

  1. Navigate to the specific scraper directory:
cd product_category  # or product_data, product_search
  2. Install dependencies:
pip install selenium beautifulsoup4 requests
  3. Get your ScrapeOps API key from https://scrapeops.io/app/register/ai-builder

  4. Update the API key in the scraper file:

API_KEY = 'YOUR-API-KEY'
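As an optional alternative to hardcoding the key, you can read it from an environment variable. This is a general pattern, not how the scraper files are necessarily written; the `SCRAPEOPS_API_KEY` variable name is an assumption:

```python
import os

# Hypothetical pattern: read the ScrapeOps key from an environment
# variable, falling back to the placeholder so the script still loads.
API_KEY = os.environ.get("SCRAPEOPS_API_KEY", "YOUR-API-KEY")
```

This keeps the key out of version control when combined with a shell profile or `.env` loader.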

Anti-Bot Protection

All scrapers can integrate with ScrapeOps to help handle Target's anti-bot measures:

  • Proxy rotation (may help reduce IP blocking)
  • Request header optimization (can help reduce detection)
  • Rate limiting management

Note: Anti-bot measures vary by site and may change over time. CAPTCHA challenges may occur and cannot be guaranteed to be resolved automatically. Using proxies and browser automation can help reduce blocking, but effectiveness depends on the target site's specific anti-bot measures.

Free Tier Available: ScrapeOps offers a generous free tier perfect for testing and small-scale scraping.
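The proxy-rotation integration typically works by wrapping each page URL in a ScrapeOps proxy endpoint so the request is forwarded through their infrastructure. The sketch below shows that general pattern; the exact integration in these scrapers may differ:

```python
from urllib.parse import urlencode

def scrapeops_proxy_url(api_key: str, target_url: str) -> str:
    """Wrap a target URL in the ScrapeOps proxy endpoint.

    Sketch of the request-forwarding pattern: the proxy fetches
    `target_url` on your behalf using your API key.
    """
    params = urlencode({"api_key": api_key, "url": target_url})
    return f"https://proxy.scrapeops.io/v1/?{params}"
```

The target URL is percent-encoded by `urlencode`, so it can safely contain query strings of its own.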

Output Format

All scrapers output data in JSONL format (one JSON object per line):

  • Each line represents one product/result
  • Efficient for large datasets
  • Easy to process line-by-line
  • Can be imported into databases or data processing tools

Example output files:

  • {site}_com_product_category_page_scraper_data_20260114_120000.jsonl
  • {site}_com_product_page_scraper_data_20260114_120000.jsonl
  • {site}_com_product_search_page_scraper_data_20260114_120000.jsonl
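Because each line is an independent JSON object, the output can be consumed with a few lines of standard-library Python; no third-party parser is needed:

```python
import json

def read_jsonl(path):
    """Yield one product/result dict per line from a JSONL output file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                yield json.loads(line)
```

Iterating lazily like this keeps memory flat even for very large output files.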

Alternative Implementations

This repository provides multiple implementations for different use cases:

Python Alternatives

Node.js Alternatives

Project Structure

selenium/
├── product_category/
│   ├── scraper/
│   │   └── target_scraper_product_category_v1.py
│   ├── example/
│   │   └── product_category.json
│   └── README.md
├── product_data/
│   ├── scraper/
│   │   └── target_scraper_product_data_v1.py
│   ├── example/
│   │   └── product_data.json
│   └── README.md
├── product_search/
│   ├── scraper/
│   │   └── target_scraper_product_search_v1.py
│   ├── example/
│   │   └── product_search.json
│   └── README.md

Best Practices

  1. Respect Rate Limits: Use appropriate delays and concurrency settings
  2. Monitor ScrapeOps Usage: Track your API usage in the ScrapeOps dashboard
  3. Handle Errors Gracefully: Implement proper error handling and logging
  4. Validate URLs: Ensure URLs are valid Target pages before scraping
  5. Update Selectors: Target may change its HTML structure; update selectors as needed
  6. Test Regularly: Test scrapers regularly to catch breaking changes early
  7. Handle Missing Data: Some products may not have all fields; handle null values appropriately
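Points 3 and 7 above can be sketched as a retry wrapper plus a field normalizer. These helpers and their names are illustrative, not part of the scrapers; `fetch` stands in for whatever function drives the browser:

```python
import time

def scrape_with_retries(fetch, url, retries=3, delay=1.0):
    """Call a page-fetching function with simple retries and linear backoff."""
    last_error = None
    for attempt in range(retries):
        try:
            return fetch(url)
        except Exception as exc:  # transient failure: wait, then retry
            last_error = exc
            time.sleep(delay * (attempt + 1))
    raise RuntimeError(f"All {retries} attempts failed for {url}") from last_error

def normalize_product(raw: dict) -> dict:
    """Fill missing fields with None so downstream code sees a stable schema."""
    fields = ("name", "price", "rating", "url")
    return {field: raw.get(field) for field in fields}
```

Normalizing to a fixed schema up front means database imports and dataframe loads never trip over absent keys.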

Support & Resources

License

This scraper is provided as-is for educational and commercial use. Please ensure compliance with Target's Terms of Service and robots.txt when using these scrapers.