diff --git a/docs/docs/assets/images/ADF_Snowflake_Pipeline.png b/docs/docs/assets/images/ADF_Snowflake_Pipeline.png
new file mode 100644
index 0000000000..8f4607a7d1
Binary files /dev/null and b/docs/docs/assets/images/ADF_Snowflake_Pipeline.png differ
diff --git a/docs/docs/assets/images/dbrx_snowflake_benchmark.png b/docs/docs/assets/images/dbrx_snowflake_benchmark.png
new file mode 100644
index 0000000000..1c12ba3a9b
Binary files /dev/null and b/docs/docs/assets/images/dbrx_snowflake_benchmark.png differ
diff --git a/docs/docs/assets/images/global_co_explorer.png b/docs/docs/assets/images/global_co_explorer.png
new file mode 100644
index 0000000000..2db0d613ce
Binary files /dev/null and b/docs/docs/assets/images/global_co_explorer.png differ
diff --git a/docs/docs/assets/images/nyc_taxi.png b/docs/docs/assets/images/nyc_taxi.png
new file mode 100644
index 0000000000..0e8d9b7fbb
Binary files /dev/null and b/docs/docs/assets/images/nyc_taxi.png differ
diff --git a/docs/docs/assets/images/spotify_analytics.png b/docs/docs/assets/images/spotify_analytics.png
new file mode 100644
index 0000000000..ea730063b6
Binary files /dev/null and b/docs/docs/assets/images/spotify_analytics.png differ
diff --git a/docs/docs/assets/images/us_home_sales.png b/docs/docs/assets/images/us_home_sales.png
new file mode 100644
index 0000000000..c1e8ee674d
Binary files /dev/null and b/docs/docs/assets/images/us_home_sales.png differ
diff --git a/docs/docs/examples/index.md b/docs/docs/examples/index.md
index 070aeb2142..2375ccd9f5 100644
--- a/docs/docs/examples/index.md
+++ b/docs/docs/examples/index.md
@@ -2,48 +2,74 @@
 Real-world examples showing what altimate can do across data engineering workflows. Each example demonstrates end-to-end automation — from discovery to implementation.
 
-<div class="grid cards" markdown>
-
-- :material-pipe:{ .lg .middle } **Build, Test & Document dbt Models**
-
-    ---
-
-    Pull context from your Knowledge Hub, grab requirements from a Jira ticket, and build fully tested dbt models — all from your IDE.
-
-- :material-snowflake:{ .lg .middle } **Find Broken Views in Snowflake**
-
-    ---
-
-    Create a "Sprint Work Agent" that queries Snowflake, finds empty views, traces root causes through dbt models, and files Jira tickets.
-
-- :material-cash-multiple:{ .lg .middle } **Optimize Cost & Performance**
-
-    ---
-
-    Automate discovery and implementation of optimization opportunities across Snowflake, Databricks, and BigQuery.
-
-- :material-swap-horizontal:{ .lg .middle } **Migrate PySpark to dbt**
-
-    ---
-
-    Convert a PySpark-based reporting project in Databricks to dbt with automated code conversion, testing, and validation.
-
-- :material-bug:{ .lg .middle } **Debug an Airflow DAG**
-
-    ---
-
-    Use AI to debug Airflow DAGs by combining platform integrations, best-practice templates, and automated fix suggestions.
-
-- :material-function:{ .lg .middle } **Write Snowflake UDFs**
-
-    ---
-
-    Use the Knowledge Hub to guide LLMs in building Snowflake UDFs with best practices, examples, and auto-generated documentation.
-
-</div>
+---
+
+## NYC Taxi Coverage Dashboard
+
+`DuckDB` `dbt` `Airflow` `Python`
+
+**Prompt:**
+
+> Take the New York City taxi cab public dataset, bring up a DuckDB instance, and build a dashboard showing the areas of highest and lowest coverage. Set up a complete dbt project with staging, intermediate, and mart layers, and create an Airflow DAG to orchestrate the pipeline.
+
+![NYC Taxi Coverage Dashboard](../assets/images/nyc_taxi.png)
+
+---
+
+## Olist E-Commerce Analytics Pipeline
+
+`Snowflake` `Azure Data Factory` `Azure Blob Storage` `dbt`
+
+**Prompt:**
+
+> Build an end-to-end e-commerce analytics pipeline using the Olist Brazilian E-Commerce dataset. Use Azure Data Factory to ingest CSV files from Blob Storage into Snowflake raw tables, then orchestrate Snowflake stored procedures to transform data through raw → staging → mart layers (a star schema with customer, product, and seller dimensions and an orders fact table). Create mart views for customer lifetime value, seller performance scores, and delivery SLA compliance.
+
+![ADF Snowflake Pipeline](../assets/images/ADF_Snowflake_Pipeline.png)
+
+---
+
+## Global CO2 & Climate Explorer
+
+`DuckDB-WASM` `SQL` `Browser`
+
+**Prompt:**
+
+> Build me an interactive Global CO2 & Climate Explorer dashboard using DuckDB-WASM running entirely in the browser, sourcing data from Our World in Data's CO2 dataset. Give me surprising insights about who emits the most, how that's changing, the equity angle of per-capita emissions, and which countries bear the most historical responsibility. Include an interactive SQL console with example queries showing off CTEs and window functions (LAG, RANK, SUM OVER), and make it a single index.html with a dark theme.
+
+![Global CO2 Explorer](../assets/images/global_co_explorer.png)
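+
+One of the console's example queries might look like the sketch below, combining a CTE with `LAG`, `RANK`, and a running `SUM ... OVER`. The `owid_co2` table name is an assumption; `country`, `year`, and `co2` are columns in the OWID dataset.
+
+```sql
+-- Year-over-year change, emitter rank, and cumulative emissions per country.
+-- Table name owid_co2 is illustrative; co2 is in million tonnes.
+WITH yearly AS (
+    SELECT
+        country,
+        year,
+        co2,
+        LAG(co2) OVER (PARTITION BY country ORDER BY year) AS prev_co2
+    FROM owid_co2
+    WHERE co2 IS NOT NULL
+)
+SELECT
+    country,
+    year,
+    co2 - prev_co2 AS yoy_change,
+    RANK() OVER (PARTITION BY year ORDER BY co2 DESC) AS emitter_rank,
+    SUM(co2) OVER (PARTITION BY country ORDER BY year) AS cumulative_co2
+FROM yearly
+ORDER BY year DESC, emitter_rank
+LIMIT 20;
+```
+
+Since DuckDB-WASM supports the same SQL dialect as DuckDB, a query like this should run unchanged in the browser console.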
+
+---
+
+## Spotify Analytics Pipeline Migration
+
+`PySpark` `dbt` `Databricks` `Airflow`
+
+**Prompt:**
+
+> Modernize my Spotify analytics pipeline: use the Kaggle Spotify Tracks public dataset, migrate all PySpark transformations in /spotify-analytics/ to dbt on Databricks/Spark, preserve the ML feature engineering logic (popularity tiers, mood classification, audio profile scores), add schema tests and unit tests, generate an Airflow DAG with SLAs and alerting, and validate semantic equivalence of the outputs.
+
+![Spotify Analytics Pipeline](../assets/images/spotify_analytics.png)
+
+---
+
+## US Home Sales Data Science Dashboard
+
+`Data Science` `K-Means` `OLS Regression` `R/ggplot2 Aesthetic`
+
+**Prompt:**
+
+> Download all available public US home sales datasets. Process and merge them into a unified format. Perform advanced data science on the result to surface interesting insights: K-means clustering, OLS regressions, and more. Build a single interactive dashboard with data-science-style charts (violin plots, Q-Q plots, lollipop charts) in an R/ggplot2 aesthetic. No BI-style charts.
+
+![US Home Sales Dashboard](../assets/images/us_home_sales.png)
+
+---
+
+## Snowflake vs Databricks Deployment Benchmark
+
+`Snowflake` `Databricks` `Benchmarking` `Cost Analysis`
+
+**Prompt:**
+
+> The NovaMart e-commerce analytics platform in the current directory is ready for deployment. Deploy to both Snowflake and Databricks, testing multiple warehouse sizes on each platform (Snowflake: X-Small, Small, Medium; Databricks: 2X-Small, Small, and Medium SQL warehouses) to find the optimal price-performance configuration. Run the full data pipeline and benchmark queries (CLV calculation, daily incremental, executive dashboard) on each warehouse size, capturing execution time, credits/DBUs consumed, and bytes scanned. Generate a cost analysis document with a recommendation matrix showing cost per run for each platform/size combination, and recommend the single best platform and warehouse size for production based on cost efficiency and performance.
+
+![Snowflake vs Databricks Benchmark](../assets/images/dbrx_snowflake_benchmark.png)
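+
+On the Snowflake side, a query along these lines could collect the per-run metrics the benchmark calls for. The `benchmark:%` query-tag convention is an assumption; the `ACCOUNT_USAGE.QUERY_HISTORY` view and its columns are standard Snowflake.
+
+```sql
+-- Elapsed time and bytes scanned for each tagged benchmark run,
+-- broken out by warehouse size.
+SELECT
+    query_tag,
+    warehouse_name,
+    warehouse_size,
+    total_elapsed_time / 1000.0 AS elapsed_seconds,
+    bytes_scanned
+FROM snowflake.account_usage.query_history
+WHERE query_tag LIKE 'benchmark:%'
+ORDER BY start_time DESC;
+```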