16 changes: 16 additions & 0 deletions usecases/image-processing/Pipfile
@@ -0,0 +1,16 @@
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]

[dev-packages]
pylama = "*"
black = "*"

[requires]
python_version = "3.7"

[pipenv]
allow_prereleases = true
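
The Pipfile above pins only dev tooling (pylama for linting, black for formatting) against Python 3.7. A minimal local workflow, assuming pipenv is installed and using this repo's `dags/` layout, might look like:

```sh
# Install the dev dependencies declared in the Pipfile (pylama, black)
pipenv install --dev

# Lint and format the DAG code inside the pipenv-managed virtualenv
pipenv run pylama dags/
pipenv run black dags/
```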
150 changes: 150 additions & 0 deletions usecases/image-processing/Pipfile.lock

Some generated files are not rendered by default.

61 changes: 17 additions & 44 deletions usecases/image-processing/README.md
@@ -1,4 +1,4 @@
## Airflow sample image processing pipeline with Airflow 1.10.12
# Airflow sample image processing pipeline with Airflow 2.0.2

This is an Airflow version of the image processing workflow from the [workshop](https://image-processing.serverlessworkshops.io/)

@@ -22,41 +22,19 @@ This is an Airflow version of the image processing workflow from the [workshop](https://image-processing.serverlessworkshops.io/)

## How to Deploy

### 1. Create Rekognition collections
```
aws rekognition create-collection --collection-id image_processing
```
### 2. Create S3 Bucket to store Dags, Requirement.txt and plugins.
```
aws s3api create-bucket --bucket {bucket_name} --region {region}
aws s3api put-bucket-versioning --bucket {bucket_name} --versioning-configuration Status=Enabled

aws s3api put-public-access-block --bucket {bucket_name} --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
### 1. Create S3 Bucket and upload DAG, requirements.txt, and images

```
### 3. Copy requirements to S3
```
aws s3api put-object --bucket {bucket_name} --key requirements.txt --body dags/2.0/requirements.txt --output text
```sh
./setup.sh <bucket name>
```
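
`setup.sh` itself is not part of this diff. Judging from the commands it replaces (the removed steps on either side of this block), a sketch of what such a script might contain — every line here is an assumption reconstructed from those removed steps:

```sh
#!/bin/bash
# Hypothetical sketch of setup.sh, reconstructed from the removed README steps.
BUCKET_NAME=$1

# Removed step 1: Rekognition collection for face indexing
aws rekognition create-collection --collection-id image_processing

# Removed step 2: versioned, private S3 bucket for DAGs, requirements.txt and plugins
aws s3api create-bucket --bucket "$BUCKET_NAME"
aws s3api put-bucket-versioning --bucket "$BUCKET_NAME" --versioning-configuration Status=Enabled
aws s3api put-public-access-block --bucket "$BUCKET_NAME" \
  --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"

# Removed step 3: upload requirements.txt for MWAA
aws s3api put-object --bucket "$BUCKET_NAME" --key requirements.txt --body dags/2.0/requirements.txt

# Removed step 5: copy the DAG and sample images
aws s3 cp dags/2.0/image_processing.py "s3://$BUCKET_NAME/dags/image-processing.py"
aws s3 cp images "s3://$BUCKET_NAME/images" --recursive
```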

Note down the version number from the last command. This will be used during next step.

### 4. Deploy the SAM template
```
### 2. Deploy the SAM template

```sh
sam build
sam deploy --stack-name MWAA-image-processing -g

```
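
The removed step 5 below substituted `Stack Output.DynamoDBTableName` and `Stack Output.LambdaFunctionName` into the DAG by hand; to read those outputs after `sam deploy` finishes, one option is:

```sh
# Print the CloudFormation outputs of the deployed stack
aws cloudformation describe-stacks \
  --stack-name MWAA-image-processing \
  --query 'Stacks[0].Outputs' \
  --output table
```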

### 5. Copy dag and images

Replace TABLE_NAME with Stack Output.DynamoDBTableName and LAMBDA_FN_NAME with Stack Output.LambdaFunctionName in dags/image-processing.py. Copy the dag and images(to be tested) to the S3 Bucket created in Step 2

```
aws s3 cp dags/2.0/image_processing.py s3://{bucket_name}/dags/image-processing.py

aws s3 cp images s3://{bucket_name}/images --recursive

```

## Access Airflow UI
@@ -65,24 +43,19 @@ aws s3 cp images s3://{bucket_name}/images --recursive

- Trigger the DAG using the JSON given below

```
```json
{
"s3Bucket":"{bucket_name}",
"s3Key":"images/1_happy_face.jpg",
"RekognitionCollectionId":"image_processing",
"userId": "userId"
"s3Bucket":"{bucket_name}",
"s3Key":"images/1_happy_face.jpg",
"RekognitionCollectionId":"image_processing",
"userId": "userId"
}
```
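
The JSON above is passed to Airflow as the DAG run configuration. As a sketch, the same trigger can also be issued from the command line through the MWAA CLI endpoint; the environment name and the `image-processing` DAG id are assumptions here, so check them against the deployed SAM stack:

```sh
# Hypothetical: trigger the DAG through the MWAA CLI endpoint.
ENV_NAME=<mwaa-environment-name>   # assumption: taken from the SAM stack
CLI_JSON=$(aws mwaa create-cli-token --name "$ENV_NAME")
CLI_TOKEN=$(echo "$CLI_JSON" | jq -r '.CliToken')
WEB_SERVER=$(echo "$CLI_JSON" | jq -r '.WebServerHostname')

curl --request POST "https://$WEB_SERVER/aws_mwaa/cli" \
  --header "Authorization: Bearer $CLI_TOKEN" \
  --header "Content-Type: text/plain" \
  --data-raw "dags trigger image-processing --conf '{\"s3Bucket\":\"{bucket_name}\",\"s3Key\":\"images/1_happy_face.jpg\",\"RekognitionCollectionId\":\"image_processing\",\"userId\":\"userId\"}'"
```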

## Useful cli commands while testing
```
aws rekognition list-faces --collection-id image_processing

aws rekognition delete-faces \
--collection-id image_processing \
--face-ids REPLACE_WITH_FACE_ID \
```

<!-- aws s3 cp requirement.txt s3://{bucket_name}/requirement.txt -->
## Useful CLI commands while testing

```sh
aws rekognition list-faces --collection-id image_processing

aws rekognition delete-faces --collection-id image_processing --face-ids '["faceid"]'
```
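
The face IDs fed to `delete-faces` come from `list-faces`; a `--query` filter pulls out just the IDs:

```sh
# Extract only the FaceId values from the collection
aws rekognition list-faces --collection-id image_processing --query 'Faces[].FaceId'
```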