
build: change custom build scripts to build production image #15

Open
rstol wants to merge 12 commits into torchserve-23mt-v0.8.0 from build/production-image

Conversation


@rstol rstol commented Nov 8, 2023

No description provided.

@rstol rstol requested a review from JeffWigger November 8, 2023 10:32
@JeffWigger

I have changed the scripts so that they install the version of PyTorch that we actually want, and tested it on DEV.
I will hold off on merging until the TorchServe PR is merged.


pypae commented Jan 23, 2024

I just built the cpu image on my Mac and pushed it to docker hub.

I also made the pytorch-serve:torchserve-23mt-v0.8.0-pt21-cpu tag multi-platform, following the docker manifest docs.

Here are the commands I used:

# Build the arm image and tag it:
./build_custom_images.sh
docker tag textshuttle/pytorch-serve:torchserve-23mt-v0.8.0-v2-cpu textshuttle/pytorch-serve:torchserve-23mt-v0.8.0-pt21-cpu-arm
docker push textshuttle/pytorch-serve:torchserve-23mt-v0.8.0-pt21-cpu-arm

# Pull the amd image and tag it:
docker pull textshuttle/pytorch-serve:torchserve-23mt-v0.8.0-pt21-cpu
docker tag textshuttle/pytorch-serve:torchserve-23mt-v0.8.0-pt21-cpu textshuttle/pytorch-serve:torchserve-23mt-v0.8.0-pt21-cpu-amd
docker push textshuttle/pytorch-serve:torchserve-23mt-v0.8.0-pt21-cpu-amd

# Create a multi-platform manifest
docker manifest create textshuttle/pytorch-serve:torchserve-23mt-v0.8.0-pt21-cpu \
    textshuttle/pytorch-serve:torchserve-23mt-v0.8.0-pt21-cpu-amd \
    textshuttle/pytorch-serve:torchserve-23mt-v0.8.0-pt21-cpu-arm

# Annotate the arm version
docker manifest annotate textshuttle/pytorch-serve:torchserve-23mt-v0.8.0-pt21-cpu textshuttle/pytorch-serve:torchserve-23mt-v0.8.0-pt21-cpu-arm --arch arm64

# Push the manifest
docker manifest push textshuttle/pytorch-serve:torchserve-23mt-v0.8.0-pt21-cpu
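After pushing, the manifest can be sanity-checked to confirm it lists both architectures. This verification step is not part of the original commands above; it is a suggested check using the standard `docker manifest inspect` subcommand:

```shell
# Optional check: inspect the pushed multi-platform manifest.
# The output should contain one entry with "architecture": "amd64"
# and one with "architecture": "arm64" under "platform".
docker manifest inspect textshuttle/pytorch-serve:torchserve-23mt-v0.8.0-pt21-cpu
```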


mpoemsl commented Mar 4, 2024

Note that I switched back to using Dockerfile.dev, since Dockerfile was just installing the stock torchserve version from pip.


mpoemsl commented Mar 4, 2024


I have now created a multi-arch image textshuttle/pytorch-serve:torchserve-23mt-v0.8.0-v3-cpu and a single-arch amd image textshuttle/pytorch-serve:torchserve-23mt-v0.8.0-v3-gpu from #16.
