
Solution - CUDA capability sm_86 #8

@int-rnd

Description


So I had some limited access to a few Tesla A10s. They are sm_86, which the PyTorch bundled in the prebuilt image does not support.

If you want to make them work, add sm_86 compatibility by building your own Docker image locally with a newer PyTorch build.

Dockerfile

FROM devforth/gpt-j-6b-gpu
# Upgrade to the PyTorch CUDA 11.8 wheels, which include sm_86 kernels
RUN pip3 install -U torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
CMD uvicorn web:app --port 8080 --host 0.0.0.0
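The reason the cu118 wheels help is that they are compiled with sm_86 among their CUDA targets. The check PyTorch effectively performs (is my GPU's compute capability in the compiled architecture list?) can be sketched in plain Python; the arch list below is an assumption based on typical cu118 builds, not something read from the actual wheel:

```python
def supports_capability(arch_list, major, minor):
    """Return True if a compute capability (e.g. 8, 6 for sm_86)
    appears in a compiled architecture list like the one returned
    by torch.cuda.get_arch_list()."""
    target = f"sm_{major}{minor}"
    return target in arch_list

# Assumed arch list for a cu118 PyTorch wheel (illustrative, not authoritative).
cu118_archs = ["sm_50", "sm_60", "sm_70", "sm_75", "sm_80", "sm_86", "sm_90"]

print(supports_capability(cu118_archs, 8, 6))  # A10 is compute capability 8.6 -> True
```

If the GPU's capability is missing from the list (as with the original image on an A10), kernels cannot launch and you get the "no kernel image is available" class of errors.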

Then build it:
docker build -t custom-image-name .
and run it:
docker run -p 8080:8080 --gpus all --rm -it custom-image-name

(Note: Docker repository names must be lowercase, so an uppercase tag like CustomImageName will be rejected by docker build.)
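Once the container is up, you can sanity-check that the upgraded wheels actually ship sm_86 kernels. The docker command below is an assumption (it uses the lowercase tag from the build step and needs a working NVIDIA container runtime); the runnable part just shows how the "8.6" that nvidia-smi reports maps to the sm_86 name used in PyTorch's arch list:

```shell
# Hedged verification sketch (assumes the image tag from the build step):
#
#   docker run --gpus all --rm custom-image-name \
#       python3 -c 'import torch; print(torch.cuda.get_arch_list())'
#
# sm_86 should appear in the printed list on a cu118 build.
#
# nvidia-smi reports the capability with a dot, e.g.:
cap="8.6"                        # e.g. nvidia-smi --query-gpu=compute_cap --format=csv,noheader
sm="sm_$(echo "$cap" | tr -d .)" # drop the dot to get the arch-list form
echo "$sm"                       # prints sm_86
```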

Credit to this project and https://pytorch.org/get-started/locally/
