Help Needed: CPU Version of DGL Installs GPU Packages in Docker

Hello everyone,

I’m working on building a Docker image for AWS Lambda using the following base image and setup:

FROM public.ecr.aws/lambda/python:3.9
ENV DGLBACKEND=pytorch

# Install the runtime dependencies into the Lambda task root
RUN pip install boto3
# CPU-only PyTorch from the official CPU wheel index
RUN pip install torch==2.4.0 --index-url https://download.pytorch.org/whl/cpu --target "${LAMBDA_TASK_ROOT}"
# DGL wheel built against torch 2.4
RUN pip install dgl -f https://data.dgl.ai/wheels/torch-2.4/repo.html --target "${LAMBDA_TASK_ROOT}"

To keep the image size manageable, I explicitly install the CPU-only PyTorch wheel and the matching DGL wheel from data.dgl.ai, so I expected only CPU dependencies. However, during the DGL install, several GPU-related packages are downloaded as well (e.g., nvidia-cublas-cu12, nvidia-cudnn-cu12, nvidia-cusparse-cu12), which significantly increases the image size. Here is a snippet of the installation log showing the unexpected GPU packages being pulled in:

Installing collected packages: pytz, mpmath, urllib3, tzdata, typing-extensions, tqdm, sympy, six, pyyaml, psutil, packaging, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, numpy, networkx, MarkupSafe, idna, fsspec, filelock, charset-normalizer, certifi, annotated-types, triton, scipy, requests, python-dateutil, pydantic-core, nvidia-cusparse-cu12, nvidia-cudnn-cu12, jinja2, pydantic, pandas, nvidia-cusolver-cu12, torch, dgl

Could anyone advise on why these GPU packages are being installed despite selecting the CPU-only version? Any tips on resolving this or avoiding these unnecessary dependencies?
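For reference, one workaround I'm considering (untested; the manual dependency list below is just my reading of the install log above and may be incomplete) is to install DGL with `--no-deps`, so pip never re-resolves its `torch` requirement against PyPI, whose default torch wheels are the ones that drag in the nvidia-*-cu12 packages:

```shell
# Sketch of a possible fix (not verified). Keep the CPU torch install as-is:
pip install torch==2.4.0 --index-url https://download.pytorch.org/whl/cpu \
    --target "${LAMBDA_TASK_ROOT}"

# --no-deps installs only the dgl wheel itself, skipping dependency
# resolution entirely, so pip cannot pull a CUDA torch from PyPI.
pip install dgl --no-deps -f https://data.dgl.ai/wheels/torch-2.4/repo.html \
    --target "${LAMBDA_TASK_ROOT}"

# Then install dgl's remaining pure-Python dependencies by hand
# (list inferred from the log above; may need adjusting).
pip install numpy scipy networkx requests tqdm psutil pandas pydantic \
    --target "${LAMBDA_TASK_ROOT}"
```

I'm not sure this is the intended way to do it, so corrections welcome.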

Thanks in advance for your help!
