Unfortunately I'm using Databricks, where I cannot access the install directory, so it might not be possible to check whether the file is there or not.
For future reference, if someone is on a Databricks system, DGL works after downgrading to torch 2.1.0 + CUDA 11.8.
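In case it helps anyone, a minimal sketch of that downgrade from a Databricks notebook might look like the lines below. The %pip magic, the cu118 index/wheel URLs, and the dbutils.library.restartPython() helper are assumptions based on the standard PyTorch/DGL install instructions and recent Databricks runtimes, so adjust them for your own environment:
# install a CUDA 11.8 build of torch 2.1.0 from the official PyTorch index
%pip install torch==2.1.0 --index-url https://download.pytorch.org/whl/cu118
# install a matching DGL wheel (URL pattern follows the DGL install page)
%pip install dgl -f https://data.dgl.ai/wheels/torch-2.1/cu118/repo.html
# restart the Python process so the downgraded packages are picked up
dbutils.library.restartPython()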
I managed to resolve my issue. Basically (and I don't know why), if we just use requirements.txt to tell pip which packages to install, it installs the wrong versions. Here is my final Dockerfile:
FROM public.ecr.aws/lambda/python:3.12
ARG PRIVATE_PIP_INDEX_URL
ARG PROD_REGION
ARG PROD_ACCESS_KEY
ARG PROD_SECRET_ACCESS_KEY
ENV DGLBACKEND=pytorch
# Install the dependencies
COPY requirements.txt .
RUN pip install torch==2.3.1 \
    --index-url https://download.pytorch.org/whl/cpu
RUN pip install dgl -f https://data.dgl.ai/wheels/torch-2.3/repo.html
RUN pip3 install \
    --index-url https://pypi.org/simple \
    --extra-index-url "${PRIVATE_PIP_INDEX_URL}" \
    -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
# Copy our code
COPY *.py ${LAMBDA_TASK_ROOT}
# Set the handler function
CMD [ "evaluate_model.handler" ]
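For reference, the image can then be built with something like the following; the image tag and the build-arg values are placeholders for your own setup (the shell variables are assumed to hold your private index URL and credentials):
docker build \
  --build-arg PRIVATE_PIP_INDEX_URL="$PRIVATE_PIP_INDEX_URL" \
  --build-arg PROD_REGION="$PROD_REGION" \
  --build-arg PROD_ACCESS_KEY="$PROD_ACCESS_KEY" \
  --build-arg PROD_SECRET_ACCESS_KEY="$PROD_SECRET_ACCESS_KEY" \
  -t dgl-lambda-evaluator .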