DGL installation with mixed precision option

Hi, how can I use the DGL library with mixed precision computation?

I installed DGL with the command `pip install dgl-cu111 -f https://data.dgl.ai/wheels/repo.html`.

When I then try to run a mixed precision computation, I get the error:

`dgl._ffi.base.DGLError: [17:59:07] /opt/dgl/src/array/cuda/sddmm.cu:148: Data type not recognized with bits 16`

Is this an error in my code, or a problem with the installation?

The DGL pip installation does not include FP16 support; it is still at an early, experimental stage. That said, you can probably use AMP so that DGL kernels run in FP32 while the rest of the model runs in FP16.
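A minimal sketch of that idea, using only PyTorch so it runs anywhere: the model body runs under `torch.autocast`, and the graph kernel (here `graph_op`, a hypothetical stand-in for a DGL message-passing call that only supports FP32) is wrapped in an autocast-disabled region with an explicit cast back to FP32. On GPU you would use `device_type="cuda"` with `dtype=torch.float16`; the CPU/bfloat16 variant below is only so the sketch is self-contained.

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.pre = nn.Linear(16, 16)
        self.post = nn.Linear(16, 4)

    def graph_op(self, x):
        # Hypothetical stand-in for a DGL call (e.g. SDDMM) that lacks FP16 kernels.
        assert x.dtype == torch.float32
        return x * 2

    def forward(self, x):
        h = self.pre(x)  # runs in low precision under autocast
        # Disable autocast and cast back to FP32 around the graph kernel.
        with torch.autocast(device_type="cpu", enabled=False):
            h = self.graph_op(h.float())
        return self.post(h)  # low precision again

model = Model()
x = torch.randn(8, 16)
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)
print(out.dtype, out.shape)
```

The design point is simply that autocast regions nest: disabling it locally lets unsupported kernels keep their FP32 path without giving up mixed precision elsewhere.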


For FP16 support, you need to compile from source following Chapter 8: Mixed Precision Training — DGL 0.7.1 documentation
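As a rough sketch of what that chapter describes (as of DGL 0.7; check the linked documentation for the exact flags on your version), the build enables an FP16 CMake option:

```shell
# Build DGL from source with FP16 kernels enabled (per the 0.7 docs).
git clone --recursive https://github.com/dmlc/dgl.git
cd dgl
mkdir build && cd build
cmake -DUSE_CUDA=ON -DUSE_FP16=ON ..
make -j4
cd ../python
python setup.py install
```

On a cluster you may need to load matching CUDA and CMake modules before running the above.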


Actually, I have trouble compiling from source because of my cluster environment settings…

May I ask why the FP16 option is not available through conda or pip?

We feel that FP16 support is not complete yet: we have yet to benchmark our implementation, and a CPU FP16 implementation is not available at all. Therefore we chose to defer its introduction.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.