Hi!
I need to use sparse matrix multiplication with float16/bfloat16 precision, but I got the following error:
“[09:33:53] /opt/dgl/src/array/cuda/./spmm.cuh:639: SpMMCoo doesn’t support half precision fow now. Please use SpMMCsr instead by allowing the graph materialize CSR/CSC formats.”
Is there a way to make it work? How can I use SpMMCsr?
Below is a minimal example that reproduces the error:
import dgl.sparse as dglsp
import torch
dev = torch.device('cuda')  # the error comes from DGL's CUDA SpMM kernel, so it only triggers on GPU
indices = torch.tensor([[0, 1, 1], [1, 0, 1]], device=dev)
val = torch.randn(indices.shape[1], device=dev).half()
A = dglsp.spmatrix(indices, val)
X = torch.randn(2, 3, device=dev).half()
result = dglsp.spmm(A, X)
print(result.dtype)
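If it helps clarify what I'm asking about: my understanding is that the error wants the matrix stored in CSR layout (row pointers plus column indices) instead of COO. Here is a plain-Python sketch of that conversion as I understand it (the function name and structure are my own, not DGL's API):

```python
def coo_to_csr(rows, cols, vals, num_rows):
    # Count non-zeros per row, then prefix-sum the counts into indptr.
    indptr = [0] * (num_rows + 1)
    for r in rows:
        indptr[r + 1] += 1
    for i in range(num_rows):
        indptr[i + 1] += indptr[i]
    # Scatter column indices and values into row-sorted order.
    col_idx = [0] * len(rows)
    out_vals = [0.0] * len(rows)
    next_slot = indptr[:-1].copy()
    for r, c, v in zip(rows, cols, vals):
        j = next_slot[r]
        col_idx[j] = c
        out_vals[j] = v
        next_slot[r] += 1
    return indptr, col_idx, out_vals

# The 2x2 matrix from my repro: non-zeros at (0,1), (1,0), (1,1).
indptr, col_idx, vals = coo_to_csr([0, 1, 1], [1, 0, 1], [1.0, 2.0, 3.0], 2)
print(indptr, col_idx)  # [0, 1, 3] [1, 0, 1]
```

So the question is whether DGL can do this materialization for me (and route to SpMMCsr) rather than my building the matrix differently by hand.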
Torch version: 2.3.0+cu121
DGL version: 2.3.0+cu121