> Currently, only shared-memory graph store server is supported, so `store_type` can only be `"shared_memory"`.
I looked into the code of `graph_store.py` and confirmed that the server only supports shared memory.
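To make the limitation concrete, here is a minimal sketch (using Python's standard `multiprocessing.shared_memory`, not DGL's actual implementation) of the idea behind a shared-memory store: a "server" publishes an array in a named segment, and a "client" process attaches to it by name instead of copying the data. Because the segment lives in one machine's RAM, this design cannot span multiple hosts.

```python
import numpy as np
from multiprocessing import shared_memory

# Hypothetical node-feature data the "server" wants to share.
features = np.arange(6, dtype=np.float32)

# Server side: create a named shared-memory segment and copy data in.
shm = shared_memory.SharedMemory(create=True, size=features.nbytes)
server_view = np.ndarray(features.shape, dtype=features.dtype, buffer=shm.buf)
server_view[:] = features

# Client side (normally another process on the SAME machine):
# attach to the segment by name -- zero-copy access to the features.
client = shared_memory.SharedMemory(name=shm.name)
client_view = np.ndarray(features.shape, dtype=features.dtype, buffer=client.buf)
total = float(client_view.sum())

# Release the numpy views before closing, then clean up the segment.
del server_view, client_view
client.close()
shm.close()
shm.unlink()

print(total)  # -> 15.0
```

This is exactly why scaling past one machine requires a different (RPC- or partition-based) store rather than a larger shared-memory segment.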
For giant graphs, is NUMA necessary? We prefer K8s or YARN to schedule our tasks. Is there any distributed solution for giant graphs (over 100 million nodes)?