We currently require python-louvain as a dependency for testing the Louvain code: we run our Louvain implementation and compare its results against those of python-louvain.
This is not a particularly useful test. Louvain is an approximation algorithm, and no two implementations take the same approach to optimization, so they all produce different results.
The cuGraph implementation of Louvain is deterministic for a particular graph input. The C++ unit test validates the result against an expected result that we have constructed by hand and validated against other implementations as being a good approximation. This test is far more effective at identifying defects in the C++ implementation.
I would suggest that we compute a result for the Python tests once, store it as the expected result, and drop the python-louvain dependency entirely.
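To illustrate the proposal, here is a minimal sketch of such a "stored expected result" test. The graph, partition, helper, and test names are illustrative only, not cuGraph's actual fixtures; the real test would obtain the partition from `cugraph.louvain` rather than the hard-coded dictionary used here.

```python
# Sketch of a golden-result test: store one known-good partition and its
# modularity, then assert the deterministic implementation reproduces it.
from collections import defaultdict

def modularity(edges, partition):
    """Newman modularity Q for an undirected graph given as an edge list."""
    m = len(edges)
    degree = defaultdict(int)
    intra = defaultdict(int)   # edge count with both endpoints in one community
    total = defaultdict(int)   # sum of degrees per community
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        if partition[u] == partition[v]:
            intra[partition[u]] += 1
    for node, deg in degree.items():
        total[partition[node]] += deg
    return sum(intra[c] / m - (total[c] / (2 * m)) ** 2 for c in total)

# Toy fixture: two triangles joined by a single bridge edge (2, 3).
EDGES = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
EXPECTED_PARTITION = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
EXPECTED_MODULARITY = 5 / 14  # stored once; updated only if fixtures change

def test_louvain_matches_stored_result():
    # In the real test, the partition would come from the cuGraph Louvain call.
    partition = EXPECTED_PARTITION
    assert partition == EXPECTED_PARTITION
    assert abs(modularity(EDGES, partition) - EXPECTED_MODULARITY) < 1e-9

test_louvain_matches_stored_result()
```

Because the cuGraph implementation is deterministic for a given input, this comparison against a single stored partition and modularity value is exact and repeatable, with no tolerance for a divergent third-party optimizer.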