In our latest paper, we presented a new pooling method for GNNs, called **MinCutPool**, which has several desirable properties as far as pooling goes:

- It’s based on well-understood theoretical techniques for node clustering;
- It’s fully differentiable and learnable with gradient descent;
- It depends directly on the task-specific loss on which the GNN is being trained, but …
- It can also be trained on its own, without a task-specific loss, if needed;
- It’s fast.

The method is based on the minCUT optimization problem, which consists of partitioning the nodes of a weighted graph so that the total weight of the edges crossing between partitions (the cut) is minimized. We considered a continuous relaxation of the minCUT problem and implemented it as a neural network layer, obtaining a theoretically sound pooling method for GNNs.
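To make the idea concrete, here is a minimal NumPy sketch of what such a pooling layer computes in its forward pass. All names here (`mincut_pool`, `logits`, etc.) are hypothetical, and the assignment logits are assumed to come from a small MLP on the node features in a real implementation; this is an illustration of the relaxed-minCUT computation, not the paper's reference code.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the cluster dimension.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mincut_pool(A, X, logits):
    """Pool an N-node graph down to K clusters (illustrative sketch).

    A: (N, N) symmetric adjacency matrix
    X: (N, F) node features
    logits: (N, K) unnormalized cluster scores (assumed to be produced
            by a learnable function of the node features)
    """
    S = softmax(logits, axis=-1)      # (N, K) soft cluster assignments
    A_pool = S.T @ A @ S              # (K, K) coarsened adjacency
    X_pool = S.T @ X                  # (K, F) pooled node features

    # Relaxed minCUT objective: reward assignments that keep strongly
    # connected nodes in the same cluster (small cut between clusters).
    D = np.diag(A.sum(axis=1))        # degree matrix
    cut_loss = -np.trace(S.T @ A @ S) / np.trace(S.T @ D @ S)

    # Regularization pushing the assignments toward balanced,
    # nearly one-hot clusters.
    K = S.shape[1]
    SS = S.T @ S
    ortho_loss = np.linalg.norm(SS / np.linalg.norm(SS) - np.eye(K) / np.sqrt(K))

    return A_pool, X_pool, cut_loss, ortho_loss

# Tiny example: two triangles joined by a single edge, pooled to K=2.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
X = np.eye(6)                         # dummy one-hot features
rng = np.random.default_rng(0)
logits = rng.normal(size=(6, 2))
A_pool, X_pool, cut_loss, ortho_loss = mincut_pool(A, X, logits)
```

Because both auxiliary losses are differentiable in `S`, they can be minimized by gradient descent jointly with the task loss, or on their own, which is what makes the layer trainable in both regimes.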

In this post, I’ll describe the working principles of minCUT pooling and show some applications of the layer.
