Hmm, I think this works, but it has some caveats:
- Changes to \lambda during the GGT recursion mean that transactions may move from the sink side of the partition to the source side; I believe this violates the condition that only sink-to-node and node-to-source capacities may change. If so, the factor-n speedup over the worst case disappears.
- The formulation also requires expanding the edges to their transitive closure (if A depends on B, and B depends on C, we need an edge from A to C as well). With complexities like \mathcal{O}(n^2 \sqrt{m}), this may be a slight worsening in practice: realistic clusters tend to have m = \mathcal{O}(n), but the full transitive closure may well have m = \mathcal{O}(n^2).
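As a rough sketch of the expansion step (names and representation are mine, not from the original; each adjacency row is packed into a Python int used as a bitset, which also previews the bitset idea mentioned below):

```python
def transitive_closure(deps):
    """deps[i] is a bitset (int) of the direct dependencies of node i.
    Returns rows where bit j of row i is set iff i transitively depends on j."""
    n = len(deps)
    closure = list(deps)
    # Iterate to a fixed point: OR each dependency's row into our own.
    # Each pass costs O(n^2 / w) word operations on the packed rows.
    changed = True
    while changed:
        changed = False
        for i in range(n):
            row = closure[i]
            new, bits = row, row
            while bits:
                j = (bits & -bits).bit_length() - 1  # lowest set bit
                new |= closure[j]
                bits &= bits - 1
            if new != row:
                closure[i] = new
                changed = True
    return closure

# A -> B -> C: A's closure must gain the A -> C edge (bit 2).
rows = transitive_closure([0b010, 0b100, 0b000])  # rows[0] == 0b110
```

Note the worst case is visible here too: a chain of n nodes ends with \mathcal{O}(n^2) set bits even though the input had only n - 1 edges.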
I do remember reading about specialized bipartite-graph min-cut algorithms, though…
I’ll do a writeup soon on the implementation and optimizations I came up with, but I’m still experimenting with a few ideas first (in particular, trying to represent the edge data in n\times n matrix form, so bitsets can be used to match things more easily).
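To illustrate what I mean by the matrix form (this is a toy sketch under my own assumptions, not the actual implementation): each row of the n\times n edge matrix is a single Python int, so whole-row set queries collapse into one bitwise op.

```python
n = 4
row = [0] * n            # row[i]: bitset of nodes that i has an edge to

def add_edge(i, j):
    row[i] |= 1 << j

add_edge(0, 2); add_edge(0, 3)
add_edge(1, 2)

# Nodes that both 0 and 1 point to: a single AND instead of walking sets.
common = row[0] & row[1]             # only node 2 remains, i.e. 0b0100
```

On a real machine this is n^2 bits total, and each such query touches n/w machine words rather than iterating over individual edges.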