We can make the process of finding nodes (e.g., metrics) with common dimensions a lot more efficient.
These are things we currently do that are inefficient (a sketch of a batched alternative follows the list):
- Sequential dimension processing: each dimension is processed one at a time with full graph traversal before moving to the next.
- The `get_nodes_with_dimension` function does a BFS traversal that issues multiple DB queries per iteration.
- Nodes are loaded one at a time in the BFS loop rather than in batches.
- Set intersection only happens after each dimension has been fully processed, so there is no opportunity to prune or exit early.
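
For illustration, here is a minimal sketch of the batched approach. The graph, the `load_children_batch` helper, and the node names are hypothetical stand-ins for the real DB layer, not the actual `get_nodes_with_dimension` implementation; the real function would need to batch its queries and intersect results in a similar way.

```python
from collections import deque
from typing import Dict, Iterable, List, Set

# Hypothetical adjacency map from a dimension/node name to the names of
# nodes that reference it, standing in for the real database.
GRAPH: Dict[str, List[str]] = {
    "dim.customer": ["fact.orders", "fact.payments"],
    "dim.date": ["fact.orders", "fact.shipments"],
    "fact.orders": ["metric.revenue", "metric.order_count"],
    "fact.payments": ["metric.revenue"],
    "fact.shipments": ["metric.late_shipments"],
}


def load_children_batch(names: Iterable[str]) -> Dict[str, List[str]]:
    """Load the children of many nodes at once (one lookup instead of N)."""
    return {name: GRAPH.get(name, []) for name in names}


def nodes_with_dimension(dimension: str) -> Set[str]:
    """BFS from a dimension, expanding the whole frontier per iteration."""
    seen: Set[str] = set()
    frontier = deque([dimension])
    while frontier:
        batch = list(frontier)
        frontier.clear()
        children = load_children_batch(batch)  # single batched lookup per level
        for child_list in children.values():
            for child in child_list:
                if child not in seen:
                    seen.add(child)
                    frontier.append(child)
    return seen


def nodes_with_common_dimensions(dimensions: List[str]) -> Set[str]:
    """Intersect per-dimension results as we go, so we can bail out early."""
    common: Set[str] = set()
    for index, dimension in enumerate(dimensions):
        reachable = nodes_with_dimension(dimension)
        common = reachable if index == 0 else common & reachable
        if not common:  # no node shares all dimensions so far; stop early
            break
    return common


print(nodes_with_common_dimensions(["dim.customer", "dim.date"]))
# {'fact.orders', 'metric.revenue', 'metric.order_count'}
```

The key changes relative to the current behavior are that each BFS level is loaded in one batched call rather than one node at a time, and the running intersection lets us stop as soon as it becomes empty instead of traversing the full graph for every dimension.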