has anyone tried applying multi-agent pathfinding to autorouting? seems like you could do significantly better than A*
There aren't any good papers on it (yet)! Most multi-agent pathfinders treat time as a parameter (two agents can't be in the same place at the same time), e.g. for robot warehousing, but they don't have the constraint "you cannot re-use the same path as another agent," which is what PCB autorouting needs. I am actively working on multi-head A* for escape routing so hopefully I will have an article about that at some point.
It is very difficult because candidates have paths associated with them rather than just an X/Y position, which means you need to be conscious of the memory usage of your queue.
I should also say: for sparse-obstacle, high-trace-density routing we effectively use hyperparameters (with meta A*) and weighted "rule-of-thumb" cost functions, and this gives pretty incredible results. This technique is less likely to work on escape routing because in settings with high numbers of non-trace obstacles there can be only a few ways to solve the problem, and it really is a search for an optimal solution rather than a search for any acceptable solution.
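A minimal sketch of the shape of the problem, assuming a plain grid model (every name here is illustrative, not our actual code). Each queue entry carries its full path, and cells claimed by already-routed nets are off-limits:

```python
import heapq

def route_net(grid_w, grid_h, start, goal, used_cells, max_queue=50_000):
    """A* on a grid where cells claimed by already-routed nets are blocked
    ("you cannot re-use another agent's path"). Each candidate carries its
    full path, so the open queue is pruned to keep memory bounded."""
    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_q = [(h(start), 0, start, (start,))]  # (f, g, cell, path-so-far)
    best_g = {start: 0}
    while open_q:
        _f, g, cell, path = heapq.heappop(open_q)
        if cell == goal:
            return list(path)
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < grid_w and 0 <= nxt[1] < grid_h):
                continue
            if nxt in used_cells:  # another net already owns this cell
                continue
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_q, (ng + h(nxt), ng, nxt, path + (nxt,)))
        if len(open_q) > max_queue:  # path-carrying entries are heavy: prune the worst
            open_q = heapq.nsmallest(max_queue // 2, open_q)
            heapq.heapify(open_q)
    return None  # unroutable against the current set of claimed cells
```

In this sequential version, each finished net's cells join used_cells before the next net is attempted; a multi-head variant would expand several such searches against a shared claim set.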
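Roughly what I mean by hyperparameters plus rule-of-thumb costs, as a sketch (weights and names are made up, and a real meta A* would presumably order candidate assignments far more cleverly than this exhaustive sweep):

```python
import itertools

# Hypothetical rule-of-thumb weights; the real knob set is different/larger.
WEIGHT_GRID = {
    "via_cost":        [5, 15, 40],
    "turn_cost":       [0.5, 2.0],
    "congestion_cost": [1.0, 4.0],
}

def tune_weights(board, route_board):
    """route_board(board, weights) -> (solved, total_cost) is the actual
    router. This outer loop keeps the cheapest weight assignment that
    actually solves the board."""
    best = None
    keys = list(WEIGHT_GRID)
    for combo in itertools.product(*(WEIGHT_GRID[k] for k in keys)):
        weights = dict(zip(keys, combo))
        solved, cost = route_board(board, weights)
        if solved and (best is None or cost < best[1]):
            best = (weights, cost)
    return best
```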
you’d want to adapt those parts of the algorithms to enforce edge timing and treat it as a 3d topological traversal before solving geometry
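i.e. something like searching an (x, y, layer) lattice first, where a via is just a z-move with its own cost, and snapping to geometry later. a purely illustrative neighbor function:

```python
def neighbors_3d(x, y, layer, num_layers):
    """Moves on an (x, y, layer) lattice: in-plane steps plus vias as
    z-moves. Timing/impedance costs would be attached per move."""
    yield from ((x + 1, y, layer), (x - 1, y, layer),
                (x, y + 1, layer), (x, y - 1, layer))
    if layer + 1 < num_layers:
        yield (x, y, layer + 1)  # via down the stackup
    if layer - 1 >= 0:
        yield (x, y, layer - 1)  # via back up
```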
how do you handle RF interactions? e.g. do you have heuristics around splitting ground planes?
I think the right PoV is that there should be a suite of 100 algorithms, maybe 12-20 of which we've already built, that perform various constructions or optimizations for routing. The user should be able to control the application of those algorithms at high and low levels with both constraint specifications and more explicit guidance (the same way a web developer specifies that a section should use the "flex" or "CSS grid" layout algorithm).
The reason I say some algorithms are for construction and some are for optimization: many algorithms are easier to implement given an already "solved" board or a solved construction. For example (a sketch of this pipeline follows the list):
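To make the "flex vs. grid" analogy concrete, here's roughly the shape I'd imagine for that control surface; none of these keys or algorithm names are a real API:

```python
# Hypothetical spec: the user picks algorithms per region the way a web dev
# picks "flex" vs "grid", and layers constraints on top. Illustrative only.
routing_spec = {
    "global": {
        "construction": "global-grid",
        "optimizations": ["prune-useless-nodes", "congestion-balance"],
    },
    "regions": [
        {
            "selector": "under:U1",          # e.g. a BGA escape region
            "construction": "escape-route",  # explicit guidance, like `display: flex`
        },
    ],
    "constraints": [
        {"net_class": "diff100", "impedance_ohms": 100, "tolerance_pct": 10},
        {"net": "CLK", "max_length_mm": 40},
    ],
}
```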
1. We construct high-level global nodes and edges (construction)
2. We remove "useless nodes" (optimization)
3. We route global nodes (construction)
4. We optimize global node routes for reduced congestion (optimization)
5. We perform detailed routing within global nodes (construction)
6. We simplify detailed routes to use 45 degree angles and simple paths (optimization)
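A minimal sketch of that pipeline's shape (pass names are illustrative placeholders; each real pass transforms one representation into the next):

```python
# Placeholder pass functions, one per step in the list above.
def build_global_nodes_and_edges(state): return state  # 1 (construction)
def remove_useless_nodes(state): return state          # 2 (optimization)
def route_global_nodes(state): return state            # 3 (construction)
def reduce_congestion(state): return state             # 4 (optimization)
def detailed_route_within_nodes(state): return state   # 5 (construction)
def simplify_to_45_degree_paths(state): return state   # 6 (optimization)

PIPELINE = [
    ("construction", build_global_nodes_and_edges),
    ("optimization", remove_useless_nodes),
    ("construction", route_global_nodes),
    ("optimization", reduce_congestion),
    ("construction", detailed_route_within_nodes),
    ("optimization", simplify_to_45_degree_paths),
]

def run_pipeline(board):
    state = board
    for _kind, stage in PIPELINE:
        state = stage(state)  # each pass consumes the previous construction
    return state
```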
This is all to say, we don't currently model edge timing, RF interactions, or ground planes, but I believe these can easily be added and adapted to our constructions. Many companies have custom manufacturing or business requirements that can be difficult to convey in a pure constraint system, but given good constructions they should be able to alter or swap the relevant algorithm, especially since the system is friendly to non-performant code through spatial caching.
RF and EMI are full of difficult problems and rules of thumb that are often at odds with each other. I'll certainly have more to say on this in the future.
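Roughly what I mean by being friendly to non-performant code (a minimal sketch; the cell size and API are made up): an expensive custom check only re-runs for regions whose contents actually changed.

```python
CELL_MM = 5.0  # cache-cell size; made-up value

class SpatialCache:
    """Memoizes an expensive per-region check, keyed by grid cell.
    Editing a trace only invalidates the cells it touches, so slow custom
    (e.g. manufacturing-rule) code runs rarely."""
    def __init__(self, slow_check):
        self.slow_check = slow_check  # user-supplied, possibly very slow
        self.results = {}

    def _key(self, x_mm, y_mm):
        return (int(x_mm // CELL_MM), int(y_mm // CELL_MM))

    def invalidate(self, x_mm, y_mm):
        self.results.pop(self._key(x_mm, y_mm), None)

    def check(self, x_mm, y_mm):
        k = self._key(x_mm, y_mm)
        if k not in self.results:
            self.results[k] = self.slow_check(k)
        return self.results[k]
```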
constraints must be upheld through every stage, so if you have N algorithms and M constraints you need to think about M*N constraint implementations
take controlled impedance for example. if you have 100 algorithms which may modify a trace, you need to uphold the impedance constraint in 100 different pieces of code. eventually it becomes intractable to add new constraints
a similar effect is what drove the ossification of pytorch and tensorflow
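concretely, here's what that leak looks like inside just one of the N passes (all function names are placeholders):

```python
# Placeholders standing in for real geometry/constraint code.
def collapse_collinear_segments(trace): return trace
def readjust_width_for_impedance(trace): return trace
def reinsert_length_tuning(trace): return trace

def simplify_to_45(trace, constraints):
    """One of the N trace-modifying passes. Each of the M constraints
    shows up as a branch here, and the same branches get duplicated
    across every other pass that can touch geometry."""
    out = collapse_collinear_segments(trace)
    if "controlled_impedance" in constraints:  # constraint 1 of M
        out = readjust_width_for_impedance(out)
    if "matched_length" in constraints:        # constraint 2 of M
        out = reinsert_length_tuning(out)
    return out
```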
I don't think it's necessary or possible to uphold constraints at each stage. Each construction is an abstract representation (e.g. global routing uses global nodes, there is no exact trace length in global routing, only an approximation). Constructions should measure the probability that each constraint can be satisfied rather than attempt to uphold the constraint exactly.
More generally, the issue with most autorouter designs is they try to uphold constraints and therefore can't effectively use abstract constructions. Without the abstract constructions, they can't spatially partition, distribute risk or execute efficiently.
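For example, during global routing we only know a length range per net, so a max-length constraint gets scored rather than enforced. A sketch (the uniform assumption is just a stand-in):

```python
def length_constraint_confidence(est_min_mm, est_max_mm, max_length_mm):
    """Probability that detailed routing can satisfy a max-length constraint,
    given only the length bounds a global-routing construction knows."""
    if est_max_mm <= max_length_mm:
        return 1.0  # satisfiable regardless of what detailed routing does
    if est_min_mm > max_length_mm:
        return 0.0  # impossible; rework the construction before going further
    # crude stand-in: treat the final length as uniform over the estimate
    return (max_length_mm - est_min_mm) / (est_max_mm - est_min_mm)
```

A construction that drives this confidence low gets reworked before detailed routing is ever attempted.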
A nice byproduct of spatial partitioning (and part of the reason our autorouter's results look relatively normal, though certainly not perfect) is that the optimization stages responsible for global routing and reducing congestion distribute "routing risk" more evenly across the board. This means it's very unlikely our autorouter will create the "crazy traces" that are characteristic of most naive autorouters.
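The risk-spreading mechanism is in the spirit of PathFinder-style negotiated congestion (a sketch, not our exact cost function): crowded regions get progressively more expensive, so no single area absorbs all the difficulty.

```python
def edge_cost(base_cost, usage, capacity, history,
              pressure=2.0, history_weight=1.0):
    """Congestion-aware cost in the spirit of PathFinder's negotiated
    congestion: `history` accumulates across rip-up/re-route iterations,
    pushing later routes away from chronically crowded regions."""
    overuse = max(0, usage + 1 - capacity)  # would this route overfill the edge?
    present_penalty = 1.0 + pressure * overuse
    return base_cost * present_penalty + history_weight * history
```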
this makes constraints suggestions, not constraints. without hard constraints users will have to manually validate output in many cases, which limits a lot of the potential
that’s not to say there isn’t value in reducing the monotonous work, but it seems like a hard sell when altium or kicad could replicate it without the added friction of context switching