The 60-Year Routing Problem Nobody Solved

DeepPCB Team

In 1961, a Bell Labs engineer named C.Y. Lee published what would become one of the most cited papers in electronic design automation[1]. His maze routing algorithm promised something remarkable: a computer that could automatically find paths between points on a circuit board, freeing engineers from the tedious work of manually drawing traces. The paper sparked a research field, launched companies, and generated decades of optimism about the imminent automation of PCB design.

Over sixty years later, engineers still route boards by hand.

This isn’t a story about technology failing. It’s a story about a problem that refused to yield to the solutions that worked everywhere else. Understanding why illuminates something important about the current moment, when a new wave of AI tools promises, once again, to finally solve PCB routing. Some of that promise is real. Some of it echoes the same optimistic mistakes the field has made for half a century.

The Golden Age of Autorouter Optimism (1971-1990)

Lee’s original algorithm was elegant in its simplicity. Treat the PCB as a grid. Start at point A. Expand outward in waves until you hit point B. Trace back the path. The approach guaranteed finding a connection if one existed, and it ran on computers that had less processing power than a modern coffee maker.
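The whole procedure fits in a few lines. Here is a minimal sketch in Python (the grid encoding and function name are illustrative; real maze routers work on multi-layer grids with weighted costs):

```python
from collections import deque

def lee_route(grid, start, goal):
    """Maze-route from start to goal on a 2D grid.

    grid: list of lists; 0 = free cell, 1 = blocked (obstacle or trace).
    Returns a list of (row, col) cells on a shortest path, or None.
    """
    rows, cols = len(grid), len(grid[0])
    # Wave expansion: label each reachable cell with its distance from start.
    dist = {start: 0}
    frontier = deque([start])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == goal:
            break
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                frontier.append((nr, nc))
    if goal not in dist:
        return None  # no connection exists
    # Trace back: from the goal, repeatedly step to any neighbor whose
    # wave label is exactly one lower, until we reach the start.
    path = [goal]
    while path[-1] != start:
        r, c = path[-1]
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if dist.get((nr, nc)) == dist[(r, c)] - 1:
                path.append((nr, nc))
                break
    path.reverse()
    return path
```

Because the wave expands breadth-first, the first time it touches the target the traced-back path is a shortest one, which is the completeness guarantee the paper offered: if a connection exists, this finds it.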

The early results seemed to validate everything researchers hoped for. By the late 1970s, commercial autorouters such as those developed by Racal-Redac had begun to appear in PCB design systems, automating many routing tasks on simpler two-layer boards. Racal-Redac and, later, companies like Cadnetix helped build the early commercial PCB CAD market around routing automation tools[2]. The narrative wrote itself: as computers grew more powerful, autorouters would handle increasingly complex boards until manual routing became as obsolete as hand-drafting blueprints.

The algorithms improved steadily through the 1980s. Researchers developed hierarchical approaches that could handle larger boards by dividing them into regions[3]. Channel routers borrowed techniques from very-large-scale integration (VLSI) design. Rip-up-and-retry methods allowed tools to backtrack from dead ends and try alternative paths. Each advance pushed completion rates higher on benchmark problems.
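Of these, rip-up-and-retry is the easiest to illustrate. A toy sketch, assuming a single-layer grid where each routed net simply blocks cells and the only "negotiation" is moving a failed net to the front of the routing order (production routers rip up selectively and use congestion-based costs):

```python
from collections import deque

def _bfs(grid, start, goal):
    """Shortest path on a 2D grid (0 = free cell); returns cells or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path, cell = [], goal
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        for step in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = step
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and step not in prev:
                prev[step] = (r, c)
                queue.append(step)
    return None

def route_with_ripup(size, nets, max_tries=10):
    """Route each net (start, goal) on an empty size-by-size grid.

    Nets are routed one at a time; each routed trace blocks later nets.
    When a net cannot be routed, everything is ripped up and the failed
    net is moved to the front of the order (a crude retry heuristic).
    """
    order = list(range(len(nets)))
    for _ in range(max_tries):
        grid = [[0] * size for _ in range(size)]
        paths, failed = {}, None
        for i in order:
            start, goal = nets[i]
            path = _bfs(grid, start, goal)
            if path is None:
                failed = i
                break
            paths[i] = path
            for r, c in path:
                grid[r][c] = 1  # this trace is now an obstacle
        if failed is None:
            return paths  # every net routed
        order.remove(failed)
        order.insert(0, failed)
    return None  # gave up after max_tries reorderings
```

On a 4x4 grid with one net straight across row 1 and another down column 2, routing the horizontal net first walls the board in half and the vertical net fails; after one rip-up the vertical net routes straight and the horizontal net detours around it. This ordering sensitivity is exactly the "early choices constrain later options" problem the text describes.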

But something troubling lurked beneath the progress metrics. Engineers who actually used these tools noticed a pattern: the autorouters performed well on simple boards and academic benchmarks, but struggled with real production designs. A board with unusual constraints, tight spacing requirements, or unconventional component placement could send an autorouter into hours of computation before producing a result that still required substantial manual cleanup.

The fundamental issue was that routing isn’t purely a pathfinding problem. It’s a resource allocation problem wrapped in geometry wrapped in physics. Lee’s algorithm could find a path between two points, but it couldn’t reason about whether that path would make subsequent connections impossible. It couldn’t anticipate signal integrity issues or thermal problems. It couldn’t understand that a particular trace placement would work electrically but create a manufacturing nightmare.

Still, the limitations seemed temporary. Moore’s Law was doubling computer performance every two years[4]. Surely raw computational power would eventually overcome these challenges.

Why Moore’s Law Didn’t Save PCB Routing (1990-2010)

The two decades surrounding the millennium should have been the era when autorouters finally caught up. Processing power increased by roughly a factor of 10,000 between 1990 and 2010[5]. Memory became cheap enough to hold entire designs in RAM. Algorithms continued to improve, incorporating techniques from operations research, artificial intelligence, and constraint satisfaction.

And yet the gap between what autorouters could do and what engineers needed barely narrowed at all.

While compute followed Moore’s Law, PCB routing complexity grew combinatorially due to increasing density and constraint coupling. In 1990, a typical consumer electronics board might have four layers, a few hundred components, and traces measured in dozens of mils. By 2010, smartphones contained boards with twelve or more layers, thousands of components, and trace widths measured in single mils[6].

Consider what happens when you add a single layer to a PCB. You haven’t merely increased the problem size by a fixed amount. You’ve introduced an entirely new dimension of routing possibilities. Every trace can now potentially travel through that layer, interact with other traces passing through it, and create new constraint relationships. The computational complexity grows not linearly, but exponentially with the number of layers.

The same applies to component density. As component density increases, routing difficulty grows faster than linearly because congestion and constraint interactions compound[7]. The additional components create new blockages, new constraints, and new interactions that ripple through the entire design. What looked like steady algorithmic progress in the 1980s was actually researchers climbing a ladder while the mountain grew taller beneath them.

This period also revealed the manufacturing constraint problem in full force. Early autorouters could largely ignore fabrication concerns because the manufacturing tolerances of the 1980s were generous compared to design requirements. By 2000, that margin had disappeared. A valid route wasn’t just one that connected two points without crossing other traces. It needed to maintain specific clearances that varied based on voltage levels, satisfy impedance requirements that depended on layer stackup, avoid acid traps that could cause etching failures, and respect dozens of other rules that varied by fabricator.

Some vendors responded by adding constraint systems that let engineers specify these requirements. But this created a new problem: the constraint specifications themselves became enormously complex. A modern design might have hundreds of rules, many of them conditional or interrelated. Autorouters could check whether a solution violated constraints, but they couldn’t reason about constraints in a way that guided them toward valid solutions.
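The shape of that problem is easy to see in miniature. A hypothetical rule table in Python (the net classes, clearance values, and one-dimensional geometry are invented for illustration; real rule decks condition on layers, voltages, and fabricator capabilities):

```python
# Hypothetical rule table: minimum clearance in millimetres, keyed by
# the (sorted) pair of net classes on either side of the gap.
CLEARANCE_RULES = {
    ("signal", "signal"): 0.15,
    ("power", "signal"): 0.30,
    ("power", "power"): 0.50,
}

def min_clearance(class_a, class_b):
    """Required clearance between two net classes."""
    return CLEARANCE_RULES[tuple(sorted((class_a, class_b)))]

def violations(segments):
    """Flag pairs of parallel trace segments spaced closer than their rule.

    Each segment is (net_class, y_position_mm); the geometry is kept
    one-dimensional to keep the example small.
    """
    found = []
    for i, (cls_a, y_a) in enumerate(segments):
        for cls_b, y_b in segments[i + 1:]:
            if abs(y_a - y_b) < min_clearance(cls_a, cls_b):
                found.append((cls_a, cls_b))
    return found
```

Checking is the trivial direction; the hard problem the text describes is the inverse one: choosing trace positions so that nothing ever lands in `found`, across hundreds of interacting, sometimes conditional rules.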

The result was a tool category that occupied an uncomfortable middle ground. Autorouters were too sophisticated to abandon but too limited to trust. Engineers developed a workflow that persists today: run the autorouter to get an initial solution, then spend hours or days manually fixing the results. The tools saved some time but never delivered the autonomous operation their creators envisioned.

The Machine Learning False Starts (2010-2020)

When deep learning began transforming field after field in the early 2010s, PCB routing seemed like an obvious application. The problem had clear inputs and outputs. Millions of existing board designs could serve as training data. Neural networks had proven they could learn complex spatial relationships in domains from image recognition to game playing.

Researchers jumped in with enthusiasm. Papers appeared proposing convolutional networks that could learn routing patterns from existing designs. Reinforcement learning approaches framed routing as a game where an agent learned to place traces through trial and error. Generative models promised to produce complete routing solutions directly from component placements.

Almost none of it worked in practice. A 2024 keynote at the International Symposium on Physical Design[8] surveyed the state of the field and concluded that, despite years of research, the “lack of research progress in PCB routing algorithms” had made routing “a bottleneck in overall circuit board design time.”

The first issue was training data. Unlike image classification, where a million labeled photos can be scraped from the internet, PCB designs are proprietary. Companies guard their board files carefully, and even when designs can be shared, they lack the annotations that would make them useful for supervised learning. A training set of finished boards doesn’t tell you which routing decisions were good and which were compromises forced by earlier mistakes. This is a structural disadvantage: the reinforcement learning approaches that later showed promise in chip placement at Google succeeded partly because Google had access to its own large corpus of internal chip designs to train on. Most PCB tool vendors and researchers simply don’t have this resource.

The second issue was generalization. Neural networks excel at learning patterns within a distribution, but PCB designs don’t form a coherent distribution. A network trained on consumer electronics boards performs poorly on industrial controllers. A model that masters four-layer designs might fail on eight-layer boards. The variations in component types, design rules, and application requirements meant that any practically useful system would need to generalize far beyond its training data: exactly what neural networks struggle to do.

The third issue was verification. Even when ML models produced promising outputs, those outputs were effectively unauditable. A traditional autorouter might produce suboptimal results, but an engineer could trace through its logic and understand why it made specific decisions. A neural network’s routing decisions emerged from millions of opaque parameters. When the output contained errors, there was no way to diagnose whether the problem was training data, architecture, or fundamental limits of the approach.

The Chip Design Breakthrough (2021)

In June 2021, Google published a paper[9] in Nature that seemed to change everything. Researchers had developed a reinforcement learning system that could place components on computer chips as well as or better than human experts, and it could do so in hours rather than weeks.

The paper landed like a thunderclap. If AI could master chip placement, a problem closely related to PCB layout, surely circuit board automation would follow quickly. The media coverage was breathless. Industry observers predicted rapid transformation.

Three years later, a more nuanced picture emerged. Google’s approach, while genuinely innovative, solved a different problem than the one PCB designers face.

Chip design and PCB design share surface similarities but differ in fundamental ways. Chips are designed once, manufactured billions of times, and the design tools and manufacturing process are controlled by a single organization. This means chip designers can afford massive computational investments during design because the costs amortize across enormous production volumes. Google reportedly used thousands of TPU hours to train and run their system. This resource commitment makes no sense for a PCB that might be manufactured in quantities of thousands rather than billions.

In modern chip design, foundries publish comprehensive design rule manuals that completely describe what is and isn’t allowed in a layout to ensure it can actually be fabricated. These design rules, covering minimum widths, spacing, enclosures, density, and more, are enforced through design rule checking tools as an essential part of the physical design flow[10]. These rules are complex but finite and precise. PCB fabrication rules are neither. Every fab shop has slightly different capabilities. Rules that work at one vendor fail at another. Specifications that seem complete turn out to have implicit assumptions that only experienced engineers recognize.

Google’s system also solved placement, not routing. While related, these are different problems with different characteristics. Placement is a global optimization problem where the quality of a solution can be evaluated holistically. Routing is a sequential decision problem where early choices constrain later options in ways that are difficult to predict. The techniques that work brilliantly for one don’t transfer cleanly to the other.

The Nature paper’s real contribution was demonstrating that reinforcement learning could discover non-obvious solutions in physical design problems. But the path from that demonstration to practical PCB tools was longer and more complex than the initial excitement suggested.

The paper’s reception itself became a case study in the overpromise cycle this article describes. In 2023, researchers led by Igor Markov published a rebuttal in Communications of the ACM titled “Reevaluating Google’s Reinforcement Learning for Chip Macro Placement,” finding that a well-tuned simulated annealing approach (a decades-old technique) outperformed Google’s RL system while using less runtime. Nature published an addendum to the original paper in 2024. The episode is instructive: even in chip design, where the problem is better-defined and the resources more abundant, the gap between a compelling demonstration and a proven production tool turned out to be wider than it first appeared.

What Makes PCBs Uniquely Difficult

To understand why PCB routing has resisted automation for sixty years, you need to understand how it differs from similar problems that have been solved.

Routing in chip design is highly constrained: metal layers are often Manhattan-oriented, with each layer favoring orthogonal directions and switches between layers requiring vias[11]. Timing requirements are tightly specified and analyzed from physical geometry and interconnect delays, enabling precise static timing analysis and optimization[12]. These constraints limit the solution space in ways that make the problem tractable.

PCB routing operates in an environment of irregular constraints and contradictory pressures. Traces can run at any angle (though some angles are preferred for manufacturing reasons). Components come in thousands of package types with different pin arrangements. Timing requirements depend on signal characteristics that vary based on fabrication variations that aren’t known until after manufacturing. Thermal behavior, mechanical stress, and electromagnetic interference create requirements that interact in ways that resist clean mathematical formulation.

The knowledge problem compounds the technical challenges. Effective PCB routing requires understanding that lives in the heads of experienced engineers rather than in specification documents. Which trace geometries will cause manufacturing problems? How will a particular design choice affect reliability over the product’s lifetime? What will happen when this board operates at temperature extremes? These questions don’t have algorithmic answers. They require judgment accumulated over years of seeing what works and what fails.

This tacit knowledge is why autorouters have always required human supervision. The tools can explore the solution space, but they cannot evaluate solutions against criteria that have never been formally specified. An experienced engineer glances at a routed board and immediately spots problems that would require pages of rules to formally describe. That intuitive evaluation remains beyond what any routing tool, automated or AI-powered, can replicate.

Where We Actually Are Today

The honest assessment of PCB routing automation today is more nuanced than either the skeptics or the optimists suggest.

The skeptics overlook how much has changed. Modern computational approaches, including machine learning, have delivered real progress on specific subproblems. Automated constraint checking is far more sophisticated. Specialized routing tools for dense BGA (Ball Grid Array) escapes and differential pair management are significantly better than a decade ago. The tighter integration of simulation with layout has also reduced the costly iteration cycles that once made routing painfully slow.

At the same time, claims that full automation is just around the corner overstate the reality. The core challenges that limited earlier generations of tools have not disappeared. They have been incrementally improved, not fundamentally eliminated. Altium’s own documentation still uses 80% autorouter completion as the benchmark for a well-prepared design. This is an implicit acknowledgment that the remaining 20% requires human judgment. Engineers continue to route critical sections by hand, supported by more capable assistants rather than replaced by them.

The most promising developments focus on augmentation rather than replacement. Instead of full automation, modern tools enforce constraints in real time, provide interactive routing assistance, and integrate simulation and verification so engineers can focus on higher-level decisions[13]. This pragmatic direction breaks with decades of chasing full automation and focuses instead on empowering engineers. Some tools are pushing further, using AI to handle larger portions of the routing task under human supervision. The results are promising but preliminary. We’re in an experimental period where the capabilities are genuinely new but the limits aren’t yet clear.

What probably won’t change anytime soon is the fundamental nature of the human-tool relationship. PCB design requires integrating electrical, mechanical, thermal, manufacturing, and economic considerations in ways that depend heavily on application-specific knowledge. That integration is what makes an experienced engineer valuable, and it’s what automated tools, whether traditional algorithms or modern AI, struggle to replicate.

The sixty-year routing problem was never really a routing problem. It was a knowledge representation problem, an optimization problem, a verification problem, and a human-computer interaction problem all tangled together. The researchers were not wrong about what was possible. They underestimated how tightly coupled the subproblems were and how long it would take for advances in each to compound.

The current generation of tools may finally be approaching the capabilities that researchers in 1971 imagined were just around the corner. But the engineers of 2071 may still need to understand their designs deeply, make judgment calls that can’t be automated, and take responsibility for whether a board works in the real world. Some problems don’t get solved. They get managed, gradually better, one generation of tools at a time.

The practical question for working engineers isn’t whether these tools will eventually handle most routine layout work. The trajectory of adjacent fields suggests they will. It’s whether the current generation has crossed the threshold from interesting research to practical utility, and if so, for which parts of the design. The answer, as explored elsewhere in this series, may follow a familiar pattern: the electrically straightforward majority of a board is increasingly tractable, while the fraction that demands genuine expertise remains firmly in human hands.

The history of electronic design automation reveals patterns that apply far beyond circuit boards: optimism cycles, the gap between demos and deployment, and the persistence of problems that seem like they should have been solved long ago. The story continues, with outcomes that remain genuinely uncertain.

  1. https://www.semanticscholar.org/paper/An-Algorithm-for-Path-Connections-and-Its-Lee/6b9cbd70349aac279cb69ffb6017ee6504a729b9
  2. https://www.pcdandf.com/pcdesign/index.php/menu-research/pcb-design-history/7736-pcb-design-industry-timeline
  3. https://dl.acm.org/doi/abs/10.1109/43.2182
  4. https://newsroom.intel.com/press-kit/moores-law
  5. Karl Rupp, “40 Years of Microprocessor Trend Data”: https://www.karlrupp.net/2015/06/40-years-of-microprocessor-trend-data/ (underlying data: https://github.com/karlrupp/microprocessor-trend-data)
  6. https://www.sfcircuits.com/pcb-school/pcb-trace-widths
  7. https://www.protoexpress.com/blog/hdi-pcb-routing-challenges; also: https://s3vi.ndc.nasa.gov/ssri-kb/static/resources/High-Speed%20PCB%20Design%20Guide.pdf
  8. https://dl.acm.org/doi/10.1145/3626184.3635285
  9. https://doi.org/10.1038/s41586-021-03544-w
  10. https://www.egr.msu.edu/classes/ece410/demlow/files/Foundries%20and%20Design%20Rules.pdf
  11. https://enicslabs.com/wp-content/uploads/2025/06/Lecture-9-Routing.pdf
  12. https://www.researchgate.net/publication/251102871_Timing_Closure
  13. https://resources.altium.com/p/pcb-design-process-the-eda-design-approach