The shortest path, the traveling salesman, and an unsolved question

by Hari Balasubramanian

The Shortest Path

How does Google Maps figure out the best route between two addresses? The exact algorithm is known only to Google, but it almost certainly involves solving some variation of what is called the shortest path problem [1]. Here is the simplified version. Suppose we have a network of nodes (cities, towns, landmarks, etc.) connected by links (roads), and we know the time it takes to travel each link. What, then, is the shortest path from a starting node A to a destination node D?

[Figure: a four-node network; the rectangles give the link travel times.]

In the instance above, there are 4 nodes. The rectangles provide the link travel times. The B-C link takes 2 time units to travel; the A-D link takes 5; the C-D link takes 1; and so on. The five possible routes from A to D are: A-D; A-B-D; A-C-D; A-B-C-D; and A-C-B-D. The easily spotted shortest path is A-C-D, with a total length of 3. But what if a network has hundreds of nodes and links? It would be impossible to visually identify the shortest path. We would need an efficient algorithm. By that I mean an algorithm whose execution time on a computer stays reasonable even when the problem size – the number of nodes or links in the network – gets bigger.

In 1959, Edsger Dijkstra published just such an algorithm. Dijkstra's Algorithm doesn't simply list all possible routes between the start and destination nodes and then choose the shortest. That kind of brute-force approach wouldn't work, given how dramatically the number of possible routes increases even with a slight increase in network size. Instead, Dijkstra's Algorithm progressively explores the network in a simple yet intelligent way. It begins with the start node A, looks at all its immediate neighbors, then moves on to the closest neighbor, and from there updates travel times to all as-yet-unvisited nodes if new and shorter routes are discovered. I am fudging important details here, but this basic procedure of moving from a node to its nearest neighbor and updating travel times is repeated deeper and deeper into the network until the shortest path to the destination is confirmed. Wikipedia has a good animation illustrating this.
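
For readers who like to see things concretely, here is a minimal sketch of the idea in Python, run on the 4-node network from the figure. A caveat: the text spells out only three of the link times (B-C = 2, A-D = 5, C-D = 1), so A-C = 2 is inferred from the shortest path A-C-D having length 3, and A-B = 3 and B-D = 1 are values I've assumed so that the numbers match the tour lengths discussed later in this post.

```python
import heapq

# Link travel times for the 4-node network. B-C = 2, A-D = 5 and C-D = 1
# come from the text; A-C = 2 is inferred from the shortest path A-C-D
# having length 3; A-B = 3 and B-D = 1 are assumed values.
graph = {
    "A": {"B": 3, "C": 2, "D": 5},
    "B": {"A": 3, "C": 2, "D": 1},
    "C": {"A": 2, "B": 2, "D": 1},
    "D": {"A": 5, "B": 1, "C": 1},
}

def dijkstra(graph, start):
    """Shortest travel time from `start` to every other node."""
    dist = {node: float("inf") for node in graph}
    dist[start] = 0
    frontier = [(0, start)]              # (travel time so far, node)
    visited = set()
    while frontier:
        d, node = heapq.heappop(frontier)  # nearest unvisited node
        if node in visited:
            continue
        visited.add(node)                # d is now confirmed shortest for node
        for neighbor, time in graph[node].items():
            if d + time < dist[neighbor]:   # a shorter route discovered
                dist[neighbor] = d + time
                heapq.heappush(frontier, (dist[neighbor], neighbor))
    return dist

print(dijkstra(graph, "A"))  # {'A': 0, 'B': 3, 'C': 2, 'D': 3}
```

The value printed for D is 3, matching the A-C-D path we spotted by inspection.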

How fast does the algorithm run? Let's say there are V nodes. Then, in the worst case, Dijkstra's Algorithm will take on the order of V × V steps to compute the optimal path. An algorithm like this, whose running time grows polynomially with the problem size, is something we will call efficient (of course, lower-order polynomials, such as the square function, are preferable; V raised to the power 50 wouldn't be helpful at all). So a 10-node problem might take around 100 steps; a 1000-node problem will take around 1,000,000 steps. This increase is something a modern-day computer can easily handle. The algorithm might do much better in most instances, but the worst case is commonly used as a conservative measure of efficiency. There are faster variations of Dijkstra's Algorithm, but for simplicity we'll stick to the original.

The Traveling Salesman (TSP)

Now consider a slightly different problem. We are still interested in the shortest route, but we want the route to be such that it starts at some node A, covers all other nodes in the network without visiting any of them twice, and finally returns to A. In other words, we are interested in the shortest all-city tour that starts and finishes at A. This is the traveling salesman problem (TSP). The person delivering the mail; the therapist traveling to different patients' homes in the city; the truck dropping off supplies at different stores: all face some version of the TSP (though no one may think of it as that, and there may be other practical constraints). The TSP isn't restricted to people or vehicles touring destinations; it also arises in genome mapping, in the sequence in which celestial objects should be imaged, and in how a laser should tour thousands of interconnections on a computer chip.

What is the shortest tour in the simple 4-node instance we saw in the figure earlier? Suppose we have to start and end at A. Then there are six possible tours that reflect the order in which we visit the other three cities: A-B-C-D-A; A-B-D-C-A; A-C-B-D-A; A-C-D-B-A; A-D-B-C-A; and A-D-C-B-A. The shortest tours are A-C-D-B-A and A-B-D-C-A (each is the other in reverse), both with a total length of 7.
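
Here is that brute-force enumeration in Python, using the same network as before (including the assumed A-B = 3 and B-D = 1 link times from the earlier sketch):

```python
from itertools import permutations

# Same network as in the earlier sketch (A-B = 3 and B-D = 1 are assumed).
graph = {"A": {"B": 3, "C": 2, "D": 5}, "B": {"A": 3, "C": 2, "D": 1},
         "C": {"A": 2, "B": 2, "D": 1}, "D": {"A": 5, "B": 1, "C": 1}}

def tour_length(tour):
    """Total travel time of a tour, including the return leg to the start."""
    return sum(graph[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

start = "A"
others = [n for n in graph if n != start]
for p in permutations(others):       # (n-1)! orderings of the other cities
    tour = (start,) + p
    print("-".join(tour + (start,)), tour_length(tour))
```

The output lists all six tours; the two of length 7, A-B-D-C-A and A-C-D-B-A, are exactly the pair identified above.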

Things get very complicated if there are hundreds of nodes. It turns out that there is no efficient algorithm yet that can guarantee the shortest possible tour for large instances. The only algorithm guaranteed to work is listing all possible tours and then picking the best. When there are 4 cities this is not a problem, as only 6 tours have to be evaluated. When there are 11 cities, there are suddenly 3.6 million possible tours. When there are 22 cities, the number of possible tours is 51,090,942,171,709,440,000 (I got this number from William Cook's book [2]). Compare this with the 22 × 22 = 484 steps that Dijkstra's Algorithm needs, in the worst case, to find the shortest path between any two nodes in a 22-city network. Our brute-force algorithm for the traveling salesman is terribly inefficient. We may still get a modern-day supercomputer, working full time, to produce an answer to the 22-city traveling salesman. But listing all possible tours for a 100-city instance is beyond the best computing power currently available on the planet [2].
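
The growth is easy to confirm. With the counting convention used above, where a tour and its reverse are counted separately, an n-city problem has (n − 1)! possible tours:

```python
import math

# Number of tours from a fixed start city, counting reversals separately.
for n in [4, 11, 22]:
    print(n, math.factorial(n - 1))
# 4 6
# 11 3628800
# 22 51090942171709440000
```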

[Image: Newsweek]

Now, there are much smarter algorithms that have successfully tackled large TSP instances. See the image above, and other examples at this website. In fact, an 85,900-city instance has been solved optimally – an astonishing achievement (though it took 84.8 years of computing time [2]). But no algorithm has been able to guarantee finding the best tour for every large instance of the TSP. What works for one 86,000-city instance may not work for another. What Dijkstra's Algorithm efficiently guarantees for every large instance of the shortest path problem, no one has been able to achieve for the traveling salesman.

The Unsolved P versus NP Question

So what is it that makes the shortest path “easy” to solve in a large network and the traveling salesman “hard”? Certainly both problems are easy to understand; a layperson can figure out answers for small instances. The intuition underlying both is also clear to everyone: to make use of short links as much as possible – with the occasional, unavoidable longer link thrown in – so that the final routes or tours, which sum up these component links, remain short. The two problems seem closely related. It would seem that even large instances of the TSP, like the shortest path, should be solvable in reasonable time. Yet, despite more than 50 years of intensive research, no one has found an efficient algorithm!

In moving from the shortest path between two cities to the shortest all-city tour in a network, we have crossed an unproven but widely accepted “boundary” that currently separates “easy” and “hard” optimization problems. To understand what this means more formally, we'll have to redefine our problems as decision questions with “yes” or “no” answers.

Consider the decision version of the shortest path. Is there a path from the start node to the destination node whose length is less than some value L? To answer this “yes” or “no” question for a large network, all we have to do is run Dijkstra's algorithm, figure out what the shortest path is, and check whether its length is less than L. So with Dijkstra's algorithm we can efficiently (1) find a solution if one exists, and (2) verify its correctness. All decision problems for which these conditions are met are said to belong to a class called P. The term P stands for Polynomial.
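
With the dijkstra function and the graph dictionary from the earlier sketch, the decision version becomes a one-line wrapper:

```python
def short_path_exists(graph, start, end, L):
    """Decision version: is there a path from start to end of length
    less than L? Answered efficiently by running Dijkstra's algorithm
    and comparing the shortest length against L."""
    return dijkstra(graph, start)[end] < L

print(short_path_exists(graph, "A", "D", 4))  # True: A-C-D has length 3
```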

Contrast this with the decision version of the traveling salesman. Is there an all-city tour in the network whose length is less than some value L? If somebody provides us a candidate tour, we can easily verify whether the tour covers all cities and whether its length is indeed shorter than L. So checking the validity of a candidate tour is easy: we can design an efficient algorithm even when there are a large number of cities. Decision problems whose candidate solutions can be checked efficiently like this belong to a class called NP, which stands for the strange-sounding Non-deterministic Polynomial. But actually finding a tour whose length is less than L in a large network? Well, there is no efficient algorithm yet. The decision version of the TSP belongs, in fact, to the hardest problems in NP – a subclass called NP-complete.
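
To see how easy the checking is, here is a minimal verifier sketch in Python, again using the graph dictionary from the earlier sketches. It makes a single pass over the candidate tour, however many cities there are:

```python
def is_valid_short_tour(tour, graph, L):
    """Check a candidate tour: does it visit every city exactly once,
    and is its total length (including the return leg) less than L?"""
    if sorted(tour) != sorted(graph):    # every city, each exactly once?
        return False
    length = sum(graph[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))
    return length < L

print(is_valid_short_tour(["A", "C", "D", "B"], graph, 8))  # True: length 7
```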

The decision version of the TSP is just one of many, many problems, relevant in practice, that fall in the NP-complete category – from Boolean logic problems to string processing to games and puzzles. Just one other example is the subset sum problem: given a set of integers that can be either positive or negative, is there a non-empty subset of them whose sum is 0? Notice how deceptively easy this problem sounds.
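
To make "deceptively easy" concrete, here is the obvious brute-force attack on subset sum in Python. It is only a few lines long, but it may have to examine all 2^n − 1 non-empty subsets of an n-number list, so it chokes quickly as n grows:

```python
from itertools import chain, combinations

def zero_subset_exists(numbers):
    """Is there a non-empty subset of `numbers` summing to zero?
    Brute force: try all 2^n - 1 non-empty subsets."""
    subsets = chain.from_iterable(
        combinations(numbers, k) for k in range(1, len(numbers) + 1))
    return any(sum(s) == 0 for s in subsets)

print(zero_subset_exists([3, -9, 8, 4, 5, -7]))  # True: 3 + 4 + (-7) = 0
```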

In the early 1970s, Stephen Cook and Richard Karp — and Leonid Levin, working independently in the Soviet Union — proved a very deep result that inextricably tied together the fates of all these NP-complete decision problems. They showed that all NP-complete problems, although seemingly different in their manifestations — what could Boolean logic and subset sum and the traveling salesman possibly have in common? — are fundamentally identical in their underlying structure. This remarkable unity implies that if you find an efficient algorithm for just one NP-complete problem, you will have found an efficient algorithm for all such problems. Solve the subset sum problem or one of the Boolean problems efficiently and you will have solved the traveling salesman!

The unsolved question — one of the biggest in mathematics and computer science — is whether there exists such an all-conquering efficient algorithm (which would imply P = NP), or whether the NP-complete problems are somehow intrinsically harder than the problems in P (P ≠ NP). The Clay Mathematics Institute (CMI) has a million-dollar reward for anyone who proves the result, either way:

“If it is easy to check that a solution to a problem is correct, is it also easy to solve the problem? This is the essence of the P vs NP question. Typical of the NP problems is that of the Hamiltonian Path Problem: given N cities to visit, how can one do this without visiting a city twice? If you give me a solution, I can easily check that it is correct. But I cannot so easily find a solution.” [link]

In his wide-ranging essay on the P vs NP question, Scott Aaronson, a computational complexity researcher at MIT, poses a literary analogy: “What is it about our world that makes it easier to recognize a great novel than to write one, easier to be a critic than an artist?” One reason, he argues, is the explosive, exponential growth in the number of possibilities – the sheer number of paths one can choose from. We may get lucky once in a while, but the vast majority of paths, although promising for a time, will not lead us to optimal outcomes.

If our current model of computing using binary bits is incapable of dealing with such complexity, then is there another type of computing device that can? Aaronson points out that even the qubits of quantum computing – implemented through particles that can be spin up (1), spin down (0), or in a superposition of both, enabling (as an example) 1000 such particles to represent an astonishing 2^1000 possibilities simultaneously – will still have trouble breaking the NP-complete barrier. Indeed, Aaronson explains that while hypothetical computing devices that solve NP-complete problems efficiently can be imagined, they would require implausible kinds of physics – such as allowing time travel, or sending signals faster than the speed of light. This leads him to predict that “the hardness of NP complete problems…will be seen as a fundamental [physical] principle that describes part of the essential nature of the universe.”

I've overstretched myself in this last part: I know little to nothing about quantum computing and related physics, so I won't say much more – I am sure there are better informed 3QD readers who can comment.

References

1. “Google Maps — It's All Just One Big Graph”, link.

2. Much of my information on the TSP comes from William Cook's excellent, if somewhat technical, book, In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation. A companion website, with lots of information about the traveling salesman, is here.

3. Scott Aaronson, “The Limits of Quantum Computers”, original draft available here. The actual published piece, with nice, colorful visuals, is here.