CHAPTER 5
UNSOLVABLE AND INTRACTABLE PROBLEMS
5.0 Examples of Unsolvable Problems
-----------------------------------
An unsolvable (or "impossible") problem is one for which there is NO algorithm
at all, not even an inefficient, exponential time one. Here are some examples -
in each case it has been rigorously PROVEN that there is no algorithm.
HALTING PROBLEM - Given an arbitrary algorithm and its input, determine whether
the algorithm will halt (terminate), or run forever in an "infinite loop". This
is the most infamous unsolvable problem.
EQUIVALENCE PROBLEM - Given two programs, test to see if they are equivalent.
VERIFICATION PROBLEM - Given the statement of a problem and a proposed program
for solving it, verify whether the program really does solve the problem (i.e.
check that a program is correct).
OPTIMIZATION PROBLEM - Given a program, find the fastest equivalent program
(unsolvable because the equivalence testing problem is unsolvable).
THEOREM RECOGNITION PROBLEM - Given a mathematical statement, test whether or
not it is a theorem (i.e. whether or not it is "true"). For example, test a
first order logical formula to see if it is valid (in all models), or test a
statement of number theory to see if it holds.
GRAMMAR PROBLEM - Given a language specified by some (context free) grammar,
find out if there is a statement in the language which can be parsed in more
than one way (i.e. find out if the grammar is ambiguous). Also, given two
grammars, find out whether they define the same language.
SECURITY PROBLEM - Given a security scheme for an operating system, test
whether it can be broken, just by using normal commands (e.g. "passwd", "chmod"
etc.) in normal ways.
WORD CORRESPONDENCE PROBLEM - Given two arrays of "words", such as
X = ["a", "abaaa", "ab"] and Y = ["aaa", "ab", "b"], find a word which can be
made up from the same sequence of choices from either array. For example, the
word "abaaaaaab", can be made up from the choices X2,X1,X1,X3, or from
Y2,Y1,Y1,Y3.
As an exercise, try showing that X = ["abb", "a", "bab", "baba", "aba"],
Y = ["bbab", "aa", "ab", "aa", "a"] also has a corresponding word, but that
X = ["ab", "baa", "aba"], Y = ["aba", "aa", "baa"] do not have one, and nor do
X = ["bb", "a", "bab", "baba", "aba"], Y = ["bab", "aa", "ab", "aa", "a"].
(A brute-force search sketch is given at the end of this section.)
TILING PROBLEM - Given a number of types of tile, decide whether any size of
room can be tiled using only the types of tile given, without turning any tiles
round, and with edges matching. There is a good discussion of this in the book
"Algorithmics" by Harel. Try showing that the tiles
 ----------    ----------    ----------
|    GR    |  |          |  |    BL    |
|          |  |G         |  |         G|
|          |  |R         |  |         R|
|          |  |    BL    |  |    GR    |
 ----------    ----------    ----------
can tile any size room, but that if you swap BL and GR on the bottom edges of
the last two tiles, then you cannot tile large areas. Note that one fact which
helps make the problem unsolvable is that there are sets of tiles which can
tile the whole plane, but not in any repeating pattern. The problem of tiling
any size room is equivalent to the problem of tiling the whole plane (though
the reason is not immediately obvious).
INTEGER POLYNOMIAL PROBLEM - Given a set of polynomial equations in several
variables, decide whether the equations have a solution in INTEGER values of
the variables (this is Hilbert's tenth problem).
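As a footnote to the word correspondence problem, the small exercises above
can be checked mechanically if we accept a depth limit on the search. Here is
a minimal sketch in Pascal (using Free Pascal style strings for brevity; the
names and the depth limit are our own). The unsolvability of the problem means
precisely that no fixed depth limit is enough in general.

   program pcp;
   const k = 3;                        { number of words in each array }
   var X, Y: array [1..k] of string;

   { is one of the two strings a prefix of the other? }
   function prefix (s, t: string): boolean;
   begin
      if length (s) <= length (t)
      then prefix := copy (t, 1, length (s)) = s
      else prefix := copy (s, 1, length (t)) = t
   end;

   { try every sequence of choices up to the given depth, extending only
     while the string built from X and the string built from Y agree }
   function search (sx, sy: string; depth: integer): boolean;
   var i: integer; found: boolean;
   begin
      found := (sx = sy) and (sx <> '');
      if (not found) and (depth > 0) and prefix (sx, sy) then begin
         i := 1;
         while (i <= k) and not found do begin
            found := search (sx + X[i], sy + Y[i], depth - 1);
            i := i + 1
         end
      end;
      search := found
   end;

   begin
      X[1] := 'a';   X[2] := 'abaaa'; X[3] := 'ab';
      Y[1] := 'aaa'; Y[2] := 'ab';    Y[3] := 'b';
      writeln (search ('', '', 10))   { TRUE - the sequence X2,X1,X1,X3 works }
   end.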
5.1 Unsolvability of The Halting Problem
----------------------------------------
The halting problem is the most famous and important unsolvable problem, and it
was the first to be proven unsolvable. There is NO algorithm for solving this
problem. If there were, then one could settle Fermat's Last Theorem (for
example) as follows.
Write an algorithm which searches for a solution in positive integers to
X^n + Y^n = Z^n (for n >= 3) by running through all the possibilities, and
then halts when it
finds one. This search algorithm does not solve the problem, because it might
run indefinitely. If we could solve the halting problem though, we would not
have to RUN the search algorithm, but merely TEST it to see if it would
terminate. This would tell us in a finite time whether or not there was a
solution to the equation (and if there was, THEN we could run the search
algorithm to find it).
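To make the search algorithm concrete, here is a minimal sketch in Pascal
(the names are our own, and integer overflow is cheerfully ignored, so treat
it as an illustration only). The point is simply that this program halts if
and only if the equation has a solution - which is exactly what a halting
tester would tell us without running it.

   program fermatsearch;
   var b, x, y, z, n: longint;

   function power (base, e: longint): int64;
   var i: longint; r: int64;
   begin
      r := 1;
      for i := 1 to e do r := r * base;
      power := r
   end;

   begin
      b := 3;
      while true do begin            { the bound b grows without limit, so  }
         for x := 1 to b do          { every (x,y,z,n) is eventually tried  }
          for y := 1 to b do         { (and re-tried - this is a sketch,    }
           for z := 1 to b do        { not an efficient enumeration)        }
            for n := 3 to b do
               if power (x, n) + power (y, n) = power (z, n) then begin
                  writeln ('solution: ', x, ' ', y, ' ', z, ' ', n);
                  halt
               end;
         b := b + 1
      end
   end.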
To prove informally that there is no algorithm to test whether a program
halts, suppose there were. Then we could write an algorithm for testing whether
a program, when given its own source text as input, gets into a loop (i.e.
fails to halt) as follows:
To test whether program P loops when applied to itself
------------------------------------------------------
test whether program P, applied to itself, halts (use halting algorithm)
if we get the answer false, return true
if we get the answer true, go into a (deliberate) loop
Now imagine applying a program for this to itself. If it halts, it must be
because it doesn't halt, and if it doesn't halt, it must be because it does.
This is quite impossible, so there is no halting algorithm.
To make this a bit more precise, let's simplify things, and only consider pure
functions (in Pascal, say) which take a string as argument, return a boolean as
result, and which don't use any global variables or do any input/output. For
example
   function long (s: string): boolean;
   var n: integer;
   begin
      n := length (s);
      if n > 42 then long := true
      else long := false
   end;
Problems involving only true/false answers are called decision problems, and
unsolvable decision problems are often called undecidable.
In Pascal, to allow the string to be of arbitrary length, it would have to be
represented as a linked list of string-fragments. Now the source text of this
function can itself be treated as a string (with spaces instead of newlines if
you like - it makes no difference):
"function long (s: string): boolean; var n: integer; begin .... end;"
Now suppose the halting problem can be solved. Then there is a function
   function halts (s: string): boolean;
   ....
   begin
   ....
   end;
which takes as argument the source text of a function, together with an
argument string for it, concatenated into a single string. For example, the
function "halts" would give the answer "true" on the following input string,
because the function "long" halts when applied to the string "abcdefghij".
"function long .... end;abcdefghij"
Now if we have such a halting function, we can include it in a function
"loopsonself" as follows:
   function loopsonself (s: string): boolean;
   var ss: string;
       result: boolean;
      function halts (s: string): boolean;
      ....
      end;
   begin
      ss := concat (s, s);
      result := halts (ss);
      if result = false then loopsonself := true
      else while true do ss := ss
   end;
Here, concat is a function, which we can assume to be built in, that
concatenates two strings. The loopsonself function takes the source text of a
function, and tests whether applying that function to its own source text
loops (fails to halt). If so, it halts and returns true, and if not it loops.
Now
let "lostext" be the text of this function:
lostext = "function loopsonself .... end;"
Now ask yourself whether loopsonself loops when applied to itself, i.e.
whether the following call halts or loops.
loopsonself (lostext)
If it HALTS, it must be because "halts (concat (lostext, lostext))" returns
false, i.e. because loopsonself, applied to itself, LOOPS. If it LOOPS, it
must be because "halts (concat (lostext, lostext))" returns true, i.e. because
loopsonself, applied to itself, HALTS. This is quite impossible, so the
halting function cannot exist, and the halting problem is unsolvable.
5.2 Unsolvability of The Equivalence Problem
--------------------------------------------
The proof of unsolvability for the halting problem is subtle and difficult. It
would be tedious to have to find the same kind of proof for other problems.
However, once we have one proven unsolvable problem, that makes it easier to
prove other problems unsolvable.
For example, take the equivalence problem - the problem of testing two programs
to see if they always give the same results. To show that equivalence is
unsolvable, we show that if it was solvable, we would be able to solve the
halting problem. In other words, we REDUCE the halting problem to the
equivalence problem as follows:
To test whether program P halts on input I
------------------------------------------
construct a new program Q which ignores its input, runs P on I, then
returns true
construct a trivial program T which always returns true
find out whether Q and T are equivalent
if so P halts on I, otherwise it doesn't
This shows that if we could solve the equivalence problem, we could use it as
a subroutine to solve the halting problem, so the equivalence problem is
unsolvable.
Note that usually, an unknown problem is reduced to a known problem - in this
case a known problem (halting) is reduced to an unknown one (equivalence). This
is because we are trying to prove a negative result about equivalence.
5.3 Examples of Intractable Problems
------------------------------------
We are more concerned in this chapter with intractable problems - that is,
problems which CAN be solved, but not within a reasonable time, because only
exponential algorithms are known.
A problem is TRACTABLE (or in class P) if it has a polynomial algorithm for its
solution. A polynomial algorithm is one which takes a time O(n) or O(n^2) or
O(n^3) or .... An intractable problem is one for which only exponential time
algorithms are known (time O(2^n) or O(n!) or ...). We will see later that we
can PROVE beyond reasonable doubt that a problem has no polynomial algorithm.
The technical term for problems which have such a proof is NP-COMPLETE.
Examples of intractable problems (almost all NP-complete) are:
THE TRAVELLING SALESPERSON PROBLEM - Given a collection of towns, and a network
of all the inter-town distances, (note that distance A->B need not equal B->A),
find the shortest route which visits every town once and gets back to where it
started. This is perhaps the most infamous intractable problem. The obvious
algorithm tries all possible orderings (permutations) of towns and finds the
length of each. This is an O(N!) algorithm if there are N towns, and is
virtually useless on problems with more than about 10 towns. All known
algorithms for finding THE shortest route are exponential. There are
applications to airline routes, oil pipeline layouts, job scheduling etc.
LOGICAL SATISFIABILITY PROBLEM - Given a formula of boolean algebra, e.g.
(x & y => z) & (x or ~y => ~z), find "true" or "false" values for the boolean
variables which satisfy the formula, i.e. make it true. This is the most
central problem in the theory of intractable problems. The obvious algorithm
is to try each possibility, "true" or "false", for each of the variables - at
least an O(2^n) algorithm for n variables.
LONGEST PATH PROBLEM - Given a general network (not a DAG), and two nodes A and
Z, find the longest path from A to Z which does not visit any node twice.
MULTI-MACHINE JOB SCHEDULING PROBLEM - Given a number of machines and a number
of jobs which take varying but known times, assign the machines to the jobs to
minimise the total time. This is intractable even with 2 machines where the
object is to split the jobs into two sets with an equal amount of work in each
set. Applies to factories as well as operating systems.
CODE GENERATION PROBLEM - Given a high-level program, generate the optimum
equivalent machine code in some sense (without changing the sequence of
operations, e.g. find the optimum use of registers).
TIMETABLE PROBLEM - Given a number of courses and timetable slots, and some
pairs of courses which can't be given in the same slot because students may
take both, find a timetable which uses the least number of slots.
INTEGER LINEAR PROGRAMMING PROBLEM - Given a set of simultaneous linear
equations or inequalities, find an INTEGER solution maximising some function.
(Can be solved quickly for reals using the simplex algorithm).
DIGITAL CIRCUIT DESIGN PROBLEM - Given a boolean expression, find the minimal
circuit which calculates the expression.
DIGITAL CIRCUIT LAYOUT PROBLEM - Given a circuit, find the minimum number of
layers needed to put it on a chip, or the minimum area needed for a given
number of layers.
GENERALISED GAMES - Given a chess or draughts position (say) on an N x N
board, determine which player has the winning move. (Strictly, generalised
games are even harder than NP-complete.) This suggests that no-one will ever
find a sneaky quick way of playing chess - intelligence with learning is the
only real answer.
It is important to be able to recognise intractable problems so that you do not
waste time looking for fast algorithms for them, but instead concentrate on
simpler sub-problems or on approximate (heuristic) algorithms, i.e. ones which
give a "good" answer rather than the optimum answer. We need some theoretical
result which tells us that a problem is intractable.
5.4 Polynomial Reducibility
---------------------------
No way is yet known of giving a 100% proof that these problems have only
exponential algorithms. However, we do have a proof "beyond reasonable
doubt", based on the
idea of the RELATIVE complexity of two problems. We use the idea of "reducing"
one problem to another, in the same way that we reduced the halting problem to
the equivalence problem to show that the equivalence problem was also
unsolvable. This time though, we insist that the reduction be carried out in
polynomial time.
We say that a problem X (POLYNOMIALLY) REDUCES to a problem Y, written X <= Y,
if there is a polynomial algorithm for solving X which uses a (hypothetical)
algorithm for Y as a procedure. This shows that if there is a polynomial
algorithm for Y, then there is one for X, i.e. X is no harder than Y, and hence
the notation X <= Y.
For example, suppose we have a new undirected travelling salesperson problem
(UTSP) in which roads are two-way, so that the distance from A to B is always
the same as the distance from B to A. This would seem to be easier than the old
TSP.
The new UTSP is just a special case of the old TSP, since a two-way road
between A and B is the same as two separate one-way roads of the same length.
We can solve the new UTSP by using a procedure for the old TSP as follows. Take
the undirected network and replace each edge by a pair of directed edges of the
same length, then use the TSP algorithm on the resulting directed network.
This shows that the UTSP reduces to the old TSP, i.e. UTSP <= TSP.
UTSP PROBLEM                        USE TSP ALGORITHM ON

                                            5
                                       ----->-----
       5                              /           \
 A ----------- B                     A -----<----- B
                                            5
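As a fragment of Pascal (with a made-up matrix type, and assuming a
hypothetical procedure tsp (d, n) for the directed problem), the reduction is
little more than copying the distance matrix symmetrically:

   type matrix = array [1..100, 1..100] of integer;

   procedure utsp (var u: matrix; n: integer);
   var d: matrix;
       i, j: integer;
   begin
      for i := 1 to n do
         for j := 1 to n do begin
            d[i, j] := u[i, j];   { the one-way road i -> j ...          }
            d[j, i] := u[i, j]    { ... and its reverse, the same length }
         end;
      tsp (d, n)          { a single call to the hypothetical TSP procedure }
   end;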
It turns out that the old TSP can also be reduced to the new UTSP. We can take
a TSP network with directed edges and transform it (in polynomial time) into an
undirected network on which we can use a (hypothetical) UTSP algorithm. The
transformation is as follows.
Each town A in the TSP problem is transformed into three towns AIN, A, AOUT.
The distances from A to AIN and AOUT are very small, and distances from A to
anything else are very large. An old road which ran from A to B (say) now runs
from AOUT to BIN (and is two-way).
TSP PROBLEM USE UTSP ALGORITHM ON
AIN--A--AOUT
A / \ \
/|\ / \ \
/ | \ / \ \
^ ^ v / \ \
/ | \ / DIN---D---DOUT \
/ D \ / / \ \ \
/ / \ \ \ / / \ \ \
/ ^ ^ v \ / / \ \ \
/ / \ \ \ // \ \\
C -------<--------- B COUT---C---CIN----BOUT---B---BIN
An optimum route in the new undirected network must visit A (because it must
visit all the nodes) and when it does it must use the two roads AIN---A and
A---AOUT (because these are short and the other roads involving A are long).
Distances from AIN to BIN or AOUT to BOUT (say) are very large, so this forces
an optimum route to go (eg) --A--AOUT--BIN--B--, thus effectively using the
one-way road from A to B.
It is not hard to check that an optimum route in the UTSP problem on the
right corresponds to an optimum route in the TSP problem on the left. Thus the
TSP can be solved in polynomial time with only a single call to a procedure
for the UTSP, and TSP <= UTSP.
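The transformation in this direction can also be sketched as a Pascal
fragment, re-using the matrix type above (so assuming 3*n fits it), with our
own encoding: town i becomes towns 3i-2 (IN), 3i-1 (the town itself) and 3i
(OUT), "very small" is taken as 0, and "very large" as a constant bigger than
any possible tour.

   const big = 1000000;       { larger than any possible tour length }

   procedure transform (var d: matrix; n: integer; var u: matrix);
   var i, j: integer;
   begin
      for i := 1 to 3*n do                    { every distance starts }
         for j := 1 to 3*n do                 { out very large        }
            u[i, j] := big;
      for i := 1 to n do begin                { short roads IN--A, A--OUT }
         u[3*i-2, 3*i-1] := 0;  u[3*i-1, 3*i-2] := 0;
         u[3*i-1, 3*i]   := 0;  u[3*i,   3*i-1] := 0
      end;
      for i := 1 to n do                      { old one-way road i -> j   }
         for j := 1 to n do                   { becomes a two-way road    }
            if i <> j then begin              { from OUT(i) to IN(j)      }
               u[3*i, 3*j-2] := d[i, j];
               u[3*j-2, 3*i] := d[i, j]
            end
   end;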
We have already shown UTSP <= TSP, so a polynomial algorithm for either
would lead to a polynomial algorithm for the other - they both have polynomial
algorithms or neither has.
If X <= Y, a fast algorithm for Y yields one for X. Reduction also gives us a
way of proving that Y is intractable: if X is already known to be intractable,
then Y must be as well. That is, we can show a problem to be intractable by
reducing a known intractable problem TO it (try not to get this the wrong way
round).
5.5 The Classes P and NP
-------------------------
We now define the class P (often written as a curly P) to be those problems
which can be solved in a polynomial time. We think of these as "easy" problems,
or at least as solved problems. As with unsolvability, it is convenient to
restrict ourselves to decision problems (ones with true/false answers).
We define the class NP to be (roughly) decision problems with exponential
algorithms, i.e. algorithms taking time at most O(2^p(n)) for some polynomial
p, e.g. O(2^(n^3)). In particular, the class NP excludes unsolvable problems or
problems which take O(2^2^n) time etc., but it does include most practically
important problems.
More precisely, NP consists of decision problems which can be solved with a
single exponential search for a structure with some property:
A general exponential search
----------------------------
search through an exponential number of structures
for each structure
test the structure in polynomial time
return "true" if any structure has the desired property
Alternatively, NP can be defined as the class of problems which can be solved
in polynomial time using UNBOUNDED PARALLELISM -- a processor may always assume
that there are free processors available for doing subproblems, and it returns
"true" if any of the processors it uses return "true". Unbounded parallelism is
totally impractical, because the number of processors grows exponentially. For
that reason, unbounded parallelism is often called NON-DETERMINISM, and NP
stands for "Non-deterministic Polynomial time problems".
The decision problem version of the TSP is: given a network of distances and a
target distance d, decide whether there is a tour of length at most d. It is
in the class NP because we can search through all N! permutations of N towns,
and for each one we can find its length in polynomial time. Alternatively, we
can use unbounded parallelism -- the main processor sets off one processor to
try all permutations beginning with A, another to try all permutations
beginning with B, and so on.
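A minimal sketch of that O(N!) search in Pascal (with made-up names and a
made-up 4-town distance matrix; the permutations are generated by the usual
swap recursion, with town 1 fixed as the starting point):

   program tspsearch;
   const n = 4;
   var d: array [1..n, 1..n] of integer;
       tour: array [1..n] of integer;
       target: integer;
       found: boolean;
       i, j: integer;

   { place towns at positions pos..n in every possible order; sofar is
     the length of the partial tour through positions 1..pos-1 }
   procedure permute (pos, sofar: integer);
   var i, t: integer;
   begin
      if pos > n then begin
         if sofar + d[tour[n], tour[1]] <= target then found := true
      end
      else
         for i := pos to n do begin
            t := tour[pos]; tour[pos] := tour[i]; tour[i] := t;
            permute (pos + 1, sofar + d[tour[pos-1], tour[pos]]);
            t := tour[pos]; tour[pos] := tour[i]; tour[i] := t
         end
   end;

   begin
      for i := 1 to n do tour[i] := i;
      for i := 1 to n do                 { an arbitrary made-up matrix }
         for j := 1 to n do
            if i = j then d[i, j] := 0
            else d[i, j] := (3*i + 5*j) mod 9 + 1;
      target := 12;
      found := false;
      permute (2, 0);
      writeln (found)
   end.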
The satisfiability problem is in the class NP because we can search through all
2^N possible sets of "true" and "false" values for N variables, and for each
set substitute into the boolean expression and evaluate it in polynomial time.
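Similarly, here is the 2^N search for the example formula of section 5.3,
(x & y => z) & (x or ~y => ~z), hard-wired as a Pascal expression for brevity
(a real solver would of course take the formula as data):

   program satsearch;
   var m: integer;
       x, y, z: boolean;
   begin
      for m := 0 to 7 do begin          { all 2^3 assignments }
         x := odd (m);
         y := odd (m div 2);
         z := odd (m div 4);
         { the formula (x & y => z) & (x or ~y => ~z) }
         if (not (x and y) or z) and (not (x or not y) or not z) then
            writeln ('satisfied: x=', x, ' y=', y, ' z=', z)
      end
   end.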
If a structure can be tested in polynomial time, then it must have polynomial
length, say q(n) bits, as there would otherwise not be time to examine it all.
Thus there are at most 2^q(n) structures, and the whole search can be done in
time O(q(n)*2^q(n)) which is O(2^p(n)) for some polynomial p (less than q^2).
Problems in P are automatically in NP. However, there is no proof (yet) that NP
contains any extra problems. In other words we could have P=NP. This problem of
whether P=NP or not is the most famous outstanding problem of theoretical
computer science. Everybody believes that P does not equal NP, otherwise ALL
the famous outstanding intractable problems would have fast algorithms.
We have a means of comparing problems (up to a polynomial) by polynomial
reductions, so we can now ask whether there is a HARDEST problem X in NP, i.e.
a problem in NP to which all other problems in NP can be polynomially reduced.
X A <= X
/ /|\ \ B <= X
^ ^ ^ ^ ^ ......
/ / | \ \
A B ...
Such a problem must then be intractable beyond reasonable doubt. Suppose we had
a polynomial algorithm for X. Then we would have a polynomial algorithm for any
problem A in NP. In other words everything in NP would be in P, i.e. P=NP. Thus
either P=NP (which nobody believes) or the problem X is intractable. This is
the best we can do until the P=NP question is solved. A problem to which all
problems in NP reduce is called NP-HARD. If the problem is itself in NP, it is
called NP-COMPLETE. We have the picture:
___________
/ \
/ \
| | NP-HARD
| ___________ |
\/NP-COMPLETE\/
/\___________/\
| |
| ___________ | NP
\/ P \/
\___________/
5.6 Cook's Theorem - Satisfiability Is NP-Complete
--------------------------------------------------
Cook proved that the logical satisfiability problem is a hardest problem in
NP. We have seen that it is in NP because we can search through the 2^n
possibilities for the values of the variables, and check satisfiability for
each one in polynomial time by substitution. The proof rests on the fact
that, given an arbitrary (polynomial time) computation, there is a boolean
formula which expresses the bits of the output as functions of the bits of
the input.
To show that it is a hardest problem, we have to show how to reduce any
NP problem X to the satisfiability problem. We know there is a search algorithm
for X of the form:
search through an exponential number of structures
for each structure
test the structure in polynomial time
return "true" if any structure has the desired property
Thus we can write a polynomial procedure for testing a structure. It takes an
amount of time bounded by a polynomial in n, say p(n) (e.g. 10*n^3). The
procedure only has time to access at most p (=p(n)) variables, so we can assume
it uses variables V[1]...V[p], and we can assume they each have one bit (if
they have more, treat each bit as a separate variable), i.e. we can assume that
they are boolean variables. We can assume that the input to the problem (e.g.
towns & inter-town distances) is passed to the procedure in V[1]...V[n], and
that the structure to be tested (e.g. a permutation of towns) is passed in
V[n+1]...V[p]. We can also assume that the procedure returns the result in
V[1]. We can insist that the procedure checks that the structure is valid, i.e.
To test structure in V[n+1]...V[p]
----------------------------------
check that V[n+1]...V[p] contains a structure of the required kind
if not, return false
test the structure using information in V[1]...V[n]
return the result in V[1]
The search program can use this procedure by repeatedly calling the procedure,
once for each possible set of values for V[n+1]...V[p]. As an example, we will
take the problem X to be the TSP problem. In the TSP problem, V[1]...V[n] must
hold the number of towns N, all the inter-town distances, and the target
distance d. The test procedure would be
To test the length of a tour in V[n+1]...V[p]
---------------------------------------------
find the number of towns N from the information in V[1]...V[n]
check that V[n+1]...V[p] represents a permutation of towns 1...N
if not, return false
using the distances in V[1]...V[n], work out the length of the tour
check whether the length is no greater than d
return the result in V[1]
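As a sketch only, here is how that test might look in Pascal, with our own
names and with the input already unpacked from V[1]...V[n] into a distance
matrix. The only point to notice is that it plainly runs in polynomial time.

   const max = 100;
   type distmatrix = array [1..max, 1..max] of integer;
        tourarray = array [1..max] of integer;

   function goodtour (var d: distmatrix; n, target: integer;
                      var tour: tourarray): boolean;
   var i, len: integer;
       ok: boolean;
       seen: array [1..max] of boolean;
   begin
      { check that tour is a permutation of the towns 1..n }
      for i := 1 to n do seen[i] := false;
      ok := true;
      for i := 1 to n do
         if (tour[i] < 1) or (tour[i] > n) then ok := false
         else if seen[tour[i]] then ok := false
         else seen[tour[i]] := true;
      { if it is, add up the tour length and compare with the target }
      if ok then begin
         len := 0;
         for i := 1 to n - 1 do len := len + d[tour[i], tour[i+1]];
         len := len + d[tour[n], tour[1]];
         ok := len <= target
      end;
      goodtour := ok
   end;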
Imagine writing this procedure in some specific language and compiling it into
machine-code instructions (at most p of them), say C[1]...C[p]. We assume that
there are arithmetic instructions, e.g. "V[3] := V[2] & V[1]" and control
instructions, e.g. "if V[3] then goto 20" (goto 20 means execute C[20] next).
If the machine has any registers, we include them among the V[1]...V[p].
Now we create a boolean formula Q from the procedure code C[1]...C[p] and its
data V[1]...V[p]. The formula Q will be satisfiable if and only if there is a
structure of the required form (e.g. tour of length <= d). The formula Q will
be made up from the following boolean variables:
V[i,t] (1<=i<=p, 0<=t<=p)
C[j,t] (1<=j<=p, 0<=t<=p)
where V[i,t] means "V[i] is 1 at time t", and C[j,t] means "C[j] is the
instruction executing at time t". Thus there are O(p^2) variables in all.
The formula Q will be made up of six sub-formulas C, D, E, F, G, H, i.e. we
will have Q = C & D & E & F & G & H. The sub-formulas will have the following
meanings:
C: The variables V[1]...V[n] represent the input to the search problem.
D: Instruction C[1] is the first to be executed.
E: Only one instruction is executing at any one time.
F: The instruction at time t determines the one executed at time t+1.
G: The variables V[1]...V[p] are altered suitably by the instructions.
H: At the end, V[1] contains true.
The formula expresses the result V[1] as a function of the input structure
V[n+1]...V[p]. The value of the formula Q ("true" or "false") is completely
determined by the initial values of V[n+1]...V[p], i.e. the values of V[i,0]
for i>n. It is satisfiable if and only if there are initial values of
V[n+1]...V[p] which lead to a successful computation, i.e. if and only if there
is a structure with the required property. The sub-formulas are made up as
follows:
C: Supposing that the input values of V[1]...V[n] are 1,0,1,... Then
C is the formula:
V[1,0] & ~V[2,0] & V[3,0] & ...
which has n terms, one for each V[i].
D: This is just C[1,0].
E: This consists of a term of the form ~(C[j,t] & C[k,t]) for each pair of
distinct instructions and each time t, i.e. O(p^3) terms in all. For example,
for time t=8
we have terms:
... ~(C[1,8] & C[2,8]) & ~(C[1,8] & C[3,8]) & ~(C[2,8] & C[3,8]) & ...
F: This consists of terms for each instruction and each time step, i.e.
O(p^2) terms. The terms depend on the type of instruction. For an
instruction C[2] of form "V[4] := V[5] & V[6]" at time t=8, say,
we would have
... (C[2,8] => C[3,9]) & ...
For an instruction C[2] of the form "if V[4] then goto 6" at time t=8
... (C[2,8] => ((V[4,8] => C[6,9]) & (~V[4,8] => C[3,9]))) & ...
G: We have a term for each variable, each instruction and each time step to
indicate how the variables change, i.e. O(p^3) terms. If instruction C[2]
is "if...goto" then no variables change, and for V[4] and t=8 we have term
... (C[2,8] => (V[4,9] <=> V[4,8])) & ...
If C[2] is arithmetic, say "V[4] := V[5] & V[6]", then at time t=8 the term
for V[4] is different from the terms for the other variables. The term for
V[4] is
... (C[2,8] => (V[4,9] <=> V[5,8] & V[6,8])) & ...
and the term for any other variable, say V[5], is
... (C[2,8] => (V[5,9] <=> V[5,8])) & ...
H: This is just V[1,p].
The result of all this is a very large boolean expression of size O(p^3). It is
really just the result of compiling the testing procedure right down to the
level of individual bits. The size of the expression is polynomial in n (since
p = p(n) is polynomial in n) and can be constructed from the original problem
in polynomial time. If we apply an algorithm for the satisfiability problem to
this expression, it will return values for all the variables V[i,t], C[j,t]
which make Q true (if there are any). In particular it will find values for
V[n+1]...V[p] which lead to a valid successful computation.
To recap, we start with an arbitrary problem X in NP. We know there is a
polynomial algorithm for testing a structure which could be used in an
exponential search to solve X. Instead we use this algorithm to create a
machine code program and then a boolean expression. We can use a procedure for
solving the satisfiability problem to answer the question "is there a structure
with the desired property?" in polynomial time. Thus we have solved the
problem X in polynomial time using a procedure for the satisfiability problem:
-------------------------------------------------------------------
| Cook's Theorem: |
| Any problem in NP reduces to the satisfiability problem, i.e. |
| The satisfiability problem is NP-Complete. |
-------------------------------------------------------------------
Now provided P is not equal to NP, this shows that satisfiability is
intractable.
5.7 P, NP and Parallelism
-------------------------
Now we have defined the classes P and NP, P being the easy problems, NP being
the problems solvable with a single exponential search. We have shown that
there is at least one problem (the satisfiability problem) which is
NP-complete, that is, as hard as possible within the class NP.
One question to ask is: do these definitions depend on the machine? In
particular, many people think (wrongly) that parallel computers can solve
NP-complete problems in polynomial time.
Let us imagine a parallel computer as powerful as possible. It has an infinite
array of processors filling 3-dimensional space. There is a minimum possible
volume V which a processor can take up, and each is essentially a computer with
a fixed finite store. The computation begins in one particular place, and as
time goes on, more and more processors can take part. However, the information
about the problem can only propagate at the speed of light C, so after time T,
the information is entirely inside a sphere of radius CT and volume O(T^3) in
which you can fit only N = O(T^3) processors. For a problem which takes time t
on a serial computer, the time taken by the parallel computer cannot be less
than t/N, as each processor is no more powerful than a serial computer.
T = t/N
T = t/O(T^3)
O(T^4) = t
and so T and t are polynomially related, and the definitions of P, NP and
NP-complete hold.
For example, suppose the TSP is solved by a serial algorithm in time
proportional to N! (N towns), and that it takes 100 seconds to solve a 100 town
problem.
For this problem, t=100, so T = 100^(1/4), which is about 3 seconds - that
seems good. The question to ask though is how large a problem the parallel
computer can solve in 100 seconds. Now T=100, so t = 100^4 = 100000000, i.e.
we have a million times as long as the serial 100 seconds. Now
103! = 103*102*101*100!, which is roughly 1000000*100!, so the parallel
computer can still only solve a 103 town problem - just 3 towns more.
5.8 Showing Other Problems NP-Complete
--------------------------------------
We don't want to go through Cook's theorem again every time we want to show a
new problem is NP-complete. We can avoid this by going back to polynomial
reductions. We know that the satisfiability problem is NP-complete. If we can
reduce the satisfiability problem SAT to a new problem, say TSP, TSP must be
NP-complete, since any problem X in NP reduces (indirectly) to TSP:
X <= SAT <= TSP
Let us begin with a normalized satisfiability problem called 3SAT. There are
many normalized forms for boolean expressions; we describe one in which the
expression is a collection of clauses joined by &, each clause contains three
items joined by "or", and each item is a single variable or its negation,
e.g.
(a or ~b or c) & (~a or b or d) & (b or c or d) & (~b or ~c or ~d)
There are well-known methods for reducing a general boolean expression to a
normal form such as this. For example, the expression
(a => (b & c)) & ((d or e) <=> f)
becomes, using an extra variable to represent each bracket,
B1 <=> (a => B2) &
B2 <=> (b & c) &
B3 <=> (B4 <=> f) &
B4 <=> (d or e) &
B1 & B3
and now each line contains at most two operators and can be turned into normal
form separately:
(~B1 or ~a or B2) & (B1 or a) & (B1 or ~B2) &
(B2 or ~b or ~c) & (~B2 or b) & (~B2 or c) &
(B3 or B4 or f) & (B3 or ~B4 or ~f) & (~B3 or B4 or ~f) & (~B3 or ~B4 or f) &
(B4 or ~d) & (B4 or ~e) & (~B4 or d or e) &
B1 & B3
Terms with only one or two items can be eliminated or expanded to 3 items. The
above method is easily automated, though it does not give the most compact
answer. It is a polynomial algorithm reducing a satisfiability problem to a
3SAT problem. Thus 3SAT is NP-complete.
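As a sketch of how such a normalized instance can be handled mechanically,
here is the obvious O(2^n) search of section 5.3 applied to the first example
formula above, with our own representation: a clause is three integers, -k
standing for the negation of variable k.

   program threesat;
   const nvars = 4; nclauses = 4;
   var clause: array [1..nclauses, 1..3] of integer;
       v: array [1..nvars] of boolean;
       m, p, i, j, lit: integer;
       allok, cok: boolean;
   begin
      { (a or ~b or c) & (~a or b or d) & (b or c or d) & (~b or ~c or ~d),
        with a,b,c,d numbered 1..4 }
      clause[1,1] :=  1; clause[1,2] := -2; clause[1,3] :=  3;
      clause[2,1] := -1; clause[2,2] :=  2; clause[2,3] :=  4;
      clause[3,1] :=  2; clause[3,2] :=  3; clause[3,3] :=  4;
      clause[4,1] := -2; clause[4,2] := -3; clause[4,3] := -4;
      for m := 0 to 15 do begin                { all 2^4 assignments }
         p := 1;
         for i := 1 to nvars do begin
            v[i] := odd (m div p); p := 2 * p
         end;
         allok := true;
         for i := 1 to nclauses do begin      { every clause needs at }
            cok := false;                     { least one true item   }
            for j := 1 to 3 do begin
               lit := clause[i, j];
               if lit > 0 then cok := cok or v[lit]
               else cok := cok or not v[-lit]
            end;
            allok := allok and cok
         end;
         if allok then writeln ('satisfied by assignment number ', m)
      end
   end.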
5.9 The Travelling Salesperson Problem is NP-Complete
-----------------------------------------------------
To show this, we reduce the normalized satisfiability problem 3SAT to the TSP,
i.e. we show 3SAT <= TSP. Since 3SAT is NP-complete, this shows that the TSP is
NP-complete. We start with an arbitrary normalized boolean expression, an input
to the 3SAT problem, e.g.
C1 C2 C3
(a or b or ~d) & (a or ~b or ~c) & (~a or b or c)
From this we create a TSP network. All the edges shown between nodes have
distance 1, and all other edges have infinite distance (2 is large enough). A
TSP problem where some edges have distance 1 and some are infinite is often
called a (Directed) Hamiltonian Cycle problem - see Horowitz and Sahni.
First we imagine an array with a row for each clause and a column for each
variable. We put three stars in each row corresponding to the variables which
make up the clause, and we put a box at the end of each row, and join them all
up as shown. The stars and boxes will be explained below.
a ~a b ~b c ~c d ~d
.-------- .-------- .-------- .------>-----
/ \ | / \ | / \ | / \ |
^ ^ | ^ ^ | ^ ^ | ^ ^ |
C1 * | | * | | | | | | * | |
| | | | | | | | | | | |
C2 * | v | * v | * v | | | |
| | | | | | | | | | | |
C3 | * | * | | * | | | | | |
| | | | | | | | | | | |
\ / | \ / | \ / | \ / |
. --------. --------. --------. |
| |
------------------------------<------------------------------
This will force an optimum tour (one which uses only short edges - the ones
shown) to go up one side or the other of each column and down through the
boxes. This will correspond to a choice of "true" or "false" for each
variable. The tour will have to visit the "stars" left out as well - and will
do so by putting in detours from the boxes. To arrange this, we replace each
star with a small 3-node network:
The star network The second column after replacing stars
^
|
.-------< .-----
/ \ / \
^ | | _ |
| v () |
\ / _ ()_ |
. or () | |_
/ \ ()_ | ()
^ | | ()_
| v | _ |
\ / () |
.------> ()_ |
| | |
^ \ /
-----.
In what ways can the optimum route pass through the star network? It must visit
the centre node. To do that, it must use EITHER the two upward edges, OR the
two downward edges. Thus it must use EITHER the four leftmost edges shown,
or the four rightmost ones. Now the boxes are replaced with 5 node networks:
box network
|
---------.--------
| | |
| A --.------------
| / /|\ | |
| / / | | | |
| * / | | | |
| / / | ^ | |
\ / / | | | |
\|/ | | | |
B . ^ * | |
/|\ | | | |
/ \ \ | | | |
| \ \ | ^ | |
| * \ | | | |
| \ \ | | | |
| \ \|/ | |
| C --.-------- |
| | |
---------.------------
|
All edges here are directed downwards, except for the three marked as being
directed upwards. The three stars shown are actually the three star networks
in the same row as the box. We already know that the optimum route must enter
this box at the top and exit at the bottom. From the top it goes to any one of
the triangle nodes A, B, C. Say it goes to A. From there it goes to B either
direct or via a diversion to one of the stars. Then it goes to C direct or via
a star network, and then it leaves at the bottom. It thus uses up any two of
the sides of the triangle ABC, and can divert to any two of the star networks
if required. Here is a picture of the triangle ABC in one of the box networks
and the way it is connected to the three star networks in its row:
The connections between star networks and box networks
---------------------->------------------
/ |
/ -----<---A.
/ ---------<------/---- /|
/ / / \ / |
| _ / | _/ | _/ \ / |
() \/ () () --->---B. ^
()_/\ ()_ ()_/ \ |
| \ | \ | \ |
\ \ \|
\ -------->----------------C.
\ |
-----------------------<------------------
An optimum route passes upwards through each column by one of the two paths,
passing through the star networks on that path. Any star network on the other
path must be visited by a detour from the box network at the end of its row.
Only two of the three star networks in a row can be reached by detours from the
box network, so at least one of them must be visited by an upwards path. Thus
an optimum route exists if and only if the formula which we started from is
satisfiable, and the route corresponds to a set of truth values for the
variables.
Thus if we have a procedure for solving the TSP, we can solve the
satisfiability problem 3SAT by converting the formula into a network and
calling the procedure once. We have reduced 3SAT to the TSP, i.e. 3SAT <= TSP.
Since 3SAT is known to be NP-complete, TSP is NP-complete. This type of proof,
using 3SAT, is common.