## algorithm - What is an NP-complete in computer science? - Stack Overflow

The P versus NP problem is a major unsolved problem in computer science. It asks whether every problem whose solution can be quickly verified (technically, verified in polynomial time) can also be solved quickly (again, in polynomial time). It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, each of which carries a US$1,000,000 prize for the first correct solution.

NP-complete problems are in NP, the set of all decision problems whose solutions can be verified in polynomial time; NP may be equivalently defined as the set of decision problems that can be solved in polynomial time on a non-deterministic Turing machine. A problem p in NP is NP-complete if every other problem in NP can be transformed (or reduced) into p in polynomial time.

An NP-complete problem is any of a class of computational problems for which no efficient solution algorithm has been found. Many significant computer-science problems belong to this class, e.g. the traveling salesman problem, satisfiability problems, and graph-covering problems.
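The gap between verifying and solving can be made concrete with a small sketch (Python here, using subset sum as the example problem; the function names are my own, not from any library): checking a proposed certificate is fast, while the only obvious way to find one is exhaustive search over all subsets.

```python
from itertools import combinations

def verify(numbers, target, chosen_indices):
    # Verification is cheap: sum the chosen elements and compare.
    return sum(numbers[i] for i in chosen_indices) == target

def solve(numbers, target):
    # Solving by brute force: try every subset -- exponential time.
    for r in range(len(numbers) + 1):
        for subset in combinations(range(len(numbers)), r):
            if verify(numbers, target, subset):
                return subset
    return None
```

The asymmetry is the whole point: `verify` runs in linear time, but no one knows a polynomial-time replacement for `solve`.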

## NP-Completeness | Set 1 (Introduction) - GeeksforGeeks

NP-completeness is a form of bad news: evidence that many important problems can't be solved quickly.

These NP-complete problems really do come up all the time. Knowing they're hard lets you stop beating your head against a wall trying to solve them, and do something better. NP does not stand for "non-polynomial"; there are many complexity classes that are much harder than NP. A simple path in a graph is just one without any repeated edges or vertices.

To describe the problem of finding long paths in terms of complexity theory, we need to formalize it as a yes-or-no question: given a graph G, vertices s and t, and a number k, does there exist a simple path from s to t with at least k edges? A solution to this problem would then consist of such a path. Why is this in NP? If you're given a path, you can quickly look at it and add up the length, double-checking that it really is a path with length at least k.

This can all be done in linear time, so certainly it can be done in polynomial time. However, we don't know whether this problem is in P; I haven't told you a good way of finding such a path in time polynomial in m, n, and k. And in fact this problem is NP-complete, so we believe that no such algorithm exists; certainly, as far as we know, none runs in polynomial time.
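To see why the check is fast, here is a rough Python sketch of such a verifier (the adjacency-set graph representation and function name are my own assumptions):

```python
def is_valid_long_path(graph, s, t, k, path):
    """Verify a claimed solution to the long-path problem.

    graph: adjacency sets, e.g. {0: {1}, 1: {0, 2}, ...}
    path:  proposed list of vertices from s to t.
    """
    if not path or path[0] != s or path[-1] != t:
        return False
    if len(set(path)) != len(path):           # simple: no repeated vertices
        return False
    if any(v not in graph[u] for u, v in zip(path, path[1:])):
        return False                          # every consecutive pair must be an edge
    return len(path) - 1 >= k                 # at least k edges
```

Each check is a single pass over the path, so the whole verification is linear in the size of the input.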

A standard assumption in cryptography is the "known plaintext attack": we have the code for some message, and we know or can guess the plaintext of that message. We want to use that information to discover the key, so we can decrypt other messages sent using the same key. If you're given a key, you can test it by doing the encryption yourself, so this problem is in NP. The hard question is, how do you find the key?
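A toy Python sketch of this search (the single-byte XOR "cipher" is a deliberately trivial stand-in for a real encryption function, chosen only so the example is self-contained): verifying a candidate key means re-doing the encryption, and the naive attack just tries every key.

```python
def encrypt(key, plaintext):
    # Toy XOR "cipher" -- a stand-in for a real encryption function.
    return bytes(b ^ key for b in plaintext)

def find_key(plaintext, ciphertext, key_space=range(256)):
    # Known-plaintext attack by brute force: try every key and
    # verify it by performing the encryption ourselves.
    for key in key_space:
        if encrypt(key, plaintext) == ciphertext:
            return key
    return None
```

With a realistic key space (2^128 keys or more, rather than 256), this loop is hopeless, which is exactly what the cryptographer is counting on.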

For the code to be strong, we hope it isn't possible to do much better than a brute force search. Another common use of RSA involves "public key cryptography": a user of the system publishes the product pq, but doesn't publish p, q, or (p-1)(q-1). That way anyone can send a message to that user by using the RSA encryption, but only the user can decrypt it. Breaking this scheme can also be thought of as a different NP problem: given a composite number pq, find a factorization into smaller numbers.

One can test a factorization quickly (just multiply the factors back together again), so the problem is in NP. Finding a factorization seems to be difficult, and we think it may not be in P.
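The verification step really is just a multiplication, as this small Python sketch shows (the function name and the convention that factors must be nontrivial are my own):

```python
from math import prod

def verify_factorization(n, factors):
    # A factorization is easy to check: confirm each factor is
    # nontrivial, then multiply them back together.
    return (len(factors) >= 2
            and all(1 < f < n for f in factors)
            and prod(factors) == n)
```

Checking takes time polynomial in the number of digits; finding the factors in the first place is what seems to be hard.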

However there is some strong evidence that it is not NP-complete either; it seems to be one of the very rare examples of problems between P and NP-complete in difficulty.

We've seen in the news recently a match between the world chess champion, Garry Kasparov, and a very fast chess computer, Deep Blue. The computer lost the match, but won one game and drew others. What is involved in chess programming?

Essentially, the sequences of possible moves form a tree: the first player has a choice of 20 different moves (most of which are not very good), after each of which the second player has a choice of many responses, and so on. Chess-playing programs work by traversing this tree, finding what the possible consequences would be of each different move.

The tree of moves is not very deep: a typical chess game might last 40 moves, and it is rare for one to go on much longer. Since each move involves a step by each player, only a modest number of positions occurs in any single game. If we traversed the tree of chess positions only to that depth, we would only need enough memory to store the positions on a single path at a time.

This much memory is easily available on the smallest computers you are likely to use. Actually one must be more careful in definitions. There is only a finite number of positions in chess, so in principle you could write down the solution in constant time.

But that constant would be very large. The reason this deep game-tree search method can't be used in practice is that the tree of moves is very bushy: even though it is not deep, it has an enormous number of vertices. We won't run out of space if we try to traverse it, but we will run out of time before we get even a small fraction of the way through.

Some pruning methods, notably "alpha-beta search", can help reduce the portion of the tree that needs to be examined, but not enough to solve this difficulty. For this reason, actual chess programs instead search only to a much smaller depth (such as up to 7 moves), at which point they don't have enough information to evaluate the true consequences of the moves and are forced to guess by using heuristic "evaluation functions" that measure simple quantities such as the total number of pieces left.
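The depth-limited search with alpha-beta pruning described above can be sketched generically in Python. The `children` and `evaluate` callbacks are assumptions standing in for game-specific code: `children(node)` yields the successor positions, and `evaluate(node)` is the heuristic evaluation function from the first player's viewpoint.

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Depth-limited game-tree search with alpha-beta pruning (sketch)."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)      # out of depth: fall back on the heuristic
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:      # opponent will avoid this branch: prune
                break
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value
```

Pruning cuts the effective branching factor, but the tree is still exponential in the depth, which is why real programs stop after a handful of moves and guess.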

If I give you a three-dimensional polygon (e.g., a closed loop of line segments in space), is it unknotted, or is it knotted? There is an algorithm for solving this problem, but it is very complicated and has not really been adequately analyzed. However, it runs in at least exponential time.

One way of proving that certain polygons are not knots is to find a collection of triangles forming a surface with the polygon as its boundary. However, this is not always possible without adding exponentially many new vertices, and even when it is possible, it's NP-complete to find these triangles. There are also some heuristics, based on finding a non-Euclidean geometry for the space outside the knot, that work very well for many knots but are not known to work for all knots.

So this is one of the rare examples of a problem that can often be solved efficiently in practice even though it is theoretically not known to be in P.

Certain related problems in higher dimensions (is this four-dimensional surface equivalent to a four-dimensional sphere?) are provably undecidable.

Suppose you're working on a lab for a programming class, have written your program, and start to run it.

After five minutes, it is still going. Does this mean it's in an infinite loop, or is it just slow? It would be convenient if your compiler could tell you that your program has an infinite loop. However this is an undecidable problem: there is no program that will always correctly detect infinite loops.

Some people have used this idea as evidence that people are inherently smarter than computers, since it shows that there are problems computers can't solve. However, it's not clear to me that people can solve them either.

Is P equal to NP? We have no good reason to believe it should be true, so the expectation among most theoreticians is that it's false.

But we also don't have a proof. So we have this nice construction of complexity classes P and NP, but we can't even say for certain that there is a problem in NP and not in P. So what good is the theory if it can't tell us how hard any particular problem is to solve? The answer is NP-completeness: if P and NP are unequal, the NP-complete problems are hard; conversely, if everything in NP is easy, those problems are easy.

So if we believe that P and NP are unequal, and we prove that some problem is NP-complete, we should believe that it doesn't have a fast algorithm. So the theory of NP-completeness turns out to be a good way of showing that a problem is likely to be hard, because it applies to a lot of problems.

But there are problems that are in NP, not known to be in P, and not likely to be NP-complete; for instance, the code-breaking example I gave earlier. Informally, one problem is "easier" than another if we can solve the first with an algorithm that makes a small number of calls to a subroutine for the second. There are several minor variations of this definition, depending on the detailed meaning of "small": it may be a polynomial number of calls, a fixed constant number, or just one call.

So "easier" in this context means that if one problem can be solved in polynomial time, so can the other. As an example, consider the Hamiltonian cycle problem: does a given graph have a cycle visiting each vertex exactly once? Here's a solution, using longest path as a subroutine. As a second example, consider a polynomial-time problem such as minimum spanning tree.
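One plausible way the longest-path reduction can go, sketched in Python: a graph on n vertices has a Hamiltonian cycle exactly when some edge (u, v) can be closed by a simple path from u to v with n - 1 edges, i.e. a path through every vertex. The `has_long_path` subroutine below is a brute-force stand-in for the hypothetical longest-path solver; a true reduction would treat it as a black box.

```python
from itertools import permutations

def has_long_path(graph, s, t, k):
    # Stand-in for the hypothetical longest-path subroutine
    # (brute force here, purely for illustration).
    n = len(graph)
    others = [v for v in graph if v not in (s, t)]
    for r in range(n - 1):
        for middle in permutations(others, r):
            path = [s, *middle, t]
            if (all(b in graph[a] for a, b in zip(path, path[1:]))
                    and len(path) - 1 >= k):
                return True
    return False

def has_hamiltonian_cycle(graph):
    # Reduction: try every edge (u, v); ask the subroutine whether
    # a simple path with n - 1 edges connects u to v. Such a path
    # visits all n vertices, and the edge (u, v) closes the cycle.
    n = len(graph)
    if n < 3:
        return False
    return any(has_long_path(graph, u, v, n - 1)
               for u in graph for v in graph[u])
```

Note that the reduction itself makes only one subroutine call per edge, i.e. a polynomial number of calls, so if longest path were in P, Hamiltonian cycle would be too.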

We don't actually have to call the subroutine, or we can call it and ignore its results. This seems like a very strong definition.

Why should there be a problem that is closely related to all the different problems in NP? We prove that there is by example. One NP-complete problem can be found by modifying the halting problem (which, without modification, is undecidable): the bounded halting problem takes as input a program X and a number K, and asks whether there is some input data that makes X halt within K steps.

To be precise, this needs some more careful definition: what language is X written in? What constitutes a single step?

Also, for technical reasons, K should be specified in unary notation, so that the length of that part of the input is K itself rather than O(log K). For reasonable ways of filling in the details, this is in NP: to test if data is a correct solution, just simulate the program for K steps. This takes time polynomial in K and in the length of the program. Here's one point at which we need to be careful: the program cannot perform unreasonable operations such as arithmetic on very large integers, because then we wouldn't be able to simulate it quickly enough.
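To make "simulate the program for K steps" concrete, here is a toy Python sketch. The miniature instruction set (an accumulator machine with `add`, `jnz`, and `halt`) is invented purely for illustration; the point is only that a step-bounded simulation is an easy, polynomial-time verification.

```python
def runs_within(program, data, k):
    """Verify a claimed solution to bounded halting: simulate the
    toy program on the given input for at most k steps.

    Instructions:
      ("add", c)  add c to the accumulator
      ("jnz", i)  jump to instruction i if the accumulator is nonzero
      ("halt",)   stop
    'data' is the starting accumulator value (the claimed solution).
    """
    acc, pc = data, 0
    for _ in range(k):                      # at most k simulated steps
        op = program[pc]
        if op[0] == "halt":
            return True
        if op[0] == "add":
            acc += op[1]
            pc += 1
        elif op[0] == "jnz":
            pc = op[1] if acc != 0 else pc + 1
    return False                            # did not halt within k steps
```

Because the loop runs at most k times and each step does constant work, the verifier is polynomial in K and in the program length, exactly as the text requires.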

To finish the proof that this is NP-complete, we need to show that it's harder than anything else in NP. Suppose we have a problem A in NP. This means that we can write a program PA that tests solutions to A, and halts within polynomial time p(n), with a yes or no answer depending on whether the given solution is really a solution to the given problem.

We can then easily form a modified program PA' that enters an infinite loop whenever PA would halt with a no answer. If we could solve bounded halting, we could solve A by passing PA' and p(n) as arguments to a subroutine for bounded halting.

But this argument works for every problem in NP, so bounded halting is NP-complete. Most proofs of NP-completeness don't look like the one above; it would be too difficult to prove anything else that way. Instead, they rely on the fact that reductions compose. Recall that these relations are defined in terms of the existence of an algorithm that calls subroutines: given an algorithm that solves A with a subroutine for B, and an algorithm that solves B with a subroutine for C, we can just use the second algorithm to expand the subroutine calls of the first, and get an algorithm that solves A with a subroutine for C.

In practice that's how we prove NP-completeness: we start with one specific problem that we prove NP-complete, and then prove that it's easier than lots of others, which must therefore also be NP-complete. Starting from the bounded halting problem, we can show that it's reducible to a problem of simulating circuits: we know that computers can be built out of circuits, so any problem involving simulating computers can be translated to one about simulating circuits.

So various circuit-simulation problems are NP-complete; in particular Satisfiability, which asks whether there is an input to a Boolean circuit that causes its output to be one. Circuits look a lot like graphs, so from there it's another easy step to proving that many graph problems are NP-complete.
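A small Python sketch of what Satisfiability asks, representing the circuit as a black-box callable (my own framing; real SAT solvers work on explicit formulas, not callables): is there any input assignment that makes the output true? Brute force, of course, takes time exponential in the number of inputs.

```python
from itertools import product

def satisfiable(circuit, num_inputs):
    # Satisfiability by exhaustive search: try all 2^n input
    # assignments and ask whether any makes the circuit output True.
    # 'circuit' is any callable from a tuple of booleans to a boolean.
    return any(circuit(bits)
               for bits in product([False, True], repeat=num_inputs))
```

Verifying a single satisfying assignment means one evaluation of the circuit, which is fast; it is the search over all 2^n assignments that we don't know how to avoid.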

Most of these proofs rely on constructing "gadgets": small subgraphs that act, in the context of the graph problem under consideration, like Boolean gates and other components of circuits.

## P versus NP problem - Wikipedia

NP-complete problems and search problems: over the past seven chapters we have developed algorithms for finding shortest paths and minimum spanning trees in graphs, matchings in bipartite graphs, maximum increasing subsequences, maximum flows in networks, and so on.

It's hard to say if we're solving the problem in the same way as computers solve NP-hard problems; we still haven't figured out the mechanics of the processing units of our brains. But what IS known is that the human brain does confront a lot of such problems.

NP-hard problems are those that are at least as hard as the hardest problems in NP. Note that NP-complete problems are also NP-hard. However, not all NP-hard problems are in NP (or are even decision problems), despite having NP as a prefix. That is, the NP in NP-hard does not mean non-deterministic polynomial time. Yes, this is confusing, but that is how the term is used.