Can the halting problem be solved in practice?

The halting problem can be stated as follows: given a program and an input for it, will the program finish executing and return an answer, or will it enter an infinite loop?

Turing proved this problem undecidable; that is not in question. The point is: a simple way to solve it occurred to me.

If at each step of the program's execution I make a complete dump of memory and registers, I will have the program's state at every moment. From there it is enough to check whether any state has repeated. Since all the program can do is transition to the next state based solely on the previous one, if a state repeats then it follows immediately that the program has entered an infinite loop.

Memory is finite, so there is a finite number of states. Any program that does not terminate will eventually repeat a state. I'm not saying this is feasible; after all, with four gigabytes of memory we have at least 2^4,000,000,000 states. I don't even know how many digits that number has.
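To make the idea concrete, here is a minimal sketch in Java of the repetition check, assuming (hypothetically) that the program being analyzed can be modeled as a deterministic step function mapping one complete state dump to the next; the names RepetitionDetector, step and HALT are illustrative, not part of any real API.

import java.util.HashSet;
import java.util.Set;
import java.util.function.UnaryOperator;

public class RepetitionDetector {

    // Hypothetical sentinel meaning "the program finished".
    static final String HALT = "HALT";

    // Returns true if the simulated program halts, false if a state repeats
    // (which, for a deterministic machine with finite memory, means an infinite loop).
    static boolean halts(String initialState, UnaryOperator<String> step) {
        Set<String> seen = new HashSet<>();
        String state = initialState;
        while (!state.equals(HALT)) {
            if (!seen.add(state)) {
                return false; // this exact dump was seen before: the program will loop forever
            }
            state = step.apply(state); // the next state depends only on the current one
        }
        return true;
    }

    public static void main(String[] args) {
        // Toy "program": counts up from "0" and halts when it reaches "3".
        UnaryOperator<String> countToThree =
                s -> s.equals("3") ? HALT : String.valueOf(Integer.parseInt(s) + 1);
        System.out.println(halts("0", countToThree)); // true

        // Toy "program" that bounces between two states forever.
        UnaryOperator<String> pingPong = s -> s.equals("ping") ? "pong" : "ping";
        System.out.println(halts("ping", pingPong));  // false
    }
}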

My question is: what is wrong here? The problem is clearly undecidable, so where does this method fail?

Author: Guilherme Bernal, 2014-08-29

3 answers

The idea is interesting, but the halting problem says: given any program and an input to it... If you limit your program to running in an environment with finite memory (even if it is 4 GB), then you are necessarily restricting the scope of the original problem. For the program to be truly arbitrary you need to remove this restriction, which makes your finite solution (unviable anyway, since your number has more than a billion digits) not apply to the general case.

Now, your question is whether, for a specific case of a program, it can be solved "in practice". In practice, you have nowhere to store all the memory dumps your solution needs, nor the (almost) infinite time it requires :)
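As a sanity check on that "more than a billion digits" remark, the number of decimal digits of 2^n is floor(n · log10 2) + 1; here is a tiny Java sketch of the calculation (class name purely illustrative):

public class DigitCount {
    public static void main(String[] args) {
        // Number of decimal digits of 2^n, via digits = floor(n * log10(2)) + 1.
        long n = 4_000_000_000L; // the question's lower bound: at least 2^4,000,000,000 states
        long digits = (long) Math.floor(n * Math.log10(2)) + 1;
        System.out.println(digits); // prints 1204119983 -> a bit over 1.2 billion digits
    }
}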

 8
Author: carlosfigueira, 2014-08-29 00:28:39

In fact, your solution fails to catch even simple cases: any program that keeps changing its state without ever returning to a previous one is never detected as an infinite loop.

Imagine the following program in Java:

import java.util.LinkedList;

public class Forever {
    public static void main(String[] args) {
        LinkedList<Integer> l = new LinkedList<>();
        while (true) {
            l.add(123); // the list keeps growing, so every memory dump is a brand-new state
        }
    }
}

Every time a dump is taken, the program will be in a new state. Since it is always in a new state, your check for whether it has re-entered a previously seen state never fires; nothing is detected. Removing the memory limitation and allowing addressing with an arbitrary/unbounded number of bits, the linked list would grow to infinity without anything in that world getting in its way.

I believe your heuristic would only come in as a step after static code analysis. If all you are given is the program as a black box, perhaps nothing more than its automaton in an assembly-like format, then we have a bigger problem: the more basic the representation of the program, the closer it is to the Turing machine, the harder static analysis becomes.

Another heuristic (perhaps even more efficient), used by providers of computing services, is to impose a timeout on execution. If you are dealing with a day-to-day problem, you are most likely dealing with a problem in class P, or with some heuristic that produces a satisfactory answer even if an imperfect one, so it needs to run fast.

More complex problems (such as graph isomorphism, or even predicting the meta-stable structure of a protein given its amino acid sequence) really do require greater computational power, and the timeout heuristic could give false positives for "eternal execution". This safety measure is provided in high-performance computing environments, for example by the SLURM job scheduler, to prevent someone from unintentionally submitting an unsolvable case and monopolizing the full computational power of a research center. The same heuristic is also applied to communication between two distinct machines, since you could hypothetically provide an input that sends the other machine into endless execution: that is what socket read and write timeouts are for.
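Here is a minimal sketch of the timeout heuristic in Java using java.util.concurrent; the Callable merely stands in for a possibly non-terminating computation, and the two-second limit is an arbitrary value chosen for the example:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutGuard {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Stand-in for a computation that might never halt; it checks for
        // interruption only so that this demo can actually shut down cleanly.
        Callable<Integer> suspect = () -> {
            while (!Thread.currentThread().isInterrupted()) { /* spin */ }
            return -1;
        };

        Future<Integer> result = pool.submit(suspect);
        try {
            System.out.println("result: " + result.get(2, TimeUnit.SECONDS));
        } catch (TimeoutException e) {
            result.cancel(true); // give up and treat it as "probably never halts"
            System.out.println("timed out");
        } finally {
            pool.shutdownNow();
        }
    }
}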

 10
Author: Jefferson Quesado, 2019-04-22 17:27:28

If with each processor instruction you dump the 4 GB of memory (and also the caches, disk, and registers), any program that does not terminate will eventually repeat a state, and you can prove this with the dumps. After all, if memory has a finite size, then the number of possible states is finite. The halting problem concerns the case where memory is infinite, whereas this case is the one with finite memory.

So yes, the halting problem is (theoretically) solvable in the case where memory is finite, by doing exactly what you propose. However, that "theoretically" must be looked at quite carefully.

In the worst case, you would have to go through all 2^n memory states until one repeats, where n is the number of bits in memory as a whole. If n is sufficiently small (for some device with only a few bytes of total memory), this may even be feasible. For a typical computer with a few gigabytes of memory, it is totally unviable. Considering only the 4 GB of memory and disregarding the disks, the number of possible states would be 2^34,359,738,368 (4 GB = 34,359,738,368 bits). That is a number with more than 10 billion decimal digits!

Each additional bit of memory doubles the number of possible states and consequently doubles the time and energy required to go through them all. Each additional byte multiplies the number of states by 256. Now imagine 34,359,738,368 bits, each one doubling the number of possible states (that is 4 GB, which is only the main memory of a computer already considered modest these days; and we are still disregarding the disks).
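As a toy illustration of the "sufficiently small n" case mentioned above: for a hypothetical deterministic machine whose entire state fits in 8 bits, the pigeonhole principle says that if it has not halted within 2^8 steps, some state has repeated and it never will halt. A sketch in Java, with a purely illustrative step function:

import java.util.function.IntUnaryOperator;

public class TinyMachine {

    // Decides halting for a deterministic machine whose whole state is 8 bits
    // (256 possible states): if it has not reached haltState within 2^8 steps,
    // some state has repeated, so it will loop forever.
    static boolean halts(int initialState, IntUnaryOperator step, int haltState) {
        int state = initialState & 0xFF;
        for (int i = 0; i <= 256; i++) {
            if (state == haltState) {
                return true;
            }
            state = step.applyAsInt(state) & 0xFF;
        }
        return false; // pigeonhole: a state repeated before halting
    }

    public static void main(String[] args) {
        // Doubling modulo 256 starting from 3 eventually reaches 0, so this halts:
        System.out.println(halts(3, s -> (s * 2) & 0xFF, 0)); // true
        // The identity step stays at 5 forever and never reaches 0:
        System.out.println(halts(5, s -> s, 0));              // false
    }
}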

You simply cannot go through this whole mountain of states, because it would take too long and consume too much energy. The laws of thermodynamics will impose barriers before you can finish. So one of these things happens first:

  1. The computer is destroyed by forces external to it.

  2. The universe ends before you go through all the states. I don't know if it would be with Big Crunch, Big Rip, Big Freeze, Big Bounce, Heat Death or something else. But whatever the case, it would not be good for you. This implies hypothesis 1 above.

  3. You consume all the energy of the universe in order to create and store enough memory dumps (or even just to keep the computer running through all these states). This implies hypothesis 2 above, which in turn implies hypothesis 1.

Thus, it is not difficult to see that with a sufficient amount of memory, one of the three things listed above would occur (and would occur very early!) in this process.

 6
Author: Victor Stafusa, 2019-04-22 17:33:57