In 1950, Alan Turing proposed a decision criterion for validating intelligence in a computer. Put simply, if a human judge cannot decide which of two witnesses is the computer and which is the human, the machine has achieved artificial intelligence. Here I argue that the Turing test has a fundamental problem that makes it incapable of validating human intelligence. In fact, the test is undecidable and thus cannot be considered a valid methodology for testing artificial intelligence. This does not mean that simulating human intelligence in a machine is unattainable. It means that we need a general theory offering common characteristics of intelligent agents and specific metrics to test for them: a theory able to predict the emergence of intelligence independently of our own subjective appreciation of how a system socially interacts with us. Whether such a theory is attainable, or within our reach in the coming years, remains an open problem.