
Artificial Intelligence on the Wrong Track

Posted on July 27, 2019 by Grant Tafreshi

The Artificial Intelligence community failed to grasp the power of the brain, probably the most powerful intelligence in the universe, because it relied on computational models. It wrongly believed that intelligence was the achievement of life goals through computation. AI research was set in motion by the arrival of computers in the 1940s, on the essential premise that the mind performed some type of computation. Alan Turing was among the first to pursue intelligent machines by programming computers. Algorithmic procedures enabled programs to attain striking results. Computers could solve complex mathematical and engineering problems. Several scientists even believed that a large enough assembly of programs and collated knowledge could achieve human level intelligence.

While there may have been other possible ways, computer programs were the best available means of attempting to simulate human level intelligence. But, in the 1930s, mathematical logicians, including Turing and Godel, established that algorithms could not be guaranteed to solve problems in some mathematical domains. Neither the theory of computational complexity, which defined the difficulty of general classes of problems, nor the AI community could identify the properties of problems and problem solving methods that enabled humans to solve problems. Every direction of search appeared to lead only to dead ends.

The AI community could not design a machine that could learn and become significantly intelligent. No program could learn much by reading. Computers might use vast computational capabilities to play chess at grandmaster level, but their intelligence was limited. Parallel processing computers looked promising, but proved difficult to program. Computer programs could only solve domain specific problems. They could not generalize across problems, or be considered a "General Problem Solver." Since humans could solve problems across many domains, Roger Penrose argued that computers were intrinsically incapable of achieving human intelligence. The philosopher Hubert Dreyfus also argued that AI was impossible. But the AI community continued its search, even though most researchers felt the need for new fundamental ideas. Ultimately, the general consensus was that computers were only "somewhat intelligent." So, was the essential definition of "intelligence" itself wrong?

Since much of human intelligence was little understood, it was impossible to define a specific computational procedure as being intelligent. Intelligence was clearly an ability to solve problems. In nature, it was a matured intelligence, which enabled the "homeostasis" of animals in the survival process. Homeostasis was the ability of an organism to function normally, achieving a relatively constant internal state, in a changeable, and even hostile, environment. It was an intelligent process, internally maintained by animals at many levels, through various sensing, feedback and control systems, supervised by a hierarchy of control centers. This process, achieved by even the lowest animal, was the true "General Problem Solver." The process was not domain specific. It recognized problems and responded with effective motor activity. It applied to every aspect of survival.
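The homeostatic loop described above can be sketched as a simple negative-feedback controller. The set point, gain, and heat-loss figures below are illustrative assumptions, not taken from the article:

```python
# A minimal sketch of homeostasis as a negative-feedback control loop.
# All names and constants here are illustrative, not from the article.

def regulate(current, set_point, gain=0.5):
    """Return a corrective adjustment that pushes `current` toward `set_point`."""
    error = set_point - current
    return gain * error

# Simulate a body temperature drifting in a cold environment,
# corrected at each step by the internal feedback controller.
SET_POINT = 37.0            # target core temperature, degrees C
temperature = 35.0          # starting below the set point
for _ in range(20):
    temperature += regulate(temperature, SET_POINT)   # internal correction
    temperature -= 0.1                                # environmental heat loss

print(round(temperature, 1))
```

Despite the constant environmental disturbance, the loop settles near the set point: the correction grows with the error, so the system holds a roughly constant internal state, which is the "homeostasis" the paragraph describes.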

The nervous system received a kaleidoscopic mix of trillions of sensory inputs. A phenomenal memory enabled it to remember and identify patterns. Intuition, an algorithmic process, enabled it to isolate the context of a single pattern from that galactic memory. The system could identify objects from millions of received sensory inputs. That pattern recognition ability was not limited to the identification of static objects. It could identify problems. It recognized and interpreted dynamic events to create patterns of emotions. Emotions clearly defined problems. Animals recognized the difference between a friendly nudge and a deadly slither and responded. Fear, anger, or jealousy motivated them. Each motor response had a specific sequence of problem solving steps, which were, again, remembered patterns of activities.
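The recognition process described above, matching a noisy sensory input against remembered patterns without step-by-step reasoning, can be sketched as a nearest-pattern lookup. The feature tuples, labels, and similarity measure are illustrative assumptions:

```python
# A minimal sketch of recognition as nearest-pattern lookup in memory.
# The feature encodings and labels are invented for illustration.

def similarity(a, b):
    """Count matching features between two binary feature tuples."""
    return sum(x == y for x, y in zip(a, b))

# Remembered patterns: feature tuple -> interpretation.
memory = {
    (1, 0, 0, 1): "friendly nudge",
    (0, 1, 1, 0): "deadly slither",
    (1, 1, 0, 0): "storm clouds",
}

def recognize(observed):
    """Return the stored pattern closest to the observed input.
    A single lookup, with no chain of incremental logical steps."""
    return max(memory, key=lambda stored: similarity(stored, observed))

observed = (0, 1, 1, 1)            # noisy sensory input
print(memory[recognize(observed)])
```

Even though the input matches no stored pattern exactly, the closest remembered pattern ("deadly slither") is retrieved in one step, which is the kind of contextual isolation the paragraph attributes to intuition.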

The environment presented the system with millions of enigmatic phenomena. Many were caused by other phenomena. Most problems were patterns of events, which had contextual links to remembered successful problem solving strategies. Pattern recognition enabled identification. The process was not domain specific. It straddled the whole problem solving domain. Pattern recognition merely identified the link between one phenomenon and another. Intuition instantly identified the contextual link. It did not identify the complex reasoning links between the two. It did not use incremental logical steps to solve problems. When primitive man took shelter as the storm clouds advanced, he was merely responding to a perceived pattern.

Across thousands of years, mankind responded adequately to much of nature, without understanding underlying causes. That intelligence was not computation, which reasoned its way through life by analyzing the logical and mathematically precise links between particular causes and their effects. The reasons behind causes were discovered only later, with advanced study and research. Such analysis benefited just a minor segment of the problem solving world. Many symptoms were linked to an illness. Physicians identified illnesses without always knowing the logical or reasoned links between the symptom and the disease. Software code was logical. But many quirks of complex code were patterns of effects, linked to particular programming events, which could only be recognized by a pattern recognition intelligence. Complex problem solving was achieved through sensitive pattern recognition. True intelligence was this powerful pattern recognition capability, which also, incidentally, discovered logic, reasoning and mathematics.