AI, the brain, and cognitive plausibility


By Rich Heimann


This article is part of “the philosophy of artificial intelligence,” a series of posts that explore the ethical, moral, and social implications of AI today and in the future.

Is AI about the brain? 

The answer is: often, but not always. Many insiders and most outsiders believe that if a solution looks like a brain, it might act like a brain, and that if a solution acts like a brain, it will solve other problems the way humans solve them. What insiders have learned is that solutions that are not cognitively plausible teach them nothing about intelligence, or at least nothing more than they knew before they started. This belief is the driving force behind connectionism and artificial neural networks.

That is also why problem-specific solutions designed to play to strengths that are not psychologically or cognitively plausible fall short of artificial intelligence. For example, Deep Blue is not "real" AI because it is not cognitively plausible and will not solve other problems. The accomplishment, while profound, is an achievement in problem solving, not intelligence. Nevertheless, chess-playing programs like Deep Blue have shown that the human mind can no longer claim superiority over a computer at this task.

Let’s consider approaches to AI that are not based on the brain but still seek cognitive plausibility. Shane Legg and Marcus Hutter are both part of Google DeepMind. They describe the goal of artificial intelligence as an “autonomous, goal-seeking system; [for which] intelligence measures an agent’s ability to achieve goals in a wide range of environments.”

This definition is an example of behaviorism. Behaviorism was a reaction to nineteenth-century philosophy of mind, which focused on the unconscious, and to psychoanalysis, which was ultimately difficult to test experimentally. John Watson, professor of psychology at Johns Hopkins University, spearheaded the movement in the first half of the twentieth century. Watson’s 1913 “Behaviorist Manifesto” sought to reframe psychology as a natural science by focusing only on observable behavior, hence the name.

Behaviorism aims to predict human behavior by appreciating the environment as a determinant of that behavior. By concentrating only on observable behavior and not the origin of the behavior in the brain, behaviorism became less and less a source of knowledge about the brain. In fact, to the behaviorist, intelligence does not have mental causes. All the real action is in the environment, not the mind. Ironically, DeepMind embraces the philosophy of operant conditioning, not the mind. 

In operant conditioning, also known as reinforcement learning, an agent learns that getting a reward depends on action within its environment. The behavior is said to have been reinforced when the action becomes more frequent and purposeful. This is why DeepMind does not define intelligence: it believes there is nothing special about it. Instead, intelligence is stimulus and response. While an essential component of human intelligence is the input it receives from the outside world, and learning from the environment is critical, behaviorism purges the mind and other internal cognitive processes from intellectual discourse. 
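The stimulus-response loop described above can be sketched in a few lines of code. Below is a minimal, illustrative reinforcement learning agent facing a two-armed bandit: it tries actions, receives rewards from its environment, and the more frequently rewarded action becomes the more frequently chosen one. This is a generic textbook-style sketch, not DeepMind's actual method; all names and reward probabilities are invented for illustration.

```python
import random

def run_bandit(steps=2000, epsilon=0.1, seed=0):
    """Epsilon-greedy agent on a two-armed bandit (illustrative only)."""
    rng = random.Random(seed)
    reward_prob = [0.2, 0.8]   # hypothetical environment: arm 1 pays off more often
    value = [0.0, 0.0]         # the agent's running estimate of each action's value
    counts = [0, 0]            # how often each action has been taken

    for _ in range(steps):
        if rng.random() < epsilon:
            action = rng.randrange(2)                    # occasionally explore
        else:
            action = 0 if value[0] > value[1] else 1     # otherwise exploit the best estimate
        reward = 1.0 if rng.random() < reward_prob[action] else 0.0
        counts[action] += 1
        # Incremental average: nudge the value estimate toward the observed reward.
        value[action] += (reward - value[action]) / counts[action]

    return value, counts
```

After enough steps, the agent takes the rewarded action far more often, which is exactly what "the behavior has been reinforced" means here: nothing inside the agent models a mind, only action frequencies shaped by the environment's rewards.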

This point was made clear in a recent paper by David Silver, Satinder Singh, Doina Precup, and Richard Sutton from DeepMind titled “Reward is Enough.” The authors argue that “maximizing reward is enough to drive behavior that exhibits most if not all attributes of intelligence.” However, reward is not enough. The statement itself is simplistic, vague, circular, and explains little because the assertion is meaningless outside highly structured and controlled environments. Besides, humans do many things for no reward at all, like writing fatuous papers about rewards.

The point is this: if you or your team talk about how intelligent or cognitively plausible your solution is, and I see this kind of argument quite a bit, then you are not thinking enough about a specific problem or the people impacted by that problem. Practitioners and business-minded leaders need to know about cognitive plausibility because it reflects the wrong culture. Real-world problem solving addresses the problems the world presents to intelligence, and the solutions to those problems are almost never cognitively plausible. While insiders want their goals to be understood and shared by their solutions, your solution does not need to understand that it is solving a problem, but you do.

If you have a problem to solve that aligns with a business goal, and you seek an optimal solution to accomplish that goal, then how “cognitively plausible” the solution is does not matter. Whether a problem is solved is always more important than how it is solved, and if you don’t care how, you can solve just about anything. The goal itself, and how well a solution fits the problem, matter more than how the goal is accomplished, whether the solution was self-referencing, or what the solution looked like after you failed to solve the problem.

About the author

Rich Heimann

Rich Heimann is Chief AI Officer at Cybraics Inc, a fully managed cybersecurity company. Founded in 2014, Cybraics operationalized many years of cybersecurity and machine learning research conducted at the Defense Advanced Research Projects Agency. Rich is also the author of Doing AI, a book that explores what AI is, is not, what others want AI to become, what you need solutions to be, and how to approach problem-solving. Find out more about his book here.

1 COMMENT

  1. I have solved problems by preventing a recurring precursor to the problem from happening.
    Then been told that correlation is not causation.
    I don’t know how to reply to those people other than ask them whether the problem is happening any more and they say, No, but we still need to understand what was causing the problem so we’d like you to undo your change that is preventing the problem so we can analyze it some more. I’m like… “No, that thing is crashing the systems and costing us actual cash when we don’t meet SLA.” And they’re like, “Well we can just do things that fix things, we need to understand why it crashes not just prevent it from crashing.” I’m like… “Dude no. Leaving this broken would be anti-pattern. Leave it fixed and find out why my change fixes it if you want to do something.” And they’re like “you’re fired”.

