The development of artificial intelligence has raised important questions that researchers in the field must confront. Chief among them: what exactly constitutes an artificial intelligence, and, if total self-awareness in a machine could be achieved, would that machine then be alive in its own right?
Over the years, generalized definitions of AI have emerged that group the field's finer-grained characteristics into two broad categories. Both rest on the assumption that an AI can perceive its environment and take the steps necessary to maximize its chances of success. Thus even the weakest form of AI must be able to learn to some degree.
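The shared assumption above, an agent that perceives its environment, acts, and learns from the outcome, can be sketched as a simple perceive-act-learn loop. The class name, the epsilon-greedy policy, and the toy two-action environment below are illustrative choices, not a description of any particular system.

```python
import random

class LearningAgent:
    """Minimal sketch of the perceive-act-learn loop: the agent tries
    actions, observes rewards, and shifts toward what works best."""

    def __init__(self, actions, epsilon=0.1):
        self.estimates = {a: 0.0 for a in actions}  # estimated value per action
        self.counts = {a: 0 for a in actions}       # how often each was tried
        self.epsilon = epsilon                      # chance of exploring randomly

    def act(self):
        # Mostly exploit the best-known action; occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(list(self.estimates))
        return max(self.estimates, key=self.estimates.get)

    def learn(self, action, reward):
        # Incremental average: nudge the estimate toward the observed reward.
        self.counts[action] += 1
        n = self.counts[action]
        self.estimates[action] += (reward - self.estimates[action]) / n

# Toy environment: action "b" pays off far more often than "a".
random.seed(0)
agent = LearningAgent(["a", "b"])
for _ in range(500):
    action = agent.act()
    if action == "b":
        reward = 1.0 if random.random() < 0.8 else 0.0
    else:
        reward = 1.0 if random.random() < 0.2 else 0.0
    agent.learn(action, reward)

print(agent.estimates)  # learned value per action; "b" should score higher
```

Even this trivial loop satisfies the weakest reading of the definition: the agent perceives feedback from its environment and adjusts its behavior to maximize success.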
Weak AI
This category covers AI that is intelligent in the sense of problem solving and knowledge-based reasoning. Weak AI need not be self-aware or possess any of the traits attributed to a sentient mind. Its focus is function: operating in an efficient and evolving manner. This type of AI already exists in expert systems and other programs that can adapt but still rely on human input.
Strong AI
This is the humanlike AI: it can think for itself, act creatively, and question its own existence. The exact extent of strong AI's human capacities is up for debate, but the list includes empathy, intuition, and the ability to philosophize and wax nostalgic. For now, strong AI remains in the realm of science fiction, though progress is made every day that brings scientists closer to emulating at least some human properties in a machine.
The Problems with Testing for Intelligence in Strong AI
The Turing Test was the first attempt to define and recognize an intelligent machine. It raises a question: is imitation actually life? Turing would say yes. If a computer can act like a human and fool other humans into believing it is intelligent, then it is indeed intelligent. This debate has given rise to a number of questions and criteria for distinguishing sentience from raw computing power and knowledge.
Is Humanity Esoteric?
Many claim that human intelligence does not arise from the application of formal operations within the brain, but instead comes from unconscious processes such as instincts. While this argument can also encompass such metaphysical concepts as the soul, it is more formally addressed at the level of self-awareness and emotion.
The opposing camp claims that the mind works in a manner that adjusts to situations and produces the appropriate responses; emotion and other esoteric phenomena are merely external manifestations of these processes. The assumption is that creativity, self-awareness, and emotion will all come about as by-products of a properly formed machine consciousness.
If a computer can be programmed with enough information and given the ability to use that information to create an infinite number of possible responses, does it have what it takes to be human? Or is the human mind more than just routines and subroutines operating in the brain and giving the illusion of sentience?
The Humanity Checklist
To qualify as human, one must meet certain requirements. A human must be able to reason, make judgments, possess common sense, perceive the passage of time, and take appropriate action (or inaction) based on that perception. A human must be able to learn from both mistakes and successes. Some would add that a human must also be able to sense the world through sight, sound, touch, and so on, and that a body is required to replicate these experiences properly. A few of these issues are examined below.
Learning from Mistakes
Artificial intelligence, above all else, needs to be able to learn. Learning from mistakes is fundamentally human and shapes the way people view the world. If an AI is given only a limited amount of knowledge and must ask questions in order to learn more, then a margin for error exists. Moreover, the AI must be able to recognize failed behavior and make corrections on its own when necessary.
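The requirement above, limited starting knowledge, asking questions to fill gaps, and self-correcting after a recognized failure, can be sketched in a few lines. Everything here (the class name, the question-and-feedback protocol, the sample facts) is a hypothetical illustration, not a real system.

```python
class CorrectableAssistant:
    """Sketch of an agent with limited starting knowledge: it asks when
    it does not know, and corrects itself when feedback marks an answer
    as a mistake."""

    def __init__(self, knowledge):
        self.knowledge = dict(knowledge)  # limited initial knowledge

    def answer(self, question):
        # Limited knowledge forces a question back to the user.
        if question not in self.knowledge:
            return "I don't know -- can you tell me?"
        return self.knowledge[question]

    def feedback(self, question, correct_answer):
        # Recognize the failed behavior and overwrite it; the margin
        # for error is what makes the correction meaningful.
        if self.knowledge.get(question) != correct_answer:
            self.knowledge[question] = correct_answer

# Illustrative facts only: the agent starts out wrong about one of them.
bot = CorrectableAssistant({"capital of France": "Lyon"})
print(bot.answer("capital of France"))      # wrong at first
bot.feedback("capital of France", "Paris")  # correction after the mistake
print(bot.answer("capital of France"))      # corrected
print(bot.answer("capital of Japan"))       # unknown: asks for help
```

The design choice is deliberate: the agent does not merely store facts, it detects the mismatch between its answer and the feedback before changing anything, which is the minimal form of "recognizing failed behavior."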
Expectations as Opposed to Direct Analysis
Studies have shown that human beings process their environment through a subtle recognition of what they perceive and what they do not perceive. When something in the environment changes, a human will immediately know, subconsciously, that something has changed. Only then does the process of discovering what has changed take place. If a computer, with its highly powerful sensory input and instantaneous calculations of its environment, recognizes the changed element at the same instant as the change itself, does that go against human qualities?
Life Requires a Body
Some do not believe that life is possible outside a biological or chemical medium. On this view, any machine-based AI would be only a representation of the mechanics of life, not true life at all. Even a robot with all the workings of a human mind and a full range of motor capabilities and sensory devices would be only an imitation.
Life Requires Experience
Some say that humanity advances from historical experience. If human experience and interaction are the key to sentience, then an AI would be forced to go through the same cycles humans do: from birth, to infancy, and into adulthood. If the computer learns faster than humans do, perhaps this time frame could be compressed. Some form of "living," however, would be needed for life. This leads back to the issue of whether an AI would need a body to develop properly.
Life Requires Quantum Physics
Another hypothesis suggests that consciousness may actually be the result of the operation of quantum mechanics. By this reasoning, a truly self-aware AI could never come about through algorithmic programming, no matter how smart it was. This hypothetical problem is being somewhat addressed in the field of Cellular Neural Networks, whereby the presence of an analog component allows external quantum variables to act upon the processes of the AI. It may be some time before this hypothesis even needs testing.
Is it Within the Power of Man?
Can a consciousness which does not even understand itself possibly create another consciousness on its own level? If AI were to manifest, would we even recognize it? These and thousands of other questions accompany the complicated study of artificial intelligence. Most of these questions will never be answered until something closer to real intelligence can be constructed.