
The Conception of AI (Part One of Ten)

Artificial intelligence is not unique to the 20th century. For thousands of years, cultures have ascribed sentience to inanimate objects. Animism, one of the earliest forms of religion, holds that everything has its own individual soul, whether it is a rock, a plant or something as intangible as the wind. What is unique to the period we live in, however, is that we have the potential to let this otherworldly form of life speak back.

The Modern Birth of Artificial Intelligence

In 1956 a computer and cognitive scientist by the name of John McCarthy made the first attempt to describe the technological idea that has since come to occupy the minds of scientists, psychologists and fiction writers everywhere. He named this new idea “Artificial Intelligence.” Though the first computers were little more than hulking beasts filled with vacuum tubes, reading their programs off punch cards, the pieces were in place to make the first attempts at replicating the workings of the human mind. Only two years later, McCarthy developed a programming language known as LISP – the first language of AI. LISP was the fundamental programming language for anyone working on AI development for many years and is still in regular use today.

Recognizing AI

Several years before AI had its name, Alan Turing was conceptualizing his famous Turing Test as a means of determining whether a machine could be as intelligent as a human being. He argued that it was possible, with study, to describe human intelligence so precisely that if a computer could replicate it well enough to fool another human being in a controlled test, then the machine could fairly be called as smart as a person. The Turing Test has come under heavy fire since it was first proposed, and even Turing himself admitted that the true definition of human intelligence was an uncertain thing. While flawed, the purpose behind the Turing Test is what made it so important – for the first time, there was a way to measure progress toward AI.

Mad Scientists Everywhere

Once they had the tools, the scientists were like kids in a playground. Artificial intelligence programs were developed and tested at a furious pace. Though no one seemed able to agree on the best approach to creating an AI, this worked to the field’s advantage, and new ideas sprang from constant trial, error and theory. In the first years of AI development, many remarkable things took place. While some of them may seem mundane to those who have grown up with laptops and cell phones, at the time they were major milestones in the field.

In 1956 the first chess program to beat a human player was developed. In the same year the Dartmouth Conference took place – the first conference ever devoted to the study and theory of artificial intelligence. Also in 1956, Newell, Shaw and Simon developed the Logic Theorist, widely considered the first AI program. It was designed to prove mathematical theorems as a demonstration of automated problem solving. Much to everyone’s surprise, it not only proved most of the theorems it was given but even produced a more elegant proof for one of them. To an outside observer this could be seen as the first sign of computer creativity. To the scientists who built the program, it was the first sign that they were creating something that could be smarter than they were.

The Tide Recedes

After a period of furious activity and a great deal of optimistic speculation, the honeymoon had to end. The 1980s arrived, and the promises hopeful scientists had made of developing true AI within a few decades were proving unrealistic. The lack of progress ended the hype that had followed the field for more than two decades, and funding was pulled. The reasons were many, including insufficient computing power and insufficient manpower. The main reason for the setback, however, was fairly simple – scientists did not have enough information about human intelligence to know what they were trying to build.

Survival of the Fittest

Eventually the AI development community had to look realistically at what it was doing. Researchers were nowhere near as close as they had originally thought to creating an AI with human-level intelligence. What they did have, however, was the ability to create extremely capable systems as long as those systems were narrowly specialized. These expert systems were more practical and more appealing to the general public (and to those providing the funding). Expert systems are often seen as a purely functional version of artificial intelligence, and many of the intelligent programs running today, from business applications to automated assembly lines, are expert systems at heart. A simple sketch of this rule-based style appears below.
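
At its core, an expert system pairs a knowledge base of if-then rules with an inference engine that applies them to known facts. The following is only a minimal sketch of that rule-based style in Python; the car-troubleshooting domain, facts and rules are invented for illustration and are not taken from any real system.

```python
# Minimal, hypothetical sketch of a rule-based expert system.
# Each rule: if all of its conditions match the known facts,
# the engine adds the rule's conclusion (a recommendation).

RULES = [
    ({"engine_cranks": False, "battery_dead": True}, "replace_battery"),
    ({"engine_cranks": True, "fuel_empty": True}, "refuel"),
    ({"engine_cranks": True, "fuel_empty": False}, "check_spark_plugs"),
]

def forward_chain(facts):
    """Apply the rules repeatedly until no new conclusions can be drawn."""
    conclusions = set()
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conclusion in conclusions:
                continue
            if all(facts.get(key) == value for key, value in conditions.items()):
                conclusions.add(conclusion)
                changed = True
    return conclusions

if __name__ == "__main__":
    observed = {"engine_cranks": True, "fuel_empty": True}
    print(forward_chain(observed))  # prints {'refuel'}
```

Real expert systems of the era worked on the same principle, only with hundreds or thousands of rules written down by human domain experts rather than the toy handful shown here.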

Into the Present

The science fiction fantasies of artificial intelligence were effectively shattered, at least for a time. The practical uses of intelligent programs appealed to everyone from the military to the marketplace, and funding quickly followed those applications. The development of an AI on the level of human intelligence was left to the odd fringe scientist and the curious college student, while learning programs were relegated to tasks such as data mining, computer gaming and image processing.

Into the Future?

The desire to build a human-level thinking machine has never quite left the imaginations of computer programmers, and many factors are contributing to the current rise in interest in developing better AI. Hardware is more powerful than it has ever been and grows more powerful every year. The popularity of biologically-inspired programming, such as neural networks and genetic algorithms, has made learning and creativity in AI more plausible; a small example of the genetic approach is sketched below. Add to that mix an army of young and very skilled computer programmers who grew up on a diet of science-fiction movies and high-tech video games, and you have the perfect recipe to bring imagination across the veil and into reality.
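
To make the idea of biologically-inspired programming concrete, here is a minimal, hypothetical sketch of a genetic algorithm in Python. The toy problem (evolving a bit string of all ones), the population size, mutation rate and other parameters are chosen purely for illustration.

```python
# Minimal sketch of a genetic algorithm: selection, crossover and mutation
# evolve a population of bit strings toward a simple target (all ones).
import random

TARGET_LEN = 20        # length of each candidate bit string
POP_SIZE = 50          # candidates per generation
MUTATION_RATE = 0.02   # chance of flipping each bit
GENERATIONS = 100      # upper bound on how long we evolve

def fitness(candidate):
    # Fitness is simply the number of ones; the all-ones string is optimal.
    return sum(candidate)

def crossover(a, b):
    # Single-point crossover: splice the two parents at a random position.
    point = random.randint(1, TARGET_LEN - 1)
    return a[:point] + b[point:]

def mutate(candidate):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in candidate]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == TARGET_LEN:
        break
    survivors = population[:POP_SIZE // 2]   # selection: keep the fittest half
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print(f"best after {gen} generations:", "".join(map(str, population[0])))
```

The same loop of selection, crossover and mutation applies to far harder problems; only the fitness function and the way candidates are represented need to change.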
