August 1, 2018
The History of AI: An Innovation From the Fourth Century B.C.

Early Theories on Formalizing Reasoning
In recent years, artificial intelligence and machine learning have become increasingly prevalent. But the concept of AI dates back more than 2,000 years.
Greek thinkers such as Aristotle and Euclid developed theories that tried to formalize human reasoning so that it could be simulated and mechanized. Centuries later, other philosophers continued exploring ways to systematize reasoning: Ramon Llull with his logic machines in the 13th century, and Leibniz, Hobbes, and Descartes through geometry and algebra in the 17th century.
Early in the 20th century, it seemed that AI might become possible through the development of mathematical logic. Scientists and mathematicians set out to answer a fundamental question: “Is it possible to formalize all mathematical reasoning?”
The answer was two-fold. On the one hand, Gödel’s incompleteness theorems showed that mathematical logic had clear limits. On the other, within those limits, any form of mathematical reasoning could in principle be mechanized.
In 1936, Alan Turing introduced the Turing machine, a theoretical model of computation that sparked scientific debate about the possibility of creating intelligent machines. During and just after World War II, the first modern electronic computers (Colossus, ENIAC) were built, drawing on Turing’s ideas. With them, mathematicians, psychologists, engineers, economists, and political scientists began to discuss the idea of creating an artificial brain.
At the Dartmouth Summer Research Project on Artificial Intelligence in 1956, John McCarthy coined the term “artificial intelligence” and the field was established as an academic discipline. This is considered to be the true birth of AI.
Ups & Downs: The Golden Years & the First Winter
The period between the Dartmouth Summer Research Project on Artificial Intelligence and 1974 constitutes the golden years of AI. It was an age of exploration and discovery, and the programs developed during these years were awe-inspiring: computers solved algebra word problems, proved geometry theorems, and held simple conversations in English. Researchers were strikingly optimistic, predicting that fully intelligent machines would be built in less than 20 years. Funding poured in, greatly accelerating research.
During the 70s, researchers realized that they had underestimated the difficulty of the problems they were trying to solve. Their lofty optimism had set expectations extraordinarily high, and investment dried up when the promised results failed to materialize. That is why the period between 1974 and 1980 is known as the first AI winter.
During the 80s, corporations around the world began to adopt a type of AI called “expert systems,” leading to another boom. The governments of Japan, the United States, and the United Kingdom restarted investment in AI research. Backpropagation, a new training method for neural networks, was popularized. Based on those neural networks, new applications for optical character recognition (OCR) and speech recognition were successfully commercialized.

In the late 80s and early 90s, AI suffered a series of turbulent events. The first was the sudden collapse of the market for specialized AI hardware in 1987: IBM and Apple desktop computers had steadily gained speed and power, surpassing the expensive specialized machines, and the whole industry vanished almost overnight. In addition, most of the ambitious AI goals set earlier in the decade remained unmet. This was the second AI winter, which lasted until 1993.
By the end of the 20th century, the field of AI had finally achieved some of its oldest goals. Successful industrial applications demonstrated AI’s real-world value, a success due in part to increased computing power and to a focus on solving specific, well-defined problems.
AI Today
During the first two decades of the 21st century, big data, faster computers, and advanced machine learning (ML) techniques increased AI’s economic impact across almost all sectors.