
The evolution of Artificial Intelligence: from imitation to human empowerment

Artificial Intelligence (AI) is a branch of computer science concerned with programming and designing systems, both hardware and software, endowed with capabilities traditionally considered typically human. Transferring to machines, for example, visual perception, spatio-temporal reasoning and decision-making means dealing not only with simple calculation or the handling of abstract data, but above all with the many different forms of intelligence, from spatial to social, from kinaesthetic to introspective. To build an advanced system, in fact, researchers try to recreate one or more of these forms of intelligence which, although often described as exclusively human, can in reality be traced back to behaviors that certain machines are able to reproduce.

Modern AI was born after World War II, mainly thanks to the advent of the computer, considered the ideal tool for reproducing the human mind.

Among the first to work in this field were Warren McCulloch and Walter Pitts, who in 1943 proposed a neural network model inspired by the functioning of the human brain. Each neuron, interconnected with others through synapses, could represent a binary state: “on” or “off”. Each artificial neuron received one or more inputs, which simulated the synapses of the human brain, and summed them to produce an output sent to the other neurons of the network, which would in turn use it as their own input. These single artificial units are still a standard building block of neural networks, and are known as McCulloch-Pitts neurons. The most surprising hypothesis arising from this research was that such networks could implement learning processes. It took a few years to confirm this intuition: in 1949 Donald Hebb theorized a rule for modifying artificial synapses that made it actually feasible. According to Hebb, when two connected neurons in a network are repeatedly activated by the occurrence of a certain event, their connection is consolidated and strengthened, developing a memory of the event that stimulated it.

Warren McCulloch and Walter Pitts, inventors of the artificial neuron
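
To make the idea concrete, here is a minimal sketch in Python (illustrative only, not historical code; the function names and numbers are ours) of a McCulloch-Pitts unit that sums its weighted binary inputs and fires only when a threshold is reached, together with a toy Hebbian update that strengthens connections between units that are active together.

```python
# Illustrative sketch only (not historical code): a McCulloch-Pitts unit
# sums its weighted binary inputs and switches "on" only when a threshold
# is reached; a toy Hebbian rule then strengthens the weights of inputs
# that fire together with the output.

def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Return 1 ("on") if the weighted sum of the inputs reaches the threshold, else 0 ("off")."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

def hebbian_update(inputs, output, weights, learning_rate=0.1):
    """Hebb's idea: connections between co-active units grow stronger."""
    return [w + learning_rate * x * output for x, w in zip(inputs, weights)]

# A two-input unit behaving like a logical AND gate.
weights = [1.0, 1.0]
print(mcculloch_pitts_neuron([1, 1], weights, threshold=2.0))  # 1: both inputs on
print(mcculloch_pitts_neuron([1, 0], weights, threshold=2.0))  # 0: only one input on

# Repeated co-activation strengthens the connection, "memorizing" the event.
weights = hebbian_update([1, 1], output=1, weights=weights)
print(weights)  # [1.1, 1.1]
```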

At the beginning of the following decade, Marvin Minsky built the first neural network computer, called SNARC. The machine simulated a network of 40 neurons with the aim of reproducing the behavior of a mouse in a maze which, mistake after mistake, learns to find the way out.

In 1950 Alan Turing, a British hero of World War II for his work decrypting Nazi coded messages, wrote the article “Computing Machinery and Intelligence”. Together with his lectures on the subject, the article brought prominence to the nascent field by proposing that a machine could be considered intelligent if it passed the “Turing Test”, or “Imitation Game”. The test consisted of placing a person in front of a terminal through which he could communicate with two entities: another human and a computer. If the person at the terminal could not tell which was the human and which was the machine, the computer passed the test.

The ACE (Automatic Computing Engine), used for the Turing Test, or Imitation Game

In 1956, AI gained the status of a true scientific discipline: that year Dartmouth College (New Hampshire) hosted a conference organized by the researcher John McCarthy, whose manifesto proposed, for the first time, the term “artificial intelligence”. McCarthy expressed the view that every aspect of intelligence could be described in terms rigorous enough that a machine could be programmed to simulate it. The conference was conceived as an opportunity to bring together many of the earliest and most important researchers in the field and compare their views on the subject.

At Dartmouth, the researchers Herbert Simon and Allen Newell presented what can be considered the first AI program: Logic Theorist, designed by the two in 1955, was able to prove most of the theorems of Chapter 2 of Russell and Whitehead’s “Principia Mathematica”. In 1957 the same Newell and Simon developed the General Problem Solver (GPS), which implemented an inferential process inspired by the way the human mind reasons. GPS could act on and manipulate objects within the representation of a room, for example reaching an object lying on a table by stacking two chairs.

In the meantime, McCarthy worked diligently at MIT on a new programming language, called LISP, intended to facilitate the creation of AI programs. In 1958, in his paper “Programs with Common Sense”, he described the first example of a complete artificial intelligence: the Advice Taker, which was meant to perceive the surrounding reality, represent it internally, and thus interact with it and respond to external stimuli.

The advances achieved in that period were astonishing: computers began to solve algebraic problems, prove geometric theorems and learn to speak English. Among the many applications to which early researchers believed AI could make essential contributions was machine translation, but it was precisely there that the first failures and disappointments occurred.

At the height of the Cold War, organizations such as DARPA, a government agency of the U.S. Department of Defense, invested tens of millions of dollars in artificial intelligence projects at various universities, chasing the possibility of quickly and automatically translating scientific articles from Russian into English. All of these projects were later abandoned for lack of results, and the failure of these early attempts, documented in a 1966 report to the U.S. government, led to government funding being cut for many AI research programs.

Meanwhile, between 1964 and 1966, Joseph Weizenbaum created ELIZA, one of the pioneering applications of natural language processing. The program was able to simulate a conversation with humans using simple pattern-matching and substitution rules. ELIZA is remembered as a pivotal step in the history of AI, because it was the first time a human-computer interaction was designed to create the illusion of a conversation between humans. The effect the program had on many people was disconcerting: in ELIZA’s simulation of a session with a psychotherapist, the “patients” were completely immersed in the illusion, opening their souls and thoughts to the computer. This led Weizenbaum, years later, to express doubts about the morality of creating an AI.

Joseph Weizenbaum with ELIZA
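
As a purely illustrative sketch (not Weizenbaum’s original code, and far simpler), the idea behind ELIZA can be captured in a handful of pattern-and-substitution rules like these:

```python
import re

# Illustrative ELIZA-style rules: the first pattern that matches the user's
# sentence selects a response template, which reflects part of the input back.
RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "How long have you felt {0}?"),
    (r"my (\w+)", "Tell me more about your {0}."),
    (r"(.*)", "Please, go on."),  # fallback when nothing else matches
]

def eliza_reply(sentence):
    sentence = sentence.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, sentence)
        if match:
            return template.format(*match.groups())

print(eliza_reply("I am worried about my exams."))
# -> Why do you say you are worried about my exams?
print(eliza_reply("My mother calls me every day."))
# -> Tell me more about your mother.
```

The real program added scripted keywords and pronoun reflection, but the principle of matching and recombining the user’s own words is the same.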

In the same period, Ed Feigenbaum invented a type of artificial intelligence called the “expert system”: expert systems used a set of logical rules derived from the knowledge of human experts in specific fields to automate very specific decisions. In 1965 he designed DENDRAL, capable of analyzing the chemical structure of organic molecules: given the spectral analysis of a molecule, the program generated a set of possible structures and then compared them with the data to determine the correct one. Then, in 1972, together with two other researchers, he developed MYCIN, a system specialized in the diagnosis of infectious diseases.
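
The rule-based mechanics behind such systems can be sketched very roughly as follows (the rules and facts here are invented for illustration and have nothing to do with MYCIN’s actual knowledge base):

```python
# Illustrative expert-system sketch: known facts plus if-then rules elicited
# from a human expert, applied by forward chaining until nothing new follows.
RULES = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "chest_pain"}, "suspected_pneumonia"),
    ({"suspected_pneumonia"}, "recommend_chest_xray"),
]

def forward_chain(facts, rules):
    """Fire every rule whose conditions are satisfied until no new conclusion is added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "chest_pain"}, RULES))
# -> includes 'respiratory_infection', 'suspected_pneumonia', 'recommend_chest_xray'
```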

Despite the countless discoveries of the early years, the excitement began to wane during the 1970s: the expectations and hopes that scientists had boasted of turned out to be too rosy and ambitious. It was precisely in the areas that seemed simplest, such as machine translation or natural language generation, that AI ran into the greatest difficulties. There were also complications related to limited computing power, the intrinsic difficulty and intractability of the problems, and the management of large amounts of data. For example, one of the first AI systems to analyze English could handle a vocabulary of only 20 words, because it could not store more in memory. In 1974, funders, realizing that researchers had promised too much without achieving the desired results, refused to continue supporting most AI projects.

The beginning of the eighties saw a comeback for the field, thanks to the great success of the XCON program, written by Professor John McDermott for the Digital Equipment Corporation: by helping hardware vendors avoid mistakes when placing very complicated orders (at the time customers had to order every single component of their computer systems themselves), it allowed the company to save about 40 million dollars a year. In the wake of this success, funding and investment returned: new knowledge-based systems and the discipline of knowledge engineering were born, producing programs capable of playing chess at the level of human players.

XCON architecture

Once again, however, investment and confidence in AI turned out to be an economic bubble, which showed its first signs of collapse in 1987 as many hardware companies specializing in computers dedicated to artificial intelligence went under. Expert systems such as XCON proved too expensive to maintain, because they had to be updated manually (a difficult, costly and error-prone process) and could not handle unusual inputs well. New PCs from Apple and IBM turned out to be more powerful than computers specifically designed to run artificial intelligence programs, and a half-billion-dollar industry disappeared almost overnight. Researchers had promised too much and delivered too little, and so the study of artificial intelligence suffered a setback that lasted until the mid-1990s.

After this further failure, the study of artificial intelligence changed its approach completely: where the focus had previously been on research based on intuition and hand-crafted examples, attention now shifted to building models grounded in well-established mathematical results and extensive experimentation. This partial re-founding, coupled with the availability of ever faster computers and new waves of funding, led to significant results. Surprisingly, even as funding grew scarce, AI, albeit under other names, began to be incorporated into thousands of successful systems: Google’s search engine, for example, was partially powered by artificial intelligence from the start.

In 1991, DARPA unveiled the DART system to optimize the logistics needed by the U.S. military during the Gulf War: within four years, the system had saved the military roughly as much money as DARPA had spent on AI research over the previous 30 years combined. In 1997, IBM’s Deep Blue defeated the reigning world chess champion, Garry Kasparov, and in 2005 a Stanford robot won the DARPA Grand Challenge by autonomously driving 131 miles along a desert trail it had never seen before. None of these systems, however, was explicitly called “artificial intelligence”: starting in the 1990s and continuing into the 2000s, researchers preferred to describe these inventions as “knowledge-based systems,” “cognitive systems,” and “computational intelligence.”

Garry Kasparov vs. Deep Blue

Fast forward to the present: in 2012, AlexNet won the ImageNet Large Scale Visual Recognition Challenge, an annual competition featuring the best computer vision algorithms from around the world. Its astonishing margin of victory captured the attention of researchers in many fields, marking a moment of great renaissance for the AI industry. The three key factors behind the University of Toronto team’s victory are the same pillars driving modern artificial intelligence: ever-increasing computational capacity, big data, and increasingly intelligent algorithms. This is where the exponential growth of advances in AI, from 2012 to the present, began.

Over the last twenty years, technological progress has advanced at a frightening pace, making it possible to reach once unthinkable goals. Two factors above all have allowed such impressive development in such a short time: on the scientific and technical side, the growth in computers’ power and computing capacity (accompanied by the progressive miniaturization of their components); on the social side, access to enormous amounts of data, personal and otherwise, generated by the global population’s use of technology. An unprecedented process of improvement of artificial intelligence has thus been unknowingly set in motion, and this enormous availability of data, unavailable in the past, has been the fuel of this super-development.

The collaboration between Microsoft, Delft University of Technology, the Rembrandt House Museum in Amsterdam and the Mauritshuis gave birth to “The Next Rembrandt” project. Presented on April 19, 2016 at the “Looiersgracht 60” gallery in Amsterdam, it is a painting by Rembrandt that the artist never actually painted. “How would Rembrandt have painted his last painting?” seems to have been the question behind the project, and the answer is an image produced by a 3D printer depicting the portrait of a 17th-century man. It looks as if it were made by the artist’s hand, but it was created with sophisticated technology that had a computer produce a painting whose characteristics matched those of the originals in every way, so much so that it could pass for an original from the workshop of the great Dutch painter.

The work was the result of 18 months of effort in which human intervention was limited to creating the most appropriate algorithms, while the computer had the hardest task: analyzing and capturing the artist’s signature traits (compositional geometries, pictorial materials, recurring patterns in the rendering of faces in portraits, and the unmistakable trademark of the master of light and shadow). It memorized 168,263 pictorial fragments taken from a corpus of over three hundred paintings made between 1632 and 1642, acquired with a high-precision scanner over more than 500 hours of scanning and amounting to 150 GB of material. The result was a portrait of a Caucasian man aged roughly 30 to 40, with a thick beard and moustache, a black suit with a white collar and a hat, looking to the right.

To recreate the three-dimensional feel of a painting, with the texture of the canvas and the thickness of the paint, the reliefs of an original painting were mapped. The resulting map served as a reference for the final print of “The Next Rembrandt,” which was generated with a 3D printer using UV inks applied in multiple layers to recreate the feeling of a hand-painted work.

The Next Rembrandt

In short, the project set out to fuel the conversation about the relationship between art and algorithms, between data and human design, and between technology and emotion: the final piece is not a copy of Rembrandt’s work, nor necessarily what he would have painted had he lived longer, but a powerful demonstration of how data can be used to make life itself more beautiful. A tribute to the Master of Light and Shadow.

And not only art: artificial intelligence has also been used in whisky production. The Swedish distillery Mackmyra Whisky, founded in 1999 and winner of several international awards, together with the Finnish technology company Fourkind and Microsoft, created the world’s first whisky whose blend was suggested by an artificial intelligence.

Master distillers can spend their entire lives meticulously tasting, modifying and experimenting to create the best possible flavors, turning acts of chemistry into an art form. This is where Mackmyra brought in AI, used to augment and automate the whisky creation process. Currently, the distillery’s machine learning models, backed by Microsoft’s Azure cloud platform and cognitive services, are fed Mackmyra’s existing recipes (including those of award-winning blends), sales data, and customer preferences. This allows the AI to generate 70 million recipes that it predicts might be popular: not only is the process faster, but the amount of data processed lets it find combinations that would probably never have been considered otherwise. The solution, however, is not designed to replace a master blender, since no program can ever replace the sensory factor.

The first whisky in the world made by an artificial intelligence
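
As a rough, purely hypothetical illustration of the underlying idea (this is not Mackmyra’s or Fourkind’s actual pipeline, and all the data below is invented), one can imagine training a regression model on past blends and their popularity, then scoring machine-generated candidate blends with it:

```python
# Hypothetical sketch: score candidate blends with a model trained on past
# recipes and their popularity (scikit-learn used here as a generic stand-in
# for the cloud ML services mentioned in the article).
from sklearn.ensemble import RandomForestRegressor

# Invented training data: each recipe is a vector of cask-type proportions,
# labelled with a popularity score derived from sales and tasting feedback.
past_recipes = [[0.6, 0.2, 0.2], [0.3, 0.5, 0.2], [0.1, 0.1, 0.8], [0.4, 0.4, 0.2]]
popularity = [7.5, 8.2, 5.1, 7.9]

model = RandomForestRegressor(random_state=0).fit(past_recipes, popularity)

# Enumerate candidate blends (a tiny stand-in for the "70 million recipes").
candidates = [[a / 10, b / 10, (10 - a - b) / 10]
              for a in range(11) for b in range(11 - a)]

# Predict a popularity score for every candidate and keep the most promising.
scores = model.predict(candidates)
top = sorted(zip(candidates, scores), key=lambda t: t[1], reverse=True)[:3]
for blend, score in top:
    print(blend, round(score, 2))
```

A master blender would then taste and refine the top suggestions, which is exactly the division of labor the article describes.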

Today, AI is present in every aspect of our daily lives: our phone knows our preferences and tastes, unlocks via facial recognition, suggests which route to take and tells us how long it will take to get from one place to another, while self-driving vehicles have become a reality. Speech recognition, machine translation, scheduling and logistics, voice assistants, search engines: AI is so omnipresent that we find it in virtually every form of technology and innovation around us. This journey of more than 70 years has brought us to the point of performing hundreds of routine gestures tied to artificial intelligence processes every day, without even paying attention to them anymore. Artificial intelligence is an opportunity for human empowerment and, although in some respects it can be “scary”, we must continue to be the ones who choose how to govern it, so that it keeps representing a real opportunity for development.

