In Search of Thinking Machines: From the 17th to the 21st Century

From idea to action

Since the beginning of the Scientific Revolution in the 17th century, there has been an unwavering fascination with the idea of creating machines capable of thinking and acting like humans.

This desire has been a common thread throughout the history of science and technology, and its exploration has led to significant developments in modern artificial intelligence (AI).

The roots of this idea are found in philosophy and mechanistic thinking, where philosophers such as René Descartes speculated on the possibility of treating the human body as a machine, laying the foundations for the concept of thinking machines. While Descartes is best known for his famous phrase “Cogito, ergo sum” (I think, therefore I am), he also addressed the question of the relationship between mind and body. Although he did not directly propose the idea of intelligent machines, he laid the philosophical foundation for future debates. A philosopher and mathematician, he raised the notion of animal-machines in his Discourse on Method (1637), suggesting, in line with his mechanistic vision, that animals can be understood as complex machines whose behavior is automatic and operates without consciousness.

Thomas Hobbes, an English philosopher considered one of the founders of modern political philosophy, approached the question from a similarly mechanistic perspective. In Leviathan (1651), whose introduction describes the living body as a kind of automaton, he suggested that human beings could construct machines capable of performing tasks that normally required human intelligence. Hobbes’ ideas are considered relevant because he proposed that thought and rationality can be understood, and perhaps replicated, through mechanical processes; reason, he argued, is nothing but reckoning. Although he did not develop the idea in detail, he laid the groundwork for future research in AI and for understanding the relationship between minds and machines.

However, it was not until the third industrial revolution, which began in the 1950s with the development of microelectronics, that the idea of thinking machines began to become a reality. Alan Turing, one of the fathers of computer science, formalized the concept of the “universal machine”, establishing the foundations of computing theory and laying the groundwork for creating machines that could mimic human mental processes. In his 1950 paper “Computing Machinery and Intelligence” he posed the question directly: can machines think?
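To make the idea of Turing’s machine concrete, here is a minimal sketch in Python of a table-driven Turing machine. The machine itself (a transition table that simply flips the bits of a binary string) is invented for illustration; Turing’s deeper point was that a single “universal” machine of this kind can simulate any other by reading that other machine’s table from its tape.

```python
# Minimal Turing machine simulator (illustrative sketch only).
# A finite transition table maps (state, symbol) to (write, move, next state),
# which is the mechanism Turing formalized in 1936.

def run(tape, transitions, state="start", blank="_"):
    """Run a one-tape Turing machine until it reaches the 'halt' state."""
    cells = list(tape)
    head = 0
    while state != "halt":
        symbol = cells[head] if head < len(cells) else blank
        write, move, state = transitions[(state, symbol)]
        if head < len(cells):
            cells[head] = write
        else:
            cells.append(write)
        head += 1 if move == "R" else -1  # move the head one cell
    return "".join(cells)

# Made-up example table: invert every bit, halt at the first blank cell.
FLIP = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("1011", FLIP))  # prints "0100_"
```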

In the decades since Turing, there has been steady progress in AI, with developments focused on solving complex problems, natural language processing and decision-making, all inspired by the idea of replicating the human mind.

The term “artificial intelligence” was first used in the proposal for the 1956 Dartmouth Conference (McCarthy, Minsky, Rochester, and Shannon, 1955) to describe the science and technology of creating intelligent machines. Marvin Minsky, considered one of the fathers of artificial intelligence, stood out for his work on machine learning and on systems that integrate robotics and language to make seemingly autonomous, “intelligence-requiring” decisions.

The first bot in modern history

In the 1960s and 1970s, AI began to focus on areas such as problem solving and natural language processing. Systems were developed that used knowledge and rules to make automated decisions.

The first conversational bot in history then appeared: the computer program ELIZA, designed at the Massachusetts Institute of Technology (MIT) between 1964 and 1966 by Professor Joseph Weizenbaum. It worked by searching for keywords in a phrase typed by the user and matching them against sample responses registered in its script. In this way, the machine seemed to hold a logical, continuous conversation with its interlocutor: its best-known script was programmed to imitate a psychotherapist talking with a patient, and Weizenbaum intended it partly to show how superficial communication between man and machine could be. It did this by recognizing keywords and asking questions about them, as a psychologist might. For example, if someone mentioned their mother in a sentence, the bot would automatically ask them to tell it more about their family. In this way, the illusion of understanding and real interaction was created. The limitation was that it could not learn from its conversations.
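To illustrate the kind of keyword matching described above, here is a minimal ELIZA-style sketch in Python. It is not Weizenbaum’s original program (which was written in MAD-SLIP and used a much richer script of decomposition and reassembly rules); the keyword table and responses below are invented for illustration.

```python
# ELIZA-style sketch: keyword spotting plus canned responses.
# Hypothetical keyword -> response table, loosely inspired by the
# famous "tell me more about your family" behavior.
RULES = {
    "mother": "Tell me more about your family.",
    "father": "How do you feel about your father?",
    "sad": "I am sorry to hear you are sad. Why do you think that is?",
    "always": "Can you think of a specific example?",
}

FALLBACK = "Please go on."  # used when no keyword matches

def respond(user_input: str) -> str:
    """Return the first canned response whose keyword appears in the input."""
    words = user_input.lower().split()
    for keyword, response in RULES.items():
        if keyword in words:
            return response
    return FALLBACK  # no memory and no learning: the limitation noted above

print(respond("My mother is upset with me"))  # Tell me more about your family.
print(respond("I feel fine today"))           # Please go on.
```

Even this toy version shows why the illusion worked: the response sounds attentive, yet nothing about the conversation is stored or understood.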

In the 1980s and 1990s, AI became a popular topic in Western culture thanks to films such as The Terminator (1984) and The Matrix (1999), but, more importantly, the technology began to be applied in fields such as medicine and the automotive industry.

In 1996, the Deep Blue computer developed by the technology company IBM defeated the Russian chess grandmaster Garry Kasparov, at the time considered the best player in the world, in a game of chess; the following year it won a full match against him. In parallel, the growth of the Internet in the 1990s provided the vast data sets that would become crucial for AI training.

The early 2000s saw improvements in natural language processing (NLP), that is, in how machines handle text and speech. Machine learning became a central tool of AI, with increasingly sophisticated algorithms.

At the end of the first decade of the 21st century, the emergence of “Big Data” provided even more resources for training AI-based systems. These technologies led to significant advances in computer vision and speech recognition, and virtual assistants such as Siri, Alexa, and Cortana became part of everyday life.

In the mid-2010s, the race for autonomous vehicles began, led by companies like Tesla and Waymo. Earlier in the decade, around 2011, artificial intelligence had begun to spread into sectors such as health, finance, and e-commerce; that moment also marked the beginning of discussions and debates about the ethical impact, privacy, and security of AI.

The era of language models

At the beginning of 2020 we faced the COVID-19 pandemic, which changed the way we engage with our environment on a global level; it was also the moment when companies like OpenAI took advantage of the global lockdown to launch GPT-3 on the market, opening the era of large-scale language models for text generation, learning, and understanding. In the medical sector, advances in diagnostics and the personalization of medical care became a reality.

Now in 2023 and beyond, AI is found in a wide variety of applications and services, from content creation and virtual assistants to business decision-making and autonomous driving systems.

AI is a multidisciplinary field of study that seeks to develop systems and algorithms capable of imitating human intelligence in machines. For computer scientists Stuart Russell and Peter Norvig, artificial intelligence is a combination of algorithms designed to create machines with the same capabilities as human beings (Russell and Norvig, 2009). In this sense, it combines characteristics of human intelligence such as learning, recognition, natural language processing, decision making, and problem solving.

Artificial intelligence works with knowledge, which it must structure through a notation precise enough for the system to use. In computational terms, reasoning is the general process a system performs in order to behave rationally given the knowledge it has of its environment; on that basis it carries out its own learning, which takes several forms: supervised learning (for example, image recognition trained on previously labeled images), unsupervised learning (for example, anomaly detection without labels), and deep neural networks, which are used for speech recognition, machine translation, and medical diagnosis (Zhai, Oliver, Kolesnikov, and Beyer, 2019).
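As a minimal sketch of the contrast between the first two paradigms, the example below uses the scikit-learn library (one common choice, not the only one); the tiny data sets are invented for illustration. A classifier is fitted to labeled examples, while an anomaly detector finds the outlier without any labels at all.

```python
# Supervised vs. unsupervised learning on made-up toy data.
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import IsolationForest

# Supervised: each feature vector comes with a label (0 or 1).
X_labeled = [[0.0, 0.1], [0.2, 0.0], [0.9, 1.0], [1.0, 0.8]]
y_labels = [0, 0, 1, 1]
classifier = LogisticRegression().fit(X_labeled, y_labels)
print(classifier.predict([[0.1, 0.1], [0.9, 0.9]]))  # expected: [0 1]

# Unsupervised: no labels; the model flags points that look anomalous.
X_unlabeled = [[0.0], [0.1], [0.1], [0.0], [5.0]]  # 5.0 is the odd one out
detector = IsolationForest(random_state=0).fit(X_unlabeled)
print(detector.predict(X_unlabeled))  # 1 for normal points, -1 for anomalies
```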

In the 21st century, artificial intelligence has seen a resurgence thanks to advances in machine learning and data mining. As we move forward, building machines that think and act like humans remains an ambitious and challenging task; ethical challenges such as privacy and control, of course, raise important questions around this search.

The type of AI we know so far is generative: it produces new content from human inputs, having been trained on big data (vast quantities of data). The debate now, and in the not-too-distant future, focuses on knowing and exploring the ethical and human implications that will arise if this intelligence is transformed into an Artificial General Intelligence (AGI) with human-like capabilities: autonomous decision-making, but also consciousness.


Interesting references

Descartes, René (1637). Discourse on Method. Alianza Editorial, first edition, 2011.

Future of Life Institute (March 2023). Pause Giant AI Experiments: An Open Letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments

Hobbes, Thomas (1651). Leviathan. Penguin Classics, 2017.

McCarthy, John; Minsky, Marvin L.; Rochester, Nathaniel; Shannon, Claude E. (1955). “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence”, August 31, 1955. Reprinted in AI Magazine, Vol. 27, No. 4, Winter 2006. http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf

Russell, Stuart and Norvig, Peter (2009). Artificial Intelligence: A Modern Approach. Prentice Hall Series in Artificial Intelligence, Englewood Cliffs, NJ.

Turing, A. M. (1950). “Computing Machinery and Intelligence”. Mind, Volume LIX, Issue 236, October 1950, pp. 433–460. https://doi.org/10.1093/mind/LIX.236.433

Turing, A. M. and Girard, Jean-Yves (1995). La machine de Turing. Paris: Éditions du Seuil.

Zhai, Xiaohua; Oliver, Avital; Kolesnikov, Alexander; Beyer, Lucas (2019). “S4L: Self-Supervised Semi-Supervised Learning”. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV).
