THE HAGUE, 3 October 2024. What originated as a sci-fi fantasy has now become a staple in our everyday lives - from making grocery lists to solving climate change. But how did we end up here?
ANCIENT ORIGINS?
Believe it or not, the idea of artificial intelligence dates back millennia, to myths such as the bronze figure Talos, who guarded the island of Crete, and the many automatons that Hephaestus forged in his workshop. These figures show how humans have long dreamt of building intelligent machines, although the ancients probably never foresaw the rise of super-smart chatbots or robot vacuums.
RENAISSANCE OF RATIONALE
As the agricultural revolution embedded more efficient infrastructure in society, humans found time to delve into more refined philosophical questions: what makes a human, human? Chinese, Indian and Greek philosophers all examined the foundations of formal reasoning and decision-making as far back as the first millennium BCE. The Renaissance gave rise to a revival of this inquiry, aptly named Humanism. During this period, the lines between psychologist, philosopher and mathematician were not just blurred but superimposed. These omnicompetent scholars suggested that all rational thought could be reduced to something as systematic as algebra or geometry.
The study of mathematical reasoning provided an essential breakthrough on the road to modern computing. The famous scholar Leibniz developed the modern binary system, arguing that all processes of mathematical deduction could be expressed in 0s and 1s. The importance of this breakthrough can hardly be overstated, as most modern computers use binary encoding to this day. Talk about being ahead of your time: Leibniz died in 1716, and the first binary computers were not built until the late 1930s!
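For curious readers, here is a tiny illustrative sketch in Python of Leibniz's idea; the numbers chosen are arbitrary, but they show how any number, and even basic logical reasoning, can be written with nothing but 0s and 1s.

    # Leibniz's insight in miniature: numbers and logic expressed with 0s and 1s.
    n = 42
    print(format(n, "b"))       # '101010' -> 42 written in binary
    print(int("101010", 2))     # 42       -> and back to decimal

    # Even logical deduction reduces to operations on bits (1 = true, 0 = false):
    a, b = 1, 0
    print(a & b, a | b, 1 - a)  # AND, OR, NOT -> prints: 0 1 0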
MATH AND MACHINES
The industrial revolution brought machines into everyday life and laid the groundwork for modern computation. The precursors of computers were already performing calculations in the 19th century, but what truly launched us into the era of technology was World War II. Alan Turing, whom many consider the father of computer science, helped build a life-saving, computer-like machine that was able to "think" its way through German military codes. After the war, Turing worked at England's National Physical Laboratory and later at the University of Manchester, where in 1950 he published the landmark paper "Computing Machinery and Intelligence", in which he devised the famous Turing Test: if a human is unable to distinguish a conversation with a machine from a conversation with another human, it is plausible to say that the machine is "thinking". Sounds similar to our AI today!
THE DARTMOUTH DECISION
In 1956, a conference held at Dartmouth College coined the phrase "Artificial Intelligence", sparking what is now referred to as the "cognitive revolution". George Miller wrote after one symposium: "I left with a conviction...that experimental psychology, theoretical linguistics, and the computer simulation of cognitive processes were all pieces of a larger whole". This quote embodies the paradigm shift of the era: fields that had previously seemed unrelated to each other turned out to be intricately connected, echoing the universalist mindset of the Renaissance scholars described above.
A BRAIN GAME
The expansion of academic research in the 19th century led to discoveries in every field of study, one of them being neurology. The discovery of neural pathways proved paramount in advancing artificial intelligence: machines dubbed "neural networks" were built to carry out simple tasks and even hold rudimentary conversations. The general public grew increasingly optimistic about AI and its capabilities, an optimism that unfortunately fizzled out as overzealous publicity exaggerated the timeline on which AI would actually appear in our day-to-day lives. A major limitation was sheer physical space: a simple algebra-solving computer took up an entire room, and one early language program demonstrated its success using a vocabulary of just twenty words! Growth in artificial intelligence was stifled by the lack of computational power, resulting in periods of "AI winter" that stretched, on and off, from the mid-1970s into the mid-1990s.
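To give a feel for what those early "neural networks" boiled down to, here is a minimal sketch in Python of a single artificial neuron, loosely in the spirit of that era; the weights and example inputs below are made up purely for illustration.

    # One artificial "neuron": weighted inputs, a sum, and a threshold.
    def neuron(inputs, weights, bias):
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 if total > 0 else 0  # the neuron either "fires" (1) or stays silent (0)

    # Example: weights chosen so the neuron fires only when both inputs are on (logical AND).
    weights, bias = [1.0, 1.0], -1.5
    for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(pair, "->", neuron(pair, weights, bias))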
IF...THEN...CHESS CHAMPION!
According to Moore's law, the number of components on a chip, and with it a computer's speed and memory capacity, roughly doubles every two years, and this exponential growth granted AI a rebirth in the 90s. Sophisticated mathematics and game theory were applied to AI models, which used predictive power and "if...then" statements to reach logical, and most importantly human-like, decisions. On May 11, 1997, Deep Blue became the first computer chess-playing system to beat the reigning world chess champion under standard time controls. The claim of exponential growth in computational power was substantiated by this event: Deep Blue was reportedly 10 million times faster than the first chess-playing system. Coupled with the rise of the internet, AI was about to make leaps and bounds far greater than winning a chess match.
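As a quick sanity check of that figure, a few lines of Python, assuming the simplified "doubling every two years" reading of Moore's law and dating the earliest chess programs to around 1950, show that a 10-million-fold speedup is roughly what that doubling rate would predict:

    # How many two-year doublings does a 10-million-fold speedup imply?
    import math

    speedup = 10_000_000
    doublings = math.log2(speedup)   # about 23.3 doublings
    years = 2 * doublings            # about 47 years of Moore's-law growth
    print(round(doublings, 1), "doublings, or roughly", round(years), "years")
    # Roughly the span from the earliest chess programs (c. 1950) to Deep Blue (1997).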
THE 2000s TO TODAY
The most recent revolution in AI came with deep learning. Remember those neural networks discussed not so long ago? Well, in the 2010s, machine learning began to use neural networks with many layers, hence the word "deep" (a small sketch of such a stack of layers appears at the end of this section). These networks could process extensive amounts of data, resulting in technology such as facial recognition and autonomous driving. A defining moment came in 2016, when AlphaGo, a machine learning system, beat a world champion at Go, a game so complex that computers had long struggled to play it well. AlphaGo was able to play against itself and learn, something Deep Blue could not do. We started with Alexa and Siri taking screenshots and turning off our lights; now we use ChatGPT to help us write papers and solve coding problems. Concerns about the ethics of artificial intelligence and competing agendas have shaped the creation and missions of dedicated AI labs such as Google's DeepMind and OpenAI. In today's world there is a constant dialogue around the notion of creating "superintelligence", AI that surpasses the intelligence of our brightest human minds. Let us see where the journey of AI takes us!
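For the technically inclined, here is one final minimal sketch in Python (using the NumPy library) of what "many layers" means in practice. The layer sizes and random weights are arbitrary; a real deep network would be trained on data rather than left random.

    # A stack of layers: the input passes through several layers of neurons in turn.
    import numpy as np

    rng = np.random.default_rng(0)
    layer_sizes = [4, 8, 8, 2]   # input -> two hidden layers -> output ("deep" = many layers)
    weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

    def forward(x):
        for w in weights:
            x = np.maximum(0, x @ w)   # each layer: weighted sums followed by a simple activation
        return x

    print(forward(rng.normal(size=4)))  # output of the (untrained) network for a random input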
SOURCES
Miller G (2003). "The cognitive revolution: a historical perspective" (PDF). Trends in Cognitive Sciences. 7 (3): 141–144. doi:10.1016/s1364-6613(03)00029-9. PMID 12639696.
Copeland J (1999). "A Brief History of Computing". AlanTuring.net.
Nilsson NJ (1984). "The SRI Artificial Intelligence Center: A Brief History" (PDF). Artificial Intelligence Center, SRI International. Archived from the original (PDF) on 10 August 2022.
Schaeffer J (1997). One Jump Ahead: Challenging Human Supremacy in Checkers. Springer.
Schmidhuber J (2022). "Annotated History of Modern AI and Deep Learning".
Buchanan BG (Winter 2005). "A (Very) Brief History of Artificial Intelligence" (PDF). AI Magazine, pp. 53–60. Archived from the original (PDF) on 26 September 2007. Retrieved 30 August 2007.
Saygin AP, Cicekli I, Akman V (2000). "Turing Test: 50 Years Later" (PDF). Minds and Machines. 10 (4): 463–518. doi:10.1023/A:1011288000451. hdl:11693/24987. S2CID 990084. Archived from the original (PDF) on 9 April 2011. Retrieved 7 January 2004. Reprinted in Moor (2003, pp. 23–78).
Moor J, ed. (2003). The Turing Test: The Elusive Standard of Artificial Intelligence. Dordrecht: Kluwer Academic Publishers. ISBN 978-1-4020-1205-1.
Minsky M (1974). A Framework for Representing Knowledge. Archived from the original on 7 January 2021. Retrieved 16 October 2008.
McCarthy J, Hayes PJ (1969). "Some philosophical problems from the standpoint of artificial intelligence". In Meltzer BJ, Mitchie D (eds.), Machine Intelligence 4. Edinburgh University Press, pp. 463–502. Retrieved 16 October 2008.
Levitt GM (2000). The Turk, Chess Automaton. Jefferson, N.C.: McFarland. ISBN 978-0-7864-0778-1.
All images are sourced from the public domain.