AI: From rumor to wide adoption
The history of artificial intelligence (AI) begins in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The modern history of AI started with a workshop at Dartmouth College in 1956, where participants including Marvin Minsky, John McCarthy and Herbert Simon agreed that AI was possible and laid out several research directions.
In the concept phase
In the late 1950s and early 1960s, AI research was dominated by a few key figures, including McCarthy, Minsky, Newell and Simon. McCarthy coined the term “artificial intelligence” and developed the Lisp programming language specifically for AI applications. Minsky was a central figure in early neural network research and later co-authored, with Seymour Papert, the seminal critique of single-layer networks, Perceptrons (Minsky & Papert, 1969). Simon’s work on human problem solving laid the foundations for the AI subfield of heuristic search, and with Newell he developed the General Problem Solver (Newell & Simon, 1961).
Not just theory
The 1970s saw a lull in AI research, as the field failed to live up to the expectations of its early researchers, and funding cuts brought on the first “AI winter.” The field recovered on the strength of several important developments, including the expert systems research published by the AI laboratories at Stanford University and work in Japanese robotics such as the WABOT-1 humanoid robot built at Waseda University.
In the 1980s, AI research was revived by the commercial success of expert systems and the rise of Japan’s Fifth Generation Computer Systems project. The expert systems research of the 1970s, such as Stanford’s MYCIN (Feigenbaum & McCorduck, 1983), had shown that human expertise could be encoded in computer programs, and commercial systems such as DEC’s XCON (Waterman, 1986) proved very successful. The Fifth Generation project, launched in 1982, aimed to create computers that could perform human-like reasoning. It adopted logic programming languages in the Prolog tradition and spurred work on constraint satisfaction algorithms and parallel inference.
The 1990s saw a significant increase in the popularity of AI, as commercial applications such as web search engines and automatic language translation became widely available. The success of these applications was due in part to the development of new AI techniques, such as statistical machine translation and latent semantic analysis.
The late 1990s and early 2000s also saw the development of a number of important AI-based technologies that are now widely used in SEO, including web crawlers, link analysis algorithms and text classification algorithms.
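Of these, the link analysis algorithms are the easiest to illustrate. A minimal sketch of the idea behind PageRank-style link analysis is shown below, using power iteration over a hypothetical three-page link graph (the graph, function name and parameters are illustrative, not from any particular search engine):

```python
import numpy as np

def pagerank(adjacency, damping=0.85, iterations=50):
    """Rank pages by link structure via power iteration.

    adjacency[i, j] = 1 means page i links to page j.
    """
    n = len(adjacency)
    # Build a column-stochastic transition matrix: each page splits
    # its rank evenly among the pages it links to.
    out_deg = adjacency.sum(axis=1, keepdims=True)
    M = (adjacency / np.where(out_deg == 0, 1, out_deg)).T
    rank = np.full(n, 1.0 / n)  # start from a uniform distribution
    for _ in range(iterations):
        # Damping models a surfer who follows links with probability
        # `damping` and jumps to a random page otherwise.
        rank = (1 - damping) / n + damping * M @ rank
    return rank

# Toy web: page 0 links to 1 and 2; page 1 links to 2; page 2 links to 0.
links = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [1, 0, 0]], dtype=float)
scores = pagerank(links)
```

In this toy graph, page 2 outranks page 1 because it collects links from both other pages, which is the intuition behind treating inbound links as votes of importance.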
AI in SEO?
The history of artificial intelligence is long and complex, but the field has made great strides in recent years, and the latest AI technologies are being applied to a wide variety of problems. For Noa Connect, this means SEO challenges in particular: content scaling and search-algorithm prediction. The future looks bright for further advances in the field.