Usually, artificial intelligence (AI) is understood as an intelligent system capable of performing creative functions traditionally considered the prerogative of humans. In general, AI systems can be divided into three groups: limited or narrow AI, general (also called strong or broad) AI, and superintelligent AI.

ARE AI ALL THE SAME?

IBM's Deep Blue program, which beat Garry Kasparov at chess in 1997, and Google DeepMind's AlphaGo program, which beat world Go champion Lee Sedol in 2016, are examples of narrow artificial intelligence, capable of solving one specific problem. This is its main difference from artificial general intelligence (AGI), which is on par with human intelligence and can perform many different tasks.

Narrow, or weak, AI represents an approach to artificial intelligence which holds that technology will always be just an imitation of human cognition, capable of acting according to given rules but never outside of them. Weak artificial intelligence can act according to the rules, but is bound by them and does not have truly human cognitive capabilities.

At this stage in the development of artificial intelligence, strong AI is more of a philosophy than a practical approach to technology. Strong artificial intelligence, also known as full artificial intelligence, is a construct that mimics the human brain. Philosophically, strong AI makes no distinction between software and intelligence: it accurately mimics the human brain and therefore the actions of the person himself. The underlying philosophy is that a computer can be programmed to replicate all the characteristics of the human brain as we understand it, including mental and cognitive abilities currently considered exclusively human.
But since we still do not fully understand what human intelligence is and how it develops, the guidelines for developing strong artificial intelligence remain unclear.

Superintelligent artificial intelligence is a step above human intelligence. Nick Bostrom describes it as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." In other words, this is when machines become much smarter than us.

As another way of classifying AI, a recent PwC report distinguished between "assisted intelligence", "augmented intelligence" and "autonomous intelligence". Assisted intelligence is exemplified by the GPS navigation programs that run in cars. Augmented intelligence "enables people and organizations to do things they couldn't otherwise do." Autonomous intelligence "allows machines to act on their own," as in the case of self-driving cars.

A key characteristic of artificial intelligence is that it gives computers the ability to learn. To do this, it is necessary to find ways to give computers, with their binary logic, the ability to mimic human thinking, which is more abstract and reinforced by the ability to learn and adapt. This field covers not only computer programming, but also linguistics, biology, mathematics, engineering, and psychology.

"We use the term 'artificial intelligence' to refer to programs that are not just coded, but can be trained. Essentially, it means making computers think more intuitively by analyzing data and making predictions," says David Parmenter, head of data science at Adobe. "A good example of artificial intelligence used by almost everyone is spam detection. Everyone has a spam filter or some sort of scam detection tool, right? That is artificial intelligence," says Parmenter.
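The spam filter Parmenter mentions can be illustrated with a minimal sketch: a tiny naive Bayes classifier that is "trained, not just coded." It learns word frequencies from a handful of invented example messages (the training phrases below are made up for illustration, not from any real filter) and then predicts whether a new message looks more like spam or like normal mail.

```python
from collections import Counter
import math

# Toy, invented training data; a real filter learns from millions of messages.
spam = ["win money now", "free prize claim now", "claim free money"]
ham = ["meeting at noon", "project status update", "lunch at noon tomorrow"]

def train(messages):
    """Count how often each word appears in one class of messages."""
    counts = Counter()
    for msg in messages:
        counts.update(msg.lower().split())
    return counts

spam_counts, ham_counts = train(spam), train(ham)
vocab = set(spam_counts) | set(ham_counts)

def score(message, counts):
    """Log-likelihood of the message under one class, with add-one smoothing."""
    total = sum(counts.values())
    return sum(
        math.log((counts[word] + 1) / (total + len(vocab)))
        for word in message.lower().split()
    )

def is_spam(message):
    """Predict the class whose training data makes the message more likely."""
    return score(message, spam_counts) > score(message, ham_counts)

print(is_spam("claim your free prize"))   # → True
print(is_spam("status update at noon"))   # → False
```

Nothing in the prediction logic is hand-written for any particular phrase: change the training data and the same code learns a different filter, which is the distinction Parmenter draws between coded and trained programs.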
Popular depictions of artificial intelligence, especially in movies and TV shows, focus mostly on sentient robots, humanoid servants, and out-of-control smart refrigerators. In reality, artificial intelligence has little to do with humanoids; it affects a wide range of industries and scientific disciplines. Artificial intelligence combines big data, computational resources, and specially designed algorithms to teach programs to learn and adapt depending on the content of the data: patterns, aberrations, and other specific information. New information brings new opportunities through advances such as natural language processing and machine learning.

This information, namely its arrays, plays a key role. Computers process data, recognize patterns in it, and perform various actions with the information obtained. At this stage, artificial intelligence does not have a computerized consciousness: at some point, human intervention is required to adjust software algorithms, search through data, or instruct the machine in some other way. But as the technologies that form the foundation of artificial intelligence develop, programmable capabilities such as knowledge, reasoning, learning, and problem solving keep improving.

HISTORY OF AI DEVELOPMENT

It may seem that artificial intelligence appeared out of nowhere in the last few years, but in fact the ideas and technologies that underlie modern achievements are almost 100 years old. The word "robot" was first used in English almost a century ago, when Karel Čapek's play "Rossum's Universal Robots" was staged in London, and in the early 1940s the term "robotics" was first used by Isaac Asimov. The term "artificial intelligence" has been in use since 1956, when it was coined by John McCarthy, who in the same decade created the LISP programming language for artificial intelligence work.
At first, work on artificial intelligence was aimed at problem solving and was represented by the earliest work in the field of neural networks, the foundations of which were laid in 1943.

The history of artificial intelligence as a scientific discipline begins in the middle of the 20th century. By this time, many prerequisites for its emergence had already formed: philosophers had long debated the nature of man and the process of knowing the world; neurophysiologists and psychologists had developed a number of theories about the workings of the human brain and thinking; economists and mathematicians had posed questions of optimal computation and of representing knowledge about the world in formalized form; and finally, the foundation of the mathematical theory of computation, the theory of algorithms, had been laid, and the first computers had been created. The computing speed of the new machines turned out to exceed that of humans, so a question arose in the scientific community: what are the limits of computers' capabilities, and will machines reach the level of human development?

In 1950, one of the pioneers of computing, the English scientist Alan Turing, published the article "Computing Machinery and Intelligence" (opening with the question "Can machines think?"), in which he described a procedure, now called the Turing test, for determining the moment when a machine becomes equal to a person in terms of intelligence.

In the 1960s, the US Department of Defense began work on artificial intelligence; this work continues to this day and has contributed much to progress. In 1964, a dissertation was published showing that computers can understand natural language well enough to solve algebra problems. Also in the 1960s, the world saw the interactive program ELIZA and the problem-solving robot Shakey. Developments in machine learning, an important part of artificial intelligence, began in the 1980s.
The first computer-controlled self-driving car appeared in 1979, and by 1990 artificial intelligence had achievements to be proud of: demonstrations of machine learning, data analysis, advances in natural language processing, and virtual reality.

In the USSR, work in the field of artificial intelligence began in the 1960s. A number of pioneering studies were carried out at Moscow University and the Academy of Sciences, led by Veniamin Pushkin and D. A. Pospelov. From the early 1960s, M. L. Tsetlin and colleagues worked on problems related to the training of finite automata. In 1964, the Leningrad logician Sergei Maslov published "An Inverse Method for Establishing Derivability in the Classical Predicate Calculus", in which a method for automatically searching for proofs of theorems in the predicate calculus was proposed for the first time.

In Russia in 2019, at a meeting on the development of the digital economy, it was decided to prepare a national strategy for artificial intelligence. Within its framework, a federal program with an allocation of 90 billion rubles is in place.