In recent years, the IT market has seen a real boom in solutions based on artificial intelligence. This is hardly surprising: modern computing and neural-network technologies have reached a level at which AI systems can solve practical problems that are very difficult for humans, and developers can build innovative applications and services that demonstrate the vast potential of machine intelligence.
⇡ #Sense of smell
One of the clearest examples of how intensively artificial intelligence technologies are developing is an AI system created by specialists from Intel Labs and Cornell University that can distinguish odors, imitating the work of the human olfactory nervous system. The development is based on Intel Loihi neuromorphic processors, which combine learning and decision-making in a single chip and allow the system to operate autonomously and "intelligently" without a connection to the cloud. In experiments, the system, equipped with chemical sensors, proved highly effective at recognizing the odors of hazardous substances in the air, even under strong interference. Intel is confident that such solutions will help advance robotics (robots could, for example, sort products by smell), push forward environmental monitoring systems, improve workplace safety in industry and, more broadly, spur the development of the cognitive abilities of silicon processors.
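The core task the system solves can be illustrated with a deliberately simplified sketch. The real Intel/Cornell work runs spiking neural networks on Loihi hardware; here we stand in for that with plain nearest-prototype matching, and the sensor values and odor names are entirely made up. The point is only the shape of the problem: match a noisy chemical-sensor reading to a known odor signature.

```python
import math
import random

# Illustrative sketch only: idealized readings from four chemical sensors
# for each known odor. All numbers and substance names are invented.
ODOR_PROTOTYPES = {
    "ammonia": [0.9, 0.1, 0.3, 0.0],
    "acetone": [0.2, 0.8, 0.1, 0.5],
    "methane": [0.1, 0.2, 0.9, 0.4],
}

def classify(reading):
    """Return the odor whose prototype is closest to the (noisy) reading."""
    return min(ODOR_PROTOTYPES,
               key=lambda name: math.dist(reading, ODOR_PROTOTYPES[name]))

# Simulate "strong interference" by perturbing a clean ammonia reading.
random.seed(1)
noisy = [v + random.uniform(-0.15, 0.15) for v in ODOR_PROTOTYPES["ammonia"]]
print(classify(noisy))  # still recognized as ammonia despite the noise
```

Because the prototype signatures are far apart relative to the noise, the classification survives the interference; the neuromorphic approach achieves a similar robustness, but with spike-based learning directly on the chip.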
⇡ #Diseases will recede
Developers of AI systems made significant progress in medicine in 2020. For example, DeepMind, owned by the Alphabet (Google) holding company, announced a major breakthrough in predicting protein folding. The protein-folding problem is considered one of the 125 most important open questions facing modern science, as well as one of the greatest challenges in biology of the past 50 years. Proteins are assembled from linear sequences of amino acids, which after synthesis take on a unique spatial form, and there is an enormous variety of such forms. Of the hundreds of millions of known proteins (combinations of amino acids), only about 0.1% have had their spatial structure thoroughly studied. Scientists try to predict the structures of unknown proteins, as well as of compounds whose properties have not yet been confirmed experimentally, using computers. But until now, no one could calculate with sufficient accuracy what 3D shape a protein would take for a given sequence of amino acids. DeepMind claims to have found the key to this problem. If that is true, we can expect breakthroughs in the discovery of new drugs and vaccines, as well as in understanding the onset and course of many diseases.
According to experts, the widespread introduction of artificial intelligence technologies into medicine promises great prospects, making it possible to significantly improve the quality of treatment and fundamentally change the approach to early diagnosis of dangerous diseases. A similar position is held by the well-known Bill Gates, who believes that applying neural networks to complex biological systems could radically change people's lives in the future. According to the Microsoft founder, the potential of AI is only beginning to unfold: computing power in this field doubles every 3.5 months. Gates noted that, together with improvements in data processing, this makes it possible to synthesize, analyze, identify patterns and make predictions in far more areas than a person could manage.
⇡ #Power of thought
Mind reading is still the domain of science-fiction films and books. But science and technology do not stand still, and there is every reason to believe that such technology will one day become reality. A group of scientists from the University of California, San Francisco took a step in this direction, experimentally demonstrating that nerve signals in the human brain can be recognized and translated into intelligible words using a recurrent neural network and electrodes implanted in the brain. The experiment involved epilepsy patients in whom electrodes had been implanted to track seizures as part of their treatment. Some of those electrodes happened to sit in the brain areas where words are selected, phrases are composed and feedback is exchanged with the regions that perceive a person's own speech. The subjects were asked to pronounce several sentences drawn from a limited vocabulary, first mentally and then aloud, while signals were recorded from the implanted sensors. The recorded data was fed to a neural network for training, and the intermediate result was passed to a second AI network for analysis. The word misidentification rate was only 3 percent. An impressive figure!
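The shape of that pipeline can be sketched in a few lines. This is purely illustrative and assumes nothing about the actual UCSF model: the weights, "electrode" frames and four-word vocabulary below are invented, and a real decoder would be a trained encoder-decoder network over ECoG recordings. What the sketch does show is the recurrent idea itself: hidden state carries context from one signal frame to the next, and each step is mapped to the most likely word.

```python
import math

VOCAB = ["the", "doctor", "is", "here"]  # a limited word set, as in the study

def rnn_step(x, h, W_xh, W_hh):
    """One Elman-style recurrent step: h' = tanh(W_xh @ x + W_hh @ h)."""
    return [math.tanh(sum(w * v for w, v in zip(rx, x)) +
                      sum(w * v for w, v in zip(rh, h)))
            for rx, rh in zip(W_xh, W_hh)]

def decode(frames, W_xh, W_hh, W_out):
    """Run the RNN over electrode frames; emit the top-scoring word per frame."""
    h = [0.0, 0.0]
    words = []
    for x in frames:
        h = rnn_step(x, h, W_xh, W_hh)
        scores = [sum(w * v for w, v in zip(row, h)) for row in W_out]
        words.append(VOCAB[scores.index(max(scores))])
    return words

# Hand-picked toy weights; two "electrode" channels per frame.
W_xh = [[1.0, -0.5], [0.3, 0.8]]
W_hh = [[0.1, 0.0], [0.0, 0.1]]
W_out = [[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]
frames = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]
print(decode(frames, W_xh, W_hh, W_out))
```

In the real system a second network then analyzed the intermediate output, which is where the 3-percent error figure comes from.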
⇡ #From classic to rock
An interesting application of artificial intelligence was found by programmers at OpenAI, who developed Jukebox, an AI that composes music with meaningful lyrics and vocals. To train the system's neural network, many excerpts from songs of various genres were used, from rock, jazz and blues to hip-hop, country and classical works. This approach allowed the OpenAI team to expand the project's capabilities and achieve the effect of imitating the performers whose tracks it was trained on. For example, Jukebox can compose music in the style of country singer Johnny Cash, rapper Drake and even the Russian pop group Tatu. It takes the AI about 9 hours and an enormous amount of computing resources to create one minute of a music track with vocal parts, which is why the company cannot provide open access to its AI system. But the developers have published Jukebox's results; you can listen to them at jukebox.openai.com.
⇡ #Drawing lessons
The electronic mind has also found application in the visual arts. In mid-2020 it became known that specialists at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology had created Timecraft, a machine-learning system that can recreate the process of painting, stroke by stroke, for works by famous artists, be it Monet, Vincent van Gogh or Salvador Dali. The neural network was reportedly first trained on two hundred time-lapse videos of real digital and watercolor paintings being made. The researchers then built a convolutional neural network designed to "deconstruct" artworks based on that knowledge of the painting process. As a result, Timecraft outperformed existing similar projects in more than 90% of cases. Not a bad result. Besides such virtual lessons in a painting's history, Timecraft could be useful for illustrating general drawing techniques to beginners.
Yandex also made its mark in the creative AI segment, opening a virtual gallery of neural-network art presenting four thousand unique paintings created by artificial intelligence. The gallery is located at yandex.ru/lab/ganart and is divided into four thematic halls: People, Nature, City, and Mood. To train the neural network, Yandex specialists used works from different schools of painting, from Fauvism and Cubism to minimalism and street art. During training the AI system studied 40 thousand paintings, after which it began creating its own works. To sort the pictures into categories, another neural network was used, the one that powers image search by query in the Yandex.Pictures service. It was this network that recognized people, nature, cities and different moods in the paintings and sorted the available works into categories.
⇡ #Mind games
Artificial intelligence has found many applications in other areas of human activity as well. For example, NVIDIA used AI to recreate the gameplay of the famous arcade video game Pac-Man with its GameGAN neural network. It took the AI only 4 days to solve this problem. The company trained the neural network on 50 thousand Pac-Man game sessions, then tasked it with recreating the entire game it had seen, from the static walls and dots to the moving ghosts and Pac-Man himself. Training and recreation were carried out on a quartet of NVIDIA Quadro GP100 graphics accelerators. Most interestingly, GameGAN was given no access to the original game's code or engine: all of the training came down to one neural network watching another neural network play Pac-Man. "To create a game like Pac-Man, a programmer needs to come up with and write down rules for the behavior and interaction of all the agents in the game. This is very painstaking work. GameGAN can make it easier. A neural network is capable of learning new rules through observation. Ideally, algorithms like GameGAN could be trained to generate procedural rules for the game you want to create," NVIDIA's researchers explain, stressing that in the future their development could be used not only in the gaming industry but in other areas as well.
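The idea of "learning a game by watching" can be shown with a toy sketch. GameGAN itself is a generative adversarial network; here we stand in for it with a plain transition table, and the game is a made-up one-dimensional corridor, but the key property is the same: the simulator is built only from observed (state, action, next state) triples, with no access to the game's own code or engine.

```python
import random

SIZE = 5  # a toy 1-D corridor with positions 0..4

def real_engine(pos, action):
    """The original game's rules (hidden from the learner)."""
    step = 1 if action == "right" else -1
    return max(0, min(SIZE - 1, pos + step))  # walls clamp movement

def watch_and_learn(episodes=500):
    """Observe another agent playing and record every transition it makes."""
    random.seed(0)  # deterministic demo
    model = {}
    for _ in range(episodes):
        pos = random.randrange(SIZE)
        action = random.choice(["left", "right"])
        model[(pos, action)] = real_engine(pos, action)
    return model

model = watch_and_learn()

def simulate(pos, actions):
    """Replay a sequence of moves using only the learned model."""
    for a in actions:
        pos = model[(pos, a)]
    return pos

print(simulate(0, ["right", "right", "right"]))  # moves to position 3
```

Note that the learned model reproduces even the walls, although nobody wrote a clamping rule into it; it simply saw enough transitions. GameGAN does the analogous thing at the pixel level, generating the next game frame instead of looking up the next state.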
⇡ #Presented by robots
An interesting development in the field of artificial intelligence came from Mail.ru Group, which in 2020 introduced the dictor.mail.ru platform, allowing news and reportage videos of studio quality to be created in a few clicks. To create a video, it is enough to upload the news text to the system, and a virtual presenter will read it out. The presenters look and speak like real people: while reading the news they realistically reproduce facial expressions, react emotionally and place semantic stresses. The user chooses the presenter's appearance: the company has created several models of digital presenters whose prototypes were real people. Mail.ru Group emphasizes that machine-learning methods were used to create the virtual presenters: speech synthesis builds on the technology behind the Marusya voice assistant, and the video image is synchronized with the speech in real time by Mail.ru Group's own Vision computer-vision system, trained on the real prototypes and video recordings.
⇡ #A look into the future
The year 2020 can safely be called a year of striking achievements in artificial intelligence, and the field will continue its intensive development despite the coronavirus pandemic and the difficult economic situation in the world. According to analysts at International Data Corporation (IDC), global spending in this area amounted to approximately $156.5 billion last year. By 2024 the market is expected to double and exceed $300 billion, with software remaining the industry's largest segment, AI services in second place and the remainder coming from hardware solutions. In the more distant future, artificial intelligence will affect almost every area of human activity. The AI market has considerable headroom and good prospects for development, so there is every reason to believe its growth will outpace experts' expectations and forecasts.
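As a quick back-of-the-envelope check of what those IDC figures imply (reading "doubling by 2024" as a four-year horizon from 2020 is our assumption):

```python
# Implied compound annual growth rate from the figures quoted above:
# roughly $156.5 billion in 2020 growing past $300 billion by 2024.
spend_2020 = 156.5  # $ billion
spend_2024 = 300.0  # $ billion
years = 4

cagr = (spend_2024 / spend_2020) ** (1 / years) - 1
print(f"Implied average annual growth: {cagr:.1%}")  # about 17.7% per year
```

In other words, the forecast assumes the market grows by roughly a sixth every year, a pace few established IT segments can match.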