In my first article, I discussed the top five technological breakthroughs that underpin the AV revolution. This article aims to provide a high-level view of the processing centre: the AI.
In recent years, Artificial Intelligence (AI) has become a core component in the automotive industry, specifically in the development of level-4 and level-5 autonomous vehicles. AI is not a new technology; it has been around since the 1950s. However, the automotive industry's reliance on it has increased due to the sheer volume of data we have at our disposal today. With the aid of connected devices and cloud services, we are able to collect data in every industry.
The term was coined by computer scientist John McCarthy in 1955, and it is defined as the ability of a computer program or machine to think, learn and make decisions. These programs and machines are fed massive amounts of data, which is analysed and processed to enable the AI to think logically and perform human-like actions.
AI-based systems have become standard in new vehicle designs and are an integrated part of any new vehicle rolling off the production line, especially in two areas: (1) infotainment, which encompasses speech recognition, gesture recognition, eye tracking and natural language interfaces; and (2) Advanced Driver Assistance Systems (ADAS) and autonomous driving, which include camera-based machine vision systems and radar-based detection units, to name just a few. Both have a core dependency on the AI.
We are building autonomous vehicles that drive themselves, yet we expect them to drive the way human drivers do. This can only be achieved by equipping these vehicles with the sensory functions, reasoning functions and executive capabilities that humans use to drive. A recent study suggested that by 2020 over 250 million cars would be connected to each other, and autonomous vehicles are being fitted with cameras, sensors and communication systems that generate massive amounts of data. When AI is applied to this data, the vehicle can see, hear, think and make decisions just as a human driver does. The incoming data creates a repetitive loop, commonly known as the Perception-Action Cycle: the data feeds the AI, which in turn makes decisions and enables the autonomous vehicle to perform specific actions. The more Perception-Action Cycles that take place, the more intelligent the AI becomes, resulting in more accurate decisions, especially in complex driving situations.
This process breaks down into three main components: data generation, decision-making on the autonomous driving cloud platform, and action.
Autonomous vehicles are fitted with numerous sensors, radars and cameras to generate data. Together, these form the digital consortium that enables the autonomous vehicle to see, hear and feel the road, other vehicles and every other object on or near the road, just as a human driver pays attention to the road while driving. This data is then processed by on-board supercomputers, and data communication systems securely transmit the valuable information (input) to the autonomous driving cloud platform.
The autonomous driving cloud platform acts as the regulator, or simply the brain, of the autonomous vehicle, and is connected to a database that acts as its memory. The combination of data stored from past driving experiences and the real-time input coming from the autonomous vehicle and its immediate environment helps the AI make accurate driving decisions.
Based on these decisions, the autonomous vehicle is able to detect objects on the road, manoeuvre through traffic without human intervention and reach its destination safely. Autonomous vehicles are also being equipped with AI-based functional systems such as voice and speech recognition, gesture controls, eye tracking and other driver monitoring systems. These functions, too, are carried out based on the decisions made by the AI in the autonomous driving platform.
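The three components above can be sketched as a tiny perceive-decide-act loop. This is only an illustrative toy, not a real AV stack: every class name, function name and threshold here is hypothetical, and a production system would involve sensor fusion, trained models and actuator control far beyond this.

```python
class AutonomousDrivingPlatform:
    """Toy sketch of the Perception-Action Cycle (illustrative only)."""

    def __init__(self):
        # Stands in for the database of past driving experience ("memory").
        self.memory = []

    def perceive(self, sensor_frame):
        # 1. Data generation: fuse raw camera/radar readings into a
        #    simple world model (here, just an obstacle flag).
        distance = sensor_frame.get("radar_distance_m", float("inf"))
        return {"obstacle_ahead": distance < 30.0}

    def decide(self, perception):
        # 2. Decision-making: combine live perception with stored
        #    experience; each cycle feeds back into memory.
        decision = "brake" if perception["obstacle_ahead"] else "maintain_speed"
        self.memory.append((perception, decision))
        return decision

    def act(self, decision):
        # 3. Action: in a real vehicle this would command the
        #    steering, throttle and braking actuators.
        return decision

    def run_cycle(self, sensor_frame):
        return self.act(self.decide(self.perceive(sensor_frame)))


platform = AutonomousDrivingPlatform()
platform.run_cycle({"radar_distance_m": 12.0})   # -> "brake"
platform.run_cycle({"radar_distance_m": 120.0})  # -> "maintain_speed"
```

Each call to `run_cycle` appends the (perception, decision) pair to `memory`, mirroring how repeated Perception-Action Cycles accumulate the experience the article describes.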