The challenges facing autonomous cars


Researchers in the US have highlighted a major weakness of the deep learning algorithms used in autonomous cars. Subtle modifications to road signs, which drivers might not even notice, can mislead these algorithms. Potentially catastrophic errors.
Neural network algorithms: strength and weakness of the autonomous car

One of the major challenges for autonomous cars is recognizing their environment: vehicles, pedestrians, street furniture, and above all road signs. While manufacturers and the companies that develop recognition systems do everything they can to make them as accurate as possible, others try (for a good cause) to fool those systems, sometimes by sophisticated means, and sometimes, as here, by means so simple that it is worrying.

Eight researchers from four American universities joined forces to deceive the deep learning algorithms of autonomous vehicles. Today, these vehicles rely on neural networks that learn from experience to recognize road signs. For this purpose, the network is trained on a database of labeled images showing the different signs at various angles and under different lighting and wear conditions. Once training is complete, the algorithm is expected to cope with any situation, even ones it never encountered during training.
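To make this training step concrete, here is a minimal sketch of how such a sign classifier is typically built; it is not the researchers' actual code, and the signs/train/ folder layout and the tiny network are assumptions made purely for the example.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Assumed layout: one folder per sign class, e.g. signs/train/stop/,
# signs/train/speed_limit_45/, each holding labeled photos of that sign.
transform = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("signs/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

# A deliberately small convolutional network, just enough to illustrate the idea.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, len(train_set.classes)),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

Once trained, a model of this kind is exactly the component the attacks described below try to mislead.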
Slight changes, unfortunate consequences

For the experiment, the researchers took a "classical" classification algorithm and ran the training phase on a limited set of only 17 sign types. Each sign appears in the training database between 92 and 500 times, for a total of nearly 4,600 sign images.

The researchers devised several "attacks" to deceive the recognition algorithm. The first consists of reproducing a sign (a stop sign in this case) with subtle modifications, to its colors for example. The fake sign remains perfectly readable to a human eye, which will probably notice the color variations but still read STOP. The algorithm under test, however, fails systematically and sees a 45 mph speed limit sign instead. Worse, it is highly confident in its answer, reporting more than 70% confidence in virtually every case. The car would almost certainly not stop.
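The researchers crafted their perturbations with their own, more elaborate optimization designed to survive printing and viewing angles; the classic fast gradient sign method below is only a sketch of the general principle, reusing the illustrative model defined above.

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, image, true_label, epsilon=0.03):
    """Add a small, barely visible perturbation that pushes the classifier
    away from the correct label (fast gradient sign method)."""
    image = image.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), torch.tensor([true_label]))
    loss.backward()
    # Move every pixel a tiny step in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

An image altered this way still looks like an ordinary STOP sign to a person, which is precisely what makes the misclassification so worrying.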

The second attack is much the same, but here a "turn right" sign is partially covered: only the arrow is masked by a checkerboard gradient. The gradient is designed so that the 90° angle of the arrow is disturbed. The rest of the sign is untouched and remains perfectly recognizable ... for a human. For the algorithm, however, it turns into a STOP sign or an "added lane" sign. The "added lane" confusion is understandable (it is a yellow sign with two arrows instead of one) given the disruptive gradient, but the STOP result is harder to explain. The algorithm is nonetheless fairly sure of itself (between 40 and 50% confidence).
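A rough way to reproduce this kind of localized attack in code is to paste a patch over just one region of the sign and compare the classifier's top two guesses before and after. The sketch below makes that assumption explicit: the patch contents and its coordinates are purely illustrative, not the pattern the researchers used.

```python
import torch
import torch.nn.functional as F

def apply_patch(image, patch, top, left):
    """Paste a patch (e.g. a checkerboard gradient) over one region of the
    sign, leaving the rest of the image untouched."""
    patched = image.clone()
    h, w = patch.shape[1], patch.shape[2]
    patched[:, top:top + h, left:left + w] = patch
    return patched

def top2(model, image, class_names):
    """Return the classifier's two most likely labels and their confidences."""
    probs = F.softmax(model(image.unsqueeze(0)), dim=1).squeeze(0)
    confs, idxs = probs.topk(2)
    return [(class_names[int(i)], round(float(c), 2)) for i, c in zip(idxs, confs)]
```

Comparing the two top guesses is what reveals the hesitation the researchers describe, for example between STOP and "added lane".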
Simple stickers

The first two attacks assume a deliberate intent to do harm (or a badly deteriorated sign!). The other two attacks, unfortunately, correspond to situations one actually encounters in the real world. The third attack consists of sticking labels (here reading LOVE and HATE) on the sign to disrupt its reading. For a human, the main message, STOP, is unchanged. But once again the algorithm is fooled beyond a certain distance. At close range it does recognize the STOP sign, but with rather low confidence, or it hesitates between two possibilities.
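One simple way to probe this distance effect, again assuming the illustrative model above, is to shrink the image to mimic a sign seen from further away and watch how the top prediction and its confidence evolve. This is only a crude stand-in for the researchers' real drive-by tests.

```python
import torch.nn.functional as F

def prediction_at_scale(model, image, scale, class_names):
    """Crudely simulate distance: shrink the image (losing fine detail such as
    stickers), resize it back to the network's input size, then classify."""
    small = F.interpolate(image.unsqueeze(0), scale_factor=scale, mode="bilinear")
    restored = F.interpolate(small, size=image.shape[1:], mode="bilinear")
    probs = F.softmax(model(restored), dim=1).squeeze(0)
    conf, idx = probs.max(dim=0)
    return class_names[int(idx)], round(float(conf), 2)

# e.g. for scale in (1.0, 0.5, 0.25): print(prediction_at_scale(model, sign, scale, names))
```

Here `sign` and `names` stand in for a photo of the stickered sign and the list of class labels; both are assumed for the example.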

The last attack is even more commonplace, since it involves sticking black or white stickers on the signs. Such advertising or "art" stickers scramble the algorithm's reading: it is wrong in 100% of cases, recognizing a 45 mph speed limit instead. It should be noted, however, that its second guess is often the STOP sign.
Improvements for autonomous cars

Obviously, the researchers' goal is not to demonstrate that autonomous vehicles are dangerous, but to point out possible weaknesses so that they can be improved. They intend to carry out further tests with new disturbances of the kind one might encounter every day.
