It's still remarkably easy to deceive image recognition systems. And while it's amusing when a neural network mistakes a flower for a dishwasher, the implications of that foolishness are downright terrifying once you consider what happens when these flawed systems are deployed in the real world.
In a paper posted to the preprint server arXiv, researchers from the University of California, Berkeley, the University of Washington, and the University of Chicago demonstrated how unreliable neural networks can be at accurately identifying images. They focused on "natural adversarial examples": unmodified, real-world images that trick a machine learning model into misidentifying a particular object.
The researchers compiled 7,500 natural adversarial examples into a dataset called IMAGENET-A. The pictures were drawn from iNaturalist, a site with millions of user-labeled wildlife photographs, and from user-tagged images on Flickr. They downloaded photographs from these sources, removed those that a separate machine learning classifier labeled correctly, and then manually selected high-quality files from the remainder.
In their study, the researchers illustrated this process with dragonflies. They acquired dragonfly pictures from iNaturalist and filtered them down to 8,925 candidates that the classifier got wrong. An "algorithmically suggested" shortlist narrowed these to a pool of 1,452 images, from which 80 were hand-picked for the final dataset.
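To make the pipeline concrete, here is a minimal sketch of what that automated filtering step might look like. It assumes a pretrained ResNet-50 from torchvision and a local folder of candidate photos; the model choice, the folder path, and the keep_if_misclassified helper are illustrative assumptions, not the authors' published code.

```python
# Hypothetical sketch of the filtering step: keep only images that a
# pretrained classifier gets wrong, so they can go on to manual review.
from pathlib import Path

import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.eval()

# Standard ImageNet preprocessing for torchvision classifiers.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def keep_if_misclassified(image_path: Path, true_class_idx: int) -> bool:
    """Return True when the model's top-1 prediction misses the true label."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        predicted = model(batch).argmax(dim=1).item()
    return predicted != true_class_idx

# 319 is the ImageNet-1k class index for "dragonfly"; the folder is made up.
candidates = [p for p in Path("dragonfly_photos").glob("*.jpg")
              if keep_if_misclassified(p, true_class_idx=319)]
```

Everything that survives this automated pass is, by construction, an image the classifier gets wrong; the manual curation stages then weed out mislabeled or low-quality files.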
Thousands of photographs made it into the final database for numerous reasons, none of which involved a deliberate attempt to fool the classifiers. Several factors caused the neural nets to misinterpret images: bad weather, unusual framing of a shot, an object being partially obscured, and so on. The researchers found that the classifiers tended to overgeneralize, over-extrapolate, and wrongly weight tangential cues in their classifications.
That's why the machine learning model identified a candle as a jack-o'-lantern even though the image contained no carved pumpkin. It's why a dragonfly was classed as a banana because of a yellow shovel lying nearby. It's why, when the framing of a shot of an alligator was slightly adjusted, the network's answer flipped between cliff, leopard, and fox squirrel. And it's why the algorithm confused tricycles with bicycles and circles, and keyboards and calculators with digital clocks.
The results aren't particularly surprising, but the sheer breadth of the database gives a sense of just how many different ways image recognition algorithms can go wrong. That makes it an essential research target as vision-based systems are deployed in increasingly high-stakes contexts.
Automated warehousing and self-driving vehicles are two of the most prominent applications of these technologies. Earlier this year, scientists uncovered an image recognition problem with potentially grave consequences for driverless cars that rely on computer vision: simply rotating pictures of 3D objects was enough to mislead a deep convolutional neural network.
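The original experiments rendered full 3D objects from new viewpoints, but a simplified 2D analogue conveys the idea: rotate an image in the plane and watch the top-1 prediction drift. The sketch below reuses the imports, model, and preprocess pipeline from the earlier snippet, and it is an assumption-laden illustration rather than the researchers' actual setup.

```python
# Simplified 2D analogue of the pose-robustness test: the real study
# rotated rendered 3D objects, whereas this only rotates a flat image.
import torchvision.transforms.functional as TF

def predictions_under_rotation(image_path, angles=(0, 15, 30, 45, 60)):
    image = Image.open(image_path).convert("RGB")
    results = {}
    for angle in angles:
        rotated = TF.rotate(image, angle)  # in-plane rotation, in degrees
        batch = preprocess(rotated).unsqueeze(0)
        with torch.no_grad():
            results[angle] = model(batch).argmax(dim=1).item()
    return results  # a brittle model changes its answer as the angle grows
```

A robust classifier would return the same label at every angle; in practice, even modest rotations can flip the prediction entirely.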
And that's just one of many adversarial scenarios in which this vulnerability could become a serious problem; it's a sliver of what's out there.