Machine learning systems are capable of seemingly sophisticated results, but they can also be fooled in ways that range from relatively harmless - misidentifying one animal as another - to potentially deadly if the network guiding a self-driving car misinterprets a stop sign as one indicating it is safe to proceed.

A philosopher with the University of Houston suggests in a paper published in Nature Machine Intelligence that common assumptions about the cause behind these supposed malfunctions may be mistaken, information that is crucial for evaluating the reliability of these networks.

As machine learning and other forms of artificial intelligence become more embedded in society, used in everything from automated teller machines to cybersecurity systems, Cameron Buckner, associate professor of philosophy at UH, said it is critical to understand the source of apparent failures caused by what researchers call "adversarial examples," cases in which a deep neural network misjudges images or other data when confronted with information outside the training inputs used to build the network. Such failures are rare and are called "adversarial" because they are often created or discovered by another machine learning network - a sort of brinksmanship in the machine learning world between more sophisticated methods to create adversarial examples and more sophisticated methods to detect and avoid them.

"Some of these adversarial events could instead be artifacts, and we need to better know what they are in order to know how reliable these networks are," Buckner said.

In other words, the misfire could be caused by the interaction between what the network is asked to process and the actual patterns involved.
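To make the idea concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one standard technique in the research literature for generating adversarial examples by turning a network's own gradients against it. It is illustrative only and is not the method examined in Buckner's paper; the `model`, `image`, `label`, and `epsilon` names are assumptions for the sake of the example, with PyTorch as the assumed framework.

```python
# Illustrative sketch only: the fast gradient sign method (FGSM), a common
# way of generating adversarial examples in the research literature. Not the
# specific method analyzed in Buckner's paper. Assumes a pretrained PyTorch
# image classifier `model` and a correctly labeled batch `image`/`label`.
import torch
import torch.nn.functional as F

def fgsm_adversarial(model, image, label, epsilon=0.03):
    """Return a copy of `image` nudged toward misclassification."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel a tiny step (at most epsilon) in the direction that
    # increases the classifier's loss; the change is imperceptible to a
    # human viewer but can flip the network's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Because the perturbation is capped at `epsilon` per pixel, the altered image looks unchanged to a person even when the network's answer flips - the kind of misfire whose cause the paper asks researchers to reexamine.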