A cautionary tale in artificial intelligence tells of researchers training a neural network (NN) to detect tanks in photographs, succeeding, only to realize the photographs had been collected under specific conditions for tanks/non-tanks and the NN had learned something useless like time of day. This story is often told to warn about the limits of algorithms and the importance of data collection to avoid “dataset bias”/“data leakage”, where the collected data can be solved using algorithms that do not generalize to the true data distribution; but the tank story is usually never sourced. I collate many extant versions dating back a quarter of a century to 1992, along with two NN-related anecdotes from the 1960s; their contradictions & details indicate a classic “urban legend”, with a probable origin in a speculative question asked in the 1960s by Edward Fredkin at an AI conference about some early NN research, which was then classified & never followed up on. I suggest that dataset bias is real but exaggerated by the tank story, giving a misleading indication of risks from deep learning, and that it would be better not to repeat it but to use real examples of dataset bias and focus on larger-scale risks like AI systems optimizing for the wrong utility functions.

Deep learning’s rise over the past decade and dominance in image processing tasks has led to an explosion of applications attempting to infer high-level semantics locked up in raw sensory data like photographs. Convolutional neural networks are now applied not just to ordinary tasks like sorting cucumbers by quality but to everything from predicting the best Go move to guessing where in the world a photograph was taken to whether it is “interesting” or “pretty”, not to mention supercharging traditional tasks like radiology interpretation or facial recognition, which have reached levels of accuracy that could only be dreamed of decades ago.

With this approach of “neural net all the things!”, the question of to what extent trained neural networks are useful in the real world and will do what we want them to do & not what we told them to do has taken on additional importance, especially given the possibility of neural networks learning to accomplish extremely inconvenient things like inferring individual human differences such as criminality or homosexuality (to give two highly controversial recent examples where the meaningfulness of claimed successes has been severely questioned).

In this context, a cautionary story is often told of incautious researchers decades ago who trained a NN for the military to find images of tanks, only to discover they had trained it to detect something else entirely (what, precisely, that something else was varies in the telling). It would be a good & instructive story… if it were true. Is it?

As it would be so useful a cautionary example for AI safety/alignment research, and was cited to that effect by Eliezer Yudkowsky (but only to a secondary source), I decided to make myself useful by finding a proper primary source for it & seeing if there were more juicy details worth mentioning. My initial attempt failed, and I & several others failed for more than half a decade to find any primary source (just secondary sources citing each other).
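The dataset-bias failure mode the tank legend describes can be sketched in a few lines: a toy “classifier” that aces a biased training set by latching onto a confound, then collapses to chance on unbiased data. This is a hypothetical illustration, not any particular historical experiment; the synthetic “images”, the brightness confound (the legend’s sunny-day/cloudy-day variant), and the threshold rule are all invented for the sketch.

```python
# Minimal sketch of dataset bias / data leakage: in the biased training
# set, the label is perfectly confounded with image brightness, so a
# brightness threshold "solves" the training data without learning
# anything about tanks. On unbiased test data it falls to chance.
import random

random.seed(0)

def make_image(label, biased):
    # Biased collection: "tank" photos (label 1) are systematically
    # brighter. Unbiased collection: brightness is independent of label.
    base = (0.8 if label == 1 else 0.2) if biased else random.random()
    return [min(1.0, max(0.0, base + random.gauss(0, 0.05)))
            for _ in range(64)]

def brightness(img):
    return sum(img) / len(img)

train = [(make_image(y, biased=True), y) for y in [0, 1] * 100]
test = [(make_image(y, biased=False), y) for y in [0, 1] * 100]

# "Learn" the confound: midpoint between the two classes' mean brightness.
tank_mean = sum(brightness(img) for img, y in train if y == 1) / 100
other_mean = sum(brightness(img) for img, y in train if y == 0) / 100
threshold = (tank_mean + other_mean) / 2

def predict(img):
    return 1 if brightness(img) > threshold else 0

def accuracy(data):
    return sum(predict(img) == y for img, y in data) / len(data)

print(f"train accuracy: {accuracy(train):.2f}")  # near-perfect (confound)
print(f"test accuracy:  {accuracy(test):.2f}")   # near chance (no signal)
```

The point of the sketch is that nothing in the training procedure flags the problem: held-out splits of the *same biased collection* would also score highly, and only evaluation on data gathered under different conditions exposes that the shortcut feature, not the intended concept, was learned.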