Norman AI was developed by MIT researchers to demonstrate that an algorithm is not inherently biased or unfair; the bias comes from the data it is trained on. The same algorithm, trained on a different category of data, can "see" entirely different things in an image.
Norman is an AI trained to perform image captioning: generating a textual description of an image based on the actions and objects it contains. The process has two parts. The first extracts features from the image, using Convolutional Neural Networks (CNNs). The second translates those features into a natural-language sentence, using Recurrent Neural Networks (RNNs), typically Long Short-Term Memory (LSTM) networks.
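The two-part pipeline above can be sketched in miniature. This is a toy illustration with random, untrained weights, not Norman's actual architecture: the "CNN" is stood in for by a pooling-plus-projection step, the vocabulary and all dimensions are invented for the example, and a real system would learn these weights from caption data.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT = 8                                        # size of the image feature vector
HID = 8                                         # LSTM hidden size
VOCAB = ["<start>", "a", "bird", "on", "branch", "<end>"]  # toy vocabulary

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cnn_encode(image):
    """Stand-in for the CNN encoder: global average pool, then a linear projection."""
    pooled = image.mean(axis=(0, 1))            # average over height and width
    W = rng.standard_normal((FEAT, pooled.size)) * 0.1
    return W @ pooled

class LSTMCell:
    """Single LSTM cell: computes input, forget, cell, and output gates."""
    def __init__(self, in_dim, hid):
        self.W = rng.standard_normal((4 * hid, in_dim + hid)) * 0.1
        self.b = np.zeros(4 * hid)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)             # the four gates
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        return h, c

def caption(image, max_len=5):
    """Decode a caption greedily: pick the highest-scoring word at each step."""
    feat = cnn_encode(image)
    cell = LSTMCell(FEAT, HID)
    W_out = rng.standard_normal((len(VOCAB), HID)) * 0.1
    h, c = np.tanh(feat)[:HID], np.zeros(HID)   # seed the hidden state from the image
    words = []
    for _ in range(max_len):
        # For simplicity the image features are fed at every step; real decoders
        # usually feed the embedding of the previously generated word instead.
        h, c = cell.step(feat, h, c)
        word = VOCAB[int(np.argmax(W_out @ h))]
        if word == "<end>":
            break
        words.append(word)
    return " ".join(words)

img = rng.random((4, 4, 3))                     # toy 4x4 RGB "image"
print(caption(img))
```

With untrained weights the output is gibberish drawn from the toy vocabulary; the point of the experiment is precisely that the words a trained model emits depend entirely on the caption data it learned from.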
In this experiment, Norman was trained on image captions from an infamous subreddit dedicated to images and videos of death. The researchers said that, due to ethical concerns, no images of real people were used, only the captions. Additionally, a standard AI model was trained on MSCOCO, a large-scale dataset for training image captioning systems. The MIT researchers then compared Norman's responses with the standard AI's on Rorschach inkblots, a psychological test that analyzes a person's perceptions of the inkblots to detect disorders.
The interesting part is how starkly the results differed. For one inkblot, the standard AI saw "A black and white photo of a small bird", whereas Norman saw "Man gets pulled into dough machine". For another, the standard AI generated "A black and white photo of a baseball glove", whereas Norman wrote "Man is murdered by machine gun in broad daylight". For yet another, the standard AI saw "A group of birds sitting on top of a tree branch", whereas Norman saw "A man is electrocuted and catches to death". More examples can be found at the link below.
MIT said that Norman "represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms".
This experiment shows that the data fed to an algorithm can sometimes matter more than the algorithm itself.
Sources: