INSIGHTS INTO MACHINE VISION
aiBias is an image recognition app that detects and classifies (tags) objects in a photo using a popular off-the-shelf PaaS (Platform as a Service).
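The write-up doesn't name the platform, but the basic flow is similar across providers. Here is a minimal sketch using Google Cloud Vision as a representative example; the filename and credentials setup are assumptions, not details from the project:

```python
from google.cloud import vision

# Assumes GOOGLE_APPLICATION_CREDENTIALS points at a valid service-account key.
client = vision.ImageAnnotatorClient()

# Read a local photo as raw bytes ("photo.jpg" is a placeholder).
with open("photo.jpg", "rb") as f:
    content = f.read()

image = vision.Image(content=content)

# Ask the service to tag the image; each annotation carries a label
# and a confidence score between 0 and 1.
response = client.label_detection(image=image)

for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```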
After running a few tests, I began to notice that age, emotion, race, and gender were categories that would occasionally appear.
This made me more broadly curious about the practical implications of how AI/machine learning is designed and implemented, and the impact these choices could have in the future. There are many image recognition platforms available to developers, and they approach the problem in different ways. Some utilize metadata from curated image datasets, some use images shared on social media, and some use human labor (e.g., Amazon Mechanical Turk) to tag photos.
How do these models differ with respect to inherent cultural, religious, and ethnic biases? The complicated process of classifying more abstract notions such as race, gender, or emotion leaves a lot of interpretation up to the viewer. Then there is the problem of the null set: ambiguous cases may go untagged, leaving crucial information out of predictive models. (The sketch below makes this concrete.)
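To illustrate the null-set problem, here is a minimal sketch with made-up labels and scores: classifiers typically drop tags below a confidence threshold, so ambiguous cases silently disappear from whatever a downstream model sees.

```python
# Hypothetical post-processing step: labels below a confidence
# threshold are dropped, producing an untagged "null set".
THRESHOLD = 0.70

raw_predictions = [
    ("person", 0.98),
    ("smile", 0.81),
    ("gender: female", 0.55),  # ambiguous -> dropped
    ("age: 25-32", 0.48),      # ambiguous -> dropped
]

tagged = [(label, score) for label, score in raw_predictions if score >= THRESHOLD]
null_set = [(label, score) for label, score in raw_predictions if score < THRESHOLD]

print("tagged:", tagged)      # what a downstream model sees
print("null set:", null_set)  # information silently excluded
```

The point is that the null set is invisible by design: nothing in the tagged output signals that the ambiguous cases ever existed.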
What does AI think about gender and race? What correlations will it draw?
What are the implications of implementation biases in healthcare, law enforcement, insurance, finance, the military, and employment?
When is this data significant, and under what circumstances should it be used?
Is it possible to design an AI that is "color blind"?
Join the conversation on our blog: