The Never-ending Battle With A.I.

The true fear behind A.I.

The fear of a potential takeover by artificial intelligence lingers in the public as technology advances. Classic Hollywood films such as The Terminator, I, Robot, and The Matrix show pop culture’s fear of highly sophisticated technology having a mind of its own. But the real worry about artificial intelligence, A.I., should be its biased results.


Biased results are a problem in A.I. that will be nearly impossible to avoid, and they can lead to skewed and controversial outcomes. The problem comes down to the data the A.I. is shown and how it perceives that data.


Facial recognition, self-driving cars, and, in the possible near future, robot aides that clean homes across the world are all examples of A.I. being used to meet basic needs and improve safety.


A.I. operates by analyzing the data it’s given in order to perform a certain action. It analyzes that data and then estimates the probability of success in performing the action. Essentially, it learns how to do a certain action by using the resources it’s exposed to.
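The learning process described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (not any real system): a toy “model” that counts outcomes in its training examples, turns the counts into probabilities, and then acts on whichever outcome looks most likely. All the feature and label names are invented for the example.

```python
# A minimal, hypothetical sketch of learning from data: the model
# counts outcomes in its training examples and converts them into
# probabilities. It can only "know" what it has been shown.
from collections import Counter

def train(examples):
    """Estimate outcome counts per feature from (feature, label) pairs."""
    counts = {}
    for feature, label in examples:
        counts.setdefault(feature, Counter())[label] += 1
    return counts

def predict(counts, feature):
    """Return the most probable label for a feature, with its probability."""
    c = counts[feature]
    label, n = c.most_common(1)[0]
    return label, n / sum(c.values())

# Invented training data: two of three "stop_sign" examples say "stop",
# so the model predicts "stop" with probability 2/3.
data = [("stop_sign", "stop"), ("stop_sign", "stop"),
        ("stop_sign", "go"), ("green_light", "go")]
model = train(data)
print(predict(model, "stop_sign"))  # ('stop', 0.6666666666666666)
```

Note that the model’s “judgment” is nothing more than the frequencies in its examples, which is exactly why the choice of training data matters so much.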


Biases in A.I. are almost impossible to avoid, since A.I. doesn’t have a conscience of its own to make judgments. It also can’t control what resources it’s exposed to; that all comes down to what the programmers choose. Therefore, biases creep into the A.I.’s algorithm without being intentional. Where A.I. becomes controversial in its biased results is when its data involves gender and race.


An example of A.I. being controversially biased is Amazon’s hiring algorithm, which the company abandoned in 2018. The algorithm used training data from which the variables of gender, race, and sexual orientation had been removed. Even with these controversial variables avoided, the training data still became biased, because it may have reflected social inequalities and biased human choices, according to the Harvard Business Review.
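This “bias without the biased variable” effect can be shown with a small, entirely invented example, not Amazon’s actual data or system. The idea: even after a gender column is dropped, a remaining feature that correlates with it (here, a hypothetical résumé keyword) carries the historical bias back into the training data.

```python
# A hypothetical illustration of proxy bias. The gender column has been
# removed, but an invented keyword feature (e.g. flagging the phrase
# "women's chess club") still correlates with the dropped attribute,
# so the historical hiring pattern leaks through anyway.
records = [
    {"keyword_womens": 1, "hired": 0},
    {"keyword_womens": 1, "hired": 0},
    {"keyword_womens": 1, "hired": 1},
    {"keyword_womens": 0, "hired": 1},
    {"keyword_womens": 0, "hired": 1},
    {"keyword_womens": 0, "hired": 0},
]

def hire_rate(rows, keyword_value):
    """Fraction hired among records with the given keyword flag."""
    matched = [r["hired"] for r in rows if r["keyword_womens"] == keyword_value]
    return sum(matched) / len(matched)

# A model trained on these records never sees gender, yet its data
# still "prefers" resumes without the proxy keyword.
print(hire_rate(records, 1))  # 0.3333333333333333
print(hire_rate(records, 0))  # 0.6666666666666666
```

The point is that deleting a sensitive column does not delete the pattern; any correlated feature can stand in for it.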


Since A.I. algorithms are becoming more complex as time progresses, the problem of biased results will stick around as their data comprehension and storage capacity increase.