
Artificial intelligence does not like the poor and disenfranchised

When it comes to the greatest threat the spread of artificial intelligence poses to humanity, the answer has nothing to do with robots. The bigger danger is the biased algorithm, and, like so many other harms, it falls disproportionately on the poor and the marginalized.

Machine learning algorithms, whether branded as “artificial intelligence” or used as simple shortcuts for filtering data, cannot make rational decisions because they have no capacity for rational thought. Despite this, US government agencies have put machine learning algorithms in charge of decisions that can profoundly affect people’s lives, an ethical problem that is difficult to comprehend.

Example:

When an algorithm manages a grocery store’s inventory, machine learning genuinely helps: no manager can track millions of items alone. In other cases, though, machine learning can be a disaster. When it is used to deprive someone of their freedom or to restrict their access to their children, we are giving it far too much power.

Two years ago, ProPublica investigated the COMPAS recidivism algorithm, which is used to predict how likely a prisoner or defendant is to re-offend after release and informs bail, sentencing, and parole decisions. The investigation found that the false positive rate for black defendants (45%, meaning defendants marked “high risk” who did not re-offend) was nearly twice that for white defendants (24%). ProPublica published a stern article on the findings, which opened the ongoing debate over bias in artificial intelligence.
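
To make the metric concrete, here is a minimal sketch, in Python, of how a false positive rate is computed per group. The records are hypothetical and this is not ProPublica’s actual analysis; it only illustrates the quantity being compared: defendants labeled “high risk” who did not re-offend, divided by all defendants who did not re-offend.

```python
# Illustrative sketch (hypothetical data, not ProPublica's analysis):
# computing the false positive rate per group for a risk-score classifier.
# A "false positive" here is a defendant labeled high risk who did not re-offend.
from collections import defaultdict

# Hypothetical records: (group, predicted_high_risk, re_offended)
records = [
    ("black", True, False),
    ("black", True, False),
    ("black", False, False),
    ("black", True, True),
    ("white", True, False),
    ("white", False, False),
    ("white", False, False),
    ("white", False, True),
]

def false_positive_rates(rows):
    """FPR per group = false positives / all actual negatives."""
    fp = defaultdict(int)   # predicted high risk, did not re-offend
    neg = defaultdict(int)  # did not re-offend (actual negatives)
    for group, predicted_high_risk, re_offended in rows:
        if not re_offended:
            neg[group] += 1
            if predicted_high_risk:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

print(false_positive_rates(records))
# Roughly {'black': 0.67, 'white': 0.33} for the made-up data above;
# ProPublica's finding on the COMPAS data was about 45% vs. 24%.
```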

New algorithms continue to spread old biases in child welfare cases

In other words, algorithms have no insight of their own with which to make objective moral judgments about people. They simply build a prediction model from the data they are given, and that data depends on what information is available and on what the programmer considers relevant. This means that whoever designs a predictive child welfare algorithm, however well intentioned, embeds existing personal and historical prejudices into the equations, and those equations perpetuate the bias. The worst outcome is that thousands of unwitting families are affected.
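
As a minimal sketch of that mechanism, the Python below fits a deliberately naive model to hypothetical, historically skewed referral records; the neighborhood names, labels, and the majority-vote “model” are all invented for illustration and are not any real agency’s system. The point is only that a model fitted to biased records reproduces the bias for every new case it scores.

```python
# Minimal sketch of how historical bias leaks into a predictive model.
# The "training data" below is hypothetical: past child-welfare referrals
# in which one neighborhood was historically over-reported. A naive model
# that predicts the majority historical label per neighborhood simply
# reproduces that pattern for every new family it scores.
from collections import Counter, defaultdict

# (neighborhood, was_referred) -- hypothetical historical labels
history = [
    ("north", True), ("north", True), ("north", False),   # over-reported area
    ("south", False), ("south", False), ("south", True),  # under-reported area
]

def fit_majority_model(rows):
    """Learn the most common historical outcome for each neighborhood."""
    counts = defaultdict(Counter)
    for neighborhood, referred in rows:
        counts[neighborhood][referred] += 1
    return {n: c.most_common(1)[0][0] for n, c in counts.items()}

model = fit_majority_model(history)

# New families are scored purely by where they live: the old reporting
# pattern, not the families' actual circumstances, drives the prediction.
print(model)           # {'north': True, 'south': False}
print(model["north"])  # a new "north" family is flagged by default
```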

US free media people “get deeply hurt”

We can’t solve social problems with “black box” AI or biased algorithms; it is a bit like fighting fire with fire. Unless we can develop AI that is 100% unbiased, using it to decide whether to deprive a person of their freedom, or to determine custody of a child and that child’s future, is the wrong decision.
