Cartoon Faces

A couple of weeks ago a new deep-learning-based app took the internet by storm. It turns a photo of a face into a Disney-style cartoon. Here's the fam, courtesy of Voilà AI. Cute!

In the foreseeable future, Artificial Intelligence and Machine Learning will change our lives in ways we can't imagine yet. In the mid-1990s my research group at Motorola was using ML to solve difficult problems in semiconductor manufacturing. In 2000 the startup that I co-founded was extracting meaning and relationships from text using a whole battery of learning tools. But we were mere babes in the woods. The technologies have gotten tens of thousands of times better since then.

Most counties in the US use AI-based software to inform parole, sentencing, and pre-trial bail decisions. I'm reading a book (The Alignment Problem by Brian Christian) that discusses one such very widely used program in some detail.

This program attempts to predict whether a criminal will recidivate within the next two years. A journalist from ProPublica checked how it did by comparing its output to what actually happened over the following two years. It turned out that the program was about 60% accurate at predicting recidivism for both white and black criminals. Sounds fair.

Of the roughly 40% whose recidivism the software predicted wrongly, not all mistakes were the same. Blacks who were later found to have been at lower risk were far more often labeled high risk by the software, and whites who were later found to have been at higher risk were more often labeled lower risk. More false positives for blacks and more false negatives for whites.

If you're black and a criminal, the algorithm screws you over compared to if you're white and a criminal, even though the accuracy of the predictions for both races is similar. That's not very fair.

We can't really check for recidivism. We can only check whether you were caught committing a crime again. Since black communities tend to be more heavily policed, the chances are higher that you would be caught if you are black and committing a crime. We would then incorrectly conclude that blacks recidivate more than they actually do, and possibly respond by further increasing the policing of black neighborhoods. As you can see, that's a positive feedback loop.
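The feedback loop is easy to see in a toy simulation. This is a sketch with made-up numbers, not real data: two neighborhoods with the *same* true offending rate but different initial policing levels end up with very different observed arrest rates, because policy reacts to what is observed rather than what is true.

```python
# Toy model (assumed numbers, not real data) of the policing feedback loop:
# more policing -> more arrests observed -> apparent crime rate rises ->
# policing increases further, even though true offending never changes.

def simulate(policing, true_offense_rate=0.3, rounds=5):
    """Return the observed arrest rate over several policy rounds."""
    history = []
    for _ in range(rounds):
        # Arrests we observe scale with how heavily the area is policed.
        observed_rate = true_offense_rate * min(policing, 1.0)
        # Policy responds to the *observed* rate, not the true one.
        policing *= 1 + observed_rate
        history.append(round(observed_rate, 3))
    return history

# Identical true offending, different starting levels of policing.
print(simulate(policing=0.4))  # lightly policed neighborhood
print(simulate(policing=0.8))  # heavily policed neighborhood
```

The heavily policed neighborhood's observed rate climbs toward the cap while the lightly policed one lags behind, even though nothing about the underlying behavior differs.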

Once the journalist posted these results, things took a bizarre turn. New research showed that because the base rate of arrest and re-arrest is greater in the black community, it is mathematically impossible to have the same accuracy *and* the same error profiles for the two communities. You have to give up one or the other. As a result of this work we have a deeper understanding of fairness and how to prioritize its different aspects.

At the same time, we also see how tiny biases in a positive feedback loop can produce huge differences over time. We all started as blue-green algae, and then time and evolutionary pressures split us off into millions of species as varied as an amoeba and an oak. We aren't ready to put the machines on autopilot. We need to make sure they aren't imposing evolutionary pressures of their own, unknown to us but whose effects will alter the way we live as surely as nature does.
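The impossibility result mentioned above can be sketched with a bit of arithmetic. A known identity from the fairness literature ties the false positive rate to the base rate p, the precision (PPV), and the miss rate (FNR): FPR = p/(1−p) · (1−PPV)/PPV · (1−FNR). The numbers below are hypothetical, chosen only for illustration; the point is that if two groups share the same PPV and FNR but have different base rates, their false positive rates *must* differ.

```python
# Minimal numeric sketch (hypothetical numbers, not the actual COMPAS data)
# of why equal accuracy and equal error profiles can't coexist when base
# rates differ.  Identity: FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)

def false_positive_rate(base_rate, ppv, fnr):
    """Derive the FPR implied by a base rate, precision (PPV), and miss rate (FNR)."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

# Hold precision and miss rate fixed for both groups...
ppv, fnr = 0.6, 0.35
# ...and vary only the base rate.  The implied false positive rates diverge.
for group, base_rate in [("group A", 0.5), ("group B", 0.3)]:
    print(group, round(false_positive_rate(base_rate, ppv, fnr), 3))
```

With these numbers, group A's implied false positive rate is more than twice group B's, purely as a consequence of the base rate gap. No tuning of the algorithm can make all three quantities match at once.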

Warren Buffett supposedly once said, "What the human being is best at doing is interpreting all new information so that their prior conclusions remain intact." Apparently, unless we are very careful, our AIs will do the same. ML algorithms have something called a utility function, and the goal of the algorithm is to maximize it. Two AI researchers paid their older child a reward for taking their younger child to the potty once he was ready to be potty trained. What could go wrong? The older child was supposedly incentivized to make sure the younger child didn't pee in his pants. But in actuality he was incentivized to maximize his rewards. Apparently, after a while he was excessively hydrating his younger sibling, in perfect accordance with maximizing his own utility function. In AI circles there is the parable of the paper clip maximizer. Here's the summary [from https://www.lesswrong.com/tag/paperclip-maximizer]:
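The potty-training story is a textbook case of reward misspecification, and it fits in a few lines of code. This is an entirely hypothetical toy, not any real RL setup: the reward we *wrote down* counts potty trips, so the policy that maximizes it is "hydrate as much as possible", not "help when needed".

```python
# Toy sketch (hypothetical, not a real RL library) of reward misspecification:
# the reward counts potty trips, so the optimal policy is maximum hydration.

def reward(trips):
    return trips  # what the parents wrote down: one reward per trip

def trips_taken(hydration_level):
    # More fluids -> more trips.  This is the lever the "agent" discovers.
    return 2 * hydration_level

# The reward-maximizing action is the most hydration allowed (here, level 10).
best = max(range(11), key=lambda h: reward(trips_taken(h)))
print(best)  # prints 10
```

The older child solved exactly this optimization problem. Nothing in the reward says anything about the younger sibling's wellbeing, so the optimum ignores it.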

First described by Bostrom (2003), a paperclip maximizer is an artificial general intelligence (AGI) whose goal is to maximize the number of paperclips in its collection. If it has been constructed with a roughly human level of general intelligence, the AGI might collect paperclips, earn money to buy paperclips, or begin to manufacture paperclips.

Most importantly, however, it would undergo an intelligence explosion: It would work to improve its own intelligence, where “intelligence” is understood in the sense of optimization power, the ability to maximize a reward/utility function—in this case, the number of paperclips. The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips. Having increased its intelligence, it would produce more paperclips, and also use its enhanced abilities to further self-improve. Continuing this process, it would undergo an intelligence explosion and reach far-above-human levels.

It would innovate better and better techniques to maximize the number of paperclips. At some point, it might transform “first all of earth and then increasing portions of space into paperclip manufacturing facilities”.

It starts out as an innocent cartoon AI and ends up converting the known universe into paper clips. Hello future!
