Dave Gershgorn

Journalist

Dave Gershgorn is a writer and photographer based in New York City. He works as a reporter for Quartz, with a personal focus on translating artificial intelligence research for a mainstream audience. He was previously the Assistant Technology Editor of Popular Science. As a photographer, Dave likes to shoot long, character-driven features. His work has been featured in Quartz, Popular Science, The New York Times, and The Wall Street Journal.


On the weekends, Dave likes hot dogs and any other kind of dog. 

The Chinese phrenology paper is an important reminder of AI ethics and not much more

If you're reading this, odds are you're familiar with the paper "Automated Inference on Criminality using Face Images" from Shanghai Jiao Tong University. If you're not familiar, it's basically some research that uses machine learning as modern-day calipers, with the conclusion that we can judge criminality from facial features.

Obviously, this is not true. It's not even that interesting of a paper, to be honest.

But there are a few things worth mentioning. Foremost, this paper did not go unnoticed. Every day, dozens if not hundreds of ML researchers scour the research repository arXiv looking for the latest papers, and people love to call out weird or flawed research on Twitter.

Google engineer François Chollet has likened it to a kind of crowdsourced peer review.

Great, we caught it. Now what?

Here's the reason I'm writing this on a personal blog and not Qz.com, where I'd usually post (and might put something up Monday). 

This is a good case study on how A.I. research can be unethical, but this is just a symptom of having a litany of open, flexible tools. No matter how troubling Sam Biddle finds this work, it was done by two academics and posted on a non-peer-reviewed, public repository. It has not been endorsed by anyone but the authors, and implicitly by the university that let them publish under its name. (To me, that's the most troubling aspect of this whole ordeal.) Major players in the field of artificial intelligence have spent considerable resources to ensure machine learning is easy to access.

I understand artificial intelligence can seem scary and intimidating, but AI is no more to blame for this than the calipers and equations used in phrenology.

Biddle's story does a good job of summarizing why phrenology is bad and, by extension, why this paper is bad. He's right! But at the same time, the story paints the industry in broad strokes.

The story begins:

The fields of artificial intelligence and machine learning are moving so quickly that any notion of ethics is lagging decades behind, or left to works of science fiction.

Incidentally, today New York University hosted dozens, if not hundreds, of machine learning and artificial intelligence researchers to talk about how to make machine learning more ethical. The panels covered topics ranging from algorithmic policing to implicit bias in text-based AI. The conference, titled Fairness, Accountability, and Transparency in Machine Learning, was sponsored by Google, Microsoft, and the Data Transparency Lab.

Microsoft Research's Kate Crawford, who was quoted in Biddle's piece, said at the FATML conference, "We have a responsibility to say what work is ethical and what work is not."

However, it should be said that this paper isn't without a lesson. Machine learning is an incredibly powerful tool, able to rip through data and draw biased, misleading, or just plain wrong conclusions at a dizzying pace if built incorrectly. Academics and AI researchers reacted exactly as they should. But I don't buy the argument that this research creates some kind of market for this technology that did not exist previously.
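To make that "built incorrectly" point concrete, here's a minimal, hypothetical sketch in Python with scikit-learn. It has nothing to do with the Shanghai paper's actual data or code; it just shows how a classifier trained on a dataset whose labels are confounded with an irrelevant attribute (here, a made-up "photo source" feature) can report near-perfect accuracy while learning the confound rather than the thing it claims to predict.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# "Facial" features: pure noise, unrelated to the label.
face_features = rng.normal(size=(n, 10))

# Labels assigned arbitrarily, plus a confound that tracks them:
# imagine one group's photos all came from a different source.
labels = rng.integers(0, 2, size=n)
photo_source = labels + rng.normal(scale=0.1, size=n)  # leaks the label

X = np.column_stack([face_features, photo_source])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

# With the confound included, accuracy looks spectacular (~100%).
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Accuracy with confound:", model.score(X_test, y_test))

# Drop the confound and the "signal" vanishes (~50%, i.e. chance).
model_clean = LogisticRegression(max_iter=1000).fit(X_train[:, :-1], y_train)
print("Accuracy without confound:", model_clean.score(X_test[:, :-1], y_test))

The point isn't that anyone ran literally this script; it's that a high validation score tells you nothing about whether the dataset behind it was assembled responsibly.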

The way this story is told is important. The way that people understand AI is important. The way people understand the engineers building this AI is important. All I ask is that we're smart in the way we cover papers like this.

Think I'm wrong? Did I miss something here? Let me know, I'm on Twitter.