TED Talk >> Zeynep Tufekci: Machine intelligence makes human morals more important

Machine intelligence is here. We're now using computation to make all sorts of decisions, but also new kinds of decisions. We're asking questions to computation that have no single right answers, that are subjective and open-ended and value-laden.
We're asking questions like, "Who should the company hire?" "Which update from which friend should you be shown?" "Which convict is more likely to reoffend?" "Which news item or movie should be recommended to people?"
Look, yes, we've been using computers for a while, but this is different. This is a historical twist, because we cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon. Are airplanes safer? Did the bridge sway and fall? There, we have agreed-upon, fairly clear benchmarks, and we have laws of nature to guide us. We have no such anchors and benchmarks for decisions in messy human affairs.
Question
- Why is using machine intelligence to solve subjective problems an issue?
> There are no guidelines for subjective issues.
- With the development of machine intelligence,
> algorithms are now being used to answer subjective questions.
- To provide a benchmark for something means
> to set a standard for it.
To make things more complicated, our software is getting more powerful, but it's also getting less transparent and more complex. Recently, in the past decade, complex algorithms have made great strides. They can recognize human faces. They can decipher handwriting. They can detect credit card fraud and block spam and they can translate between languages. They can detect tumors in medical imaging. They can beat humans in chess and Go.
Much of this progress comes from a method called "machine learning." Machine learning is different than traditional programming, where you give the computer detailed, exact, painstaking instructions. It's more like you take the system and you feed it lots of data, including unstructured data, like the kind we generate in our digital lives. And the system learns by churning through this data. And also, crucially, these systems don't operate under a single-answer logic. They don't produce a simple answer; it's more probabilistic: "This one is probably more like what you're looking for."
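The contrast drawn here — detailed, exact instructions versus a system that learns from examples and answers probabilistically — can be sketched with a toy nearest-neighbour classifier (the data, labels, and feature values below are invented purely for illustration):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Toy k-nearest-neighbour classifier: no hand-written rules,
    just labelled examples. Returns a label with a probability,
    not a single-answer yes/no."""
    # Sort the training examples by distance to the query point.
    nearest = sorted(train, key=lambda item: abs(item[0] - query))[:k]
    votes = Counter(label for _, label in nearest)
    # "This one is probably more like what you're looking for."
    label, count = votes.most_common(1)[0]
    return label, count / k

# Invented (feature, label) pairs the system "learns" from.
examples = [(1.0, "spam"), (1.2, "spam"), (3.9, "ham"), (4.1, "ham"), (4.3, "ham")]
print(knn_predict(examples, 1.1))  # -> label "spam", with probability 2/3
```

The point of the sketch is the return value: the system never says "this is spam", only "this is probably spam" — exactly the probabilistic logic the talk describes.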

Now, the upside is: this method is really powerful. The head of Google's AI systems called it, "the unreasonable effectiveness of data." The downside is, we don't really understand what the system learned. In fact, that's its power. This is less like giving instructions to a computer; it's more like training a puppy-machine-creature we don't really understand or control. So this is our problem. It's a problem when this artificial intelligence system gets things wrong. It's also a problem when it gets things right, because we don't even know which is which when it's a subjective problem. We don't know what this thing is thinking.
Question
- Why is it a problem when machine intelligence gets things right?
> People can't examine how the system reaches its conclusion.
- What is one characteristic of traditional programming?
> It requires explicit instructions.
- If a method or argument is probabilistic, it is
> based on what is most likely to be true.
So, consider a hiring algorithm -- a system used to hire people, using machine-learning systems. Such a system would have been trained on previous employees' data and instructed to find and hire people like the existing high performers in the company. Sounds good. I once attended a conference that brought together human resources managers and executives, high-level people, using such systems in hiring. They were super excited. They thought that this would make hiring more objective, less biased, and give women and minorities a better shot against biased human managers.
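The hiring system described — "find and hire people like the existing high performers" — reduces, in the simplest possible sketch, to ranking candidates by similarity to past hires. All feature vectors and names below are invented for illustration; a real system would learn far more opaque features:

```python
import math

def similarity(candidate, employee):
    """Toy inverse-distance similarity over numeric features
    (e.g. years of experience, assessment score)."""
    return 1.0 / (1.0 + math.dist(candidate, employee))

def rank_candidates(candidates, high_performers):
    """Score each candidate by average similarity to the company's
    existing high performers -- the logic the talk warns can
    silently reproduce whatever bias shaped those past hires."""
    def score(candidate):
        total = sum(similarity(candidate, e) for e in high_performers)
        return total / len(high_performers)
    return sorted(candidates, key=score, reverse=True)

# Invented feature vectors: (years_experience, assessment_score)
high_performers = [(5.0, 90.0), (6.0, 85.0)]
candidates = [(5.5, 88.0), (1.0, 60.0)]
print(rank_candidates(candidates, high_performers)[0])  # -> (5.5, 88.0)
```

Note what the sketch makes visible: the system optimises resemblance to the past, so if past hiring was biased, the "objective" ranking inherits that bias.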
And look -- human hiring is biased. I know. I mean, in one of my early jobs as a programmer, my immediate manager would sometimes come down to where I was really early in the morning or really late in the afternoon, and she'd say, "Zeynep, let's go to lunch!" I'd be puzzled by the weird timing. It's 4pm. Lunch? I was broke, so free lunch. I always went. I later realized what was happening. My immediate managers had not confessed to their higher-ups that the programmer they hired for a serious job was a teen girl who wore jeans and sneakers to work. I was doing a good job, I just looked wrong and was the wrong age and gender.
So hiring in a gender- and race-blind way certainly sounds good to me. But with these systems, it is more complicated, and here's why: Currently, computational systems can infer all sorts of things about you from your digital crumbs, even if you have not disclosed those things. They can infer your sexual orientation, your personality traits, your political leanings. They have predictive power with high levels of accuracy. Remember -- for things you haven't even disclosed. This is inference.
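The kind of inference described — predicting an undisclosed trait from digital crumbs — can be illustrated with a toy logistic model. The traces and weights here are invented; in a real system, weights like these are fitted from large datasets, which is precisely why the prediction works on things you never disclosed:

```python
import math

# Invented weights: how strongly each observed digital trace is
# associated with some undisclosed trait.
WEIGHTS = {"likes_page_a": 1.5, "likes_page_b": -0.8, "posts_at_night": 0.6}

def infer_probability(traces):
    """Logistic model over digital crumbs: returns an estimated
    probability that the person has the trait -- an inference,
    never a disclosed fact."""
    z = sum(WEIGHTS.get(trace, 0.0) for trace in traces)
    return 1.0 / (1.0 + math.exp(-z))

p = infer_probability({"likes_page_a", "posts_at_night"})
print(round(p, 2))  # -> 0.89
```

With no traces at all the model outputs 0.5 — pure uncertainty; each observed crumb nudges the estimate, which is how seemingly trivial data accumulates into a confident prediction.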
Question
- Why were people in the conference excited about the hiring algorithm?
> It could remove bias from the hiring process.
- To make an inference means
> to form an opinion based on the available information.
