
Using AI to find bias in AI


In 2018, Liz O’Sullivan and her colleagues at a leading artificial intelligence start-up began working on a system that could automatically remove nudity and other explicit images from the Internet.

The company sent millions of online photos to workers in India, who spent weeks adding tags to explicit content. That data, combined with the photographs, would be used to teach AI software how to recognize pornographic images. But after the photos were tagged, Ms. O’Sullivan and her team noticed a problem: the Indian workers had classified all images of gay couples as pornographic.

For Ms. O’Sullivan, this moment showed how easily – and how often – bias can creep into artificial intelligence. It was a “brutal game of whack-a-mole”, she said.

This month, Ms. O’Sullivan, a 36-year-old New Yorker, was named chief executive of a new company, Parity. The start-up is one of many organizations, including more than a dozen start-ups and some of the biggest names in tech, that provide tools and services designed to identify and remove bias from AI systems.

Soon, businesses may need that help. In April, the Federal Trade Commission warned against the sale of AI systems that are racially biased or could prevent individuals from obtaining employment, housing, insurance or other benefits. A week later, the European Union unveiled draft rules that would penalize companies for offering such technology.

It is not yet clear how regulators will police bias. Last week, the National Institute of Standards and Technology, a government research lab whose work often informs policy, released a proposal detailing how businesses can fight bias in AI, including changes in the way technology is conceived and built.

Many in the tech industry believe that businesses should start preparing for regulation. “Some sort of law or regulation is inevitable,” said Christian Troncoso, senior director of legal policy for the Software Alliance, a trade group that represents some of the biggest and oldest software companies. “Every time one of these stories comes out, it destroys the trust and confidence of the public.”

Over the past several years, studies have shown that facial recognition services, health care systems and even talking digital assistants may be biased against women, people of color and other marginalized groups. Amid growing complaints over the issue, some local regulators have already taken action.

At the end of 2019, state regulators in New York opened an inquiry into UnitedHealth Group after a study found that an algorithm used by a hospital prioritized care for white patients over Black patients, even when the white patients were healthier. Last year, the state also examined the Apple Card credit service after claims that it was discriminating against women. Regulators ruled that Goldman Sachs, which operated the card, did not discriminate, while the status of the UnitedHealth inquiry is unclear.

Tyler Mason, a spokesman for UnitedHealth, said the company’s algorithms were misused by one of its partners and were not racially biased. Apple declined to comment.

According to PitchBook, a research firm that tracks financial activity, more than $100 million has been invested over the past six months in companies exploring the ethical issues associated with artificial intelligence, compared with $186 million invested in all of last year.

But efforts to address the problem reached a turning point this month when the Software Alliance offered a detailed framework for fighting bias in AI, including the recognition that some automated technologies require regular oversight from humans. The trade group believes the document can help companies change their behavior and show regulators and lawmakers how to control the problem.

Although they have been criticized for bias in their own systems, Amazon, IBM, Google and Microsoft also provide tools to fight it.

Ms. O’Sullivan said there was no simple solution to bias in AI. A thornier issue is that some in the industry question whether the problem is as pervasive or as harmful as she believes.

“Changing mindsets doesn’t happen overnight — and that’s even more true when you’re talking about large companies,” she said. “You are trying to change not just one person’s mind, but many minds.”

When she began advising businesses on AI bias more than two years ago, Ms. O’Sullivan often faced skepticism. Many executives and engineers advocated “fairness through unawareness”, arguing that the best way to create equitable technology was to ignore issues such as race and gender.

Increasingly, companies were building systems that learned tasks by analyzing large amounts of data, including photos, sounds, text, and figures. The belief was that if a system learned from as much data as possible, fairness would follow.

But as Ms. O’Sullivan saw with the tagging done in India, bias can creep into a system when designers choose the wrong data or sort it incorrectly. Studies suggest that face recognition services can be biased against women and people of color when trained on photo collections dominated by white men.
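One way such skew can be caught before training is a simple audit of how groups and labels are distributed in the annotated data. The sketch below is a minimal, hypothetical illustration of that idea; the field names, records and layout are assumptions, not details from the project described in this article.

```python
# A minimal, hypothetical audit of a labeled image dataset: count how each
# group's images were tagged, so that a skew like "every photo of one group
# labeled explicit" is visible before a model is trained on the data.
# The field names ("group", "label") and the sample records are illustrative.
from collections import Counter

def representation_report(records, group_key="group", label_key="label"):
    """Print how often each (group, label) pair appears in the annotations."""
    pair_counts = Counter((r[group_key], r[label_key]) for r in records)
    group_totals = Counter(r[group_key] for r in records)
    for (group, label), n in sorted(pair_counts.items()):
        share = n / group_totals[group]
        print(f"{group:>10} | {label:<10} | {n:>5} images | {share:6.1%} of group")

# Illustrative records only; a real audit would load annotations from disk.
sample = [
    {"group": "group_a", "label": "explicit"},
    {"group": "group_a", "label": "safe"},
    {"group": "group_b", "label": "explicit"},
    {"group": "group_b", "label": "explicit"},
]
representation_report(sample)
```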

Designers may be blind to these problems. Workers in India – where homosexual relations were still illegal at the time and where attitudes towards gays and lesbians were very different from those in the United States – were categorizing photographs as they saw fit.

Ms. O’Sullivan spotted the flaws and pitfalls of artificial intelligence while working for Clarifai, the company that ran the tagging project. She said she left the company after realizing that it was building systems for the military that she believed could eventually be used to kill. Clarifai did not respond to a request for comment.

She now believes that after years of public complaints over bias in AI – not to mention the threat of regulation – attitudes are changing. In its new framework for curbing harmful bias, the Software Alliance cautioned against fairness through unawareness, saying the argument does not hold up.

“They are acknowledging that you need to flip the rocks over and see what is underneath,” said Ms. O’Sullivan.

Still, there is pushback. She said a recent skirmish at Google, where two ethics researchers were pushed out, was indicative of the situation at many companies. Efforts to fight bias often clash with corporate culture and the relentless push to build new technology, get it out the door and start making money.

It is still difficult to know how serious the problem is. “We have very little of the data needed to model the broader societal safety issues with these systems, including bias,” said Jack Clark, one of the authors of the AI Index. “Many things that the average person cares about—such as fairness—are not yet being measured in a disciplined or extensive way.”

Ms. O’Sullivan, a philosophy major in college and a member of the American Civil Liberties Union, is building her company around a tool designed by Rumman Chowdhury, a well-known AI ethics researcher who spent years at the business consultancy Accenture before joining Twitter.

While other start-ups, such as Fiddler AI and Weights & Biases, provide tools for monitoring AI services and identifying potentially biased behavior, Parity’s technology aims to analyze the data, technologies and methods a business uses to build its services, then pinpoint areas of risk and suggest changes.
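As a rough illustration of the kind of check such monitoring tools perform, the sketch below compares a model’s positive-prediction rate across groups, a gap sometimes called the demographic parity difference. The function names and sample numbers are assumptions for illustration only, not the actual APIs of Parity, Fiddler AI or Weights & Biases.

```python
# A hedged sketch of one common fairness check: how often a model flags
# members of each group, and the largest gap between any two groups.
from typing import Dict, Sequence

def positive_rate(predictions: Sequence[int]) -> float:
    """Fraction of predictions that are positive (1 = flagged)."""
    return sum(predictions) / len(predictions) if predictions else 0.0

def parity_gap(preds_by_group: Dict[str, Sequence[int]]) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = {group: positive_rate(preds) for group, preds in preds_by_group.items()}
    return max(rates.values()) - min(rates.values())

# Illustrative predictions only (1 = flagged as explicit, 0 = not flagged).
gap = parity_gap({
    "group_a": [1, 0, 1, 1],  # 75% flagged
    "group_b": [0, 0, 1, 0],  # 25% flagged
})
print(f"Demographic parity gap: {gap:.2f}")  # 0.50, a large disparity worth reviewing
```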

The tool itself uses artificial intelligence technology that can be biased, reflecting the double-edged nature of AI – and the difficulty of Ms. O’Sullivan’s task.

Tools that detect bias in AI are imperfect, just as AI is imperfect. But the power of such a tool, she said, is to pinpoint potential problems — to get people to take a closer look at the issue.

Ultimately, she explained, the goal is to create a broader dialogue among people with a wide range of views. Problems arise when the issue is overlooked – or when those discussing it all share the same point of view.

“You need diverse perspectives. But can you really have diverse perspectives at one company?” Ms. O’Sullivan asked. “It’s a very important question, and I’m not sure I can answer it.”


