December 9, 2021

RE:WIRED 2021: Timnit Gebru Says Artificial Intelligence Needs to Slow Down

Artificial intelligence researchers are facing a problem of accountability: How do you try to ensure decisions are responsible when the decision maker is not a responsible person, but rather an algorithm? Right now, only a handful of people and organizations have the power—and resources—to automate decision-making.

Organizations rely on AI to approve a loan or shape a defendant’s sentence. But the foundations upon which these intelligent systems are built are susceptible to bias. Bias from the data, from the programmer, and from a powerful company’s bottom line can snowball into unintended consequences. This is the reality AI researcher Timnit Gebru cautioned against at a RE:WIRED talk on Tuesday.

“There were companies purporting [to assess] someone’s likelihood of determining a crime again,” Gebru said. “That was terrifying for me.”

Gebru was a star engineer at Google who specialized in AI ethics. She co-led a team tasked with standing guard against algorithmic racism, sexism, and other bias. Gebru also cofounded the nonprofit Black in AI, which seeks to improve the inclusion, visibility, and health of Black people in her field.

Last year, Google forced her out. But she hasn’t given up her fight to prevent unintended damage from machine learning algorithms.


Tuesday, Gebru spoke with WIRED senior writer Tom Simonite about incentives in AI research, the role of worker protections, and the vision for her planned independent institute for AI ethics and accountability. Her central point: AI needs to slow down.

“We haven’t had the time to think about how it should even be built because we’re always just putting out fires,” she said.

As an Ethiopian refugee attending public school in the Boston suburbs, Gebru was quick to pick up on America’s racial dissonance. Lectures referred to racism in the past tense, but that didn’t jibe with what she saw, Gebru told Simonite earlier this year. She has found a similar misalignment repeatedly in her tech career.

Gebru’s professional career began in hardware. But she changed course when she saw barriers to diversity and began to suspect that most AI research had the potential to bring harm to already marginalized groups.

“The confluence of that got me going in a different direction, which is to try to understand and try to limit the negative societal impacts of AI,” she said.

For two years, Gebru co-led Google’s Ethical AI team with computer scientist Margaret Mitchell. The team created tools to protect against AI mishaps for Google’s product teams. Over time, though, Gebru and Mitchell realized they were being left out of meetings and email threads.

In June 2020, the GPT-3 language model was released and displayed an ability to sometimes craft coherent prose. But Gebru’s team worried about the excitement around it.

“Let’s build larger and larger and larger language models,” said Gebru, recalling the popular sentiment. “We had to be like, ‘Let’s please just stop and calm down for a second so that we can think about the pros and cons and maybe alternative ways of doing this.’”

Her team helped write a paper about the ethical implications of language models, called “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”

Others at Google were not happy. Gebru was asked to retract the paper or remove Google employees’ names from it. She countered with a request for transparency: Who had demanded such harsh action, and why? Neither side budged. Gebru learned from one of her direct reports that she “had resigned.”
