Kate Crawford
Co-founder of New York University’s AI Now Institute, which examines the social implications of AI, and senior principal researcher at Microsoft Corp.’s research division
In a decade, facial-recognition technologies will be seen as inherently corrosive to civil liberties and democratic engagement. Attempts to detect or predict people’s criminality, or to predict their internal emotional state, by looking at pictures of their faces will ultimately be seen as unscientific and discriminatory. We need much more than ethical guidelines to ensure that AI is going to be safe and sustainable in the future. What are the labor practices required to make an AI system work? Does it rely on gig workers and warehouse workers [and “click workers,” who tag data to train machine-learning algorithms] putting themselves in unsafe workplaces to make AI systems function? If it does, don’t build it. Consider the environmental costs. Are you using huge amounts of computational power to make small improvements to AI? Who is made more powerful by the creation of this AI system? If the answer is that it puts more power into the hands of the already very powerful, then I think we should be deeply skeptical.
Adam Wenchel
Co-founder of AI technology startup Arthur AI Inc., and founder of Capital One Financial Corp.’s Center for Machine Learning
In 10 to 25 years, there will be several high-profile incidents of AI systems going horribly wrong, with catastrophic effects. As AI systems start to control more areas, in stock trading and health care, for example, one of these systems is going to go haywire. They already make mistakes all the time, but there will be a mistake that costs a company significant money and has a serious impact on end users. The silver lining is that people will recognize the need to build in the proper guardrails from day one. [Companies] need to have AI oversight committees with data scientists and engineers, as well as anthropologists and sociologists, to study how the systems are going to affect the communities they’re designed to serve. For the majority of systems that are deployed, it’s very tough to understand why a model makes a decision. We need to have explainability for AI. That’s important for any AI system that affects people’s livelihoods, as in finance or health care.
Andrew Moore
Vice president of AI and industry solutions at Google Cloud and former dean of Carnegie Mellon University’s School of Computer Science
We’ll have the technology to make sure that every person gets what would be regarded right now as celebrity concierge-type treatment from an AI-based digital assistant. When a customer is talking to a telecommunications provider or a physician’s office, they’ll expect extremely friendly, effective, quick-thinking help, and this is going to be technologically possible. The digital assistant will creatively solve their problems, put them in touch with the right folks and be as capable as the fancy personal assistant that someone like Tom Cruise has right now. It’s about 25 years away. Also, it’s important to avoid having AI built by only a single subsection of the population. Part of our push to build underlying machine-learning platforms this year has been very much about making it so you don’t need fancy places like Stanford and MIT to learn the technology to be an AI builder.
Apoorv Saxena
Global head of AI and machine-learning services at JPMorgan Chase & Co.
We will see tremendous growth in how AI is used in every part of the economy, and financial services is no different. You’ll see AI making major progress, for example, in the ability to detect fraud, manage and understand risk, and advise customers on their retirement planning. There is open debate around [whether] we will have an AI system that is as smart as a human being, which is also called artificial general intelligence. I don’t believe that’s going to happen within 25 years. The complexity that’s needed to build that is not there. Within 15 years, we will have much more sophisticated virtual agents that understand what you’re saying in real time.
Renée Cummings
Criminologist, AI ethicist and founder of Urban AI, a consultancy that promotes diversity in AI and the inclusion of urban culture and demographics in the design and development of AI
Every aspect of the criminal justice system—police, judiciary and corrections—is going to be touched by AI. The criminal justice industry is going to want everything to be smarter, more intelligent and faster. We’re going to see advancement in criminal justice decision-making using AI for mediation, litigation, jury selection. AI is a big game-changer when it comes to the expediency and efficiency of the criminal justice system. The technology’s ability to operate at speed and scale holds extraordinary promise and can have a positive impact on criminal justice, but the perils are just as great. Algorithmic decision-making systems [should not be] built with data that perpetuates systemic racism and discrimination. Diversity, equity and inclusion must be built into every stage of the life cycle: the design, development and deployment of AI.
Tess Posner
CEO of AI4ALL, a nonprofit focused on increasing diversity and inclusion in AI
According to [job site] Indeed Inc., “machine learning engineer” was the fastest-growing job on its platform, with 344% growth [in the number of postings between 2015 and 2018]. Only 14% of AI researchers globally are women. The numbers are even worse for people of color. In some ways, AI is becoming a gatekeeper to the economy, deciding who gets a job, who gets access to a loan, who gets into the country. Instead of teaching students how to get the right data [to build AI systems], we [should] ask them to think about how that dataset might be biased and where the data is collected. Often, AI may have a positive intention behind it but still have unintended consequences.
These interviews have been condensed and edited.
Illustrations by Kyle Hilton.
"bring" - Google News
August 10, 2020 at 10:09PM
https://ift.tt/31RM6pR
The Changes AI Will Bring - The Wall Street Journal
"bring" - Google News
https://ift.tt/38Bquje
Shoes Man Tutorial
Pos News Update
Meme Update
Korean Entertainment News
Japan News Update
Bagikan Berita Ini
0 Response to "The Changes AI Will Bring - The Wall Street Journal"
Post a Comment