Americans utterly despise AI that makes decisions for them
Who could reasonably want AI making their banking and renting decisions?
Although “automated decision-making” is being hailed as the next big thing, many consumers can’t stand the thought of AI making decisions for them.
According to a Consumer Reports poll conducted this summer, the vast majority of American respondents expressed unease about artificial intelligence (AI) being used to make judgments regarding hiring, banking, renting, medical diagnosis, and surveillance, as the Electronic Frontier Foundation recently pointed out.
Of the more than 2,000 respondents CR polled, an astounding 72 percent said they would feel "uncomfortable" if AI scanned their faces and responses during a job interview, and 45 percent said they were "very uncomfortable" with the idea.
When it came to banking, about two-thirds of respondents expressed unease about financial institutions using AI to decide whether they qualified for loans. A similar share was uncomfortable with landlords using AI to determine whether they qualified as tenants, and nearly 40 percent described themselves as "very uncomfortable" with that prospect.
A majority of the 2,000 Americans who participated in the survey said AI-powered facial recognition surveillance made them uncomfortable, and a third expressed "extreme discomfort" with it. Half of those questioned said they would not feel comfortable with AI being used in medical diagnosis and treatment planning.
Furthermore, an overwhelming majority of respondents to CR's survey (83 percent) indicated they would like to know what information the algorithms drawing conclusions about them were trained on, and 91 percent said they would like a mechanism to correct inaccurate data.
Considering that AI frequently makes mistakes, often in a discriminatory manner, they are undoubtedly right to be concerned.
Selector for Sectors
Despite these reasonable and commonsense reservations, some companies and even some governments are moving quickly to adopt this immature and error-prone technology in order to reduce their reliance on human labor.
For example, as the EFF points out, California Governor Gavin Newsom declared earlier this year that the Golden State will be working with five AI companies to “test” generative AI in government departments that deal with taxation, public health, housing, and transportation.
A comparable scheme carried out by New York City's housing department, meanwhile, was successfully opposed by tenants.
In the private sector, consulting firms like McKinsey and banks like Deutsche Bank seem all-in on new technologies that could easily slip into an algorithmic version of the racist redlining policies financial institutions have practiced for decades.
While the public and private sectors would do well to heed these obvious and overwhelming preferences against decision-making AI, recent history suggests they will charge ahead with these technologies regardless.