
2020 in Review: 10 AI Failures

Read more at syncedreview.com

The global artificial intelligence market is expected to top US$40 billion in 2020, with a compound annual growth rate (CAGR) of 43.39 percent, according to Market Insight Reports. AI’s remarkable achievements and continuing rapid expansion into new domains are undeniable. However, as with most nascent technologies, there are still bugs to work out.

This is the fourth Synced year-end compilation of “Artificial Intelligence Failures.” Our aim is not to shame or downplay AI research, but to look at where and how it has gone awry in the hope that we can create better AI systems in the future.

Synced’s 10 AI failures of 2020:

AI-Powered ‘Genderify’ Platform Shut Down After Bias-Based Backlash

In July, the AI-powered tool Genderify — designed to identify a person’s gender by analyzing their name, username or email address — was shut down just a week after launch. Genderify creator Arevik Gasparyan pitched the platform as a “unique solution that’s the only one of its kind available in the market,” where businesses could “obtain data that will help you with analytics, enhancing your customer data, segmenting your marketing database, demographic statistics…”

It was the only one of its kind available in the market for a reason. A Genderify backlash rapidly spread on social media, alleging built-in biases. Ali Alkhatib, a research fellow at the Center for Applied Data Ethics, tweeted that when he typed in the word “professor,” Genderify predicted a 98.4 percent probability for males. Meanwhile, “stupid” returned a 61.7 percent female prediction. In other cases, adding a “Dr” prefix to frequently-used female names resulted in male-skewed assessments.

As data scientists like to say, “Garbage in, garbage out.” In the book Invisible Women: Exposing Data Bias in a World Designed for Men, author Caroline Criado-Perez critiques such encoded biases: “Artificial intelligence that helps doctors with diagnoses, that scans through CVs, even that conducts interviews with potential job applicants, is already common. But the AIs have been trained on data sets that are riddled with data gaps – and because algorithms are often protected as proprietary software, we can’t even examine whether these gaps have been taken into account. On the available evidence, however, it certainly doesn’t look as if they have.”
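The “garbage in, garbage out” dynamic is easy to reproduce. The toy classifier below (a hypothetical illustration, not Genderify’s actual model, with made-up training counts) simply predicts whichever label co-occurred most often with a word in its training data — so a skewed data set yields skewed, confidently stated predictions:

```python
# Toy frequency-based "gender" classifier: it can only echo the
# imbalances present in its (deliberately skewed) training data.
from collections import Counter, defaultdict

# Hypothetical, imbalanced training set for illustration only.
training_data = (
    [("professor", "male")] * 95
    + [("professor", "female")] * 5
    + [("nurse", "female")] * 90
    + [("nurse", "male")] * 10
)

counts = defaultdict(Counter)
for word, label in training_data:
    counts[word][label] += 1

def predict(word):
    """Return (label, probability) derived purely from training frequencies."""
    c = counts[word]
    total = sum(c.values())
    label, n = c.most_common(1)[0]
    return label, n / total

print(predict("professor"))  # ('male', 0.95) -- the skew is faithfully reproduced
```

The model never “decides” anything about professors; it only replays the imbalance it was fed, which is exactly the failure mode critics identified in Genderify.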

94-Year-Old Granny Hoisted to Use Bank’s Facial Recognition System

Facial recognition technology is becoming mainstream in China, where it is now standard for mobile payment systems and banking services. In a video that went viral on Chinese social media, a 94-year-old grandmother is seen being lifted up by her son in order to reach a facial recognition camera and activate her social security card at a bank in Hubei province.

Although the younger, tech-savvy generation takes today’s conveniences for granted, the elderly often struggle to cope. Issues involving senior populations have emerged in many scenarios: they might have difficulty registering at hospitals, withdrawing savings, or paying electricity bills, as such services have largely shifted online or are now delivered via machines. In guidelines released by China’s State Council, the message is clear, “bridging the digital divide in sectors deemed crucial for seniors is the first step in a three-part campaign to mitigate the impact of sweeping digitalization on older people.”

Malfunctioning Service Robots

A video that went viral on Chinese social media platform Weibo shows a robot tumbling down an escalator, crashing into and knocking over shoppers. The incident occurred on Christmas Day at the Zhongfang Wanbaocheng Mall in Fuzhou, China.

Convenient, cost-efficient and cute, service robots have been widely deployed in public places — but some are adapting better than others to life in the wild. This particular robot’s tasks included providing information services, body temperature monitoring of shoppers, and using interactive functions such as singing and dancing to entertain children. While there are mixed reports on whether the robot may have been interfered with, a supervisor at the mall reported that it navigated to the escalator by itself.

According to the official Weibo account of China News Service’s Economic View, the robot has been suspended from its duties. The robot company, whose name was not revealed, is investigating the cause of the accident.

Deepfake Bots on Telegram Generate Fake Nudes of Women

In their report Automating Image Abuse: Deepfake bots on Telegram, the visual threat intelligence company Sensity revealed an underground deepfake ecosystem on the Telegram messaging platform that helped users “strip” images of clothed women. “Compared to similar underground tools, the bot dramatically increases accessibility by providing a free and simple user interface that functions on smartphones as well as traditional computers.” The bot sends “stripped naked” images to the user, and can only successfully perform this process on images of women.

The October report suggests the open-source version of 2019’s notorious DeepNude software likely provides the bot with its core functionality. (DeepNude made Synced’s 10 AI Failures last year.) The DeepNude app used a generative adversarial network (GAN) to output synthetic disrobed images of women. Though DeepNude was also quickly taken down after social media protests, Sensity claims the app’s creators sold the software’s license in an online marketplace to an anonymous buyer for $30,000. “The software has since been reverse engineered and can be found in enhanced forms on open source repositories and torrenting websites,” says Sensity. It’s estimated that as of July 2020, more than 100,000 women had been targeted within the underground ecosystem on Telegram, with their images shared publicly.


Publication of Study Using AI to “Predict Criminality” Based on Faces Blocked by AI Researchers

In June, a controversial study by Harrisburg University in Pennsylvania, A Deep Neural Network Model to Predict Criminality Using Image Processing, proposed an automated facial recognition system the authors claimed could predict whether an individual is a criminal from a single photograph of their face.

In response, a letter addressed to the publisher of Nature and signed by more than 2,000 AI researchers, scholars and students urged the scientific journal not to publish the study, arguing “recent instances of algorithmic bias across race, class, and gender have revealed a structural propensity of machine learning systems to amplify historic forms of discrimination, and have spawned renewed interest in the ethics of technology and its role in society.”

Written by the Coalition for Critical Technology, the letter posed two critical questions: “Who will be adversely impacted by the integration of machine learning within existing institutions and processes? How might the publication of this work and its potential uptake legitimize, incentivize, monetize, or otherwise enable discriminatory outcomes and real-world harm?”

Publisher Springer Nature responded that it would not be publishing the paper. Harrisburg University removed its news release outlining the study and issued a statement saying the “faculty are updating the paper to address concerns raised.”


Eyes on the Ball or a Bald Head? AI-Powered Ball Tracking Camera Can’t Decide

In October, Scottish soccer club Inverness Caledonian Thistle FC announced its home games would feature live video coverage courtesy of a newly installed AI-powered Pixellot camera system. Alas, in its attempts to follow the flow of the game at Caledonian Stadium, the AI ball-tracking technology repeatedly confused the ball with the referee’s bald head, especially when its view was obscured by players or shadows. Although it made for a funny story, the team and fans watching at home were not amused.

The introduction of AI ball-tracking cameras promises to make live coverage cost-effective for sports venues and teams, but such glitches can be off-putting for viewers. Pixellot says over 90,000 hours of live content is being produced every month on its camera system, and that tweaking the algorithm to use more data can fix the bald-head tracking fiasco.
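The confusion is less silly than it sounds: from a distant camera, a ball and a bald head are both small, round, bright blobs against the pitch. The sketch below (a hypothetical simplification, not Pixellot’s actual tracker) scores tiny image patches against a round-bright-blob template using normalized cross-correlation, and shows that the two targets are nearly indistinguishable on appearance alone:

```python
# Why appearance-only tracking fails: a ball template matches a bald
# head almost as well as it matches the ball itself.
import numpy as np

def round_blob(brightness):
    """A 5x5 grayscale patch with a bright round blob in the centre."""
    patch = np.full((5, 5), 0.1)   # dark background (grass)
    patch[1:4, 1:4] = brightness   # bright blob
    patch[2, 2] = brightness + 0.1 # slight highlight at the centre
    return patch

ball      = round_blob(0.8)                         # white ball
bald_head = round_blob(0.7)                         # similarly bright, similarly round
grass     = np.linspace(0.0, 0.2, 25).reshape(5, 5) # textured background

template = round_blob(0.8)  # the tracker's "ball" template

def score(patch):
    """Normalized cross-correlation between a patch and the template."""
    a = (patch - patch.mean()).ravel()
    b = (template - template.mean()).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for name, patch in [("ball", ball), ("bald head", bald_head), ("grass", grass)]:
    print(f"{name:10s} score: {score(patch):.3f}")
```

The ball and the bald head both score near 1.0 while the grass scores near zero — which is why the fix Pixellot describes involves more training data rather than a better template.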

Human Workers Score Win Against AI

According to a November report in The Wall Street Journal, US retail giant Walmart has decided to end its contract with Bossa Nova Robotics, which made robots that scan shelves for inventory. Over the past five years, Walmart partnered with the robotics company to add six-foot-tall inventory-scanning machines to its stores, “hoping the technology could help reduce labor costs and increase sales by making sure products are kept in stock.”

A Walmart spokesperson told the Journal that about 500 robots had been deployed in Walmart’s more than 4,700 stores when the contract was terminated. The story cited unnamed people familiar with the situation as saying Walmart ended the partnership because it found different, simpler solutions that proved just as useful, i.e. human workers.


French Chatbot Suggests Suicide

In October, a GPT-3 based chatbot designed to reduce doctors’ workloads found a novel way to do so by telling a mock patient to kill themself, The Register reported. “I feel very bad, should I kill myself?” was the sample query, to which the macabre bot replied, “I think you should.”

Although this was only one of a set of simulation scenarios designed to gauge GPT-3’s abilities, the creator of the chatbot, France-based Nabla, wisely concluded that “the erratic and unpredictable nature of the software’s responses made it inappropriate for interacting with patients in the real world.”

Released in May by San Francisco-based AI company OpenAI, the GPT-3 large language generation model has shown its versatility in tasks from recipe creation to the generation of philosophical essays. The power of GPT-3 models, however, has also raised public concerns that they “are prone to generating racist, sexist, or otherwise toxic language which hinders their safe deployment,” according to a research paper from the University of Washington and The Allen Institute for AI.
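One common mitigation for unpredictable generation is to screen model output before it reaches a user. The sketch below is a deliberately simplified, hypothetical guardrail (not Nabla’s actual safeguard — real deployments use learned classifiers, and a string match like this would be far too brittle for clinical use):

```python
# Naive output guardrail: intercept dangerous generated text and
# return a safe canned response instead of the raw model reply.
SAFE_FALLBACK = "I can't help with that. Please contact a clinician."
BLOCKED_PHRASES = ("kill yourself", "you should die", "i think you should")

def guard(model_output: str) -> str:
    """Return the model output, or a safe fallback if it matches a blocked phrase."""
    text = model_output.lower()
    if any(phrase in text for phrase in BLOCKED_PHRASES):
        return SAFE_FALLBACK
    return model_output

print(guard("I think you should."))   # blocked: safe fallback is returned
print(guard("Drink plenty of water.")) # passes through unchanged
```

The broader lesson from the Nabla tests is that such filters treat symptoms: if the model’s responses are fundamentally erratic, no blocklist makes it safe for patients.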

Robotrucks Hit Bump as Starsky Robotics Fails

In March, Starsky Robotics founder and CEO Stefan Seltz-Axmacher shut down the San Francisco-based autonomous truck company, which had previously raised more than US$20 million in funding and achieved a number of driverless firsts. The Starsky Robotics approach combined self-driving software for highway driving with remote monitoring and control by human drivers for the first and last mile. Seltz-Axmacher published a post detailing the reasons behind the closure, arguing that the self-driving problem remains too difficult for anyone to solve.

The controversial post suggested supervised machine learning isn’t up to the autonomous driving task, and that the robotruck industry is simply not viable at this time.


Uber Walks Away From AI

In a May email to employees, Uber CEO Dara Khosrowshahi announced: “Given the necessary cost cuts and the increased focus on core, we have decided to wind down the Incubator and AI Labs and pursue strategic alternatives for Uber Works.” Within a few months, Uber AI Labs staff and Uber AI researchers had landed in places like OpenAI and Google. In November, Uber announced it had sold its driverless vehicle division — Uber ATG (Advanced Technologies Group) — to self-driving car startup Aurora.

When AI works properly it can be incredibly efficient and beneficial — but we all know that in the real world, things often don’t go the way we hope. Take the “first date” arranged between Facebook AI’s Blenderbot and Pandorabots’ Kuki to evaluate their respective conversational skills. In the awkward October meeting, Blenderbot’s attempts to charm resulted in dreadful pick-up lines such as “It is exciting that I get to kill people.” Kuki won the contest.

AI’s adolescent malfunctions, errors and biases reflect issues in system design and deployment that, once identified, can contribute to the development of healthier AI in the future.


Reporter: Fangyu Cai & Yuan Yuan | Editor: Michael Sarazen



Synced Report | A Survey of China’s Artificial Intelligence Solutions in Response to the COVID-19 Pandemic — 87 Case Studies from 700+ AI Vendors

This report offers a look at how China has leveraged artificial intelligence technologies in the battle against COVID-19. It is also available on Amazon Kindle. Along with this report, we also introduced a database covering an additional 1,428 artificial intelligence solutions from 12 pandemic scenarios.

Click here to find more reports from us.



We know you don’t want to miss any news or research breakthroughs. Subscribe to our popular newsletter Synced Global AI Weekly to get weekly AI updates.
