

Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier (Orly Lobel & Failing AI)

Almost all big employers in the United States now apply artificial intelligence and automation to their hiring processes, which is creating urgent questions for the agency that enforces federal anti-discrimination laws.

Orly Lobel: For the past decade, I’ve seen too much of a binary discussion. People in the tech industry celebrate technology for its own sake while ignoring equality, justice, and fairness. Then there are people asking, “Who are the winners and losers, and how do we protect different rights?” I wanted to bridge the two conversations.

We should celebrate the successes rather than have tunnel vision about the problems. And people who are interested in having these conversations are getting more discouraged. Women and people of color are opting out of working for Big Tech, so we have fewer diverse voices on the inside, and the people who are being critical have less skin in the game.

People assume that automation is a perfect answer. The risk is that no one questions automated hiring calls or automated assessments of harassment accusations.

I’ve been researching hiring and diversity and inclusion for a long time. We know that so much discrimination and disparity happens without algorithmic decision-making. If you are implementing an algorithmic hiring strategy, you should ask whether it performs better than the human processes it replaces. And when there are biases, what are the sources, and can they be corrected, for example, by adding more training data? How biased are we ourselves versus how much can we improve these systems?

Some 83% of employers, including 99% of Fortune 500 companies, now use some form of automated tool as part of their hiring process, said the Equal Employment Opportunity Commission’s chair Charlotte Burrows at a hearing on Tuesday titled “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier,” part of a larger agency initiative examining how technology is used to recruit and hire people.

The majority of emotion AI is based on flawed science. Emotion AI tries to reduce a facial expression to an emotion, but the social context of the person and the situation makes that difficult. Even when it is obvious that a person is crying, it is not always possible to deduce the reason and meaning behind the tears. A scowling face doesn’t necessarily mean a person is angry. Why? We all adapt our emotional displays to our social and cultural norms, so our expressions are not always a true reflection of our inner states. Many people do emotional work to disguise their real emotions, and how they communicate them is likely a learned response. Women, for example, tend to modify their emotional displays more than men do, because emotions such as anger carry negative associations for them.

Tech companies looking to build deeper connections with users across banking, health care, and education are releasing advanced bots that mimic human emotions. In China, average users have conversed with the chatbot Xiaoice more than 60 times a month, and it has been said to pass a kind of Turing test: users did not realize it was a bot for 10 minutes of conversation. One analysis projects that health care chatbot interactions will reach 2.8 billion a year within the next five years, which could save health care systems around the world $4.7 billion and free up medical staff time.

Facial recognition can also perpetuate racial inequalities. An analysis of photos of 400 NBA players using two popular emotion-recognition software programs, Face++ and Microsoft’s Face API, found that both assigned more negative emotions on average to Black players, even when they were smiling. These results reaffirm other research showing that Black men have to project more positive emotions in the workplace because they are stereotyped as aggressive and threatening.

Take, for example, a video interview that analyzes an applicant’s speech patterns in order to determine their ability to solve problems. Someone who has a speech impediment might be screened out after scoring low.

Job applicants are also being rejected for gaps in their resumes. An automated screen may turn down a candidate because they had to stop working due to a disability, or because they took time off to have a child.
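To make the mechanism concrete, here is a minimal, hypothetical Python sketch of a gap-based resume filter; it is not any vendor’s actual product, and the cutoff, field names, and applicants are invented for illustration. The point is that the filter never sees the reason for a gap, so disability- and caregiving-related absences get rejected along with everything else.

```python
# Hypothetical sketch (not a real vendor's algorithm): a naive automated
# screen that rejects candidates with long employment gaps.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    employment_gap_months: int  # longest gap on the resume
    gap_reason: str             # visible to us, never to the filter

MAX_GAP_MONTHS = 6  # arbitrary cutoff, chosen only for illustration

def passes_screen(applicant: Applicant) -> bool:
    """Return True if the resume survives the automated gap filter."""
    return applicant.employment_gap_months <= MAX_GAP_MONTHS

applicants = [
    Applicant("A", employment_gap_months=2,  gap_reason="job search"),
    Applicant("B", employment_gap_months=14, gap_reason="medical leave / disability"),
    Applicant("C", employment_gap_months=9,  gap_reason="parental leave"),
]

for a in applicants:
    status = "advances" if passes_screen(a) else "rejected"
    print(f"{a.name}: {status} ({a.employment_gap_months}-month gap, reason: {a.gap_reason})")
```

In this toy example, applicants B and C are rejected before any human reviews them, even though their gaps reflect disability and caregiving rather than job performance.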

Heather Tinsley-Fix said during her testimony that older workers may be disadvantaged by the use of artificial intelligence.

Those with smaller digital footprints may be overlooked by companies that use technology to find ideal candidates.

The EEOC will have to figure out how to root out discrimination when it may be buried deep inside an automated system. Those who have been denied employment may not connect the dots to discrimination based on their age, race, or disability status.

In one lawsuit filed by the EEOC, a woman who applied for a job with a tutoring company realized the company had set an age cutoff only after she re-applied for the same job and supplied a different birth date.

The Role of Audits

Tuesday’s panelists, a group that included computer scientists, civil rights advocates, and employment attorneys, agreed that audits are necessary to ensure that the software companies use avoids intentional or unintentional biases. A bigger question is who would conduct those audits: the government or the companies themselves.

Each option presents risks, Burrows pointed out. A third-party auditor may be co-opted into treating its clients leniently, while a government-led audit could potentially stifle innovation.

Panelists also discussed setting standards for vendors and requiring companies to disclose what hiring tools they are using. What those standards would look like in practice remains to be seen.
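As one example of what such an audit could check, below is a minimal Python sketch of the four-fifths (80%) rule from the EEOC’s Uniform Guidelines, which flags possible adverse impact when one group’s selection rate falls below 80% of the highest group’s rate. The decision log and group labels are invented, and a real audit would go well beyond this single ratio.

```python
# Illustrative adverse-impact check using the four-fifths rule.
# (group, was_selected) pairs, e.g. exported from a hiring tool's decision log.
from collections import Counter

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applied = Counter(group for group, _ in decisions)
selected = Counter(group for group, chosen in decisions if chosen)

rates = {group: selected[group] / applied[group] for group in applied}
best_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / best_rate
    flag = "possible adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} -> {flag}")
```

In this toy data, group_b’s selection rate is a third of group_a’s, so the check would flag the tool for closer review; it says nothing about why the disparity exists.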

Artificial intelligence and algorithmic decision-making tools must be used properly if they are to improve the lives of Americans.