Corporations are embracing the shift towards diversity and inclusion, owing to constant pressure from their consumers, users and activists. However, these well-intentioned pursuits may contradict the core logic of the technologies they run on. Because they are built on existing data and trends, algorithms can generate results biased against certain groups. Research also shows that algorithmically curated social media can become a catalyst for mental health problems, making people feel inadequate when constantly exposed to a colossal amount of unrealistic beauty standards. In that light, how feasible is it for corporations, and for us, to create a truly open and healthy community?
What are the trends?
It is a visibly growing trend that organizations endeavour to promote equity and inclusion within the workplace and their communities, as people demand more say over the organizational values they identify with. According to the management consulting firm McKinsey, companies that demonstrate a higher level of gender and cultural diversity are likely to outperform their less diverse peers on profitability.
Likewise, on social media, firms are rolling out gender pronoun options for their users, aimed at creating more inclusive cultures and supporting underrepresented minorities. For example, Twitter recently started to allow users to add their gender pronouns to their profiles, following other social media giants including Instagram and the professional networking site LinkedIn. The video streaming platform Twitch also enabled its streamers to add 350 tags pertaining to gender, sexual orientation, race, nationality, ability, mental health, and more.
Algorithmic biases
Predictably, such endeavours to promote diversity and inclusion are generally well received by the public. The problem, however, is that the algorithms and artificial intelligence (AI) systems adopted by many companies can produce prejudiced results that fundamentally work against inclusion and diversity. Research shows that AI can perpetuate sexist and racist biases against Black people and women.
Machine learning programs, which recruiters increasingly use to evaluate candidates’ resumes, can be problematic: seemingly convenient and efficient, these programs can make judgments based on gender and racial stereotypes without anyone realizing it. For instance, Amazon used AI to rate its job candidates based on their resumes, but later found that the algorithm tended to show bias against female candidates, because the system was trained on the records of past candidates, who were mostly men.
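The mechanism behind this kind of failure can be made concrete with a toy model. The sketch below is purely illustrative (the data, keywords and scoring rule are invented, not Amazon’s actual system): a naive “resume scorer” learns keyword weights from skewed historical hiring records, and a keyword that merely correlates with gender ends up penalizing otherwise identical resumes.

```python
from collections import defaultdict

# Hypothetical historical records: (keywords found in resume, was_hired).
# Past hires skew male, so a gender-correlated keyword such as
# "womens" (e.g. "women's chess club captain") co-occurs with
# rejection in the training data, even though it says nothing
# about the candidate's ability.
history = [
    ({"engineering", "captain"}, True),
    ({"engineering", "womens"}, False),
    ({"captain", "womens"}, False),
    ({"engineering"}, True),
    ({"captain"}, True),
    ({"womens"}, False),
]

def train(records):
    """Weight each keyword by its hire rate in the historical data."""
    hires, totals = defaultdict(int), defaultdict(int)
    for words, hired in records:
        for w in words:
            totals[w] += 1
            hires[w] += int(hired)
    return {w: hires[w] / totals[w] for w in totals}

def score(weights, resume_words):
    """Average keyword hire rate: how a naive model ranks a resume."""
    known = [weights[w] for w in resume_words if w in weights]
    return sum(known) / len(known) if known else 0.0

weights = train(history)
# Two resumes identical except for the gender-correlated keyword:
base = score(weights, {"engineering", "captain"})
flagged = score(weights, {"engineering", "captain", "womens"})
print(base, flagged)  # the second resume scores strictly lower
```

Nothing in the code mentions gender explicitly; the bias enters entirely through the training data, which is why such systems can discriminate "without people realizing it" and why auditing the data, not just the code, matters.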
Such examples abound on social media too. According to GLAAD, an LGBTQ media watchdog group, the revenue-driven algorithms used by social media companies can amplify the malign effects of anti-LGBTQ misinformation, hate and threats of violence. As a result, LGBTQ users tend to experience harassment and hate speech across social media platforms, including Facebook, Twitter, Instagram, TikTok and YouTube.
In its report, GLAAD calls on social media companies to moderate extremist content and to improve their algorithms, for example by employing qualified human moderators who can distinguish legitimate uses of LGBTQ terms, accounts and posts from trolls and bad actors, and through “the introduction of viral circuit-breakers, fact-check panels, labelling of posts, scan and suggest technology, and limiting auto-play of videos”.
Algorithms on social media also harm people’s mental health through constant exposure to unrealistic beauty standards, something we have become all too familiar with, especially during the pandemic. Many studies reveal an association between negative body image and social media activities such as scrolling through Instagram feeds.
What could be done?
For corporations, it is always a challenge to strike a balance: between freedom of speech and taking action on potentially harmful and biased content (as the vice president of analytics for Facebook once said in response to the GLAAD report), or between profit and minority rights. In any case, we are indeed witnessing some alignment, albeit small, between the values held by some corporations and those of society. As customers, users and employees, it is imperative that we leverage our rights to express our beliefs and, most importantly, scrutinize each result we get from an algorithm for potential bias. Algorithms operate on what they collect from trends and existing data, not on our humanity. After all, it is we humans who have the final say over what the world should be like.
Cover: Alex Iby
Edited by: Andrada Pop