In the past few years, Artificial Intelligence (AI) has seen noteworthy growth, which has naturally attracted investment from both governments and private investors. Many leading research areas in Machine Learning, a subfield of AI focused on building machines that learn from repeated exposure to thousands of datasets, have consistently shown unprecedented results (Hao). The innovations that come out of Machine Learning have direct implications for our lives and many practical applications. Fortunately, applications such as natural language processing and speech recognition, though primarily intended for commercial use, have also had a profound influence in the social and community sphere.

Change is inevitable, and nowhere is it more evident than in the exponential growth of technology. While the benefits and conveniences of technology have substantially taken over our lives, so have its complications (Belfield 15). Governments' slow, bureaucratic processes cannot seem to keep up with the ever-changing tech environment (Milano). The bigger our problems get, the more effort they require, not from any single country but from the coordinated effort of the entire world. This article argues that Twitter has emerged as a platform where discussions about projects, awareness of technology concerns, and meaningful interactions for the betterment of society take place. We will look at projects that have influenced communities, companies built around the principle of diversity in technology, and hackathons that serve as community-building events. In each section, we will discuss the role Twitter played.

The surprising success of Machine Learning and Artificial Intelligence has attracted many activists and actors, both for-profit companies and non-governmental organizations, to work for social good. Data Science for Good, UN Global Pulse Labs, Microsoft AI for Humanity, Google AI for Good, and many others have joined the movement to see AI implemented to create safe and hospitable environments for human beings. The Sustainable Development Goals (SDGs) drive the coordinated efforts of most of these organizations and continue to play a large role in directing technological advancement toward the social developments where it can yield the highest impact.

Terminology

Haydn Belfield, a Cambridge University researcher who focuses on international security, emerging technology, and AI governance, describes the AI community as follows:

The ‘AI community’ includes researchers, research engineers, faculty, graduate students, NGO workers, campaigners and some technology workers more generally — those who would self describe as working ‘on’, ‘with’ and ‘in’ AI and those analysing or campaigning on the effects of AI. This paper focuses especially on the AI community within corporate and academic labs in the US and Europe.

This community has recently emerged in almost every sphere of work. Belfield describes how it has made use of social media, among other enabling structures, to communicate its demands for reform in how information is disseminated:

“The AI community has made use of institutional enabling structures: the Future of Life Institute as a coordinator of international Open Letters; the Tech Workers Coalition and the Partnership on AI as distributors of best practice; corporate digital tools such as email mailing lists and internal chat rooms; and social media such as Twitter and Medium as a way of communicating demands” (Belfield 20).

Different governmental and non-governmental institutions have shown commitment to tackling the social issues that plague the communities they serve. Among the many actors and movements, AI for Social Good has implemented projects that exemplify activism in AI. The next few sections explore those projects and their impact on society.

Troll Patrol

With the advent of social media and platforms that encourage free speech, it has become increasingly hard to maintain equality among genders online. Women are especially impacted by this new social problem in technology. Women, as a vital part of our democracy and economy, deserve to share their opinions freely and without fear of conflict or abuse. Amnesty International took charge of this issue. It partnered with Element AI, the former name of what is now AI for Social Good, to collect and analyze public data on Twitter using computational statistics and machine learning. The project aimed to identify abuse directed at women politicians on Twitter through crowdsourcing and careful labeling done by hundreds of volunteers. The study produced a shocking result: an estimated 1.1 million toxic tweets were sent to the women in the study over the course of the year, with Black women 84% more likely than white women to experience abuse on the platform.
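The study's actual pipeline is not reproduced here, but a minimal sketch of the general approach, training a toxicity classifier on crowdsourced labels and then checking it before applying it at scale, might look like the following. The file name and column names are hypothetical, and scikit-learn stands in for whatever tooling Element AI actually used.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Crowdsourced labels: one row per tweet, with an "is_abusive" 0/1 column.
labeled = pd.read_csv("tweets_labeled_by_volunteers.csv")

X_train, X_test, y_train, y_test = train_test_split(
    labeled["text"], labeled["is_abusive"], test_size=0.2, random_state=0
)

# Bag-of-words features plus a linear classifier: a deliberately simple baseline.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

# Measure error on held-out tweets before extrapolating abuse rates at scale.
print(classification_report(y_test, clf.predict(vectorizer.transform(X_test))))
```

The held-out evaluation step matters: if a headline estimate like 1.1 million toxic tweets is extrapolated from such a classifier, its error rate has to be measured on labeled tweets the model never saw.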

This project illustrates how deeply AI has become entangled with the social scene: humanitarian organizations reach out to AI teams to cultivate powerful relationships that aspire to bring about change through activism.

Thousands of Twitter users followed this development and shared their surprise and disgust through the #ToxicTwitter movement. While some claimed they had expected this, others were completely shocked by the discovery. The movement gathered enough followers that it was eventually able to urge Twitter to begin making changes. In a follow-up video, the women who were victims of this abuse reflect on the developments that came about after Twitter promised to change its platform.

Shaqodoon

Activism is nowhere more evident than in this project. Shaqodoon is an NGO that specializes in collecting citizens' feedback through Artificial Intelligence technology. Citizens, for whom governments and social organizations work, play a pivotal role in determining where these organizations should focus their efforts and in holding them accountable. As the population continues to grow, it has become increasingly crucial for governments to find ways to ensure every single voice is heard.

The one important factor distinguishing Shaqodoon's software from other feedback tools is that users can literally voice their concerns and feedback directly into the app. Each voice recording is then transcribed automatically using natural language processing. This makes it easy for all citizens (in Somalia, where Shaqodoon is based) to give feedback on various developments happening in the country, including infrastructure and health care projects.
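Shaqodoon's actual stack is not documented here; as a rough illustration only, the sketch below uses the open-source SpeechRecognition package and a hosted speech-to-text service to transcribe a single recording. The file name is hypothetical, and the error branch mirrors the failure mode described next.

```python
import speech_recognition as sr

recognizer = sr.Recognizer()

# Load one citizen's voice message (file name is hypothetical).
with sr.AudioFile("citizen_feedback_001.wav") as source:
    audio = recognizer.record(source)  # read the whole recording

try:
    # Send the audio to a hosted speech-to-text service for transcription.
    text = recognizer.recognize_google(audio)
    print("Transcribed feedback:", text)
except sr.UnknownValueError:
    # Audio too noisy or unclear to transcribe; this failure mode affected
    # the vast majority of the recordings Shaqodoon received.
    print("Recording could not be transcribed.")
```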

While this project seemed promising and expectations were high, it did not follow through, because the voice recordings Shaqodoon received could not meet the quality requirements needed for transcription. Of the 80,000 voice recordings expected to be transcribed, only 72 were ultimately good enough to go through the process. Nonetheless, the project was a true embodiment of the AI community's devotion to activism.

Deep Learning Indaba

Deep Learning Indaba is an organization based in Africa that seeks to bring the African community into the advancement of AI. By helping Africans establish a foundation for AI development and progress, Deep Learning Indaba hopes to advance the Sustainable Development Goals set out by the United Nations, among them SDG 17: strengthen the means of implementation and revitalize the global partnership for sustainable development.

Africans in various roles within Deep Learning Indaba and beyond have contributed to shifting African communities' perspective on AI by empowering their innovation and sharpening their skills. The organization has been a bridge for AI technology into Africa; it has supported the growth of research groups aimed at applying natural language processing to various African languages. They do so by producing the research and data needed to train accurate models of those languages.

One noteworthy endeavor undertaken by Deep Learning Indaba in partnership with DeepMind is the Snapshot Serengeti challenge. The project collects images from various locations throughout the Serengeti, meticulously time-stamped and labeled; these images are then used to develop AI solutions for analyzing the migration patterns of endangered animals.

IBM Research Africa also partnered with Deep Learning Indaba to create the IBM-Zindi Malaria Challenge, a challenge that applies reinforcement learning to combat the spread of malaria infections. Data Science Africa and Black in AI (which will be discussed in the coming section) are among the many groups and organizations that have partnered with the organization. Needless to say, Deep Learning Indaba is among the successful AI ventures in Africa. It continues to share the world's AI expertise with Africa, effectively fighting for Africans to gain a place on the world stage.

Diversity in AI

While Artificial Intelligence systems have produced genuinely astounding results in various sectors, AI has also taken blows from the press for its lack of racial and gender sensitivity, especially in the realm of facial recognition (Daugherty). Serena Williams, Oprah Winfrey, and Michelle Obama were among the victims of this racial bias in facial recognition. Beyond facial recognition, AI technologies have also been accused of favoring one gender or race over another in job hiring and loan decisions.

Experts have a responsibility to use their expertise to find and eliminate the subtle biases in the design of these AI systems that result in large disparities. This goal can be advanced by bringing marginalized communities and minority groups into the field and drawing on their perspectives and skills, which would supply the inclusion and diversity the new era of AI communities lacks and mitigate the tensions that come with that lack.

To promote this change, groups like Girls Who Code, AI4ALL, and Black in AI have been founded. Girls Who Code has spread across all states and has established a presence overseas, especially in developing countries; it has more than 90,000 members in the United States alone. AI4ALL runs summer programs that specifically target girls from minority communities.

Black in AI was cofounded by Timnit Gebru and Rediet Abebe. Both Ethiopian by birth, they observed a huge gap between the percentage of white technologists and Black technologists in the field of Machine Learning and Artificial Intelligence. Timnit was in a particularly good position to inquire into this phenomenon, as she was working in Microsoft's Fairness, Accountability, Transparency, and Ethics in AI group on the issue of bias in AI technologies. Both Rediet and Timnit emphasize that, as a solution to bias, diversity should not stop at the dataset: research teams should also include individuals from minority groups who have the social sense to judge what should be incorporated as a feature and what should be removed as a bug.

It is easy to be reassured by the aggregate figures that show how these AI technologies perform. They are right the majority of the time, but when they are wrong, the errors fall disproportionately on the minorities who were not equally represented in the development and testing of these products; minorities keep losing. While it is impossible to build a model that perfectly encapsulates the entire world, researchers and AI developers can easily subdivide their datasets by race and gender and evaluate and develop models on these subgroups to mitigate the problem of bias.
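As a rough illustration of that kind of disaggregated evaluation (not any particular team's methodology), the sketch below computes accuracy separately for each race and gender subgroup; the file and column names are hypothetical.

```python
import pandas as pd
from sklearn.metrics import accuracy_score

# Model outputs joined with demographic attributes (column names are hypothetical).
results = pd.read_csv("model_predictions.csv")

# Overall accuracy can hide large gaps between groups.
print("Overall accuracy:", accuracy_score(results["label"], results["prediction"]))

# The same metric broken down by race and gender exposes those gaps.
per_group = results.groupby(["race", "gender"]).apply(
    lambda g: accuracy_score(g["label"], g["prediction"])
)
print(per_group)
```

A model that scores well overall but poorly for one of these subgroups is exactly the failure the aggregate figures conceal.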

To further strengthen her cause, Timnit has been actively looking on social media platforms like Twitter for people to join Black in AI. “When I started Black in AI, I started it with a couple of my friends. I had a tiny mailing list before that where I literally would add any black person I saw in this field into the mailing list and be like, ‘Hi, I’m Timnit. I’m black person number two. Hi, black person number one. Let’s be friends’” (Snow).

Hackathons and Competitions

Another interesting arena of activism in AI is the youthful force that promises to push AI in the right direction. Hackathons are 24- to 72-hour coding sessions, usually held over a weekend, that aim to create software solutions to various social, political, and economic problems from scratch (Saed). While primarily intended as fun, educational events, hackathons have evolved into venues where software developers and problem-solvers actively try to come up with practical and innovative ways to solve social issues.

Some noteworthy hackathon projects from 2019 include AirFreeN’Free, a website fighting the housing crisis in the San Francisco Bay Area; Know Your Rights, a website that raises awareness of one's rights when stopped by law enforcement officers; and Equal Income, a website that highlights the gender pay gap. These projects were among the winners of their competitions and went a long way toward showing how technology can be used to spread awareness.

The Social Impact Challenge at Bosch is an event held primarily around the theme of social change. It is especially powerful because it brings us to the last important dynamic of Twitter: connecting individuals from different backgrounds to help craft the building blocks of the future of technology. @BoschGlobal, the company's official Twitter account, uses the platform to strategically attract product designers, software engineers, and entrepreneurs who are willing to help code the future. It uses hashtags like #technology, #climatechange, #IoT, and #FightRacism to show up in relevant conversations and encourage discussion of technology's effects on social life.

Conclusion

We know that technology isn’t inherently good or bad. I believe software tech has tremendous potential to add value to the world. But it’s ever more clear to me that a huge part of the current boom is driven by enabling systems meant to manipulate, control and defraud people. — Marco Rogers

The AI community understands this issue and is now more mobilized than ever. It has already crossed some of the most important milestones, with critical successes in ethics and safety work and in employee organizing. However, because of the nature of AI and because no one can accurately predict the trajectory of this technology, the future success of the AI community is uncertain (Walch). “Ethicists warn about AI’s lack of moral sensitivity, empathy, and appreciation for human rights” (Siebecker 98). That being said, the community will remain vigilant in using social media platforms, especially Twitter, to continue discussions on the ethics and safety of AI and to create awareness and connections among independent researchers and engineers.

Works Cited

Belfield, Haydn. “Activism by the AI Community: Analysing Recent Achievements and Future Prospects.” Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, Association for Computing Machinery, 2020, pp. 15–21. ACM Digital Library, doi:10.1145/3375627.3375814.

Daugherty, Paul R., et al. “Using Artificial Intelligence to Promote Diversity.” MIT Sloan Management Review, vol. 60, no. 2, 2019, pp. 10–12.

Hao, Karen. “What Is Machine Learning?” MIT Technology Review, MIT Technology Review, 5 Apr. 2021, www.technologyreview.com/2018/11/17/103781/what-is-machine-learning-we-drew-you-another-flowchart/.

Milano, Brett. “Government Can’t Keep up with Technology’s Growth.” Harvard Gazette, Harvard Gazette, 8 Feb. 2019, news.harvard.edu/gazette/story/2019/02/government-cant-keep-up-with-technologys-growth/.

Rogers, Marco (@polotek). “We know that technology isn’t inherently good or bad…” Twitter, twitter.com/polotek/status/1384199258803302402. Accessed 14 May 2021.

Saed, Omnia. “Code Next Students Merge Computer Science and Activism.” Google, Google, 5 Aug. 2019, www.blog.google/outreach-initiatives/education/code-next-hackathon/.

Siebecker, Michael R. “Making Corporations More Humane through Artificial Intelligence.” Journal of Corporation Law, vol. 45, no. 1, 2020, pp. 95–149. ProQuest.

Snow, Jackie. “‘We’re in a Diversity Crisis’: Cofounder of Black in AI on What’s Poisoning Algorithms in Our Lives.” MIT Technology Review, https://www.technologyreview.com/2018/02/14/145462/were-in-a-diversity-crisis-black-in-ais-founder-on-whats-poisoning-the-algorithms-in-our/. Accessed 13 May 2021.

Tomašev, Nenad, et al. “AI for Social Good: Unlocking the Opportunity for Positive Impact.” Nature Communications, vol. 11, no. 1, May 2020, p. 2468. www.nature.com, doi:10.1038/s41467-020-15871-z.

Walch, Kathleen. “Ethical Concerns of AI.” Forbes, 29 Dec. 2020, www.forbes.com/sites/cognitiveworld/2020/12/29/ethical-concerns-of-ai/?sh=6bf867b523a8.