Increasingly, organizations across many industries and geographies are building and deploying machine learning models and incorporating artificial intelligence into their products and offerings. However, as AI capabilities find their way into systems we interact with daily, it becomes increasingly important to make sure these systems behave in ways that benefit the public. When creating AI systems, organizations should also consider the ethical and moral implications to make sure that AI is being created with good intentions.
Policymakers who want to understand and leverage AI's potential and impact need to take a holistic view of the issues. This includes the intentions behind AI systems as well as the potential unintended consequences and actions of those systems. This is especially true for military applications.
Ahead of an upcoming panel at the Data for AI Conference, Branka Panic, Founder and Executive Director at AI for Peace, shares why she started an organization focused on ensuring that AI delivers lasting positive benefits. In a recent AI Today podcast, she discusses the AI for Good movement, the challenges companies may need to overcome as they approach AI for good, what organizations can do to minimize the risks of creating AI with unintended consequences, and the work of the AI for Peace organization.
What is AI for Peace and why was it founded?
Branka Panic: AI for Peace is an exponential think-tank and community of AI and field experts who are committed to using AI for creating lasting peace. We are based in San Francisco but operate globally. Looking at the peace and security field, we see that AI is a growing element in the military strategy of many countries, and the investments in defense and national security are increasing every year. Military uses of AI are multiple and advanced, such as autonomous systems, target recognition, threat monitoring, and situational awareness tools. On the other hand, utilizing AI in peacebuilding is limited. We want to change this.
Our vision is a future in which AI benefits peace, security, and sustainable development and where diverse voices influence the creation of AI and related technologies. We outfit peacebuilders and AI experts with the mindset and knowledge to develop human-centered artificial intelligence, ensuring the creation of sustainable positive peace. We serve as a global open hub for social scientists, AI researchers, developers, and policymakers who want to understand and leverage AI’s potential and impact.
Why is it important to be thinking now about the use of AI for peaceful reasons?
Branka Panic: A whole spectrum of exponential technologies, including AI, is rapidly accelerating and shaping all aspects of our lives throughout the world. The speed of AI development and its transformative power make it possible to tackle some of the world's biggest challenges in new ways. Some 420 million children, nearly one in five globally, live in areas affected by conflict. At the end of 2018, 70.8 million people were displaced due to war, violence, persecution, famine, and natural disasters, and this number is increasing every year. Despite mass protests in every region, the world suffers continuous deterioration in political rights and civil liberties. Conflicts and crises that emerged in the past decade have begun to decrease, only to be replaced with new tensions and uncertainties as a result of the COVID-19 pandemic. As the world faces an upward trend in conflict, violence, insecurity, and human rights violations, it is our moral imperative to urgently consider all approaches to solving these problems. New technologies are now being widely explored as a potential force to reverse these trends and help create peaceful and just societies. On the other hand, many of the security threats we face today are directly or indirectly caused by the use of these same technologies, such as autonomous weapons, biased algorithms, and facial recognition in policing. AI is a promise, but it also comes with many perils we need to face to sustain peace. Hence, AI for Peace works to safeguard peace both from and with AI and related technologies.
How have you seen different countries approach AI from an ethical perspective?
Branka Panic: In the past several years, various governments and organizations have started adopting sets of AI principles and regulating AI. As of 2019, more than 84 AI guidelines or sets of ethical principles had been adopted. They mostly cluster around the principles of transparency, justice, fairness, non-maleficence, responsibility, and privacy. Although the impact of AI is global, proposals on how to ethically govern this technology come predominantly from developed countries, with Africa, South and Central America, and Central Asia underrepresented. The overwhelming majority of proposals come from the US or the European Union. What we need is an equal AI ethics debate that respects global and local traditions and cultural pluralism.
What are some of the challenges of using AI for good?
Branka Panic: In general, the challenges related to AI for good are quite similar to those of more commercial AI uses: for example, incomplete and biased datasets leading to biased AI conclusions, or algorithms that reflect or reinforce gender, racial, religious, and ideological biases. There are also general and context-specific data challenges, such as data accessibility, data quality, volume, and labeling. Most organizations behind AI for Good initiatives are based in the Global North, likely because of the concentration of AI talent there, so one challenge is connecting AI expertise with communities and problems around the world, as well as investing in talent in the Global South. Another big challenge of using AI for good is the lack of trust in AI-enabled solutions. Discussions around autonomous weapons and killer robots, job automation and loss of employment, drones, and surveillance are all important, but they threaten to destroy trust in any AI-powered solution.
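The dataset-bias problem mentioned above can be made concrete with a simple check. The sketch below computes the gap in positive outcome rates between demographic groups, a basic demographic parity check. The toy decisions, group labels, and tolerance are all invented for illustration; this is not a method prescribed by AI for Peace.

```python
# Illustrative only: a toy fairness check on hypothetical approval decisions.
# The data, group names, and 0.1 tolerance below are invented for this sketch.

def positive_rate(outcomes):
    """Fraction of positive (e.g. approved) outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in positive rates between any two groups."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model decisions (1 = positive) split by a protected attribute.
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],   # 6/8 positive
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],   # 3/8 positive
}

gap = demographic_parity_gap(decisions)
print(f"demographic parity gap: {gap:.3f}")
if gap > 0.1:  # the tolerance is an arbitrary choice for illustration
    print("warning: outcomes differ substantially across groups")
```

Demographic parity is only one of several competing fairness criteria; which one applies depends on the social context, which is exactly why incomplete datasets are not a purely technical problem.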
What are some positive examples of AI for good?
Branka Panic: AI can be an immense help in augmenting human capabilities to tackle some of the world's greatest challenges. A recent study on the role of AI in achieving the Sustainable Development Goals shows that AI could enable the accomplishment of 134 of the 169 targets under the 17 SDGs across society, the economy, and the environment. As progress towards achieving the targets by 2030 is too slow and insufficient, new technologies are being recognized as amplifiers of our capacity to solve complex challenges. Computer vision and natural language processing are especially applicable to a wide range of challenges in ending poverty, reducing inequality, protecting the planet, creating clean and smart cities, and ensuring that all people enjoy peace and prosperity.
At AI for Peace, we are exploring the potential of these technologies to revolutionize the way we transform conflicts and sustain peace. We often hear about AI in terms of the traditional notion of national security, where AI investments predominantly go towards the military. Our goal is to shift the emphasis from national to human security and increase applications of AI for peace. Some of the promising approaches are natural language processing and machine learning for hate speech monitoring in places where the potential for conflict is high; machine learning for conflict early warning and prevention; AI for fighting modern slavery; and AI and NLP for conflict mediation and peacemaking, evaluating public acceptance of peace agreements, or processing local languages and dialects so that all diverse demographics can be heard and even consulted in real time. Another positive example and growing field is combining photos, videos, and satellite and drone imagery with computer vision tools and deep learning for human rights protection, helping human rights activists process content more quickly, see patterns not visible to the human eye, and use that content to demand accountability for human rights violations.
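As a rough illustration of the hate speech monitoring idea mentioned above, the sketch below flags messages containing terms from a watch-list lexicon. Production systems use trained NLP classifiers rather than keyword matching, and the lexicon and messages here are invented placeholders, not real monitoring data.

```python
# Illustrative only: minimal lexicon-based message monitoring. Real systems
# use trained classifiers; the watch-list and messages here are placeholders.
import re

LEXICON = {"traitor", "vermin"}  # hypothetical dehumanizing watch-list terms

def flag_messages(messages, lexicon=LEXICON):
    """Return (message, matched_terms) pairs for messages hitting the lexicon."""
    flagged = []
    for msg in messages:
        tokens = set(re.findall(r"[a-z']+", msg.lower()))
        hits = tokens & lexicon
        if hits:
            flagged.append((msg, sorted(hits)))
    return flagged

stream = [
    "Great turnout at the peace rally today",
    "Those people are vermin and should leave",
]
for msg, terms in flag_messages(stream):
    print(f"flagged: {msg!r} (terms: {terms})")
```

Keyword matching misses context, coded language, and dialect variation, which is why the NLP work described above focuses on learned models and local-language processing rather than fixed lists.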
How can enterprises and companies engage in the “AI for Good” movement?
Branka Panic: Both private and public sector actors have an essential role in ensuring that AI can achieve its potential for social good. As the Cognilytica Global AI Adoption Trends & Forecast 2020 Report shows, almost 90% of respondents said they will have some sort of AI implementation in progress within the next two years. However, many organizations that shape their work around solving humanity's biggest problems do not have the financial, technical, or operational capacity to design and implement AI the way many of those surveyed companies do. So it's crucial to connect innovators in companies with problem solvers in the field. Governments and companies could grant organizations tackling global challenges greater access to data, open up accessible education opportunities, and encourage and support their highly skilled employees and experts in contributing to AI for good projects.
What can organizations do to minimize the risks of creating AI with unintended consequences?
Branka Panic: For the humanitarian and peacebuilding field, thinking about unintended consequences is embedded in our everyday work through the "do no harm" approach. The pandemic crisis scaled this thinking globally across different fields and industries and demonstrated that we need ethical, risk-minimizing AI more than ever. Even in the ideal situation of accurate systems and unbiased algorithms, complex social contexts can cause unintended and unexpected consequences. Adopting ethical principles is only a starting point; clearly defining the values and principles to be followed is critical to bringing them from theory to practice. Beyond adopting and following ethical guidelines, companies need to put human rights first when applying those standards. Any actor designing, developing, or implementing AI needs to prioritize safety over speed, with safety ensured and controlled by independent observers. AI needs to be explainable: if it causes harm, it must be capable of reporting what went wrong. That process needs to be developed and implemented before an incident occurs, to allow public accountability and mitigate damage. Acknowledging and carefully evaluating the social impact of an AI system needs to become a norm and a core part of AI development rather than an afterthought.
How are worldwide data regulations impacting AI?
Branka Panic: Together with talent, research, adoption, and hardware, data regulations will have a huge impact on the future of AI and on which countries gain a global AI advantage. Looking at the three major players in AI development, the United States, the European Union, and China, we see how different approaches to data and data privacy regulation can determine whether AI development centers on the benefit of companies, citizens, or a country.
What are some considerations for ethical guidelines for AI?
Branka Panic: The current AI boom has led to various ethical guidelines and principles being developed and adopted. Unfortunately, research shows that these ethical guidelines often have no actual impact in practice. A report published by New York University's AI Now Institute in December 2019 shows that the "vast majority" of AI ethics statements say "very little about implementation, accountability, or how such ethics would be measured and enforced in practice." To change this, organizations across different sectors have to complement these abstract ethical values and principles with concrete steps for bringing them into practice. As a first step, we need to explain what explainability or transparency of an AI system means in practice, what a human-centered AI system looks like, and who the "human" is in the "human in the loop" concept. Another piece of advice from experts in this area is to advance AI ethics by transforming it into "AI microethics": acknowledging AI as a collective term for a wide range of technologies and recommending a switch from AI ethics to "technology ethics, machine ethics, computer ethics, information ethics, and data ethics".
What are some ways for organizations to advance their ethical AI programs?
Branka Panic: Alongside the vast number of principles and ethical guidelines, there are some actionable ways organizations can implement their ethical AI programs while ensuring transparency, accountability, and fairness. For example, Microsoft's AI, Ethics and Effects in Engineering and Research (AETHER) Committee provides a mechanism for employees to flag concerns and receive timely recommendations. IBM Research launched its AI Explainability 360 toolkit, an open collection of algorithms that use a range of techniques to explain AI model decision-making. The OECD Policy Observatory was launched this year to help convert principles into practice for all OECD member states. Not every approach can serve as a model for other organizations and actors to implement. NeurIPS, the world's largest AI research conference, requires authors to address the impact on society and any financial conflicts of interest. AI researchers from organizations like Google and OpenAI have recommended implementing "bias bounties" to turn principles into practice through third-party auditing. OpenAI demonstrated the safety-over-speed principle when it decided to release GPT-2 in stages, allowing enough time to consider ethical implications before fully releasing the model.
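The "explain the model's decision" idea behind toolkits like AI Explainability 360 can be illustrated in its simplest form with a linear scoring model, where each feature's contribution to the decision is just its weight times its value. The weights and applicant features below are invented for this sketch; this is not the toolkit's actual API, only the underlying intuition.

```python
# Illustrative only: per-feature contributions for a linear scoring model,
# the simplest form of model explanation. Weights and features are invented.

def explain(weights, features, bias=0.0):
    """Return the model score and each feature's contribution, largest first."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

# Hypothetical loan-scoring weights and one applicant's (scaled) features.
weights = {"income": 0.4, "debt": -0.7, "years_employed": 0.2}
applicant = {"income": 3.0, "debt": 2.0, "years_employed": 5.0}

score, ranked = explain(weights, applicant)
print(f"score = {score:.2f}")
for name, contrib in ranked:
    print(f"  {name:>15}: {contrib:+.2f}")
```

For non-linear models this simple decomposition no longer holds, which is why toolkits bundle a range of techniques (surrogate models, example-based and rule-based explanations) instead of a single method.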
What AI technologies are you most looking forward to in the coming years?
Branka Panic: Let me answer this question in the wider context of the global crisis we are currently going through. I believe the pandemic will become a sort of protracted crisis for the entire world, bringing many health, economic, and social challenges. I expect to see new uses of AI technologies to tackle these challenges, while at the same time I work, through AI for Peace, to safeguard against malicious uses and unintended consequences. I am looking forward to seeing how ML and NLP can help us process the vast amounts of data and knowledge already produced in this pandemic and in previous crises and similar humanitarian emergencies. I hope the pandemic will lead to more global cooperation and coordination in AI and beyond. In that regard, I am watching the potential of the newly launched alliance CAIAC (Collective and Augmented Intelligence Against COVID-19), which helps decision-makers make sense of the overwhelming amount of information and uncertainty surrounding COVID-19 and its effects so they can make better decisions faster. How we use AI in this crisis will have a long-lasting influence on trust and public attitudes, and serious impacts on all other applications across different sectors.
Originally published on Forbes.com