By Nicolas Miailhe, Founder & President, The Future Society
Harnessing responsible AI for sustainable development
On June 11, UN Secretary-General António Guterres released his Roadmap for Digital Cooperation. The roadmap builds upon the recommendations of the High-Level Panel on Digital Cooperation that Guterres appointed last year, co-chaired by Melinda Gates and Jack Ma. It arrives at a moment when the crisis of multilateralism has become ever more manifest: amid rising geo-strategic tensions with China, President Trump withdrew the U.S. from the World Health Organization in the middle of the COVID-19 global crisis. Against this backdrop, the roadmap represents an important milestone. It aims to reconcile two urgent imperatives: on the one hand, the so-called “2030 Agenda”, centered around the seventeen Sustainable Development Goals (SDGs) and the Paris Agreement on climate action; on the other, the need to design new global and social contracts to govern the rise of Artificial Intelligence (AI) and other digital technologies responsibly.
The COVID-19 health and economic crisis has shed new light on the high stakes at play in the inextricable connection between AI’s vast promise of productivity, sustainability, and prosperity, and its daunting perils: widening divides and dangerous assaults on human rights, including privacy, free will, and self-determination. For example, using AI to analyse digital contact-tracing data could help communities gradually and safely lift lockdown measures, enabling millions of workers who lost their jobs to the pandemic to return to work. With the equivalent of 305 million full-time jobs lost in the second quarter of 2020 (1), this technological opportunity could bring welcome socio-economic relief.
However, citizens and experts alike are concerned about the reliability of contact-tracing applications and their potential for abuse and privacy infringement (2). Beyond the tension between socio-economic gains and the preservation of fundamental rights, AI-empowered contact tracing spotlights intergenerational trade-offs. Younger citizens have the least to fear from contamination should these apps prove ineffective (3), the most to gain from using them given their intensive use of and familiarity with smartphones, and the most to lose economically from prolonged lockdown measures (4). Societal trade-offs like these show how the promises and perils of AI can be unevenly distributed.
Over the past decade, the rise of the digital economy has produced an oligopolistic concentration of wealth and power in the hands of fewer and fewer private multinational actors. As we enter the age of AI, well-oiled digital value chains give these tech giants a decisive advantage in accessing and controlling data, the necessary fuel for training machine-learning models. The platforms’ critical mass also enables them to invest continuously in scalable computing power, the crucial infrastructure for storing and processing data and running learning algorithms. This further entrenches the tech giants’ dominant positions and widens the ‘AI Divide’ between those who design, develop and deploy AI, and those who do not.
Figure 1: The convergence of software, data and hardware leads the AI revolution
Source: The Future Society
The responsibility of philanthropy to square up the digital and ecological transitions
In this context, squaring up the digital and environmental transitions in ways that do not sacrifice but rather uphold the best of our social, liberal and democratic values is one of the biggest challenges of the 21st century, both in principle and in practice. At a time when inequalities between and within countries are soaring like never before, philanthropy has both a moral obligation and an instrumental role to play in reconciling AI and sustainability.
Philanthropy can help guide the research, development, and adoption at scale of groundbreaking AI technologies, systems, and associated business models towards reduced carbon emissions, biodiversity conservation, more equally shared prosperity, ethics, empowerment, self-determination, and long-term safety. Though the clock is ticking, data and AI business models and value chains are not yet mature; they can still, and must, be shaped to maximize desired outcomes and rein in toxic effects. Governments will naturally play a central role through policy and regulatory frameworks. But they won’t be able to do it alone, for several reasons: the deep crisis of multilateralism severely hampers the international cooperation that is essential given the global nature of the AI revolution, COVID-19, and the sustainability crisis; political leaders and large technocracies still lack the ecological, digital, data and AI culture required to create smart governance mechanisms; and they remain largely imprisoned in 19th-century mindsets and 20th-century processes built around one-size-fits-all instruments.
Updating the “software” of governance to square up the digital and ecological revolutions will require a broader collective intelligence & action effort. It will require giving space to agents of change that build innovative hybrid approaches and public-private-people partnerships (P4) that reconcile problems of the ‘end of the world’ with those of the ‘end of the month’, and those of the ‘end of the day’.
Philanthropy should organize itself to increase support and funding for such agents of change, those positioned to seed and/or scale innovative hybrid models and P4 solutions that are too disruptive or subversive for governments to fund through traditional instruments (grants, tax credits, public procurement), and too risky for financial markets to back viably. As the necessary infrastructure for these to flourish, philanthropy should also get behind the actors working to build the required “cocktails” of self-, soft, and hard regulatory frameworks.
Way forward and examples
Areas of focus should include:
- Internet Governance architecture and processes to reconcile self-determination, global governance, and frugality
- New-generation data-sharing infrastructure, frameworks and protocols (e.g. data exchanges, data trusts and data collaboratives) focused on delivering and monitoring the delivery of the SDGs
- Hybrid capitalistic models for research laboratories developing Autonomous & Intelligent (A/I) Systems to reconcile public-private innovation and shared prosperity
- International Alliances for Fundamental & Applied Research in A/I Systems to prevent dangerous race dynamics between global powers
- Independent Audit & Certification of A/I Systems, and explainability of deep artificial neural networks (including through the use of ‘distributed ledger technologies’) to help build the infrastructure of trust central to the adoption of AI technologies
- Ethical Digital Contact Tracing to help break the chain of COVID-19 contagion while upholding individual liberties
- AI-augmented decision-support tools for better international coordination in the fight against COVID-19, at a time of growing flows and stocks of data and heightened tension between great powers
Such efforts have already begun (5). OpenAI is an interesting experiment in hybrid capitalistic models for funding fundamental and applied scientific research in advanced AI collaboratively. OpenAI was founded in San Francisco in 2015 by Elon Musk, Sam Altman and others, who pledged US$1 billion to create a non-profit research laboratory that would attract the best AI scientific talent, collaborate with other institutions, and make its research and patents open to the public. In 2019, to secure sustained additional funding for large-scale AI supercomputing infrastructure and talent, OpenAI created a wholly-owned for-profit subsidiary governed as a “capped-profit” limited partnership (profits from the LP in excess of a 100x multiplier flow back to the non-profit), which performs the research and commercializes products such as its GPT-3 natural-language-processing model. This move led Microsoft to invest US$1 billion in OpenAI LP, making it an important competitor to Google DeepMind. Some have criticized OpenAI for a lack of sincerity and transparency in making and administering this key governance move. Philanthropy would gain a lot from studying this case to learn from its successes and shortcomings.
Other promising examples include Project GDAF (Global Data Access Framework) and Project CAIAC (Collective & Augmented Intelligence Against Covid-19), which we at The Future Society (TFS) have been leading since 2019. Both projects are very much open to philanthropic actors willing to get involved. The Global Data Access Framework (GDAF) was launched in cooperation with UN Global Pulse, McKinsey Noble Purpose AI, and the AI Commons partnership to help address a major barrier in the application of AI for the SDGs: access to quality data. Recognized in the UN Secretary-General’s Roadmap for Digital Cooperation as an innovative initiative to advance the creation of “digital common goods”, GDAF aims to innovate, test, deploy, scale and hopefully standardize new data-sharing architectures, frameworks and protocols that are technically feasible; operationally, commercially and legally viable; and ethically aligned and privacy-preserving. This includes “data trusts” and standardized API protocols. GDAF brings together a curated global community of leading practitioners and experts from corporations, governments, international organizations, academia, civil society, and philanthropy around structured cycles of action-oriented research and testing.
Firmly anchored in GDAF, the Collective & Augmented Intelligence Against Covid-19 (CAIAC) platform is a decision-support platform designed to dynamically map and advance our common knowledge of COVID-19 and its cascading effects. The global health, economic and social crisis of COVID-19 has mobilized unprecedented resources at national and global levels. Hundreds, if not thousands, of bottom-up and country-level initiatives have been launched to address the issue. However, the volume of data and knowledge being generated is outstripping our ability to process it. Furthermore, there is minimal sharing of knowledge and best practice across these initiatives, and no central locus to coordinate a global response. CAIAC’s motto thus is: better decisions, faster. Launched a few weeks ago during the UN High-Level Political Forum, CAIAC brings together a global alliance led by The Future Society, Stability.ai, and the Stanford Institute for Human-Centered Artificial Intelligence (HAI), with the generous support of the Patrick J. McGovern Foundation and additional representation from UN Global Pulse, UNESCO, the World Bank, and the World Health Organisation. The CAIAC platform currently being developed will present comprehensive, authoritative, and up-to-date insights and solutions to funders and decision-makers across the full intervention lifecycle, holistically spanning health, social and economic issues and impacts. For its first phase, CAIAC has engaged multilateral stakeholders to identify three core areas (use cases) in which decision-makers urgently need authoritative, accurate knowledge to inform their decisions and take action:
- Tracking/tracing of contagion chains via mobility data and AI
- Addressing the ‘infodemic’ of false information about COVID-19
- Targeting support for marginalised groups suffering second- and third-order pandemic impacts.
Delivering the UN Secretary-General’s Roadmap for Digital Cooperation over the next decade is crucial to reconciling digitalization, prosperity, peace, and sustainability. This will require seeding and scaling new and innovative architectures, models, partnerships and social contracts. Philanthropy must take a lead in helping individuals and societies navigate this transition.
This article is part of the WINGSForum Imagine Series, focusing on the theme and sub-themes of technology, power and economy.
(1) International Labour Organization (May 27 2020) ILO Monitor: COVID-19 and the world of work. Fourth edition. Updated estimates and analysis. Retrieved from https://www.ilo.org/wcmsp5/groups/public/@dgreports/@dcomm/documents/briefingnote/wcms_745963.pdf
(2) Soltani, A., Calo, R., & Bergstrom, C. (April 27 2020) Contact-tracing apps are not a solution to the COVID-19 crisis. The Brookings Institution, Tech Stream. Retrieved from https://www.brookings.edu/techstream/inaccurate-and-insecure-why-contact-tracing-apps-could-be-a-disaster/
(3) Centers for Disease Control and Prevention (June 25 2020) Coronavirus Disease 2019 (COVID-19) – Older Adults. Retrieved from https://www.cdc.gov/coronavirus/2019-ncov/need-extra-precautions/older-adults.html
(4) International Labour Organization (May 27 2020) ILO Monitor: COVID-19 and the world of work. Fourth edition. Updated estimates and analysis. Retrieved from https://www.ilo.org/wcmsp5/groups/public/@dgreports/@dcomm/documents/briefingnote/wcms_745963.pdf
(5) See also Monks, J., & Sellen, C. (June 2020) Artificial Intelligence and intelligence of the heart: Opportunities and risks in a post-COVID world. Alliance Magazine. Retrieved from https://www.alliancemagazine.org/analysis/artificial-intelligence-opportunities-risks-philanthropy-post-covid-world/