Why “AI for Humanity”? The Future in Our Hands

Advances in artificial intelligence (AI) are set to transform the way we live together, with consequences as profound as they are unnerving. Collectively, we are not yet ready – intellectually, philosophically, or morally – for the world we are creating. If we all begin to see what is at stake, we can work together towards a better understanding of issues that are as urgent as other global challenges, such as climate change and social inclusion. Project Sayang aims to:

1. articulate the significance and urgency of AI for Humanity as an agenda for research and public deliberation; and
2. propose several initiatives, including an evolving and interdisciplinary open-source AI literacy curriculum to promote lifelong learning and public deliberation on the ethics and governance of AI, as well as dialogue and transparency by government and industry to build public trust in AI.

In the next few decades, we will develop computing systems of astonishing capability, some of which will rival and surpass humans across a wide range of functions, even without achieving human-like intelligence. Before long, these systems will cease to resemble computers. They will be embedded in the physical world, hidden in structures and objects that we never used to regard as technology. Ever more information about human beings – what we do, where we go, what we think, what we say – will be captured and recorded as data, then sorted, stored and processed digitally. In the long run, the distinctions between human and machine, online and offline, virtual and real, will fade. These advances in AI also call into question what it means to be human. Regardless of whether humans can be replaced by machines, they make us cherish the qualities that remain unique to human beings – consciousness, emotion, morality, culture, and creativity – qualities that we should enhance rather than diminish.

This AI-driven transformation brings immense benefits for human civilisation. Our lives will be enriched by new ways of working, playing, learning, creating, expressing ourselves, and finding meaning. In future, we may be able to augment our minds and bodies beyond recognition, freeing ourselves from the limitations of human biology and repetitive menial tasks. In healthcare, for example, AI promises to enhance human life in the prevention, detection, diagnosis, and treatment of disease.

Experts have suggested that the Singularity – the point at which advances in AI create a machine, or a technological form of life, that is smarter than humans – could arrive as early as 2030, or within 30 years. Although some argue that the Singularity will never come, it is still worthwhile to ask “what if?”, so that we become more aware of the risks, unintended consequences, dilemmas, and threats that AI poses to humanity.

Some technologies will come to hold great power over human beings if we are not mindful. Some will be able to force us to behave in certain ways – for example, AI software that leads police to unfairly target particular neighbourhoods because of racial biases in its algorithms. Others will be powerful because of the information they gather about us: merely knowing we are being watched makes us less likely to do things perceived as wrong. Still other technologies will filter what we see of the world, prescribing what we know, shaping the way we think, influencing how we feel and thereby determining how we act. Those who control these technologies will control the rest of us.

Governments, corporations, and research institutions today are channelling ever more resources into AI in pursuit of power, profit and ideological supremacy. Competing in AI has become a badge of prestige for nations, businesses, and universities around the world, and “AI superpowers” will shape the new global order. Yet important questions arise: who decides how AI is deployed, who benefits, and who is ultimately accountable and responsible for AI decisions?

Creating innovative approaches to guide the development of “human-friendly” AI is one of the biggest challenges facing the world today. Currently, a lack of public trust in AI and AI-automated decision-making risks sparking a concerted backlash against AI technologies. Instead of raging against the AI machine, it is better to ensure that humans and machines are on the same side. As humanity’s negative impact on the natural environment grows in scale, it is equally vital to broaden the definition of AI for Humanity to include AI for Earth. Taken together, this means that AI for Humanity must be a global agenda linked to the United Nations Sustainable Development Goals, especially the goals of sustainable cities and communities, reduced inequalities, and climate action.

A PwC study has highlighted six short-term and long-term risks with varying impacts on individuals, organisations, society and the Earth: performance risks (e.g. errors, bias), security risks (e.g. privacy intrusion), control risks (e.g. AI going rogue and turning malevolent), ethical risks (e.g. lack of values), economic risks (e.g. job displacement), and societal risks (e.g. autonomous weapons). To cope with these challenges, we need to approach AI governance from a human- and Earth-centred ethical perspective, as other organisations have also advocated. Core principles include:

1. Human values: AI should enhance, not diminish, the human intellect, and should value human life, safety, and choice.
2. Sustainability: AI should be used as a tool for sustainable development.
3. Inclusiveness: The development of AI must aim to benefit all citizens, residents, communities and neighbourhoods. This requires pairing it with ongoing studies of its impact on human society.
4. Consultation: AI efforts should seek to advance citizen engagement and ensure that policies put citizens at the forefront and meet their needs.
5. Transparency: The public has a right to be informed of AI developments and outcomes, and such information should be made accessible for everyone’s benefit. Standards and best practices in AI research and development should be developed in tandem with evidence-based, outcomes-driven approaches and shared openly with others.

Moving forward, today’s growing public interest in AI offers a valuable opportunity to promote public engagement and critical discourse about the societal impact of these technologies, which are shaped by the expectations and visions of stakeholders. If the public is to trust and embrace the full range of political, economic, social and cultural changes that AI technologies will introduce, now is the time to think about how to optimise their benefits while anticipating and mitigating the risks to humanity.

Project Sayang proposes the following initiatives targeted at three key stakeholders:

For universities, we propose an open-source “plug-and-play” AI-literacy curriculum that is interdisciplinary and may be adapted by any institution, with the aim of promoting a better understanding of the impact of AI on humanity and the planet. This “living” curriculum will evolve as students and the public evaluate the benefits, risks, unintended consequences and directions of AI. For more information on the proposed curriculum, as well as learning resources, please visit Project Sayang’s AI for Humanity website.

Beyond seminars, this AI-literacy curriculum relies on participatory and experiential learning to foster a growth mindset and critical thinking among participants. Through discussions of real-life issues and dilemmas, participants examine their own perspectives and explore alternative scenarios. Universities should also incorporate AI governance principles into their Institutional Review Board guidelines, as part of protecting the human subjects involved in such research. AI-focused institutions such as NISTH, and the wider scientific community, also play a crucial role in informing and educating policymakers, the public and other stakeholders about the coming challenges and dilemmas associated with AI technologies.

As with any technology, AI’s potential to help or harm us depends on how it is applied and overseen. For governments, public engagement and consultation on the ethical use of AI should be part of best practice. To foster public trust in AI, the right checks and balances need to be put in place, and new approaches that more actively involve the public in setting the parameters of the discussion and framing the issues under scrutiny need to be implemented. Rather than being mere subjects of consultation, ordinary citizens must be able to deliberate with decision-makers and other members of the community, so that citizens and residents define and shape the issues, listen to the arguments of others, and express what they think.

The public also has a key role in community-based participatory research (CBPR) and citizen-science projects related to AI, which give participants hands-on experience with AI technologies and, in turn, create better-informed citizens who are aware of AI’s benefits and risks. Furthermore, CBPR acknowledges the values and priorities of the community, ensuring local relevance and respect for cultural diversity.

For people living in urban environments, AI technologies will provide tools to understand and manage the impact that the transition to a digital economy is having on the city and, more importantly, to inform the design of a built environment that is resilient to challenges such as climate change. For example, automated and electric vehicles will in future allow for more efficient use of infrastructure and for safer, cleaner neighbourhoods. If these benefits are to be fully realised, the decisions we take today must anticipate their impact on residents and the built environment, and citizens must be able to deliberate on that impact in dialogue with government and industry.

As industry continues to harness the potential of AI at ever greater scale, it likewise needs to be cognisant of using these technologies in a responsible and ethical manner. Corporations should develop and implement internal governance around the use of these technologies. They should also be transparent in disclosing how they use AI technologies and consumers’ data.

After recovering from stage-four cancer, Kai-Fu Lee, a pioneering AI leader, came to realise that it is love and relationships, not algorithms, that sustain humanity. Instead of seeing AI as a threat to humanity, Lee advocates seeing it as an unparalleled opportunity to reorganise societies to “build our shared future: on AI’s ability to think but coupled with human beings’ ability to love.” For although AI can detect and infer human emotions, it does not have the human capacity to love.

The future of AI and humanity is in our hands, and we must do all we can to ensure that AI’s development benefits us and cultivates the very qualities that make us human.