
Who’s Responsible? Consider Developing Countries When Assessing the Ethical Responsibilities of Innovation

November 07, 2018

Imagine sitting on a park bench chatting with a life-sized robot as engaging as the Tin Man in The Wizard of Oz. But this is not just any chunk of metallic brain power. It is your life partner. CNN journalist Laurie Segall explored the emotions, practicalities, and even anatomy of robotic relationships on her TV series “Mostly Human” as she interviewed a Frenchwoman enjoying just such a park bench moment.

Social robots endowed with artificial intelligence (AI) are already here. Japanese robot puppy Aibo learns to respond to head pats and its owner’s voice commands. Blue Frog’s robot companion BUDDY provides social interaction as well as physical assistance to elderly people who live alone. David Hanson’s companion Sophia converses with Chinese audiences and holds Saudi Arabian citizenship.

A societal responsibility for innovations?

Blurring the boundaries of humanity may be one of our most pressing ethics issues. Each AI innovation brings new questions specific to its new powers. But they all share one: how does society allocate the ethical responsibility for the innovations? How do we then hold each stakeholder accountable? (There is an even earlier question: How do the creators of AI allocate decision-making power? But that’s for another article.)

When we 3D print human organs, implant AI into human brains, or inject emotional intelligence into artificial intelligence, who is—and who should be—responsible for the known potential consequences? Who is responsible for the consequences that were unimaginable when the innovators unleashed their technology on society?

And as the United States and Europe probe the ethics of AI, they have a responsibility to consider the potentially vast differences in the ethical consequences of their conclusions for lower-income nations. I may be reluctant to hand my car keys to an algorithm in my home towns of Palo Alto and London. But according to a World Bank report, 90 percent of global road deaths occur in low- and middle-income countries even though those countries account for only approximately half the world’s motor vehicles. (The report cites 34 deaths from road traffic injuries per 100,000 people in lower-middle-income countries (LMICs) in 2015 versus 8 per 100,000 in OECD countries.) The need for accessible, safe transportation in LMICs suggests that the safety and ethics risks of waiting for perfect driverless cars might outweigh those of the current models. Similarly, most westerners view social media ethics from the perch of readily available internet access and a variety of social media options. In some countries, however, Facebook is not only the exclusive social media option but also almost synonymous with the internet.

Going forward, when we refer to “stakeholders,” we mean anyone or anything (a policy, an institution, a flying car) that could influence, or be influenced by, a decision or action. Stakeholders in technological innovation include: the innovators; the owners of innovation (entrepreneurs, companies, shareholders); regulators and policymakers; consumers; experts in the field, research institutions, and educators; non-governmental associations; and the general public.

Our top concern should be balancing a pro-innovation, pro-business, and pro-human-progress approach to ethics with protection (but not overprotection) of society. We need a broader conversation about what each stakeholder’s role should be.

Let’s take them one at a time.

Regulators: Define your scope of responsibility and rebalance asymmetric information

First, we must recognize that policymakers and legislators cannot keep up with science and technology, both because they lack the expertise and time and because technological innovations outpace law. Nevertheless, regulators can do much better—in a non-invasive, innovation-friendly way. To start with, they can define their scope of responsibility. US Senator Dianne Feinstein warned senior lawyers at Facebook, Google, and Twitter about their part in Russian meddling in the 2016 US election: “You created these platforms ... and now they’re being misused,” she said, “and you have to be the ones who do something about it—or we will.” But the senator missed the point. She should have said “and we will”—particularly with reference to AI.

Regulators can also help rebalance asymmetric information and understanding. They should require simplified and targeted disclosure of risks; they should also mandate alignment of products with appropriate consumers. US financial regulations provide a model in letter and spirit—for example, ensuring that banks market financial products appropriate to the sophistication and financial situation of their clients and requiring disclosure of all material information (with “material” defined by a “reasonable person” standard).

Regulators generally have national authority. But regulation can have global influence (for example, as Silicon Valley considers ramping up its data privacy measures to resemble the requirements of the new European data privacy law, the General Data Protection Regulation [GDPR]). And some competition among regulators for the best (most globally relevant and innovative) ethics oversight is essential to achieving balance and ensuring attention to nations whose legal systems may lag behind.

Companies: Stretch scenario planning and simplify risk disclosure

Second, companies should not assume all the risks of their products. But from the earliest stages of development, both innovators and companies should think hard about what harm could become possible (not just what is possible now). They should stretch their imaginations to negative as well as positive uses: How might bad actors weaponize or otherwise misuse their technology? How could uneducated actors inadvertently trigger harm? These questions must be asked from the earliest stages of design conception, not afterward as damage control. And, again, the scenario probing should stretch to nations with varying income levels, political systems, and social norms.

Companies should simplify disclosure of risks to consumers and the public rather than waiting for regulators to take potentially ineffective or over-invasive approaches. Like makers of any other consumer products, tech companies should inform consumers in plain language what they are consenting to when they engage with news-filtering algorithms, date a robot, or board one of Elon Musk’s civilian rocket ships. Plain language means language relevant to the context, whether that context is Myanmar or London.

Companies should assume that no one will read the user agreements, terms of service, rules, and other legalese. Indeed, as Twitter CEO Jack Dorsey acknowledged in hearings before the US Senate Intelligence Committee, few could understand them if they did. Companies should distill the top risks and user obligations, publish them in plain language, and err on the side of opt-in. Once a company has properly informed users and monitored its products for unexpected problems, it should be able to rely on informed consent as delivering on its ethical responsibility.

Consumers: Consider your own responsibility

Third, once consumers receive reasonable protection from regulators and understandable and transparent guidance from companies, they should recognize their responsibility to read that guidance and accept that informed consent will never offer perfect protection. And they should use common sense: if you shouldn’t do something (for instance, bullying, fraud, or spreading false information), you shouldn’t hijack Facebook’s algorithms to do it or ask your robot to do it on Twitter or on a park bench—irrespective of how far AI laws lag behind your robot’s ability to act. Microsoft’s chatbot Tay, which spewed racist and sexually inappropriate banter after a takeover by trolls, taught us all a lesson. Responsibly, Microsoft CEO Satya Nadella swiftly took Tay offline for investigation.

Educators and researchers: Integrate ethics into education

Fourth, educators and researchers at all levels should integrate ethical responsibility into the study, research, and use of technology. At Stanford University, my course Ethics on the Edge explores ethical decision-making at the boundaries of humanity: the ethics of the unknown and unknowable. But education from early childhood (including education of parents) should integrate ethics, particularly the potential for harming others, whether we’re talking about an artificially intelligent Barbie doll companion or Facebook Messenger Kids. And here again, key stakeholders should consider the need to educate children all over the world who are using these products in situations where society and/or parents might not be equipped to do so.

Initiatives such as the UK government’s Centre for Data Ethics and Innovation use a public-centric and education-focused approach in shaping the debate and engaging academia and research institutions. We must ensure that education on ethical questions reaches all populations, not just the elite.

I may not be ready to befriend a robot, let alone date one. I haven’t yet invited Amazon Echo into my home other than for study. But the time to allocate the ethical responsibility for such developments is before the general public normalizes the technology and suffers any negative consequences. And the “general public” must include those in low- and middle-income countries.

This blog post is part of a series from CGD's Study Group on Technology, Comparative Advantage, and Development Prospects that is exploring issues of jobs and paths of economic development in the context of technological change. You can find the group’s latest research at cgdev.org/future-of-work.

Disclaimer

CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.