AI and Ethics
Special series: The Perils of Progress. Author: Nik Dawson
The proliferation of autonomous AI systems introduces a host of ethical considerations that affect the creation, deployment, and use of AI in society. Ultimately, AI presents ethical challenges that ask us to consider whether, when, and how machines should make decisions about human lives.
The implicit and explicit assumptions made by AI systems pose new challenges to existing ethical frameworks, social norms, and values. Chief among these are which values and interests AI systems should reflect, whose values should guide their decisions, and how machines can recognise those values and interests when making decisions.
What’s happened so far?
To date, ethical codes and standards for AI have largely emerged from industry. Acting as a form of ‘soft governance’, industry groups and research organisations such as the Institute of Electrical and Electronics Engineers (IEEE), the Association for Computing Machinery (ACM), and the Future of Life Institute have all produced versions of ethical codes and standards for AI. Industry partnerships and business units have also formed to help navigate ethical concerns.
These include the Partnership on AI, which consists of many of the world’s largest technology firms, and the recently announced DeepMind Ethics & Society unit, part of what is arguably the leading AI research & development firm. These are positive steps and reflect perspectives from prominent parts of the industry.
Governments and policymakers, however, have largely been absent from these nascent discussions. As a result, industry has sought to fill the void through self-organisation.
What are the main issues?
While industry-led ethical codes of practice offer useful platforms for dialogue, they have key limitations in practice. Central among these is the assumption that industry will voluntarily comply with these standards through individual responsibility. This assumption ignores potential misalignment of incentives and conflicts of interest. Private enterprises are incentivised to develop and deploy AI applications to earn economic returns. If these private incentives are inconsistent with the voluntary ethical codes guiding the industry, then society could be placed at risk through unethical applications of AI.
Take the example of AI applied to personal loans and credit in the financial sector. An AI system used to assess personal loan applications tries to predict the likelihood of the applicant fulfilling their repayment commitments. It does so by comparing the details of the individual applicant (age, gender, location, etc.) with those of millions of other individuals in order to find patterns.
Ethical issues arise, however, when there’s an overreliance on historical data to identify repayment patterns. Such overreliance neglects the significance of social prejudices embedded in that data and potentially exacerbates existing inequalities. As a result, social demographics such as gender, race, and location continue to influence the likelihood of accessing credit, even when those attributes say little about the individual applicant, which perpetuates inequalities, as the sketch below illustrates. While voluntary ethical codes may discourage private enterprises from placing strong weightings on particular social demographics in their uses of AI, if doing so affects their economic returns then there’s potential for unethical practices.
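To make the mechanism concrete, here is a minimal sketch in Python (using scikit-learn; the data, group labels, and coefficients are entirely synthetic and invented for illustration) of how a model trained on historically skewed approval decisions reproduces that skew for applicants with identical repayment ability:

```python
# Hypothetical sketch: how historical lending data can encode demographic bias.
# All data, group labels, and coefficients below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical" applicants: latent repayment ability is the true
# driver, but past approvals were also skewed against group B.
group = rng.integers(0, 2, n)       # 0 = group A, 1 = group B (protected attribute)
ability = rng.normal(0, 1, n)       # repayment ability, same distribution in both groups
# Historical approval labels reflect both ability AND past prejudice:
historical_label = (ability - 0.8 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([ability, group])
model = LogisticRegression().fit(X, historical_label)

# Two applicants with identical repayment ability but different group membership:
applicant_a = [[1.0, 0]]
applicant_b = [[1.0, 1]]
print("P(approve | group A):", model.predict_proba(applicant_a)[0, 1])
print("P(approve | group B):", model.predict_proba(applicant_b)[0, 1])
# The model reproduces the historical skew: group B receives a lower score
# despite identical ability, illustrating the overreliance on historical data.
```

Note that simply dropping the protected attribute from the inputs rarely resolves the problem in practice, because correlated features such as location can act as proxies for it.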
Further to the assumption of compliance via individual responsibility, current ethical codes provide little guidance to the individual AI designers faced with such ethical considerations.
They also provide limited insight into assessing ethical implications, and wield little influence over the design and use of AI in practice. This is not to dismiss industry-led ethical codes and standards for AI in their entirety. They set important moral precedents, particularly at a time when the knowledge gap between the AI industry and policymakers is significant. (In many respects, they’ve been developed precisely because of that gap.)
Policymakers, however, are representatives of the public and act on behalf of their citizens’ interests.
Therefore, policymakers have a responsibility to play an active role in the formation, implementation, and oversight of ethical principles for AI. At the very least, policymakers have the means to help facilitate accurate, inclusive, and diverse conversations about the future of AI.