Artificial intelligence (AI), the capability of machines to imitate human behaviour, has transformed our lives for the better. Through AI, machines can analyse images, comprehend speech, interact in natural ways and make predictions from data. Recent advances in AI have been fuelled by improvements in machine learning techniques that identify patterns in large data sets and use those patterns to make predictions or recommendations. We are witnessing rapid development and broad application of AI across industries around the world, such as financial services, healthcare, legal, manufacturing and automotive, and we are still only at the beginning of the digital transformation journey.
Whilst it is exciting to be entering this new era, AI also brings new ethical and legal challenges. For example, AI requires “big data”, often personal data collected at speed and generated on a large scale. The processes of data collection, use and transfer have challenged data privacy frameworks around the world. There are also broader ethical questions: whether machines may replace humans and eliminate jobs, how we can keep machines under meaningful human control, and how humans should use AI in a moral and ethical way.
In recent years, governmental and corporate organisations have moved closer to a consensus on the principles that should govern the development of ethical and responsible AI. Drawing on guidance from around the world, including Australia’s AI Ethics Framework, the European Commission’s Ethics Guidelines for Trustworthy AI, the Ethical Accountability Framework for Hong Kong, the Beijing AI Principles and a multinational tech company’s AI principles framework, we can see broad consensus in the following areas:
Fairness, Inclusiveness, Non-Discrimination
AI systems should treat everyone fairly and impartially, and should not affect similarly situated groups of people in different ways. AI must not limit opportunities for anyone or be programmed to make biased or discriminatory decisions. AI should benefit everyone, addressing a broad range of human needs and experiences inclusively.
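One way to make “affecting similarly situated groups in different ways” concrete is to measure how often a system’s decisions favour each group. The sketch below checks demographic parity, one common (and debated) statistical notion of fairness; all names and data are illustrative, not taken from any real system or framework cited above.

```python
# Hypothetical fairness audit: compare a model's positive-decision
# rate (e.g. loan approvals) across demographic groups.

def selection_rate(decisions):
    """Fraction of positive decisions (1 = approved, 0 = declined)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rates across groups.

    A gap near 0 means each group is approved at a similar rate;
    a large gap flags a disparity worth a closer audit.
    """
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Illustrative outcomes for two groups of applicants.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}
print(f"demographic parity gap: {demographic_parity_gap(outcomes):.3f}")
```

A single metric like this is only a starting point; different fairness definitions can conflict, so which one applies depends on context and on the governance frameworks listed above.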
Reliability and Safety, Human Oversight
It is crucial to ensure AI technology is reliable and safe. The complexity of AI technologies has fuelled fears that AI systems may cause harm in unforeseen circumstances, or that they can be manipulated to act in harmful ways. Trust will depend on whether AI systems can operate reliably, safely and consistently, even under unexpected conditions, where consequential decisions are involved.
Privacy and Security
Since data is the foundation of AI, it is important that personal data be stored securely, used safely and for legitimate purposes, and handled in compliance with applicable privacy law on the collection, use and storage of data. There should be clear policies and transparency about data collection and use, and good controls so that people can make choices about how their data is used.
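One basic technical control behind “stored securely” is pseudonymisation: replacing a direct identifier with a keyed, irreversible token before storage, so analysts work with stable tokens rather than raw personal data. The field names and salt handling below are illustrative assumptions only; real compliance also requires consent, retention limits and access controls.

```python
# Hypothetical sketch of pseudonymising an identifier with a keyed
# hash (HMAC-SHA-256) before a record is stored.
import hashlib
import hmac

# Assumption: in practice this key lives in a secrets manager,
# not in source code.
SECRET_KEY = b"store-this-key-in-a-secrets-manager"

def pseudonymise(identifier: str) -> str:
    """Replace an identifier with a keyed, irreversible token."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"email": "alice@example.com", "purchase": "book"}
stored = {"user_token": pseudonymise(record["email"]),
          "purchase": record["purchase"]}
# The stored record keeps a stable token for analysis but no
# longer contains the raw email address.
```

The same input always maps to the same token, so records can still be linked for analysis, while recovering the original identifier requires the secret key.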
Transparency
There should be transparency about how AI systems are built and how they function. As AI systems are increasingly involved in decisions that influence people’s lives, clear explanations of how these systems operate, together with mechanisms for accountability, are essential.
Accountability
People who design and deploy AI systems must be accountable for how their systems operate. We need to ensure that AI systems remain accountable to people, and that the people who design them remain accountable to everyone else.
Beyond the principles above, there are also considerations of wellbeing and of net benefit to society and the environment. If AI is to be here for good, we can be optimistic that, as the world further refines these governance principles, we will all benefit from its use in an ethical and responsible way.