While the emergence of artificial intelligence heralds a new era of technological development, it also brings with it inherent privacy risks in relation to personal data.
An artificial intelligence (“AI”) system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.
The social and economic value of AI is enormous. AI can drive innovation and improve efficiency in a wide range of fields, from manufacturing, transportation and finance to healthcare and education. For businesses, according to a survey published by Microsoft and IDC in May 2019 (https://news.microsoft.com/en-hk/2019/05/08/microsoft-idc-study-artifici...), 40% of Hong Kong organisations had embarked on their AI journeys. The development and use of AI, however, remains largely unregulated worldwide and unguided in the data protection field. This is so notwithstanding that AI systems, when applied to human beings, usually involve the profiling of individuals and the making of automated decisions about them, thereby posing risks to privacy and other human rights. In particular, the AI boom carries with it personal data privacy risks such as (a) excessive collection and retention of personal data, (b) processing of personal data without sufficient transparency, explainability or informed consent, (c) unauthorised re-identification of individuals, and (d) unfair discrimination against certain social groups.
In an attempt to address such risks, the Office of the Privacy Commissioner for Personal Data (“PCPD”) has been working closely with our counterparts in other jurisdictions in search of remedial measures. Through these concerted efforts, the Global Privacy Assembly (“GPA”) adopted in October a resolution sponsored by the PCPD to encourage greater accountability in the development and use of AI.
The GPA, formerly known as the International Conference of Data Protection and Privacy Commissioners (“ICDPPC”), provides an international forum for over 130 data protection authorities from around the globe to discuss and exchange views on privacy issues and the latest international developments.
Resolution on Accountability in the Development and Use of AI
To tackle the potential impact on individuals’ rights that the use of AI may bring, it is important to ensure accountability in the development and use of AI. The Working Group on Ethics and Data Protection in Artificial Intelligence set up under the GPA, of which I am a co-chair, conducted a survey among all GPA members earlier this year with a view to identifying practicable measures for implementing such accountability. Based on the results of the survey, the PCPD, together with some other GPA members, proposed the Resolution on Accountability in the Development and Use of AI (“the Resolution”). The Resolution calls for organisations that develop or use AI systems, GPA members, governments and other stakeholders worldwide to implement accountability measures proportionate to the risks of interference with human rights.
I will venture to recapitulate the recommended accountability measures below.
Organisations that develop or use AI systems are urged to consider implementing accountability measures such as:
- assessing the potential impact on human rights (including privacy rights) before the development and/or use of AI;
- testing the robustness, reliability, accuracy and data security of AI systems before putting them into use; and
- disclosing the results of privacy and human rights impact assessments, the fact that AI is being used, the data involved and the logic of the AI systems, so as to enhance transparency.
Upon the implementation of AI systems, organisations are recommended to continuously monitor and evaluate, through human oversight, the performance and impact of AI, while identifying an accountable human actor (a) with whom concerns about automated decisions can be raised and rights can be exercised, and (b) who can trigger an evaluation of the decision process and human intervention. To this end, explanations understandable to humans for the automated decisions made by AI should be readily provided upon request.
To enhance resilience and reduce vulnerability, the incorporation of whistleblowing / reporting mechanisms for non-compliance or significant risks in the use of AI is of the essence. Further, an inclusive approach of engaging in multi-stakeholder discussions (including with non-governmental organisations, public authorities and academia) to identify and address the wider socio-economic impact of AI and to ensure algorithmic vigilance will help instil trust and confidence in the use of AI systems. The Resolution also advocates preparedness to demonstrate accountability to data protection authorities upon request.
The Resolution represents an ethical framework, from cradle to grave, to ensure the responsible use of AI throughout its life cycle. It is premised on the principle that AI should serve humans, not the other way round. Adoption of these accountability measures institutionalises the ethical use of AI through traceable, auditable and operable technical and organisational processes. This is ethics by design, through which we aim to safeguard, and fortify, the protection of personal data privacy notwithstanding the inroads into human lives brought about by AI.
I would appeal to all to adopt and implement the aforesaid accountability measures in the development and use of AI.
The full contents of the Resolution can be found on the PCPD’s website at https://www.pcpd.org.hk/english/media/media_statements/files/gpa_resolut....
– By Ada Chung Lai-ling, Barrister, Privacy Commissioner for Personal Data, Hong Kong