Machine Lawyering – Watch those Hyperconnectivity Threats

INTRODUCTION

Many a law practitioner gets excited about the potential gains of machine lawyering and is fascinated by the mythical power of smart, unbiased algorithms believed capable of making better decisions than humans (Groopman, 2019), but hesitates to proceed further for fear of the overwhelming techno-ethical threats posed by each of the technologies involved. That fear arises not because lawyers are unconversant with the technologies, but because they are unfamiliar with the technologies' functionalities and the particular threats attributable to them. This paper attempts to elucidate the related techno-ethical threats, the socio-ethical issues, and their ramifications in order to mitigate that fear; it introduces some methods of analysis for untangling the issues arising from the threats to boost confidence, together with some “advisables” for preventing or minimizing the threats to strengthen that confidence further.

MACHINE LAWYERING & HYPERCONNECTIVITY

Machine lawyering is a summation of algorithms and the hyperconnected technologies (Lee, 2020). The power of the algorithms lies in these technologies: the physical objects and virtual subsystems that the IoT combines, culminating in the so-called hyperconnectivity. In particular, artificial intelligence/machine learning (AI/ML) is utilized for processing information and detecting patterns; the Cloud for deploying, connecting and delivering the necessary information; the Internet of Things (IoT) for capturing data via sensors that collect and "feel" information about their environments; Big Data for managing and accommodating huge volumes of data of varied structures at high velocity; and Blockchain for ensuring data immutability. The hyperconnectivity threat is hence multi-parental, varied, and infinitely cross-infectious, because it originates from the theoretically unlimited number of objects and subsystems that the IoT connects, with security and privacy being the most prominent issues. The environment nevertheless remains technology-driven and information-intensive, and data protection will remain a chronic problem unless and until our indifference to, or disrespect of, ethics (a prominent element in the root cause) diminishes.

HYPERCONNECTIVITY TECHNOLOGIES

AI is an interdisciplinary science with multiple approaches, located in the realm of Computer Science; its aim is to build smart machines (computers) capable of performing tasks that typically require human intelligence (such as "learning" and "problem-solving"). Propelled by advances in machine learning and deep learning, AI is causing a paradigm shift in virtually every sector of the human community. ML is a subset of artificial intelligence that aims to build algorithms and statistical models which can perform a specific task without explicit instructions, relying on patterns and inference instead. Deep learning is a machine learning technique that powers the most human-like artificial intelligence by running inputs through algorithms to create an “artificial neural network” that can learn and make intelligent decisions on its own. Such networks contain a number of hidden layers through which the data is processed, allowing the machine to go "deep" in its learning, making connections and weighting inputs for the best results.
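As a minimal illustration of these hidden layers, the Python/NumPy sketch below runs an input through a one-hidden-layer network with randomly initialized (untrained) weights; in practice the weights would be learned from data, typically by gradient descent:

```python
import numpy as np

def relu(x):
    # Non-linear activation: lets the network model non-linear patterns
    return np.maximum(0, x)

def forward(x, W1, b1, W2, b2):
    # One hidden layer: inputs are weighted, summed, and passed through
    # an activation; "deep" networks simply stack more such layers.
    hidden = relu(W1 @ x + b1)
    return W2 @ hidden + b2

rng = np.random.default_rng(0)
x = rng.normal(size=3)                           # a toy 3-feature input
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)    # layer 1: 3 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)    # layer 2: 4 hidden units -> 1 output
print(forward(x, W1, b1, W2, b2))                # untrained output; training fits W and b to data
```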

Cloud Computing is a paradigm for delivering hosted services over the Internet. These services are broadly categorized as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS), and deployed as public cloud (services sold to anyone on the Internet by third-party cloud service providers), private cloud (a proprietary network or a data center that supplies hosted services to a limited number of people), and hybrid cloud (a combination of public cloud services and an on-premises private cloud, with orchestration and automation between the two). The Cloud is characterized by three features: being “sold on demand”, “elastic” (a user can have as much or as little of a service as they want at any given time), and “fully managed” (the consumer needs nothing but a personal computer and Internet access).

Big Data is the data generated whenever we use apps on devices such as smartphones, produced at a variety of sources (e.g., healthcare equipment at the hospital or doctor's office) together with a digital trail of information. It refers not only to data storage but also to processing. Big Data is characterized by volume, velocity, and variety.

IoT is a system of interrelated ‘things’, each provided with a unique identifier and the ability to transfer data over a network without human-to-human or human-to-computer interaction. The ‘things’ can be people, animals, or mechanical and digital machines. An IoT forms as these technologies converge.
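A minimal sketch of such a ‘thing’ follows: a simulated sensor with its own unique identifier emitting a reading with no human in the loop. The device id, topic name and reading are hypothetical; a real device would publish the payload over a protocol such as MQTT or HTTP.

```python
import json
import time
import uuid

DEVICE_ID = str(uuid.uuid4())  # the unique identifier of this 'thing'

def sensor_reading():
    # Stand-in for a real sensor; no human-to-human or
    # human-to-computer interaction is involved.
    return {
        "device_id": DEVICE_ID,
        "timestamp": time.time(),
        "temperature_c": 21.7,
    }

payload = json.dumps(sensor_reading())
print(f"publish to topic 'home/livingroom/temp': {payload}")
```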

HYPERCONNECTIVITY THREATS

Hyperconnectivity threats arise as technology influences our moral decision making. For example, the IoT erodes people's control over their own lives, and Big Data and the IoT together will make that control harder still: while we grow increasingly transparent, the powerful corporations and government institutions watching us become more opaque. Hyperconnectivity threats also include all the common IT threats, such as data breaches, malicious insiders, and denial-of-service or distributed denial-of-service attacks, which ultimately undermine privacy and security.

Cloud threats are attributable to a multi-tenant environment in which hardware infrastructure is shared among numerous customers. Three are predominant: reduced control and visibility (arising once assets and operations have been transitioned to the Cloud), incomplete data deletion (data is scattered, so remnants could be picked up by attackers), and vendor lock-in (changing vendor/provider entails enormous cost, effort and schedule time).

Big Data threats in the form of surveillance, disclosure, discrimination, and lack of transparency, exemplified by problems of representativeness, sample selection, non-human and bad-faith actors, and the reproduction of social biases through AI and machine learning, culminate in serious privacy and security issues. Other noteworthy threats include 1) exposure of volumes of data in real time by the very cybersecurity tools meant to protect networks and data from attacks; and 2) false confidence induced by conclusions drawn from erroneous data patterns: when very large volumes of data involving many variables are analyzed, bogus patterns or correlations may appear, and the sheer volume of sample data can establish apparent relationships between variables where in fact none exist, misleading decision-makers.
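The second threat is easy to demonstrate. In the minimal Python sketch below, purely random (hypothetical) variables with no real relationships still produce many strong pairwise correlations, simply because so many pairs are compared:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(size=(100, 200))       # 100 samples of 200 unrelated random variables
corr = np.corrcoef(data, rowvar=False)   # 200 x 200 matrix of pairwise correlations

np.fill_diagonal(corr, 0)                # ignore trivial self-correlations
bogus = np.abs(corr) > 0.3               # 'strong' correlations that arose by chance
print(f"apparently related pairs: {bogus.sum() // 2}")  # dozens, despite no real relationship
```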

AI-specific threats are mainly the bias built into algorithms, the history on which AI relies, and the fairness of the outcomes that AI delivers. Deep learning algorithms, for example, are only as smart as the data they are given by the human trainer, or as dexterous as the instructions created by the designer. In reality, the human trainer or designer may favour this or dislike that, or have experienced this or that in the past; this is where bias creeps into the data. Industry experts warn that the problem of bias in ML is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it; they also warn that almost no one is making an effort to identify or correct it, even though algorithmic bias is already pervasive in many industries.
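How such bias is reproduced can be sketched in a few lines. The toy ‘model’ below (the groups, outcomes and counts are entirely hypothetical) merely learns the most frequent historical decision per group; if the historical decisions were biased, the learned rule repeats the bias rather than correcting it:

```python
from collections import Counter

# Hypothetical historical decisions: group_a was mostly approved,
# group_b mostly denied, a bias baked into the training data.
history = [("group_a", "approve")] * 80 + [("group_a", "deny")] * 20 \
        + [("group_b", "approve")] * 40 + [("group_b", "deny")] * 60
counts = Counter(history)

def predict(group):
    # Choose the outcome seen most often for this group in training
    return max(("approve", "deny"), key=lambda outcome: counts[(group, outcome)])

print(predict("group_a"), predict("group_b"))  # approve deny: the bias is learned, not fixed
```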

IoT threats arise mainly from the huge number of devices closely knitted together and cross-infecting one another, and from the concentration of personal data scavenged and manipulated. IoT threats abound. Those raising privacy concerns include, for example, vendors selling users' confidential data; billboards with hidden cameras that track the demographics of passersby who stop to read the advertisement; Internet-connected appliances, such as televisions and kitchen appliances, that "spy on people in their own homes"; and the hacking, via the on-board network, of computer-controlled devices in automobiles such as the brakes, hood and trunk releases, heater, and dashboard. Those causing security worries include weak authentication, default credentials left unchanged, and unencrypted messages sent between devices.

ADVISABLES

To address threats due to Big Data, it is advisable

  • to recognize social biases that may be reproduced through AI and ML
  • to understand the types of personal information deemed sharable and with whom, and whether cyber communication can be transmitted without anyone else viewing it (concerns of lawful interception may also arise)
  • to be aware of anonymous communication, and of personal data being tweaked by analytics engines to produce completely erroneous results

To deal with AI threats, it is advisable

  • to beware that the assumptions about human life made by machine learning systems are narrow, normative and laden with error
  • to correct data biases in machine learning data sets and understand the dangers associated with creating systems that deliver biased results
  • to collect the right types of data and accept that the initial version of models are going to underperform on some aspects
  • to clearly articulate what the bias in the data is
  • to recognize that:
      • “Nearly all data has bias, and if you try to eliminate it, you introduce another type of bias”
      • “When a programmer attempts to manually fix a bias or prejudice, the cognitive bias of that programmer is introduced, and it affects the outcomes”
      • “You can’t eliminate it, so it’s important to understand it and the impact it has on the decision you are making”
      • “Bias in machine learning can be complicated, especially when the algorithm is hidden (as with black boxes) and where biases were introduced at some point along the way”
      • “Good data takes time and money”
      • “Bad data beautifully visualized is still bad data”

It is advisable for IoT users to observe the requirements of 

  • data security – to ensure at design time that data collection, storage and processing are secure, and to adopt a "defence in depth" approach, encrypting data at each stage (a minimal sketch follows this list)
  • data consent – to establish freedom of choice as to what data to share with IoT providers and the right to be informed of what data gets exposed
  • data minimization – to allow IoT providers to collect only the data needed and to retain the collected information only for a specified period of time
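On the first point, the following is a minimal sketch of encrypting a device message before it leaves the sensor, using the third-party Python `cryptography` package; the payload is hypothetical, and key management (the hard part in practice) is glossed over here:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Minimal sketch of encrypting device data in transit. In a real
# deployment the key would be provisioned securely per device,
# not generated ad hoc as it is here.
key = Fernet.generate_key()
f = Fernet(key)

token = f.encrypt(b'{"device_id": "sensor-01", "temperature_c": 21.7}')
print(token)               # ciphertext: unreadable if intercepted
print(f.decrypt(token))    # only a holder of the key recovers the payload
```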

SOCIO-ETHICAL ISSUES

Situations in which socio-ethical issues arise from the use of IoT technologies are becoming increasingly common. Three are cited in this context.

Case 1 – Driverless cars: Tens of thousands of people die on the road every year; autonomous cars exacerbate the situation, for the choice about who is going to be harmed and who is not is ‘delegated’ to the robot driver. This gives rise to legal and ethical issues. In the event of an accident, at stake are the rights and responsibilities of the robot driver; the human pedestrians, passengers, programmers, and owners of the damaged properties; the manufacturers of the autonomous car and the robot driver; and the government (in particular, the transport agency).

Case 2 – Autonomous weapons: The situation is similar to that of the driverless car: the decision as to who should live and who should die is delegated to a machine. A killer robot programmed to “kill anything moving” would kill a soldier dressed as a civilian, or a civilian dressed in uniform, as long as they were moving; the “wrong guy” is killed. Who should be held responsible for the killer robots – the politicians, the military, the software company, or the engineers and scientists who design and implement the autonomous arsenal?

Case 3 – Risk assessment in criminal justice: A set of risk assessment algorithms in the criminal justice system judges an individual’s probability of re-committing a crime after release from prison. Owing to a built-in systematic bias, it underestimates the threat from white defendants and overestimates the threat from black defendants, so that white defendants are given more lenient sentences and black defendants harsher ones. The company that produced the algorithm, when asked, admitted that the system makes mistakes about half the time and that it takes in some 137 factors, but maintained that race is not one of them (ProPublica, 2016). Is AI to be blamed for the mistakes made by the criminal justice software?

ETHICAL ANALYSIS

Scenario

Chosen for this purpose is a spamming issue in a case of soliciting business: two lawyers, Tom and Dick, posted an email message to thousands of newsgroups offering their services. The spam thereby created outraged a large number of newsgroup subscribers, and the email flames returned by the angry subscribers overloaded the lawyers’ service system. The ISP was forced, by public pressure, to close the lawyers’ account. The duo were unapologetic; they claimed that the 25,000 customer enquiries received for one night’s work and the $100,000 worth of business well exceeded a $20 investment. They even threatened to sue their ISP.

This means war between Tom & Dick and the other subscribers, boiling down to the question: was the duo ethical? This is contentious.

  1. The duo, while having a right to send email to newsgroups, caused traffic congestion. Do they have a case to answer?
  2. The ISP closed the duo’s account. Is the action justified?
  3. The duo threatened to sue the ISP. Can they do that?
  4. Tom and Dick considered the action worthwhile: spending $20 for a return of 25,000 enquiries and $100,000 worth of business. Aren’t they egoistic and selfish?

There are three main stakeholders, plus possibly social media users at large. Taking the view of welfare (for Tom & Dick, on the ground of utility or consequence), their spamming act seems ethical; taking the view of autonomy (of fellow subscribers, on the grounds of duty and virtue), the act is unethical. There is no law yet to deal with spamming, in particular the flood of responses to the duo’s ad, but, appearing unapologetic, Tom & Dick have a question to answer from the point of view of the Golden Rule, virtue ethics or social contract theory. A rough sketch of the identified stakeholders’ concerns is shown in the Ethical Matrix (Mepham et al., 2006; Lee, 2017) in Table 1 below.

Table 1 – Ethical matrix – a First-cut Result

| Respect for Stakeholders | Well-being (utilitarian/consequential) | Autonomy (duty & rights concerns) | Justice/Fairness |
|---|---|---|---|
| Tom & Dick | Income; promotion of their service | Gainful use of newsgroups | Lawfulness; sustained convenience; equal right of subscribers |
| Other newsgroup subscribers | Communication | Right to use the facility provided by the newsgroup | |
| ISP | Subscriber base and revenue | Right/entitlement of all subscribers | |

Interpretation & closing

  1. They did it because they believed other lawyers would be doing the same thing; they argued on relativistic grounds. But relativism does not tell us whether their action is right or wrong. They might not have foreseen the traffic congestion problem, yet the result was undesirable and several ethical principles are violated: by consequentialism, the consequence is bad.
  2. The ISP may have a question on ‘contract’ to answer, but on deontic grounds the ISP has a duty of care to all clients. On balance of that duty, the ISP’s action is supported by utilitarianism: the inconvenience suffered by other subscribers outweighs the duo’s gain.
  3. Threatening to sue the ISP is an abuse of their privileged knowledge, a disrespect of professionalism, and a violation of the code of conduct.
  4. Spending $20 for a return of 25,000 enquiries and a possible income of $100,000 is well worthwhile for Tom and Dick, and utilitarianism supports that action. But at the same time the action breaks all the common ethical principles: the consequentialist principle, because it caused the system to shut down, an inconvenience to other users; the categorical imperative, because they did not respect the other subscribers and the ISP, treating them as a means rather than an end; and the Golden Rule, because they certainly would not wish to be exploited by others themselves.

Assessing the ethical aspects certainly gives us a feel for the ethical implications. To get an idea of the quality of the decision, the following step is to check it against the Hexa-dimension Metric, a six-factor measure of the quality of a decision (Lee, 2021). The duo’s decision does not involve how the technology is used, even though social media is very much an application of information technology; nor does it affect the environment in terms of air or noise pollution (spamming may consume a little electricity, but ecology is not an issue). However, the responses attracted by the duo’s ad did amount to a deprivation of the other subscribers’ right to service and a reduction of the ISP’s revenue. Overall, the action seems financially viable in view of a good return (25,000 enquiries and a possible income of $100,000) for a meager investment ($20), and free of the law (because there is no law yet to deal with the issue), but it is not acceptable ethically and is undesirable socially. It is advisable for Tom and Dick to refrain from such an act, lest the damage to their professional reputation inflicted by knowingly neglecting the ethical and social dimensions be beyond any means of repair. Table 2 provides a glimpse.

Table 2 – Hexa-dimension Metric

| The measures | Verdicts | Check |
|---|---|---|
| Financial viability | Spending $20 for a return of 25,000 enquiries and $100,000 worth of business deemed worthwhile | ✓ |
| Technical effectiveness | Not a technical issue | n.a. |
| Legal validity | No law yet to deal with the issue | ✓ |
| Ethical acceptability | Consequentialism tolerates on one hand and objects on the other; the act disregards the Golden Rule and virtue ethics | × |
| Social desirability | Questionable in view of the categorical imperative and professionalism re rights and duty | × |
| Ecological sustainability | Not an ecology issue; air pollution and climate are non-issues | n.a. |

CONCLUSION

Machine lawyering should be encouraged; it is not a fad but a new form of legal practice with a steady future. Hyperconnectivity threats, ever-growing in number and kind (witness the surge of pandemic-themed security threats during COVID-19), are here to stay. Users and providers of machine lawyering are well advised to be aware of the techno-ethical threats posed by these technologies, and to recognize those threats and the socio-ethical issues they raise. These issues are complex and intertwined, however, so attentive analysis using tools such as the Ethical Matrix and the Hexa-dimension Metric is advisable.

REFERENCES

  1. Groopman, Jessica (2019). “AI, blockchain and IoT convergence improves daily applications”. https://internetofthingsagenda.techtarget.com/tip/AI-blockchain-and-IoT-convergence-improves-daily-applications? (accessed 13 November 2019)
  2. Lee, W.W. (2017). “Ethical Computing continues from Problem to Solution”. In Khosrow-Pour, M. (Ed.), Encyclopedia of Information Science and Technology (4th ed., Ch. 423, pp. 4884-4897) [Reprinted in Advanced Methodologies and Technologies in System Security, Information Privacy, and Forensics (Ch. 17, pp 206-221) https://www.igi-global.com/chapter/ethical-computing-continues-from-problem-to-solution/213652]
  3. Lee, W.W. (2020). “Machine lawyering is the summation of algorithm plus data-driven DX technologies” CFRED CUHK Law https://www.legalanalytics.law.cuhk.edu.hk/post/machine-lawyering-is-the-summation-of-algorithm-plus-data-driven-dx-technologies, 20 January 2020
  4. Lee, W.W. (2021). “Hexa-Dimension Metric, Ethical Matrix, Cybersecurity”. In Khosrow-Pour, M. (Ed.), Encyclopedia of Organization Knowledge, Administration, and Technologies (in press)
  5. Mepham, B., Kaiser, M., Thorstensen, E., Tomkins, S., & Millar, K. (2006). Ethical Matrix Manual. The Hague: LEI http://www.ethicaltools.info/content/ ET2 Manual EM (Binnenwerk 45p).pdf
  6. ProPublica (2016). “Machine Bias”. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (accessed 23 April 2018)

Founder & President, The Computer Ethics Society