Midway assessment - AI and Legal Risk: Can the Past and the Future Teach Us About the Present?

External PhD candidate Stuart Weinstein, Aston University, Birmingham, UK, will present his PhD project: "AI and Legal Risk: Can the Past and the Future Teach Us About the Present?".

Commentator

Professor Malcolm Langford, Department of Public and International Law, University of Oslo

Leader of the assessment

Professor John Asland, Department of Private Law, University of Oslo

Supervisor

Professor Tobias Mahler, Department of Private Law, University of Oslo

Co-supervisors

  • Dr James Brown, Reader in Law, Aston Law School, College of Business and Social Sciences, Aston University, Birmingham, UK
  • Jonathan Fortnam, Dean, Aston Law School, College of Business and Social Sciences, Aston University, Birmingham, UK

For the outline and draft text, please contact Stuart Weinstein.

Abstract

In examining lawyers’ perceptions of the use of artificial intelligence (AI) in their legal work, there is a gap between the hype and the reality. While some lawyers see the potential of AI- and machine-learning (ML)-driven legal tech innovation to transform aspects of legal practice, the idea that robot lawyers will replace human lawyers any time soon is exaggerated. Although the attention of the legal tech sector is now firmly focused on the promise that AI and ML algorithms offer when evaluating data, the building of legal expert systems, an area that was at the centre of AI and law research in the 1980s, deserves reconsideration today as a means of assisting organisations in the evaluation and treatment of legal risk. Legal expert systems are computer programs that combine a legal knowledge base with an inference engine and, through an explainable interface, provide specific solutions to legal questions in response to the input of a set of legal issues and facts. The ISO 31022:2020 guidelines for the management of legal risk define legal risk as the effect of uncertainty on objectives related to legal, regulatory and contractual matters, and from non-contractual rights and obligations. This is not to suggest that stepping into a DeLorean time machine to revive 1980s-style expert systems will, on its own, be sufficient to meet today’s need for increasingly complex legal risk management tools. We will also need to go Back to the Future and imagine what more sophisticated AI- and ML-algorithm-driven tools might emerge in the coming years to help solve legal risk management questions. As we contemplate the potential of these powerful tools to transform the management of legal risk, a doctrinal consideration of the interrelatedness of responsible AI, the ethical impact of its use in legal practice, and the broader societal issues all of this raises calls upon us to evaluate the legal norms that should govern their usage.
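To make the architecture mentioned in the abstract concrete, the following is a minimal sketch, in Python, of the structure attributed to legal expert systems: a legal knowledge base of rules, a forward-chaining inference engine, and an explanation trace standing in for the explainable interface. The rule names, fact labels and risk categories are purely hypothetical illustrations and are not drawn from the thesis or from any legal standard.

    # Minimal sketch of a rule-based legal expert system: knowledge base,
    # forward-chaining inference engine, and an explanation trace.
    # All rules and fact names below are hypothetical examples, not legal advice.
    from dataclasses import dataclass

    @dataclass
    class Rule:
        conditions: frozenset  # facts that must all hold for the rule to fire
        conclusion: str        # fact inferred when the conditions are met
        source: str            # reference cited in the explanation

    # Hypothetical knowledge base (illustrative only)
    KNOWLEDGE_BASE = [
        Rule(frozenset({"personal_data_processed", "no_lawful_basis"}),
             "data_protection_breach_risk", "hypothetical rule R1"),
        Rule(frozenset({"data_protection_breach_risk", "high_volume_processing"}),
             "elevated_legal_risk", "hypothetical rule R2"),
    ]

    def infer(facts: set[str]) -> tuple[set[str], list[str]]:
        """Forward-chain over the knowledge base, returning all derived facts
        plus a human-readable explanation of each inference step."""
        facts = set(facts)
        explanation = []
        changed = True
        while changed:
            changed = False
            for rule in KNOWLEDGE_BASE:
                if rule.conditions <= facts and rule.conclusion not in facts:
                    facts.add(rule.conclusion)
                    explanation.append(
                        f"{rule.conclusion} follows from "
                        f"{sorted(rule.conditions)} ({rule.source})")
                    changed = True
        return facts, explanation

    if __name__ == "__main__":
        derived, trace = infer({"personal_data_processed", "no_lawful_basis",
                                "high_volume_processing"})
        for step in trace:
            print(step)

The point of the sketch is only the separation of concerns the abstract describes: the rules (the legal knowledge base) are data, the inference loop is generic, and every conclusion carries an explanation of which facts and which rule produced it.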

Guidelines

Guidelines for midway assessments

Contact

Senior Executive Officer Mona Østvang Ådum
