
Actuaries tackle the ethics of AI and data science

The Institute and Faculty of Actuaries has produced guidance for its members on “ethical or professional issues when carrying out data-science related work”. Rebecca Keating and Laura Wright, authors of the Professional Negligence chapter in The Law of Artificial Intelligence, help me explain why, although the guidance is non-binding, actuaries should consider it carefully.

The Institute’s president, John Taylor, has explained that the guidance is required because of the “great potential impact” of data science and artificial intelligence. The Guide gives examples of both the potential benefits and the potential harms of the use of data science. Harms include “financial loss or disadvantage”, “damage to reputation, privacy or psychological wellbeing” and “exclusion from benefits or services”.

Similar concerns have been raised by many bodies, including in the European Commission’s proposal for a regulation laying down harmonised rules on artificial intelligence, which addresses, among many risks, “algorithmic discrimination”, identifies “high-risk” AI systems, such as those used to determine access to education, employment, public services and credit, and prohibits certain uses of AI likely to cause physical or psychological harm.

The Guide’s recommendations for actuaries cover general and specific requirements: from “considering the potential impact that models have on decisions” and “seeking to act in the public interest” to “linking in privacy and ethics into work, as well as legal and regulatory requirements”.

The Guide includes a helpful “implementation checklist”, which summarises practices that those adopting AI may wish to use. It would be wise to refer to the checklist on an ongoing basis, not only at the outset of a project, to encourage adherence to the standards throughout the life of the project.

The decision in Nederlandse Reassurantie Groep Holding NV v Bacon & Woodrow Holding [1997] LRLR 678, which deals with a claim of negligence against an actuary, reflects the general principle that a professional will be judged by reference to the standard of work reasonably expected of that professional. Although the Guide is “non-binding” on the Institute’s members, in legal disputes concerning professional negligence, guidance from professional bodies (whether binding or not) may influence a court’s assessment of whether a professional has exercised reasonable skill and care.

Finally, it is important to note that the Guide is intended to “complement existing ethical and professional guidance” and should therefore be read alongside that broader guidance.

Matt Hervey is Head of Artificial Intelligence (UK) at Gowling WLG (UK) and advises on Artificial Intelligence (AI) and IP across all sectors, including automotive, life sciences, finance and retail. Find out more about Matt Hervey on the Gowling WLG website. He is co-editor of The Law of Artificial Intelligence (Sweet & Maxwell).
