With effect from 3 March 2025, the Administrative Review Tribunal (Tribunal) updated its practice directions to address its changing needs. The Administrative Review Tribunal (Expert Evidence) Practice Direction 2026 (Practice Direction) was updated to require experts to declare the use of Generative AI in the preparation of their reports.

This article considers how the updated Practice Direction will impact the gathering of expert evidence in Tribunal cases.

Defining Generative AI

Generative AI is defined in the Practice Direction at paragraph 3.5A as a system of artificial intelligence capable of generating ‘content’, such as text, images or sound, in response to prompts. It includes both open-source and closed-source applications.

The Practice Direction identifies the following open-source applications: ChatGPT, Gemini, Microsoft Copilot, Perplexity, Claude, Grok and DeepSeek AI.

Declaring the use of Generative AI

Paragraph 3.5B of the Practice Direction states that an expert preparing a report must state in the report whether it includes content generated by using Generative AI (AI content).

Paragraph 3.5C states that when an expert states that the report includes AI content, the expert must:

  • Clearly identify the AI content and the application(s) used to generate AI content; and
  • Certify that they personally checked all the AI content (including the research and other material cited in support of the content) and are satisfied that it is all accurate and reliable.

When providing expert evidence, the expert is also required to sign and date an ‘Expert Report Cover Sheet’. The Cover Sheet includes an updated checklist by which the expert confirms that the information required by the Practice Direction has been included in the report.

Practical considerations for practitioners

Practitioners must now ensure that any use of Generative AI is declared by their experts when filing reports, and that the Expert Report Cover Sheet has been signed, dated and filed with the report.

Consideration should also be given to whether the proportion of AI content in an expert’s report will affect the weight the Tribunal gives to the expert’s opinion. As provided in paragraph 3.1 of the Practice Direction, experts must be able to provide reasons for any opinions expressed in their reports, as well as the facts and assumptions that inform their reports and the sources for those facts and assumptions. Opinions containing AI content that the expert cannot substantiate in this way risk being non-compliant with the expert’s duty to the Tribunal described in paragraphs 2.1 to 2.3, or with the declaration requirement in paragraph 3.5 of the Practice Direction.

Practitioners should ensure they understand the nature and degree of AI content appearing in an expert report before seeking to rely on it in a Tribunal proceeding, noting practitioners must use their best endeavours to assist the Tribunal to make the correct or preferable decision in relation to the proceeding and achieve the Tribunal’s statutory objective (Administrative Review Tribunal Act 2024, subsection 56(1)).

Practitioners should also ensure that experts are able to confirm that all citations and sources referred to in their reports are reliable.

Where to now?

As technological advances in Generative AI continue, courts and tribunals will be creating or updating guidelines and directions to ensure the responsible use of Generative AI by experts and practitioners. It will be increasingly important for practitioners to take steps to ensure that their experts are declaring any use of Generative AI in their reports, and to consider the risks of its use in relation to their client’s case.

Further information or assistance regarding the issues raised in this article is available from the authors, Jane Thomson (Partner) and Maddison Dantu-Hann (Senior Associate), or your usual contact at Moray & Agnew.