Powered by Michael Lee Chambers
Tribunals International Protocols

Practical Protocols
for Modern Arbitration

TIPS is a small, practitioner-led council of senior arbitrators and emerging practitioners producing clear, immediately usable protocols on the procedural and ethical challenges that matter most in international arbitration today.

Clearer Process · Fewer Disputes · Stronger Confidence
TIPS — Protocol Series

Protocols That Strengthen Arbitral Practice

Each TIPS Protocol addresses a specific procedural or ethical issue. Designed to be adopted by agreement, incorporated into procedural orders, or used as field guidance.


Current & Upcoming Protocols
TIPS #01 — AI Use in Arbitration · Available ↓
TIPS #02 — Topic Under Review · Coming Soon
TIPS #03 — To Be Determined · Forthcoming
Practitioner-Led · Open Access · arbitrationtips.com
About TIPS

Protocols That Strengthen Arbitral Practice.

TIPS — Tribunals International Protocols — is a small, independent, practitioner-led council focused on producing practical protocols on emerging and recurring procedural and ethical issues in international arbitration.

The objective is pragmatic: clearer process, fewer procedural disputes, and stronger confidence in arbitration. Each TIPS Protocol provides immediate, usable guidance — not aspirational frameworks that gather dust.

TIPS Protocols are shaped through a structured but flexible process. Founding Circle members contribute by reviewing drafts, voting on proposed directions, flagging issues, and sense-checking whether a protocol would genuinely help users and tribunals in practice.

All TIPS Protocols are published as open-access instruments — free to download, adopt, and adapt. TIPS is not affiliated with any arbitral institution, law firm, or commercial entity.

Practitioner-Led

TIPS is shaped by practitioners, for practitioners. Every protocol is designed by people who use arbitration — not by institutions managing it.

Practical, Not Theoretical

Each protocol must answer a real question that arises in practice. If it would not help a user or tribunal in the room, it does not meet the TIPS standard.

Open Access

All TIPS Protocols are published freely. There are no membership fees, paywalls, or institutional affiliations. The work belongs to the profession.

Independent

TIPS is not affiliated with any arbitral institution, law firm, or commercial entity. Its only interest is the quality of arbitral practice.

Deliberately Small

A small, carefully selected Founding Circle produces sharper, more coherent protocols than large committees. Quality over volume.

Responsive

Protocol topics are driven by emerging practitioner need. TIPS moves when the profession needs it to — not on an institutional calendar.

Featured Protocol

TIPS Protocol #01
The Use of AI in Arbitration

The first TIPS Protocol provides a concise, immediately implementable governance framework for all arbitration participants navigating AI tools — covering permitted use, prohibitions, targeted disclosure triggers, confidentiality classification, and verification duties.

Designed to be adopted by party agreement, incorporated into a procedural order, or used as field guidance. Open access — free to download and adopt.

Read Protocol #01 →
Scope: Counsel, tribunals, experts & witnesses
Status: In Review
Version: Full Draft v.0.3.3
Access: Open · Free Download
Key Framework Elements: Human Accountability · Traffic-Light Confidentiality · Disclosure Triggers · Verification Duties · Model Procedural Order
Protocol Status:
Forthcoming — Topic open
Open — Accepting committee applications
In Drafting — Committee at work
In Review — Under peer review
Active — Published
TIPS Protocol #01

AI Use in Arbitration v.0.3.3

1. Preface

(a) Artificial intelligence tools are now routinely used in international arbitration—by parties and counsel to draft and analyze submissions, by experts to structure and test opinions, and increasingly by tribunals and tribunal support teams to manage complex records and process information. As these tools become a professional norm, arbitration users can benefit from a shared and practical set of guardrails that preserves the core attributes of arbitration: procedural fairness, integrity of the record, and confidentiality.

(b) This Protocol (the "Protocol") provides concise, practical guidance on the responsible use of AI tools in arbitration proceedings. It is designed to be adopted as a stand-alone framework, incorporated into a procedural order, or adapted by agreement of the parties and direction of the tribunal.

(c) The Protocol proceeds from a simple premise: AI tools may support arbitration, but they do not change the allocation of responsibility. Persons who use AI remain accountable for what is presented in the proceeding and for how AI is used. The tribunal's decisional responsibility remains personal, non-delegable, and grounded in the record.

2. Purpose

The purpose of this Protocol is to:

  1. encourage the beneficial use of AI tools where appropriate;
  2. mitigate material risks associated with AI use, including inaccuracies, fabricated authorities, confidentiality breaches, and due-process concerns;
  3. promote procedural clarity through targeted transparency where AI use may affect the record, fairness, or confidentiality; and
  4. preserve confidence in the enforceability and legitimacy of arbitral outcomes.

3. Core Principles

This Protocol is guided by the following principles, to be applied proportionately and on a case-by-case basis:

  1. Responsibility. Any person who uses an AI tool remains fully responsible for the content and accuracy of their submissions, evidence, and communications, and for compliance with applicable duties of candor, confidentiality, and professional conduct.
  2. Human Oversight / Non-Delegation. AI tools may assist or execute tasks; they do not replace independent judgment. The tribunal's decisional responsibility is non-delegable. Where AI agents are used to execute tasks, a 'Human-in-the-Loop' verification process is required to ensure the integrity of the action taken.
  3. Integrity of the Record. AI tools must not be used to fabricate or distort evidence, authorities, or factual assertions. Citations and AI-assisted factual assertions shall be verified before reliance, and errors shall be corrected promptly.
  4. Procedural Fairness and Targeted Transparency. Where AI use could materially affect a party's ability to understand, test, or respond to the case presented, targeted disclosure may be required.
  5. Confidentiality and Data Protection. Confidential case information should not be input into AI tools except under appropriate safeguards, and where required, only with consent or approval consistent with tribunal directions and party agreement.
  6. Adaptability. This Protocol is intended to be practical and adaptable to the needs and risk profile of the particular arbitration.

4. Operative Provisions

4.1 Definitions

(a) "AI Tool" means any software, system, or service that uses artificial intelligence or machine-learning techniques to generate, transform, summarize, translate, analyze, classify, or predict content, or autonomously execute tasks, including generative AI tools and AI agents.

(b) Interpretive guidance. For interpretive guidance only, "AI system" may be understood consistently with widely used regulatory and policy definitions (including OECD and EU frameworks), as updated from time to time.

(c) "Material Use" means the use of an AI Tool in a manner that could reasonably affect any of the following:

  1. the evidentiary record or the presentation of facts;
  2. the presentation of legal authorities or the characterization of those authorities;
  3. a party's opportunity to understand, test, or respond to the case presented;
  4. confidentiality, privacy, or data protection;
  5. the autonomous execution of substantive procedural, analytical, or research tasks by an AI agent; or
  6. the tribunal's decisional reasoning.

In the event of a dispute, the tribunal retains final discretion to determine whether a specific use of an AI Tool qualifies as a Material Use.

(d) Non-exhaustive examples of Material Use include:

  1. AI proposing, generating, or materially changing substantive factual assertions relied upon;
  2. AI output being quoted or relied upon as a "source";
  3. AI being used to create demonstratives/exhibits relied upon;
  4. AI materially shaping witness statements or expert reports;
  5. AI agents autonomously performing cross-record analysis or database research without granular human verification of the search parameters or outputs; and
  6. any use involving Confidential Case Material in a Yellow/Red tool (or where key safeguards are Unknown).
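Although the Material Use assessment is ultimately a matter of judgment for the tribunal, teams that keep an internal AI-use register sometimes track the six limbs of Section 4.1(c) as a simple checklist. The Python sketch below is purely illustrative and forms no part of the Protocol; the field names are invented shorthand for the limbs, and a positive result merely flags a candidate disclosure for human review.

```python
# Hypothetical field names mirroring the six limbs of Section 4.1(c).
MATERIAL_USE_LIMBS = {
    "affects_evidentiary_record":    "4.1(c)(1)",
    "affects_legal_authorities":     "4.1(c)(2)",
    "affects_opportunity_to_respond": "4.1(c)(3)",
    "affects_confidentiality":       "4.1(c)(4)",
    "autonomous_agent_execution":    "4.1(c)(5)",
    "affects_decisional_reasoning":  "4.1(c)(6)",
}

def material_use_triggers(assessment):
    """Return the Section 4.1(c) limbs engaged by a proposed AI use.

    `assessment` maps limb names to booleans. Any engaged limb marks
    the use as a *candidate* Material Use; the tribunal retains final
    discretion in the event of a dispute (Section 4.1(c), final para.).
    """
    return [ref for limb, ref in MATERIAL_USE_LIMBS.items()
            if assessment.get(limb, False)]

# Example: an AI agent autonomously restructures part of the factual record.
triggers = material_use_triggers({
    "affects_evidentiary_record": True,
    "autonomous_agent_execution": True,
})
# triggers == ["4.1(c)(1)", "4.1(c)(5)"]
```

A non-empty result would prompt consideration of an AI Use Notice (Appendix D); an empty result does not excuse the general duties in Sections 4.2 and 4.7.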

(e) "Confidential Case Material" means any non-public information relating to the arbitration, including pleadings, evidence, transcripts, deliberations, drafts, and party or tribunal communications, as well as any protected or privileged material.

(f) "Participant" includes the parties, counsel, the arbitral tribunal, tribunal secretaries or assistants, and experts and witnesses.

(g) "Party's Agents" include counsel, experts, witnesses, consultants, e-discovery vendors, translators, graphics consultants, and other service providers acting on a party's behalf in connection with the arbitration.

4.2 Responsibility and oversight (including agents)

(a) Responsibility remains with the user. Any Participant using an AI Tool remains fully responsible for the content, autonomous actions, and consequences of that use.

(b) Party responsible for agents and vendors. Each party is responsible for AI use by its Agents (including third-party vendors or service providers engaged for the arbitration) when acting on its behalf, and for ensuring that such use complies with this Protocol.

(c) AI is not authority. AI Tool outputs are tools; they are not legal authority or evidence unless independently supported by admissible sources.

4.3 Practical guidance: permitted and not permitted uses (non-exhaustive)

(a) Generally permitted (subject to Sections 4.4–4.7 and tribunal directions)

  1. Drafting and editing support. Improving structure, style, readability, grammar, and formatting, provided the user remains responsible for the content and verifies accuracy.
  2. Summarization and organization. Summarizing, indexing, organizing, and preparing internal chronologies, issue lists, and record digests, subject to confidentiality safeguards.
  3. Research assistance (as a tool, not authority). Using AI to identify search terms, topics, or potentially relevant sources, provided that any cited authority and any relied-upon factual assertion is independently verified.
  4. Translation/transcription/summarization as an internal aid. Using AI for non-disputed or internal purposes, subject to Section 4.5(d) if relied upon as accurate for a disputed point.
  5. Expert/witness assistance (limited). Improving language, clarity, structure, and translation, provided the testimony/opinion remains the witness's or expert's own, and subject to Section 4.5(a) if AI materially shapes the content.
  6. Tribunal administrative assistance. Organizational tasks (e.g., managing document sets, summarizing material already in the record, drafting procedural schedules), consistent with Section 4.6.

(b) Not permitted (unless expressly agreed by the parties or directed by the tribunal)

  1. Fabrication. Using AI to fabricate evidence, authorities, quotations, references, or factual assertions.
  2. Unverified citation. Citing any authority (award, case, commentary, rule, or source) that has not been independently verified.
  3. Improper handling of Confidential Case Material. Inputting Confidential Case Material into an AI Tool where safeguards are not in place or the tool's retention/training practices are unclear or unknown, except as permitted under Section 4.4.
  4. Undisclosed extra-record reliance by the tribunal. The tribunal shall not rely on AI-generated facts, authorities, or analysis outside the record without providing the parties an opportunity to comment (Section 4.6).
  5. Use that undermines fairness. Using AI in a manner that materially impairs the other party's ability to understand, test, or respond to the case presented.
  6. Autonomous Action without Oversight. Allowing an AI Tool to execute substantive tasks or reach conclusions without a 'Human-in-the-Loop' review mechanism.

4.4 Confidentiality and data handling (traffic-light framework)

(a) General rule. Confidential Case Material shall not be input into an AI Tool unless appropriate safeguards are in place and, where required, with consent or approval.

(b) Traffic-light classification (non-exhaustive).

  1. Green Tools: tools with strong confidentiality protections (e.g., closed/enterprise tools with contractual or technical safeguards such as no training on user inputs, controlled access, and reasonable retention controls). Use of Green Tools for Confidential Case Material is generally permitted.
  2. Yellow Tools: tools whose protections are unclear, unverified, or incomplete. Use of Confidential Case Material with Yellow Tools requires party consent and/or tribunal approval, as applicable.
  3. Red Tools: tools that train on inputs or have unclear retention/sharing practices. Confidential Case Material should not be input into Red Tools.

(c) Default to Yellow if Unknown. A tool shall be treated as Yellow where any key safeguard criterion is Unknown.

(d) Consent/approval pathway. Where consent or approval is required, it may be provided by (i) agreement of the parties, or (ii) direction of the tribunal after consultation with the parties, depending on the circumstances of the case.

(e) Tool classification criteria. Non-exhaustive criteria for classification are set out in Annex A.
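Parties maintaining an internal register of approved tools sometimes find it helpful to encode the classification and default-to-Yellow rules mechanically. The Python sketch below is illustrative only and forms no part of the Protocol: the three safeguard parameters are simplified stand-ins for the fuller Annex A criteria, and the tribunal's assessment governs in case of dispute.

```python
from enum import Enum

class Light(Enum):
    GREEN = "Green"    # generally permitted for Confidential Case Material
    YELLOW = "Yellow"  # requires party consent and/or tribunal approval
    RED = "Red"        # Confidential Case Material must not be input

def classify_tool(trains_on_inputs, retention_controls, access_controls):
    """Illustrative classification per Sections 4.4(b)-(c).

    Each argument is True, False, or None (None = Unknown). The
    parameters are simplified placeholders, not the full Annex A
    criteria list.
    """
    safeguards = (trains_on_inputs, retention_controls, access_controls)
    # Red: the tool trains on user inputs (Section 4.4(b)(3)).
    if trains_on_inputs is True:
        return Light.RED
    # Default to Yellow where any key safeguard is Unknown (Section 4.4(c)).
    if any(s is None for s in safeguards):
        return Light.YELLOW
    # Green only where all key safeguards are verified to be in place.
    if retention_controls and access_controls:
        return Light.GREEN
    return Light.YELLOW

# A closed enterprise tool with verified safeguards classifies as Green:
classify_tool(False, True, True)   # Light.GREEN
# Unverified retention practices default the tool to Yellow:
classify_tool(False, None, True)   # Light.YELLOW
```

Note how the Unknown (`None`) branch implements Section 4.4(c): uncertainty never upgrades a tool, it only pushes it toward the consent/approval pathway of Section 4.4(d).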

4.5 Reliance and targeted transparency (where AI use is material)

(a) Witness/expert evidence (material shaping). AI use "materially shapes" witness or expert evidence when it goes beyond language/formatting assistance and reasonably could affect the substance of the testimony/opinion, including by proposing factual content, framing key factual narratives, generating or materially altering analytical steps, or suggesting conclusions. Where AI use materially shapes a witness statement or expert report, the presenting party shall provide a short disclosure with the affected submission or promptly when the trigger arises, identifying:

  1. that an AI Tool was used in a manner that materially shaped the evidence;
  2. the general purpose of the AI use; and
  3. confirmation that the witness/expert and counsel have verified the accuracy of any AI-assisted factual assertions and any cited authorities.

(b) AI output relied upon as a "source." AI Tool outputs should not be cited or relied upon as a source of authority. If a party seeks to rely on AI-generated content as a source (rather than merely as an internal tool), it shall disclose that reliance and provide an appropriate basis for testing reliability.

(c) AI-generated demonstratives/exhibits. Where an AI Tool is used to create demonstratives or exhibits that could affect evidentiary assessment (e.g., reconstructed images, synthetic audio/video, or materially generated visuals), the party intending to rely on such material shall disclose the AI use and provide sufficient information to allow the other party and the tribunal to assess reliability and contest the material, as appropriate.

(d) Translation/transcription/summarization relied upon as accurate. Translation, transcription, and summarization may be used as internal aids. If a party intends to rely on such AI-assisted output as accurate for a disputed point, the party shall disclose the AI use and provide a reasonable basis for accuracy (including, where appropriate, a human check or a reliable verification method).

(e) AI Use Notice (simple form). Disclosures under this Section may be made by a short notice in the form set out in Appendix D.

(f) Witness/expert attestation (when Section 4.5(a) disclosure is made). The witness or expert shall provide a short attestation substantially in the following form:

"I confirm that the contents of this statement/report are my own. I used an AI Tool only to assist with [language/structure/translation and/or other disclosed purposes]. I have reviewed the statement/report and confirm that any factual assertions and any cited authorities have been verified to the best of my knowledge."

(g) Protection of prompts and logs. To prevent unnecessary satellite disputes and protect work-product, no Participant shall be required to disclose raw AI prompts, inputs, or system logs absent their express consent.

4.6 Tribunal use and tribunal support (including secretaries)

(a) Permitted tribunal uses. The tribunal may use AI Tools for administrative and organizational assistance, including managing the record, summarizing materials already in the record, and preparing procedural drafts.

(b) No undisclosed extra-record reliance. The tribunal shall not rely on AI-generated facts, authorities, or analysis outside the record without providing the parties an opportunity to comment.

(c) Tribunal disclosure (targeted). Where the tribunal's use of an AI Tool materially affects the procedure or could affect the parties' opportunity to comment (including through extra-record reliance or AI-supported analysis that introduces new issues), the tribunal should consider appropriate notice to the parties and an opportunity to be heard.

(d) Tribunal secretaries and support. AI use by a tribunal secretary or other tribunal support personnel shall be treated as AI use by the tribunal.

(e) Deliberations (confidentiality reminder). Tribunal deliberations and draft award materials remain confidential and shall be handled consistent with confidentiality obligations and Section 4.4.

4.7 Verification and correction duties

(a) Verification of authorities and citations. Any Participant who submits or relies on citations or authorities shall independently verify their existence and accuracy.

(b) Verification of AI-assisted factual assertions. Any Participant who submits or relies on AI-assisted factual assertions shall take reasonable steps to verify their accuracy before reliance.

(c) Prompt correction. If an AI-related material error is discovered, the responsible Participant shall promptly correct it in the manner appropriate to the circumstances (including by notice to the tribunal and other parties where required to avoid prejudice).

(d) Optional integrity statement. At key milestones (e.g., principal submissions; witness statements; expert reports; post-hearing briefs), a party may provide a short integrity statement in the form set out in Appendix E.

4.8 Implementation and consequences

(a) Early discussion. The parties and the tribunal are encouraged to address AI use at the first case management conference.

(b) Adoption. This Protocol may be adopted in whole or in part by agreement of the parties and/or direction of the tribunal, and may be recorded in a procedural order or the case management conference minutes.

(c) Consequences of non-compliance. In the event of non-compliance with this Protocol, the tribunal may take such measures as are necessary and proportionate to preserve procedural fairness, the integrity of the record, and confidentiality, expressly applying an escalation principle based on the materiality of the breach, the degree of prejudice caused, and proportionality.

(d) Examples (non-exhaustive). Measures may include: (i) correction/clarification; (ii) directions for targeted disclosure; (iii) directions for re-submission or verification; (iv) reduced weight or disregard of tainted material; (v) cost consequences; and (vi) adverse inferences or exclusion only where necessary for fairness.

5. Non-Exclusivity

This Protocol is not intended to limit the tribunal's procedural powers or any mandatory requirements of applicable law, institutional rules, or professional obligations. It should be applied consistently with those requirements and with party agreement and tribunal directions in the particular case.

Appendix A — Model Agreement on AI Use

Introduction and Usage Note: This model agreement facilitates the immediate and comprehensive adoption of the TIPS Protocol. It is designed to be executed as a standalone agreement or incorporated into the minutes of the first Case Management Conference (CMC). The default and recommended approach is for parties to adopt the Protocol in its entirety, ensuring a stable and predictable procedural baseline for the duration of the arbitration.

A1. Adoption of the TIPS Protocol (Recommended): The parties hereby agree to adopt TIPS Protocol #01 in its entirety. This adoption includes, without limitation:

  1. The Traffic-Light Framework for confidentiality and data handling (Section 4.4);
  2. All Targeted Transparency Triggers (Section 4.5), including those relating to witness statements, expert reports, and AI-generated demonstratives; and
  3. All Verification and Correction Duties (Section 4.7).

A2. Operational Consistency: The parties acknowledge that by adopting the Protocol in its entirety, they commit to utilizing the standardized forms and notices provided in the Protocol's toolkit (including the Appendix D AI Use Notice) to ensure consistent case management.

A3. Case-Specific Modifications (Optional): The Protocol is designed to be self-executing. However, if the parties expressly agree to modify or exclude specific provisions of the Protocol for this particular case, such changes must be recorded below: (Note: If left blank, the Protocol applies in its entirety as provided in Section A1 above.)

Signed (or recorded in minutes):

Party 1: _______________________ Date: __________

Party 2: _______________________ Date: __________

Appendix B — Model Procedural Order Language

Introduction and Usage Note: This appendix provides standardized language for immediate incorporation into Procedural Order No. 1 or subsequent procedural directions. To ensure the highest degree of procedural efficiency and predictability, the Tribunal recommends the use of the General Adoption Clause (B1). This single provision incorporates the TIPS Protocol in its entirety, establishing a comprehensive and stable framework without unnecessarily lengthening the Procedural Order. The selective provisions (B2–B6) are provided for use only if specific emphasis is required or if a modular adoption is preferred.

B1. Recommended General Adoption Clause (Full Incorporation): The Tribunal records that the parties have agreed (or the Tribunal hereby directs) that TIPS Protocol #01 shall apply to this arbitration in its entirety as the governing framework for the use of Artificial Intelligence tools.

Selective Operative Provisions: The following provisions are intended for use only if the Protocol is not adopted in its entirety via Clause B1, or if the Tribunal wishes to highlight specific duties within the body of a Procedural Order.

B2. Confidentiality and Data Handling: Confidential Case Material shall not be input into any AI Tool except in accordance with the "Traffic-Light" framework set forth in Section 4.4 of the Protocol. Participants shall treat any tool as "Yellow" (requiring prior authorization) if its specific data-handling safeguards are unknown or unverified.

B3. Standardized Disclosure Triggers: Unless otherwise ordered, participants shall provide an AI Use Notice (Appendix D) upon the occurrence of a "Material Use" trigger as defined in Section 4.5 of the Protocol, including the material shaping of witness/expert evidence or reliance on AI-generated sources and demonstratives.

B4. Independent Verification and Correction: Any participant relying on legal citations, authorities, or AI-assisted factual assertions shall independently verify their accuracy prior to submission and shall promptly correct any discovered AI-related material errors or fabrications.

B5. Tribunal Use and Non-Delegation: The Tribunal may utilize AI Tools for administrative and organizational assistance, subject to the principles of non-delegation and the prohibition of undisclosed reliance on extra-record materials as set forth in Section 4.6 of the Protocol.

B6. Measures for Non-Compliance: The Tribunal reserves the authority to implement proportionate measures to address non-compliance with these AI governance provisions, ensuring the ongoing integrity of the proceedings and procedural fairness.

Appendix C — Model CMC Agenda Item

Introduction and Usage Note: This model agenda item is designed to be integrated into the Tribunal's agenda for the first Case Management Conference (CMC). Early engagement with AI-related procedural issues is critical to preventing satellite disputes later in the proceedings. By explicitly referencing the TIPS Protocol, the Tribunal provides the parties with a specific, baseline framework for discussion. To maximize procedural efficiency and predictability, the Tribunal should prioritize the adoption of the Protocol in its entirety as a unified instrument.

C1. CMC Item: Management of Artificial Intelligence (AI) Tools and the TIPS Protocol: The Tribunal invites the parties to address the adoption of TIPS Protocol #01 as a governing framework for the responsible use of AI tools in these proceedings.

C2. Optional Additional Direction: The Tribunal recommends the adoption of the Protocol in its entirety to establish a consistent and transparent procedural baseline. For the parties' reference, the core operational pillars of the Protocol include:

  1. Adoption and Modification: Whether the parties agree to adopt the TIPS Protocol in its entirety (recommended) or subject to specific case-tailored modifications (as provided for in Appendix A).
  2. Confidentiality Protocols: The implementation of the "Traffic-Light" framework (Section 4.4) for data handling and the identification of any specific tools to be utilized for processing Confidential Case Material.
  3. Transparency and Disclosure: The application of the targeted transparency triggers set forth in Section 4.5, particularly regarding witness statements, expert reports, and AI-generated demonstratives.
  4. Integrity and Verification: Expectations regarding the independent human verification of legal citations and factual assertions to ensure the ongoing integrity of the evidentiary record.
  5. Procedural Mechanics: The methodology for serving AI Use Notices (Appendix D) and the timeline for addressing any objections to AI-assisted submissions.

Appendix D — Notice of AI Use

Introduction and Usage Note: This AI Use Notice is a standardized form designed to fulfill the targeted transparency requirements set forth in Section 4.5 of the Protocol. It should be submitted promptly whenever an AI Tool is utilized in a manner that meets a "Material Use" threshold. The objective of this notice is to provide the Tribunal and the parties with the necessary context to assess the reliability of AI-assisted submissions without compromising work product or professional privilege.

CASE NAME/NUMBER: ________________________________________

SUBMITTING PARTICIPANT: ____________________________________

D1. Declaration of Trigger (Select all that apply): This notice is provided pursuant to the disclosure obligations in Section 4.5 regarding:

  • ☐ Witness or Expert Evidence: AI use has materially shaped the substance of a statement or report (Section 4.5(a)).
  • ☐ AI Output as a "Source": AI-generated content is being relied upon as a primary source of authority (Section 4.5(b)).
  • ☐ AI-Generated Demonstratives/Exhibits: Reconstructed images, synthetic media, or materially generated visuals (Section 4.5(c)).
  • ☐ Relied-Upon Translation/Transcription/Summarization: AI translation, transcription, or summarization relied upon as an accurate record for a disputed point (Section 4.5(d)).
  • ☐ Autonomous Agent Execution: Substantive task execution by an autonomous AI agent (Section 4.1(c)(5)).

D2. Description of AI Utilization: Provide a concise, one-sentence description of the general purpose of the AI Tool's use in this instance:

_______________________________________________

D3. Confidentiality and Data Handling: The tool utilized for this task is classified under the Protocol's framework (Section 4.4) as:

  • ☐ Green Tool (Secure/Enterprise)
  • ☐ Yellow Tool (Unclear/Unknown safeguards — prior authorization confirmed)

D4. Verification of Integrity (Mandatory): The submitting participant confirms the following:

  • ☐ All legal citations and authorities have been independently verified by a human reviewer.
  • ☐ All AI-assisted factual assertions have been checked for accuracy against the evidentiary record.

D5. Reliability and Testing Information: If relying on AI as a source or for demonstratives, briefly describe the methodology or underlying sources used to ensure the output's reliability:

Name: __________________________ (Counsel for [Party] / Expert / Witness)

Signed: __________________________

Date: ____________________

Appendix E — AI Final Integrity Declaration

Introduction and Usage Note: While the duty to ensure the integrity of AI use remains persistent throughout the arbitration, this formal Declaration is intended to be submitted as a one-time requirement at a stage that effectively concludes the submission phase of the proceedings (e.g., simultaneously with the Post-Hearing Brief or prior to the formal closure of the proceedings). This Declaration serves as a comprehensive certification that all submissions made by the Participant throughout the arbitration comply with the standards set forth in TIPS Protocol #01.

AI FINAL INTEGRITY DECLARATION

CASE NAME/NUMBER: ________________________________________

SUBMITTING PARTY: _______________________________________

The undersigned, in their capacity as [Counsel for Party / Expert / Witness], hereby represents to the Tribunal that, throughout the course of these proceedings and in respect of all submissions, evidence, and exhibits filed by [Party Name]:

  1. Verification of Authorities: Every legal citation, quotation, and reference to authorities (including cases, statutes, and commentary) contained in the record has been independently verified for existence and accuracy by a human reviewer.
  2. Accuracy of Factual Assertions: All substantive factual assertions assisted by AI Tools have been reviewed against the evidentiary record to ensure accuracy and to prevent the inclusion of fabricated data.
  3. Prohibition of Fabrication: No AI-generated content that is fabricated, hallucinatory, or otherwise non-existent has been included in any submission or exhibit filed in this arbitration.
  4. Data Handling Compliance: All Confidential Case Material utilized in conjunction with AI Tools has been handled in strict accordance with the confidentiality protocols and "Traffic-Light" framework set forth in TIPS Protocol #01.
  5. Disclosure Compliance: All targeted disclosures required under Section 4.5 of the Protocol (including AI Use Notices) have been accurately and timely provided where the "Material Use" threshold was met.

Signed: __________________________ Date: ____________________

Name: ____________________ Role: ___________________________

Annex A — Traffic-Light Tool Classification Criteria

Purpose and Design Rationale

This Annex serves as a practical guide for Participants to assess the security and confidentiality profile of AI Tools. It is designed to help legal professionals—who may not be technical experts—distinguish between secure enterprise environments and high-risk public platforms. Because the AI landscape is dynamic, these criteria are non-exhaustive and should be applied with a view toward the overarching duty of confidentiality.

Due Diligence Requirement

Classification is an active professional responsibility. Participants should not rely solely on general marketing claims. Accurate classification requires a review of the service provider's "Technical Specifications," "Privacy Policy," or "Data Processing Agreement (DPA)." Where documentation is unclear, Participants are expected to make direct inquiries to the service provider to verify compliance with the criteria below.

The Traffic-Light Framework

Green — Permitted

Closed/Private Environment. The tool is "walled off" from the public. Your data stays within your control. Comparable to a standard e-discovery platform or secure cloud storage.

Yellow — Authorised Only

Uncertain/Mixed Environment. The provider's data-handling commitments are vague or unverified, or the tool is in "Beta." Requires prior consent or Tribunal approval.

Red — Prohibited

Open/Public Environment. The tool "learns" from your inputs and may share them with other users or the general public. Never upload non-public case information.

Technical Criteria with Ordinary Explanations

Participants should evaluate tools against the following seven criteria. If the status of any criterion is Unknown, the tool must be treated as Yellow by default.

  1. Training on Inputs (The "Learning" Risk)
    Technical: Does the provider utilize user prompts or uploaded files to train its Large Language Models (LLMs)?
    Ordinary Explanation: Does the AI "remember" what you tell it and use that information to answer other people's questions later? Green tools explicitly prohibit this.
  2. Data Retention and Deletion (The "Digital Footprint")
    Technical: What are the persistence policies for inputs and outputs?
    Ordinary Explanation: Does the provider delete your data when you are done, or does it keep a copy on its servers indefinitely? Green tools provide user-controlled deletion.
  3. Access and Administrative Controls (The "Who Can See It" Check)
    Technical: Does the tool support Enterprise-grade security (SSO, MFA, Role-Based Access)?
    Ordinary Explanation: Can you restrict access to only specific team members, or can anyone at the provider's office potentially see your files?
  4. Data Location and Sub-processors (The "Jurisdiction" Factor)
    Technical: Are data regions and third-party subprocessors disclosed?
    Ordinary Explanation: Where are the physical servers located? This affects which privacy laws (like GDPR) protect the data.
  5. Confidentiality Commitments (The "Legal Promise")
    Technical: Are there specific contractual commitments to data confidentiality?
    Ordinary Explanation: Is there a signed agreement or "Terms of Service" that legally prevents the provider from sharing your information?
  6. Auditability (The "Trail" Requirement)
    Technical: Is there a logging mechanism for access and usage?
    Ordinary Explanation: Can you see a history of who used the tool and what they did? This is vital for maintaining an ethical record.
  7. Autonomous Execution and Oversight (The "Agentic" Risk)
    Technical: Does the tool use autonomous agents to execute tasks (e.g., API calls, file modification, or external communication) without granular human approval?
    Ordinary Explanation: Does the AI take actions on your behalf (like a digital assistant) or just provide text? If the AI can execute tasks independently, there must be a "Human-in-the-Loop" mechanism to review and authorize each action before it occurs.
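The seven criteria above, together with the Unknown-defaults-to-Yellow rule, amount to a simple decision procedure. The following is a minimal sketch of that logic in Python; the criterion names and the "met" / "not_met" / "unknown" encoding are illustrative conventions, not part of the Protocol text:

```python
# Sketch of the Annex A Traffic-Light classification logic.
# Criterion names below are illustrative labels for the seven Annex A criteria.

CRITERIA = [
    "no_training_on_inputs",        # 1. Training on Inputs
    "user_controlled_deletion",     # 2. Data Retention and Deletion
    "access_controls",              # 3. Access and Administrative Controls
    "data_location_disclosed",      # 4. Data Location and Sub-processors
    "confidentiality_commitments",  # 5. Confidentiality Commitments
    "audit_logging",                # 6. Auditability
    "human_in_the_loop",            # 7. Autonomous Execution and Oversight
]


def classify_tool(assessment: dict) -> str:
    """Classify an AI tool as Green, Yellow, or Red.

    `assessment` maps each criterion name to "met", "not_met",
    or "unknown". Missing criteria are treated as "unknown".
    """
    values = {name: assessment.get(name, "unknown") for name in CRITERIA}

    # Red: the tool learns from user inputs (the "Learning" risk),
    # so non-public case information must never be uploaded.
    if values["no_training_on_inputs"] == "not_met":
        return "Red"

    # Yellow by default: any criterion whose status is Unknown
    # (the Annex A default rule).
    if any(v == "unknown" for v in values.values()):
        return "Yellow"

    # Green only when every criterion is verifiably satisfied.
    if all(v == "met" for v in values.values()):
        return "Green"

    # Otherwise a mixed environment: Yellow, requiring authorisation.
    return "Yellow"
```

Under this reading, Scenario C above falls out naturally: the expert cannot verify the data-region criterion, so its status is "unknown" and the tool defaults to Yellow.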

Illustrative Scenarios (Practical Guidance)

  1. Scenario A (Internal Firm AI): A law firm uses a private instance of a model hosted on its own secure servers with "No-Training" clauses. → Classification: Green.
  2. Scenario B (Public Web-Chat): Counsel uses the free version of a popular chatbot on the open web to summarize a witness statement. → Classification: Red.
  3. Scenario C (Autonomous AI Agent): An expert uses an AI "agent" to automatically crawl external databases and generate a report. The expert cannot verify if the agent follows data-region restrictions. → Classification: Yellow (Default).

Dynamic Nature of AI Technology

The Founding Circle recognizes that AI tool capabilities and vendor policies change rapidly. This Annex shall be interpreted broadly to encompass emerging technologies, including autonomous agents and integrated AI ecosystems. Participants are encouraged to re-verify the classification of their tools periodically as terms of service and technical safeguards are updated.

TIPS Protocol #01 — Full Working Draft v.0.3.3

Includes all provisions, Appendices A–E, and Annex A. Open access — free to download and adopt.

Founding Circle Briefing Report

Comparative Assessment &
Design Rationale

An internal briefing on the drafting rationale and comparative positioning of TIPS Protocol #01 against existing international frameworks.

Section I

Objectives & Design Rationale

Practical application over theoretical frameworks. A practitioner should be able to answer quickly: What is allowed? What is not? When must I disclose? How do I handle confidentiality?

Section II

Primary Technical Advancements

Operational practitioner core, standardised disclosure triggers, the traffic-light framework, allocation of agency responsibility, and judicial safeguards.

Section III

Comparative Analysis

Benchmarked against SVAMC, CIArb, SCC Arbitration Institute, and VIAC — demonstrating additive value to the arbitral ecosystem.

Sections IV–VI

Design Parameters & Contributions

No general disclosure requirement, no mandatory disclosure of raw prompts, tool-agnostic approach, and modular compatibility with institutional rules.

Founding Circle Briefing Report

Drafting rationale and comparative positioning. Internal briefing document.

Resources

Publications & Media

A growing library of videos, commentary, press coverage, and TIPS-authored publications on AI in arbitration and emerging procedural issues.

Videos coming soon — conference talks, panels, and discussions on AI in arbitration.
People

The TIPS Council

TIPS is built by practitioners who believe the profession is best served by those who practice it. The Founding Circle sets direction, shapes each Protocol, and ensures TIPS remains independent, practical, and field-ready.

Michael D. Lee
Founding Chairman

Founder, Michael Lee Chambers
Independent International Arbitrator · Mediator · Consultant
Singapore
Founding Circle members will appear here shortly.
Get Involved

Join the Drafting or
Review Committee

TIPS Protocols are shaped by practitioners for practitioners. We welcome applications from qualified professionals in international arbitration — both senior arbitrators and emerging practitioners — to join our drafting and review committees.

Contribution is structured but flexible. Members engage in real and manageable ways: reviewing drafts, voting on proposed directions, flagging issues, or sense-checking whether a protocol would genuinely help users and tribunals.

It requires real input — but it is meaningful work. And there is recognition: credit on published work, opportunities for short articles or commentary, and invitations to participate in talks and events.

Drafting Committee

Actively contributes to the drafting and revision of TIPS Protocol texts and supporting materials.

Review Committee

Provides structured peer review and commentary on draft protocols at key milestones.

Committee Application

Suggest a Protocol

Shape What Comes Next

TIPS Protocols are developed in response to genuine practitioner need. We welcome suggestions for future protocol topics from arbitration practitioners worldwide.

Possible areas for future protocols may include AI use in investor-state arbitration, autonomous AI agents in discovery, AI in expert evidence, and cross-institutional AI governance coordination.

All suggestions are reviewed by the Founding Circle. Proposals with sufficient practitioner support and practical scope will be advanced to the drafting pipeline.

Submit a Suggestion

Contact

Get in Touch

For enquiries about TIPS, committee membership, institutional partnerships, or media.

General Enquiries: [email protected]
Website: arbitrationtips.com

TIPS is an independent, practitioner-led initiative. It is not affiliated with any arbitral institution, law firm, or commercial entity. All TIPS Protocols are published as open-access instruments — free to download and adopt.

An initiative of Michael Lee Chambers