Explore considerations that help legal aid organizations plan ahead and avoid challenges when integrating AI tools into their services or internal procedures.
Overview
The American Bar Association Standard for the Provision of Civil Legal Aid 4.10 encourages legal aid organizations to work toward developing standards around the use of artificial intelligence (AI). So what does this mean for LSC grantees? The session "Toward Best Practices in Generative AI in Legal Aid" at the LSC 2024 Innovations in Technology Conference, presented by Pro Bono Net, Michigan Legal Help, and the Stetson University College of Law, outlines practical considerations unique to serving people who are seeking legal assistance or representing themselves. These considerations can help legal aid organizations plan ahead and avoid challenges when integrating AI tools into their services or internal procedures.
Key Definitions
- Machine Learning: The ability of AI to learn from data and improve over time.
- Data Privacy: Safeguarding personal information from unauthorized access, use, or sharing.
- Third-Party AI: AI platforms like ChatGPT, Anthropic's Claude, or Copilot that produce content such as text, images, or code.
- Bias: The potential for AI to produce unfair or skewed results due to the data or human involvement used to train it.
Legal aid organizations can use AI platforms to reach two audiences: internal and external. Internal audiences include administrative staff, management, or lawyers using AI to conduct research, summarize, or edit documents. External audiences include legal aid clients asking questions or supplying information to AI-powered chatbots.
Third-party AI platforms can create custom data servers for clients that shield data from being used for machine learning. Consider the following before integrating client-facing AI tools (a redaction sketch follows this list):
- Data Privacy: What are the server's data retention, deletion, and privacy policies?
- Data Retention: How long does the platform keep the data after the AI generates responses?
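To make the data-privacy point concrete, here is a minimal sketch of redacting obvious personal identifiers before client text ever leaves the organization. The regex patterns and the send_to_ai stub are hypothetical placeholders, not any vendor's API; a real deployment would use a vetted PII-detection tool and the client library of whichever platform the organization has approved.

```python
import re

# Hypothetical redaction patterns; a real deployment would rely on a
# vetted PII-detection tool rather than a handful of regexes.
REDACTIONS = {
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.\w{2,}\b"),
}

def send_to_ai(prompt: str) -> str:
    """Stub for the vetted third-party API call; replace with the real client."""
    return f"(AI response to: {prompt})"

def redact(text: str) -> str:
    """Replace obvious personal identifiers before text leaves the organization."""
    for placeholder, pattern in REDACTIONS.items():
        text = pattern.sub(placeholder, text)
    return text

def ask_ai(client_question: str) -> str:
    return send_to_ai(redact(client_question))

print(ask_ai("My SSN is 123-45-6789 and my email is jane@example.org"))
# (AI response to: My SSN is [SSN] and my email is [EMAIL])
```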
Because legal aid organizations serve diverse populations, those integrating generative AI into internal or external systems should be aware of the biases that affect the AI's responses (a simple representation check is sketched after this list):
- Representation bias in machine learning leads to over- or underrepresentation of certain populations.
- Language and cultural biases impact the AI's understanding of diverse clients.
- AI cannot always detect implicit biases, especially in text-generation AI.
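One simple way to surface representation bias is to compare each group's share of a training or evaluation sample against the population the program actually serves. This is a minimal sketch, assuming records carry a demographic field; the "language" key and the sample values are invented for illustration.

```python
from collections import Counter

def representation_report(records, group_field="language"):
    """Report each group's share of a training or evaluation sample.

    records is assumed to be a list of dicts with a demographic field;
    a real audit would compare these shares against the client
    population the organization serves.
    """
    counts = Counter(r[group_field] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

sample = [
    {"language": "English"}, {"language": "English"},
    {"language": "Spanish"}, {"language": "Haitian Creole"},
]
print(representation_report(sample))
# {'English': 0.5, 'Spanish': 0.25, 'Haitian Creole': 0.25}
```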
Chatbots often struggle to capture the full context and nuance of legal aid cases because they are trained on limited data sets. Legal aid organizations should use large, diverse data sets to train AI to account for the larger context and circumstances of these cases. Additionally, chatbots should augment, rather than replace, human legal expertise, for example by collecting supplemental information that staff then review (see the routing sketch below).
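One way to keep a chatbot in an augmenting role is to route low-confidence or high-stakes exchanges to a person. The sketch below is illustrative only: the 0-1 confidence score and the ESCALATION_TERMS list are assumptions, not features of any specific chatbot product.

```python
# Hypothetical terms that signal a case too urgent or nuanced for a bot.
ESCALATION_TERMS = {"eviction", "hearing", "deadline", "domestic violence"}

def route_message(message: str, model_confidence: float) -> str:
    """Decide whether the chatbot may reply or a human should step in.

    model_confidence is an assumed 0-1 score from the chatbot; many
    deployed systems expose something comparable.
    """
    urgent = any(term in message.lower() for term in ESCALATION_TERMS)
    if urgent or model_confidence < 0.7:
        return "human"    # hand off to staff or a pro bono attorney
    return "chatbot"      # safe to collect supplemental information

print(route_message("My eviction hearing is tomorrow", 0.9))   # human
print(route_message("What documents should I gather?", 0.85))  # chatbot
```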
Ensuring high-quality and reliable AI responses for legal aid requires testing the system. Here are some ways your organization can input information into the AI to monitor and train the types of responses it generates (a test-harness sketch follows this list):
- Standardized inputs: anonymized legal questions, generic or specific frequently asked questions, or structured questions and answers from statewide legal aid sites.
- Unexpected inputs: unusual or odd questions that reveal how the AI behaves when users supply information it was not designed for.
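A minimal test-harness sketch for both input types appears below. The generate_response function is a stub for whatever system is under test, and the sample prompts are invented placeholders; the point is simply to log prompt-response pairs so staff can review quality and flag bad answers.

```python
STANDARDIZED = [
    "How do I respond to an eviction notice?",   # anonymized FAQ-style input
    "What forms do I need for a name change?",
]
UNEXPECTED = [
    "asdf!!! my landlord is a dragon",           # odd input to probe failure modes
    "",                                          # empty input
]

def generate_response(prompt: str) -> str:
    """Stub; replace with a call to the AI system being evaluated."""
    return f"(response to: {prompt!r})"

def run_suite(prompts, label):
    print(f"--- {label} ---")
    for prompt in prompts:
        # Log each pair so reviewers can monitor what the AI generates.
        print(prompt, "->", generate_response(prompt))

run_suite(STANDARDIZED, "standardized inputs")
run_suite(UNEXPECTED, "unexpected inputs")
```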
Explore additional resources regarding AI’s role in legal aid.