The scenario is as follows: an Australian driver faces allegations of using a mobile phone while driving, in violation of Road Rules 2014 (NSW) Reg 300. Their defense is that the object in their hand was not a phone but a misidentified juice box. Representing them is Jeanette Merjane, a senior associate at the law firm Lander & Rogers. Also representing them is an AI trained on legal documents.
During a session at the University of Technology Sydney titled “Can AI Win a Court Case?”, the audience is asked to compare the arguments presented by Merjane with those generated by NexLaw’s Legal AI Trial Copilot. While Merjane has prepared her arguments conventionally, Copilot will generate its defense live, to be read aloud by a volunteer playing a defendant representing themselves in court.
Before the showdown, around two-thirds of the audience believe Merjane will present the more convincing argument, though some expect the legal AI tool to surprise everyone.
The legal profession is often cited as an area where AI adoption could be beneficial, given its long hours, extensive research, and complex jargon. AI could automate routine tasks, lower costs, and make the legal system more accessible. Legal AI is already changing the practice of law globally: Luminance automates contract negotiations, and Brazilian lawmakers have used OpenAI’s ChatGPT to draft legislation.
However, the use of AI in law is not without challenges. DoNotPay, a company offering online legal services and chatbots, has faced legal scrutiny for potentially engaging in the unauthorized practice of law. Despite AI’s potential in the field, questions of qualifications and accountability remain.

AI chatbots like LawConnect aim to answer legal questions by combining AI-generated responses with verification by qualified human lawyers. The chatbot uses OpenAI’s API and is trained on publicly available information from the internet. Beck emphasized that lawyers review and verify the AI’s answers to ensure accuracy. On LawConnect’s website, users can describe their legal issue to receive a personalized AI-generated report, with the option to have it reviewed by lawyers.
LawConnect is being launched globally across all areas of law, using OpenAI’s models for translation where necessary, and the company says it is working through the challenges of that expansion. Its website carries a disclaimer stating that the content is for informational purposes only and is not a substitute for legal advice.
While AI chatbots offer instant answers, problems like hallucinations limit how much they can improve access to the legal system. Even experienced lawyers have been misled by false AI-generated information, drawing fines and criticism from judges. Such misuse of generative AI tools has raised concerns about their reliability and the consequences of deploying them in the legal field.
Despite these challenges, some lawyers continue to rely on AI tools, with varying degrees of understanding of how they work. Specialized systems like LawConnect and NexLaw’s Copilot may offer more accurate information than general-purpose tools like ChatGPT. Debate continues over the environmental impact and reliability of legal AI, but there is clear potential for improvement. Similarly, generative AI tools developed for organizations like Legal Aid and community legal centers could help them serve more people, improving accessibility.
The battle between NexLaw’s Copilot and Merjane at SXSW Sydney showcased the gap between human-crafted arguments and AI-generated ones. Copilot miscited legislation and dwelt on irrelevant details, while Merjane mounted the stronger defense, backed by evidence and quick responses.
NexLaw’s Legal AI Trial Copilot aims to complement human legal professionals rather than replace them. Professor David Lindsay highlighted the potential of AI in improving access to justice, but emphasized the importance of humans and AI working together.
Legal AI tools raise ethical concerns around liability and confidentiality. Tronson pointed to confidentiality risks in both the information fed into these tools and the data used to train their algorithms. Building practitioners’ understanding of AI tools, with guidance from professional bodies, will be crucial to addressing these concerns.
The experiment at SXSW Sydney highlighted the limitations of legal AI chatbots relative to human lawyers. AI tools are meant to assist rather than replace legal professionals, but the potential for errors and the question of accountability remain concerns. It is essential that lawyers exercise their own judgment and critical thinking when using them.