AI in Medical Malpractice Litigation
The Do's and Don'ts of Using AI in Medical Malpractice Litigation
By Nicholas Tam, Esq.
Artificial intelligence (AI) is rapidly reshaping many industries, and the legal profession is no exception. From streamlining legal research to automating routine tasks, AI can be a valuable tool for attorneys, particularly in complex areas like medical malpractice litigation. Like any powerful tool, however, AI must be used with attorney oversight, its use must be disclosed when a Court's rules require it, and care must be taken to avoid running afoul of legal ethics guidelines.
As Courts begin grappling with AI's implications, legal professionals must strike a balance between innovation and professional responsibility. Below are some key do's and don'ts for using AI in a legal setting.
Do: Use AI to Enhance Legal Research
AI can be a powerful partner in your legal research toolbox. Platforms such as Westlaw Edge and Lexis+ now incorporate machine learning to help lawyers identify relevant case law, track judicial trends, and highlight key arguments and rulings.
For a medical malpractice lawyer, this means more efficient sifting through volumes of complex case law regarding standards of care, vicarious liability, proximate cause, and other relevant legal doctrines. AI can augment your legal research by summarizing information from various databases and flagging potentially helpful precedent.
But remember: AI should enhance your legal research, not replace your judgment. Always validate the search output with your legal analysis and thoroughly review the cases and statutes yourself.
Don't: Surprise the Court with an "AI Appearance"
Earlier this year, a pro se litigant in the Appellate Division, First Department, used a pre-recorded, AI-generated video to “appear” before the Court. When he applied to submit a video to be played before the Court, however, he did not give the Court prior notice of the surrounding circumstances.
When the video was played and it became apparent that it featured an AI-generated avatar, the Court stopped it. The Court stated that it had been misled and reprimanded the pro se litigant, but ultimately allowed him to present his arguments without the AI avatar.
The story, which made headlines via NBC New York and was widely shared on social media, underscores Courts' growing concern that unregulated AI use might disrupt courtroom decorum and integrity, particularly when the Court is not fully apprised of the use of AI.
Do: Disclose AI Use When Required
Some Federal and State courts have issued rules around the use of AI, particularly regarding disclosure. Certain jurisdictions now require that attorneys disclose whether they used AI tools to draft documents or analyze legal data. Failing to disclose AI usage when mandated can lead to ethical issues, allegations of misleading the court, and potential sanctions.
Don't: Overlook Data Privacy and Confidentiality
Medical malpractice cases involve sensitive health information protected under HIPAA and other privacy laws. Using cloud-based or third-party AI tools can pose significant data privacy risks if those tools are not properly vetted. At a minimum, such platforms must use strong encryption and comply with the relevant data protection standards. Always consult your IT and compliance teams to vet an AI vendor before deciding whether to use its tools.
Conclusion
AI in the legal field has the potential to improve efficiency, reduce costs, and automate routine tasks. It is not, however, a replacement for legal expertise, particularly in the complex field of medical malpractice law.
If you have questions about the use of artificial intelligence in legal work or other questions relating to medical malpractice law, please contact Nicholas Tam, Esq. (ntam@sacslaw.com).