Artificial Intelligence
Security

4 Key Considerations When Evaluating AI For Legal

Andrew Heffernan
VP, Engineering

Is the legal industry ready for generative artificial intelligence? In the 2023 Litify State of AI in Legal Report, 61% of respondents felt the answer was no, with nearly half citing security, privacy, and ethical concerns as their biggest barriers to implementation. Ready or not, however, artificial intelligence is here, and many are still evaluating how to use it safely.

Given the sensitive nature of legal information, it comes as no surprise that the industry is cautious about how AI platforms collect, store, and manage data. We’ve already seen many headlines about industry professionals who weren’t aware of AI’s potential misuse of their data and its limitations in producing accurate responses to their prompts.

For those who are (safely!) taking advantage of the technology, they’re reporting efficiencies, streamlined operations, and an ability to deliver better service with time back in their days. As you continue evaluating and using AI tools at your organization, here are some key security considerations to keep in mind to help ensure their safe and responsible adoption.

If you need a refresher on some of the terms and concepts used below, check out our Glossary of AI Terms For Legal!

1. Which AI provider(s) is your legaltech vendor working with?

The terms used to describe artificial intelligence and large language models can be confusing. And there’s even a level of secrecy — or a “black box” — surrounding the inner workings of AI models and systems. Given these intricacies, legal technology companies should aim to be as transparent as possible about their AI capabilities and the providers they may be partnering with to deliver them.

In the 2024 Litify State of AI in Legal Report, 56% of respondents said it was important to know which AI model a solution was using. As AI supports increasingly complex work at your organization, it will become even more important for your business to understand the specific technologies involved and their stance on data privacy and security — knowing not all AI technologies take the same approach.

At Litify, we’re collaborating with software leaders Amazon Web Services (AWS) and Anthropic to support the development of responsible AI solutions for the legal industry. This approach was forged from the need to develop AI capabilities using appropriate precautions, given the sensitive nature of legal information and the need for enhanced data security.

If you know the AI provider(s) your vendor is working with, you can conduct your own research and evaluation to ensure they meet your standards for security, privacy, and data governance.

2. Is your vendor using an open or closed AI model?

If your vendor is using an open AI model, it will continue to learn and evolve using an open source of data, usually from the broader internet. This means that the data you share with the open AI model may be retained and used in future responses generated by the model. If your vendor is using a closed AI model, it doesn’t have access to an open source of data, meaning the data you share is never available to external parties.

For example, if your vendor uses an open AI model to process your clients’ medical information, that specific data may be used to inform a generated response when another user of the open AI model asks about trends in knee injuries. You’re giving the open AI model additional data to use in its training to inform this trend, and there’s a risk that it may be regurgitated back to another user. In this scenario, if your vendor uses a closed AI model to process your clients’ medical information, there is no risk of exposure because the model is closed to other users and open sources of data.

Because an open AI model continues to evolve from users’ inputs, the responses you receive may change over time. All the data sent to the model is retained and folded into its ongoing training, so the model itself keeps shifting and may produce inconsistent responses. In a closed AI model, only your organization is interacting with the model, which helps ensure consistency.

At Litify, we’ve chosen to build our AI capabilities using a closed AI model, never exposing sensitive data to the broader internet or other open AI models. This provides added privacy and security to our clients as well as consistency and accuracy in how our model will continue to perform over time.

3. Is your vendor contributing your data to an AI model?

Many popular AI models in the market have been pre-trained on vast amounts of data from across the internet. Some legal technology providers are taking one of these “off-the-shelf” models and fine-tuning it with additional legal data. This may mean that the vendor is contributing (and retaining) your organization’s information — or your clients’ information — to the AI model to help it perform better.

At Litify, we’ve opted not to train our AI model with client data. Instead, we’ve focused on tuning our prompts by programmatically asking questions of the AI model to help it perform better. This means your data is never retained by the model. With prompt engineering, there are several ways we can improve the model’s accuracy, such as tweaking the specific questions we’re asking or breaking down a document we want it to analyze into smaller pieces. Our team has already spent over 1,000 hours developing our solution in this way.

For example, we could send the model a series of documents and ask it to identify if “yes, this is a medical bill” or “no, this is not a medical bill.” Over time, the prompts become increasingly specific to ask the model to identify the hospital that issued the medical bill or the treatment received.
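The iterative refinement described above can be sketched roughly as follows. This is a simplified illustration of the general prompt-engineering pattern, not Litify’s actual implementation; the function name, prompt wording, and specificity levels are all hypothetical.

```python
def build_prompt(document_text: str, detail_level: int) -> str:
    """Build a classification prompt that grows more specific as
    detail_level increases (hypothetical example, not Litify's code)."""
    base = (
        "You are reviewing a legal document.\n"
        f"Document:\n{document_text}\n\n"
    )
    if detail_level == 0:
        # Start with a broad yes/no classification.
        return base + "Answer only 'yes' or 'no': is this a medical bill?"
    if detail_level == 1:
        # Once classified, ask for a specific field.
        return base + "If this is a medical bill, name the hospital that issued it."
    # Further refinement: extract the treatment described.
    return base + "If this is a medical bill, summarize the treatment received."


doc = "Statement of charges from a hospital for a knee procedure..."
for level in range(3):
    prompt = build_prompt(doc, level)
    # In practice each prompt would be sent to the AI model and its
    # answers used to refine the next round; here we only show how the
    # instructions become more specific over time.
    print(prompt.splitlines()[-1])
```

Long documents could also be split into smaller pieces before prompting, as the article notes, with each chunk classified separately and the answers combined.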

Given the sensitive nature of legal information and the industry’s concerns over the security risks of AI and its potential misuse of data, we didn’t want to ask our clients to contribute their data in this way. Instead, through prompt engineering, we can guide the AI model toward a more meaningful and accurate response by designing and optimizing new instructions or statements that improve its context over time.

4. Is your vendor using single-tenant or multi-tenant architecture?

While this question may not be specific to artificial intelligence, it’s still an important consideration when it comes to evaluating AI for your organization.

In the context of AI technology, when an open AI model is used alongside a multi-tenant architecture, it inherently means that multiple organizations’ input data — and potentially the public users’ input data — is being fed to the same model. With a closed AI model and single-tenant architecture, you can ensure your data is only exposed to your organization.

Litify's AI Framework

At Litify, we’ve built our existing platform using single-tenant architecture, and each client has a unique cloud environment that’s walled off from other organizations. We’re extending this design and investing in single-tenant architecture for our AI capabilities. As our clients gain access to Litify AI capabilities, the information they share with the AI model will remain inaccessible to other clients and external users, and won’t be retained by the model.

The takeaway

When adding artificial intelligence to your organization, you need to know you’re working with a platform and a business partner you can trust. And as a legal professional, you may want to consider AI tools that have been built with the legal industry — and your clients’ sensitive and confidential information — in mind.

At Litify, we’ve deeply evaluated the AI landscape to ensure a strong technology foundation that will protect this sensitive information. We’ve announced our collaboration with AWS and Anthropic to deliver a responsible approach to our AI capabilities. And later this summer, we’ll release our AI roadmap with information on how clients can gain access.

If you’re already considering an AI tool for your organization, we encourage you to keep these questions in mind:

  • What AI provider(s) is the vendor working with to deliver their AI capabilities?
  • Is the vendor using an open AI model or a closed AI model?
  • Is the vendor contributing your data to the model?
  • Is the vendor using single-tenant or multi-tenant architecture?

Get more insights in the 2024 Litify State of AI in Legal Report to see how your peers are approaching the use of AI and use the findings to inform your own AI strategy.

About the author
Andrew Heffernan is the VP of Engineering at Litify. He’s a seasoned engineering leader with diverse work experience in the technology industry who enjoys growing and maturing engineering teams.