Benefits of the LLM Mesh, Part 2: Enforcing a Secure API Gateway

Dataiku Product, Scaling AI | Christina Hsiao

With Generative AI top of mind for many companies, IT leaders have been tasked with figuring out how to make the most of this new technology through pilot use cases and experimentation. 

Companies that tie themselves to one model or infrastructure dependency risk creating technical debt that will be difficult to manage down the road. So, given the speed of evolution, companies want to ensure that they can maintain flexibility should the model du jour change. 

In the previous article in this series, we discussed the prudence of decoupling LLM applications from the service layer to ensure agility and deliver solutions that are resilient to change. In short, by removing hard-coded routing and technical dependencies between applications and AI services, organizations can:

  1. Reduce operational risk
  2. More easily test and evaluate different models during the design phase 
  3. More nimbly adapt to new innovations and models once in production

To keep up with the pace of change in LLMs, companies should also consider how they can empower data teams to quickly access approved models for experimentation. But that agility may seem to come at a cost: by giving users the ability to choose models, admins may feel like they’re losing control over both cost and visibility. With Dataiku’s LLM Mesh, IT teams can now empower data teams to explore the latest in LLMs, all while maintaining control. 

Enter the LLM Mesh

Dataiku’s LLM Mesh acts as a secure API gateway: an intermediate layer that manages and routes requests between applications and their underlying services, so applications avoid the hard-coded dependencies that lead to downstream technical debt.
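
In code, this gateway pattern means application logic addresses a logical LLM id rather than a provider-specific endpoint. Below is a minimal sketch of what a call through the LLM Mesh can look like using Dataiku’s Python API; the connection name and model id are hypothetical placeholders for whatever your administrator has approved.

```python
import dataiku

# Connect to the Dataiku instance and the current project
client = dataiku.api_client()
project = client.get_default_project()

# The LLM id points at a connection an admin has configured in the LLM Mesh;
# "openai-prod" and "gpt-4o-mini" are hypothetical placeholders here.
llm = project.get_llm("openai:openai-prod:gpt-4o-mini")

# The request is routed through the gateway, so no API key or provider
# endpoint ever appears in application code.
completion = llm.new_completion()
completion.with_message("Summarize this support ticket in one sentence: ...")
response = completion.execute()

print(response.text)
```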

With a centralized place for IT administrators to manage and control access to approved LLMs and AI services, enterprises can easily connect to AI services from commercial LLM providers such as Azure OpenAI or OpenAI, or use locally hosted models, whether privately developed or downloaded from Hugging Face. 

Dataiku features built-in connections to many leading LLM partners. 

Similar to regular database or data storage connections in Dataiku, the LLM API gateway reduces risk by securing API keys and tokens and enabling administrators to control access to these specialized models. This benefits teams in a couple of key ways: 

  • Easily access and swap out approved models in applications. For end users, perhaps the most valuable benefit is how the LLM Mesh abstracts away the specific provider, so approved models can be accessed and swapped out whether using code or Dataiku’s built-in visual tooling (see the sketch after this list). 
  • Remove the burden of configuring and maintaining connections. Data scientists and ML engineers no longer have to configure and maintain secure connections to these rapidly evolving AI services themselves.
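
As a rough illustration of that abstraction (the variable name and ids below are made up for the example), the LLM id could live in a project variable that an admin updates whenever a new model is approved, so swapping models becomes a configuration change rather than a code change:

```python
import dataiku

client = dataiku.api_client()
project = client.get_default_project()

# Read the model choice from a project variable instead of hard-coding it;
# "active_llm_id" is a hypothetical variable an admin can update at any time.
variables = project.get_variables()
llm_id = variables["standard"].get("active_llm_id",
                                   "openai:openai-prod:gpt-4o-mini")

llm = project.get_llm(llm_id)
completion = llm.new_completion()
completion.with_message("Classify the sentiment of this customer review: ...")
print(completion.execute().text)
```

Moving to a newer model generation, or to a different approved provider entirely, then comes down to updating that variable or the underlying connection; the calling code stays the same.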

An example connection in the administration settings

In the above example for an OpenAI connection, notice how administrators can choose from multiple generations and flavors of GPT models and, armed with the provided cost and speed info beside each, decide which models to make available to teams. Access to the models is securely controlled by group-based permissions.

Data teams can also access dozens of the most popular open source models from the Hugging Face model zoo, fine-tuned for specific tasks such as chatbots, text classification, summarization, or sentiment analysis.

Models available out of the box in a local Hugging Face connection with Dataiku

The LLM Mesh even includes connections to models hosted in third-party model hubs, such as Amazon Bedrock.

Easily connect to third-party model hubs like Amazon Bedrock.

Maintaining Visibility and Tracking With LLMs 

Finally, let’s talk about transparency and tracking. Standard IT practices dictate that organizations need to maintain a complete trail of the queries run against their infrastructure. This is both to manage performance (e.g., identifying the culprit behind that inefficient and very expensive LLM prompt) and to ensure security (i.e., knowing who is querying which data and for which reasons). The same needs translate over to LLMs. 

A fully auditable log of who is using which LLM and which service, and for which purpose, allows for cost tracking (and internal charge-backs) as well as full traceability of both the requests to and responses from these sometimes unpredictable models. We’ll dig more into this topic in later installments in this blog series.

As you can see, with the built-in controls, secured keys and tokens, and the cost estimates provided when selecting LLMs, admins retain full control over LLMOps while giving data teams the flexibility they need. With the LLM Mesh, you can experiment today and quickly adapt should your data teams want to change models in the future, all without creating technical debt. 
