Blog - 6 Jun 2023
2 minute read

The truth about training OpenAI on your data

How to “train” OpenAI on a specific subset of data is one of the most misunderstood concepts in working with AI. In this blog, Murray explains what this is really called, how it’s achieved, and walks through a generic pattern for one of the ways Ascent does it.

Murray Foxcroft, Chief Technology Officer, explains how we “train” AI and large language models by incorporating grounding data into the prompt flow, creating a virtuous circle of relevant and enriched outputs.

One of the most common questions I get asked is “How do I train OpenAI on my data?”. The short answer is: you don’t. Yes, there are techniques like fine-tuning that can help influence a model, but the most effective approach is something called “grounding”, which connects the AI model to relevant, real-world information outside its training data. Grounding lets the model draw on knowledge it was never trained on, bridging the gap between what it learned during training and the specifics of your organisation.

To achieve this, a highly effective approach is to provide the AI model with grounding data. By equipping the model with relevant and up-to-date information, we can enhance its ability to deliver reliable answers.

This is best described with a generic example in which grounding is built into the prompt flow (the interactions with the model): you design your conversation with the large language model (LLM) so that it retrieves context related to the question being posed. Grounding data could come from an existing system such as a SaaS-based CRM, an enterprise search engine or a more general-purpose internet search.

This context is found by looking up keywords from the original question; the results are then passed to the LLM along with the question itself, giving the model the relevant context it needs to formulate an answer.
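
As a rough illustration of that lookup step, here is a minimal Python sketch. It assumes the openai Python SDK (v1+) and a hypothetical search_knowledge_base() helper standing in for whatever system holds your grounding data (a CRM, an enterprise search index or a web search); the model name is purely illustrative.

```python
# Minimal sketch: look up grounding data with keywords from the question,
# then pass that context to the LLM alongside the original question.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def search_knowledge_base(keywords: list[str]) -> list[str]:
    """Hypothetical lookup against your CRM, enterprise search or web search.
    Returns a few text snippets relevant to the keywords."""
    raise NotImplementedError("Plug in your own grounding data source here.")


def answer_with_grounding(question: str) -> str:
    # Naive keyword extraction from the original question.
    keywords = [word for word in question.lower().split() if len(word) > 3]

    # Retrieve related context and fold it into the prompt.
    context = "\n".join(search_knowledge_base(keywords))

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; use whichever model you have access to
        messages=[
            {"role": "system",
             "content": "Answer using only the context provided. "
                        "If the context is insufficient, say so."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```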

Further, as subsequent questions are posed, the conversation history accumulates and is passed back to the LLM, expanding the context, continuing the conversation and refining the answers being provided.
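
Continuing the same sketch, the conversation history can simply be accumulated in the message list and sent back with every call. Again, this is illustrative only, under the same assumptions as above.

```python
# Sketch of accumulating conversation history (same assumptions as above).
# Each turn appends the grounded question and the model's reply, so later
# calls carry the full conversational context back to the LLM.
history = [
    {"role": "system",
     "content": "Answer using only the context supplied with each question."},
]


def ask(question: str) -> str:
    context = "\n".join(search_knowledge_base(question.lower().split()))
    history.append(
        {"role": "user",
         "content": f"Context:\n{context}\n\nQuestion: {question}"}
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=history,
    )
    answer = response.choices[0].message.content

    # Keep the model's reply so the next question builds on this exchange.
    history.append({"role": "assistant", "content": answer})
    return answer
```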

Below is a simplified diagram of how we use grounding to supercharge customer business applications and chatbots, producing custom “Copilots”. This, plus effective prompt engineering, can help reduce or eliminate OpenAI hallucinations, or alternatively be used to provide a grounded, cited answer by default and only offer a truly generated response where no grounded context is available.
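
One way to encode that “grounded and cited by default” behaviour is in the system prompt itself. The wording below is a sketch rather than a prescribed Ascent prompt; you would tune it to your own application.

```python
# Illustrative system prompt: cited, grounded answers by default, with a
# clearly labelled fallback when no grounding context is available.
GROUNDED_SYSTEM_PROMPT = """\
You are a Copilot for our business application.
Prefer the context supplied with each question. When it contains the answer,
use it and cite the snippet(s) you relied on.
If the context does not contain the answer, say that no grounded source was
found, then you may offer a clearly labelled, generated answer instead.
Never invent citations.
"""
```

Swapping this into the system message of the earlier sketches changes the chatbot’s default behaviour without touching the retrieval code.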

The future potential of OpenAI grounding holds tremendous promise, shaping a world where AI seamlessly integrates into various sectors, ultimately benefiting humanity.

Murray Foxcroft

CTO

Ascent

Murray Foxcroft is a highly experienced and visionary technology leader with a deep understanding of the intersection between technology, business, and innovation. As the Chief Technology Officer at Ascent, Murray leverages his extensive expertise in strategising, architecting, developing, and delivering projects across a diverse range of organisations and industries to continuously improve Ascent’s outlook on the latest and greatest technologies for its customers.

Let’s get started.

We help customers build game-changing products, deliver pivotal data and software projects and build strong internal teams. Got a challenge in mind? We’re ready when you are.

Get In Touch
