
Deep Dive Into Generative AI & LLM
with Ben Lorica of Gradient Flow

Ben Lorica of Gradient Flow is a working data scientist, advisor to several successful AI companies, and chair of prestigious AI conferences. He is among the best and brightest in the Large Language Model (LLM) and Generative AI space. At the Senzing 2023 Global User Conference, Ben discussed trends in generative AI and LLMs and how entity resolution is a critical enabler for more accurate and useful AI applications.

In this video, Ben crystallizes some of the best practices and most significant trends in the LLM space. He discusses Stanford’s first formal academic study on the impact of LLMs, which showed a 14% increase in productivity for 5,000 customer support agents at a Fortune 500 company.


First: Get started – and get started NOW. These LLM Apps and LLM Models are AMAZING… Data quality and metadata are crucial for RAG, and using Senzing can significantly enhance your RAG applications.

– BEN LORICA

Open-Source LLMs: The Lowdown

Ben’s talk goes deep into open-source LLMs because he thinks they aren’t getting the attention they deserve today. He argues that open-source LLMs from companies including Meta and Mistral, along with LLM orchestration frameworks like Haystack, hold great potential and value for enterprises developing AI applications. Additionally, Ben shares how to work with them, and which open-source LLM he recommends and uses.

In this video, Ben covers:

• Commercial LLMs like OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude.
• Why using retrieval augmented generation (RAG) to build custom LLMs is so popular.
• How applications with fleets of custom LLMs are currently being deployed.
• How Senzing® entity resolution is the secret ingredient to successful RAG apps.
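To make the RAG-plus-entity-resolution pattern above concrete, here is a minimal, illustrative sketch in Python. Everything in it is an assumption for illustration: the toy records, the naive email-based matching, and the keyword-overlap retrieval are stand-ins for a real pipeline, which would use an entity resolution engine such as Senzing, an embedding model, and a vector store.

```python
# Toy sketch of a RAG pipeline with an entity-resolution step before
# retrieval. All names and data are illustrative; real systems would use
# an entity resolution engine (e.g., Senzing), embeddings, and a vector
# store instead of this naive matching.

def resolve_entities(records):
    """Merge records that refer to the same entity (naive: match on email)."""
    canonical = {}
    for rec in records:
        key = rec["email"].lower()
        merged = canonical.setdefault(key, {"names": set(), "facts": []})
        merged["names"].add(rec["name"])
        merged["facts"].extend(rec["facts"])
    return canonical

def retrieve(canonical, query):
    """Score each resolved entity by keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = []
    for key, ent in canonical.items():
        text = " ".join(ent["facts"]).lower()
        score = sum(1 for t in terms if t in text)
        if score:
            scored.append((score, key, ent))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [ent for _, _, ent in scored]

def build_prompt(query, entities):
    """Assemble the augmented prompt the LLM would receive."""
    context = "\n".join(
        f"- {'/'.join(sorted(e['names']))}: {'; '.join(e['facts'])}"
        for e in entities
    )
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Two raw records that are really one person; resolution merges them so
# the retrieved context is complete rather than fragmented.
records = [
    {"name": "Bob Smith", "email": "bob@x.com",
     "facts": ["opened a support ticket"]},
    {"name": "Robert Smith", "email": "BOB@x.com",
     "facts": ["renewed his contract"]},
]
canonical = resolve_entities(records)
hits = retrieve(canonical, "support ticket")
prompt = build_prompt("What did Bob Smith do?", hits)
```

Without the resolution step, the two records would surface as separate, incomplete context chunks; after it, the prompt carries one consolidated view of the entity, which is the point Ben makes about entity resolution improving RAG quality.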

To me, the end state [of generative AI apps] would be something like Senzing. What do I mean by that? I want an app that absorbs data in real time and produces responses within milliseconds. I also want an application that has all the enterprise features that I need: security, privacy, encryption. The other thing about Senzing that’s quite cool is it’s principle based… It can take in new data, and that new data is reflected in the application without any need for retraining or tuning. Long story short: I want my generative AI app to be like Senzing when it matures and grows up.

– BEN LORICA, GRADIENT FLOW

For more details on how Senzing entity resolution enables successful RAG implementations, read Ben’s Gradient Flow article Entity Resolution: Insights and Implications for AI Applications. The article covers nine pillars of advanced entity resolution systems that are essential for enterprise-level, dependable AI applications: scalability, real-time processing and inference, explainability, privacy features, principle-based resolution, sequence neutrality, support for global languages, robustness and auditability.

Generative AI & LLM Video Highlights

Here are some quick links to questions Ben addresses in his deep dive into generative AI and LLMs:

• 4:41 Why consider using open-source LLMs?
• 5:33 What are the difficulties of deploying open-source LLM models?
• 9:31 What’s the best way to fine-tune your LLM model?
• 12:46 When do you need to deploy a fleet of custom LLMs?
• 15:07 When would you augment your LLM with external data?
• 16:25 Why use retrieval augmented generation (RAG) for your LLM?
• 21:34 Why are data quality, metadata and chunking important for RAG?
• 22:10 How does Senzing help with RAG application development?
• 23:08 How do you use RAG with a custom fleet of LLMs?
• 25:52 What are the biggest generative AI implementation challenges?

If you’re serious about getting your generative AI and LLM strategy right, schedule a call to meet with one of our entity resolution experts. We’ll discuss how Senzing entity resolution can be a critical enabler for the successful development and deployment of your enterprise AI applications and LLMs.
