Michael DeBellis

Integrating Large Language Models and Knowledge Graphs to Implement Retrieval Augmented Generation (RAG)

Two of the biggest issues with using Large Language Models (LLMs) in mission-critical domains such as medicine are hallucinations and black-box reasoning. One way to address both issues is with an architecture known as Retrieval Augmented Generation (RAG). A RAG architecture replaces the broad but shallow knowledge of an LLM with a deep but narrow knowledge base focused on a specific domain. When using a RAG architecture, the system provides a level of certainty about each answer based on the semantic distance between the question and the relevant documents in the knowledge base. Both the question and the documents are encoded as vectors, and the semantic distance is computed as the distance between those vectors. If no text in the knowledge base falls within the required threshold, the RAG system returns a predefined answer saying that it can't answer the question. This prevents hallucinations. In addition, if one or more documents are within the required semantic distance, those documents are returned with the answer. This eliminates the problem of black-box reasoning.
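To make the retrieval step concrete, here is a minimal sketch of the distance-and-threshold logic in Python. It uses TF-IDF vectors from scikit-learn as a stand-in for a real embedding model (a production RAG system would use a sentence-embedding model and pass the retrieved passages to the LLM as context). The documents, threshold value, and function names are illustrative assumptions, not part of our implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy document collection standing in for a domain knowledge base.
documents = [
    "Dengue fever symptoms include high fever, severe headache, and joint pain.",
    "Malaria is transmitted by Anopheles mosquitoes and causes cyclical fevers.",
]

# Illustrative cutoff: similarity scores below this count as "no answer".
THRESHOLD = 0.2

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(question: str):
    """Return (passage, score) pairs above THRESHOLD, best first."""
    q_vector = vectorizer.transform([question])
    scores = cosine_similarity(q_vector, doc_vectors)[0]
    ranked = np.argsort(scores)[::-1]
    return [(documents[i], float(scores[i])) for i in ranked
            if scores[i] >= THRESHOLD]

question = "What are the symptoms of dengue?"
hits = retrieve(question)
if not hits:
    # Nothing close enough: refuse rather than let the LLM hallucinate.
    print("I can't answer that question from my knowledge base.")
else:
    # The matching passages are returned alongside the answer, so the
    # user can see exactly which sources the response is grounded in.
    for passage, score in hits:
        print(f"{score:.2f}  {passage}")
```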


Typically, RAG systems are implemented using vector databases. In our project, we implemented a RAG system using a knowledge graph, taking advantage of new technology from AllegroGraph that integrates with ChatGPT. This gives the user many additional capabilities to explore the knowledge graph and find related information. This work is described in a paper we wrote for the journal Applied Ontology. In addition, I developed a presentation describing our work for a workshop on LLM and ontology integration at FOIS 2024. The recording below was created for the workshop and describes the project to date (July 2024).
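For readers curious what this looks like in practice, below is a rough sketch of invoking AllegroGraph's LLM "magic predicates" from SPARQL via the agraph-python client. The repository name, vector store name, question, credentials, and score/top-N values are placeholders, and the llm: namespace and argument order follow Franz's published examples for AllegroGraph 8; check the documentation for your server version, as this is illustrative rather than a description of our exact implementation.

```python
from franz.openrdf.connect import ag_connect

# Placeholder connection details -- adjust for your AllegroGraph server.
with ag_connect('medical-kg', host='localhost', port=10035,
                user='user', password='password') as conn:
    # llm:askMyDocuments embeds the question, retrieves the nearest
    # passages from the named vector store, and has the LLM answer
    # from those passages only, returning a citation with each answer.
    query = """
        PREFIX llm: <http://franz.com/ns/allegrograph/8.0.0/llm/>
        SELECT ?response ?score ?citation WHERE {
          (?response ?score ?citation ?content)
            llm:askMyDocuments ("What are the symptoms of dengue fever?"
                                "medical-docs"  # vector store name
                                5               # top-N passages to retrieve
                                0.8) .          # minimum similarity score
        }
    """
    with conn.executeTupleQuery(query) as result:
        for bindings in result:
            print(bindings.getValue('score'),
                  bindings.getValue('citation'))
            print(bindings.getValue('response'))
```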
[Video: recording of the FOIS 2024 workshop presentation]