Take all of the documentation, throw it into a database, and we’re done with AI, right? Not so fast.
Large language models (LLMs) don’t know everything, and retrieval-augmented generation, or RAG, fills in the knowledge gaps with just-in-time retrieval of data from a database. Technical limitations and challenges loom large here, and there are also plenty of difficulties carried over from the world of humans into the world of LLMs.
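To make the just-in-time retrieval idea concrete, here is a minimal sketch of the RAG loop: score documents against the query, pick the best match, and splice it into the prompt. Everything here is illustrative, not from the talk: real systems use vector embeddings and an actual LLM call, while this toy uses plain word overlap and a placeholder prompt so it stays self-contained.

```python
# Toy RAG loop: retrieve the most relevant document, then ground the
# prompt in it. Word-overlap scoring stands in for embedding similarity.

def score(query: str, doc: str) -> int:
    """Count query words that also appear in the document (toy relevance)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document with the highest overlap score."""
    return max(docs, key=lambda doc: score(query, doc))

def build_prompt(query: str, context: str) -> str:
    """Instruct the model to answer only from the retrieved context."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The API rate limit is 100 requests per minute per key.",
    "Deployments run on Kubernetes with a blue-green strategy.",
]
query = "What is the API rate limit?"
prompt = build_prompt(query, retrieve(query, docs))
```

The prompt would then be sent to an LLM; the retrieved context is what lets the model answer questions its training data never covered.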
Come on a RAG journey with me as I recount some of the roadblocks my team faced as we built a product with RAG as a core component. Learn about the available technologies, how to build out a RAG stack, and how to avoid bringing human complexity into an already complex technical system.
Presentation
Saturday, October 4th (time TBD)
Lil Tex