Large language models (LLMs) can give us inaccurate or out-of-date answers. But if we provide additional context along with our query, answering becomes much easier for the LLM: since the answer is already included in the context, the model only needs to extract and summarise the relevant text.
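This retrieve-then-augment idea can be sketched in a few lines. The snippet below is illustrative only: retrieval is naive keyword overlap (real systems typically use vector embeddings), and the LLM call itself is left out — we only build the augmented prompt that would be sent to the model.

```python
def retrieve(query: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the query (toy retrieval)."""
    q_words = set(query.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(query: str, context: str) -> str:
    """Prepend the retrieved context so the model can extract the answer from it."""
    return (
        "Answer the question using only the context below.\n\n"
        f"Context: {context}\n\nQuestion: {query}"
    )

docs = [
    "The Eiffel Tower is 330 metres tall.",
    "Python was created by Guido van Rossum.",
]
query = "How tall is the Eiffel Tower?"
prompt = build_prompt(query, retrieve(query, docs))
```

Because the relevant fact is embedded in the prompt, the model's job reduces to extraction and summarisation rather than recall.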