I tried the example mentioned in the documentation:
Code: Select all
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

retriever = ...  # Your retriever
llm = ChatOpenAI()

system_prompt = (
    "Use the given context to answer the question. "
    "If you don't know the answer, say you don't know. "
    "Use three sentence maximum and keep the answer concise. "
    "Context: {context}"
)

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        ("human", "{input}"),
    ]
)

question_answer_chain = create_stuff_documents_chain(llm, prompt)
chain = create_retrieval_chain(retriever, question_answer_chain)

chain.invoke({"input": query})
I am using langchain version 1.0.3.
More details here: https://stackoverflow.com/questions/798 ... eprecation