Problem

When I run the ping tool by itself, it works 100% of the time. The output below shows the agent calling the tool with the correct server name, yet the executor still prints "is not a valid tool".

Output:
root@7c9a30701184:/app# python llm_toolchain.py
> Entering new AgentExecutor chain...
Assistant: Do I need to use a tool? Yes.
Action: ping_server(tool_input='localhost', callbacks='Callbacks' = None)
Action Input: localhostping_server(tool_input='localhost', callbacks='Callbacks' = None) is not a valid tool, try one of [ping_server].
Thought: Do I need to use a tool? Yes
Action: ping_server(tool_input='localhost', callbacks='Callbacks' = None)
Action Input: localhostping_server(tool_input='localhost', callbacks='Callbacks' = None) is not a valid tool, try one of [ping_server].Great! Let's get started. You've asked me if the localhost server is up. To answer this question, I need to use a tool. Here's what I'll do:
Action: ping_server(tool_input='localhost', callbacks='Callbacks' = None)
Please provide the output of the tool so I can observe the result.Invalid Format: Missing 'Action Input:' after 'Action:'Great, let's get started! You've asked me if the localhost server is up. To answer this question, I need to use a tool. Here's what I'll do:
Action: ping_server(tool_input='localhost', callbacks='Callbacks' = None)
import requests
from langchain import hub
from langchain.agents import Tool, AgentExecutor, create_react_agent
from langchain_ollama.llms import OllamaLLM
from langchain.memory import ConversationBufferWindowMemory
from pydantic import BaseModel, Field
from langchain.tools import tool


# ----------------------------------------------------
# Ping Tool
# ----------------------------------------------------
class PingServer(BaseModel):
    serverName: str = Field(description="ServerName")


@tool("ping_server", args_schema=PingServer, return_direct=False)
def ping_server(serverName: str) -> str:
    '''Tests a server to see if it is online or available. usage: ping_server("server1")'''
    import os
    response = os.system(f"ping -c 1 {serverName}")
    if response == 0:
        result = f"{serverName} is up!"
    else:
        result = f"{serverName} is down!"
    return {"text": result}


# ----------------------------------------------------
# Update chat history
# ----------------------------------------------------
def append_chat_history(input, response):
    chat_history.save_context({"input": input}, {"output": response})


def invoke(input):
    msg = {
        "input": input,
        "chat_history": chat_history.load_memory_variables({}),
    }
    print(f"Input: {msg}")
    response = agent_executor.invoke(msg)
    print(f"Response: {response}")
    append_chat_history(response["input"], response["output"])
    print(f"History: {chat_history.load_memory_variables({})}")


# ----------------------------------------------------
# Define Tool for agent
# ----------------------------------------------------
tools = [
    Tool(
        name="ping_server",
        func=ping_server,
        description="Useful when checking if a server is online or not. Input: ServerName",
    ),
]

prompt = hub.pull("hwchase17/react-chat")
chat_history = ConversationBufferWindowMemory(k=10)
llm = OllamaLLM(
    model="llama2",  # you can use a stronger model like Llama3 or Mistral, I don't have a lot of VRAM
    keep_alive=-1,  # keep the model loaded indefinitely
    base_url="http://ollama:11434",  # use the local ollama server
)
agent = create_react_agent(llm, tools, prompt, stop_sequence=True)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=2,
    handle_parsing_errors=True,
)

invoke("is the localhost server up?")
I have gone through the documentation and redefined the tool in several different ways.
I have also tried other known-working examples.
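Two things in the snippet look like plausible culprits (an educated guess, not a confirmed fix). First, the `@tool`-decorated `ping_server` is already a LangChain tool, and wrapping it a second time in `Tool(func=ping_server, ...)` means the tool object's repr, `ping_server(tool_input='localhost', callbacks=...)`, leaks into the prompt; the model then copies that whole repr into its `Action:` line, which fails the name lookup against `[ping_server]`. Second, the function is annotated `-> str` but returns a dict, and ReAct observations should be plain strings. A minimal string-returning version of the helper, using `subprocess` instead of `os.system` so the server name is never interpolated into a shell command:

```python
import subprocess


def ping_server(server_name: str) -> str:
    """Ping a host once and report the result as a plain string."""
    try:
        completed = subprocess.run(
            ["ping", "-c", "1", server_name],  # argument list: no shell injection
            capture_output=True,               # keep ping's output out of the agent log
            timeout=5,
        )
        reachable = completed.returncode == 0
    except (subprocess.TimeoutExpired, FileNotFoundError):
        # Treat a hung ping or a missing ping binary as "down"
        reachable = False
    return f"{server_name} is up!" if reachable else f"{server_name} is down!"
```

With the `@tool("ping_server", args_schema=PingServer)` decorator applied on top of a function like this, it should be enough to pass `tools = [ping_server]` directly to `create_react_agent` and `AgentExecutor` instead of rebuilding the list with `Tool(...)` (again, an assumption based on the decorator already producing a usable tool object, not something I have verified against this exact setup).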