DeepResearchTool
autogen.tools.experimental.DeepResearchTool
Bases: Tool
A tool that delegates a web research task to sub-teams of agents.
Initialize the DeepResearchTool.
| PARAMETER | DESCRIPTION |
|---|---|
| `llm_config` | The LLM configuration. |
| `max_web_steps` | The maximum number of web steps. Defaults to 30. |
Source code in autogen/tools/experimental/deep_research/deep_research.py
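A minimal construction sketch, assuming an OpenAI-style config entry; the model name and environment variable below are illustrative, not prescribed by the tool:

```python
import os

from autogen.tools.experimental import DeepResearchTool

# Illustrative LLM configuration; substitute whatever provider/model you use.
llm_config = {
    "config_list": [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}],
}

# max_web_steps caps how many web-browsing steps the research sub-team may take.
deep_research_tool = DeepResearchTool(llm_config=llm_config, max_web_steps=30)
```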
tool_schema property
Get the schema for the tool.
This is the preferred way of handling function calls with OpenAI and compatible frameworks.
function_schema property
Get the schema for the function.
This is the old way of handling function calls with OpenAI and compatible frameworks. It is provided for backward compatibility.
realtime_tool_schema property
Get the schema for the tool.
This is the preferred way of handling function calls with OpenAI and compatible frameworks.
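As a rough inspection sketch (assuming `deep_research_tool` was constructed as above), the three schema properties can be read directly:

```python
# Preferred schema for current OpenAI-style tool calling.
print(deep_research_tool.tool_schema)

# Legacy function-call schema, kept for backward compatibility.
print(deep_research_tool.function_schema)

# Realtime variant of the tool schema.
print(deep_research_tool.realtime_tool_schema)
```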
ANSWER_CONFIRMED_PREFIX class-attribute instance-attribute
summarizer_agent instance-attribute
summarizer_agent = ConversableAgent(name='SummarizerAgent', system_message="You are an agent with the task of answering the question provided by the user. First you need to split the question into subquestions by calling the 'split_question_and_answer_subquestions' method. Then you need to synthesize the answer to the original question by combining the answers to the subquestions.", is_termination_msg=lambda x: x.get('content', '') and x.get('content', '').startswith(ANSWER_CONFIRMED_PREFIX), llm_config=llm_config, human_input_mode='NEVER')
critic_agent instance-attribute
critic_agent = ConversableAgent(name='CriticAgent', system_message="You are a critic agent responsible for evaluating the answer provided by the summarizer agent.\nYour task is to assess the quality of the answer based on its coherence, relevance, and completeness.\nProvide constructive feedback on how the answer can be improved.\nIf the answer is satisfactory, call the 'confirm_answer' method to end the task.\n", is_termination_msg=lambda x: x.get('content', '') and x.get('content', '').startswith(ANSWER_CONFIRMED_PREFIX), llm_config=llm_config, human_input_mode='NEVER')
SUBQUESTIONS_ANSWER_PREFIX class-attribute instance-attribute
register_for_llm
Registers the tool for use with a ConversableAgent's language model (LLM).
This method registers the tool so that it can be invoked by the agent during interactions with the language model.
| PARAMETER | DESCRIPTION |
|---|---|
| `agent` | The agent to which the tool will be registered. |
Source code in autogen/tools/tool.py
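A hedged usage sketch; the agent name and configuration below are placeholders, not part of the API:

```python
from autogen import ConversableAgent

# Agent whose LLM should be able to *propose* the deep-research call.
researcher = ConversableAgent(
    name="researcher",
    llm_config=llm_config,  # same illustrative config as in the constructor example
    human_input_mode="NEVER",
)

deep_research_tool.register_for_llm(researcher)
```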
register_for_execution
Registers the tool for direct execution by a ConversableAgent.
This method registers the tool so that it can be executed by the agent, typically outside of the context of an LLM interaction.
| PARAMETER | DESCRIPTION |
|---|---|
| `agent` | The agent to which the tool will be registered. |
Source code in autogen/tools/tool.py
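A companion sketch to the one above (names are again placeholders); the executing agent does not need an LLM of its own:

```python
from autogen import ConversableAgent

# Agent that will actually *execute* the proposed tool call.
executor = ConversableAgent(
    name="executor",
    human_input_mode="NEVER",
)

deep_research_tool.register_for_execution(executor)
```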
register_tool
Register a tool to be both proposed and executed by an agent.
Equivalent to calling both register_for_llm and register_for_execution with the same agent.
Note: This will not make the agent recommend and execute the call in one step. If the agent recommends the tool, it will need to be the next agent to speak in order to execute the tool.
| PARAMETER | DESCRIPTION |
|---|---|
| `agent` | The agent to which the tool will be registered. |
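A sketch of the combined registration (placeholder agent name); per the note above, proposal and execution still happen on separate turns:

```python
from autogen import ConversableAgent

assistant = ConversableAgent(
    name="assistant",
    llm_config=llm_config,  # illustrative config from the constructor example
    human_input_mode="NEVER",
)

# Registers the tool for both LLM proposal and execution on the same agent.
deep_research_tool.register_tool(assistant)
```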