AgentBuilder
autogen.agentchat.contrib.captainagent.AgentBuilder #
AgentBuilder(config_file_or_env='OAI_CONFIG_LIST', config_file_location='', builder_model=[], agent_model=[], builder_model_tags=[], agent_model_tags=[], max_agents=5)
AgentBuilder helps users build an automatic task-solving process powered by a multi-agent system. Specifically, the building pipeline consists of two steps: initialize and build.
(These APIs are experimental and may change in the future.)
PARAMETER | DESCRIPTION |
---|---|
config_file_or_env | Path to the config file or name of the environment variable containing the OpenAI API configurations. Defaults to "OAI_CONFIG_LIST". |
config_file_location | Location of the config file if not in the current directory. Defaults to "". |
builder_model | Model identifier(s) to use as the builder/manager model that coordinates agent creation. Can be a string or list of strings. Filters the config list to match these models. Defaults to []. |
agent_model | Model identifier(s) to use for the generated participant agents. Can be a string or list of strings. Defaults to []. |
builder_model_tags | Tags to filter which models from the config can be used as builder models. Defaults to []. |
agent_model_tags | Tags to filter which models from the config can be used as agent models. Defaults to []. |
max_agents | Maximum number of agents to create for each task. Defaults to 5. |
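The `builder_model` and `agent_model` parameters filter the loaded config list down to the requested models. A minimal sketch of that filtering, using an illustrative OAI_CONFIG_LIST-style structure (the keys and placeholder values are assumptions for illustration, not verified against the loader):

```python
# Illustrative OAI_CONFIG_LIST-style configuration (placeholder values).
config_list = [
    {"model": "gpt-4", "api_key": "sk-..."},
    {"model": "gpt-3.5-turbo", "api_key": "sk-...", "tags": ["cheap"]},
]

# AgentBuilder narrows this list so the builder/manager only uses the
# requested models; conceptually the filter looks like this:
builder_model = ["gpt-4"]
builder_configs = [c for c in config_list if c["model"] in builder_model]
```

Tag-based filtering via `builder_model_tags` / `agent_model_tags` works the same way, matching on the `tags` entries instead of `model`.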
Source code in autogen/agentchat/contrib/captainagent/agent_builder.py
DEFAULT_PROXY_AUTO_REPLY class-attribute instance-attribute #
DEFAULT_PROXY_AUTO_REPLY = 'There is no code from the last 1 message for me to execute. Group chat manager should let other participants to continue the conversation. If the group chat manager want to end the conversation, you should let other participant reply me only with "TERMINATE"'
GROUP_CHAT_DESCRIPTION class-attribute instance-attribute #
GROUP_CHAT_DESCRIPTION = ' # Group chat instruction\nYou are now working in a group chat with different expert and a group chat manager.\nYou should refer to the previous message from other participant members or yourself, follow their topic and reply to them.\n\n**Your role is**: {name}\nGroup chat members: {members}{user_proxy_desc}\n\nWhen the task is complete and the result has been carefully verified, after obtaining agreement from the other members, you can end the conversation by replying only with "TERMINATE".\n\n# Your profile\n{sys_msg}\n'
DEFAULT_DESCRIPTION class-attribute instance-attribute #
DEFAULT_DESCRIPTION = "## Your role\n[Complete this part with expert's name and skill description]\n\n## Task and skill instructions\n- [Complete this part with task description]\n- [Complete this part with skill description]\n- [(Optional) Complete this part with other information]\n"
CODING_AND_TASK_SKILL_INSTRUCTION class-attribute instance-attribute #
CODING_AND_TASK_SKILL_INSTRUCTION = "## Useful instructions for task-solving\n- Solve the task step by step if you need to.\n- When you find an answer, verify the answer carefully. Include verifiable evidence with possible test case in your response if possible.\n- All your reply should be based on the provided facts.\n\n## How to verify?\n**You have to keep believing that everyone else's answers are wrong until they provide clear enough evidence.**\n- Verifying with step-by-step backward reasoning.\n- Write test cases according to the general task.\n\n## How to use code?\n- Suggest python code (in a python coding block) or shell script (in a sh coding block) for the Computer_terminal to execute.\n- If missing python packages, you can install the package by suggesting a `pip install` code in the ```sh ... ``` block.\n- When using code, you must indicate the script type in the coding block.\n- Do not the coding block which requires users to modify.\n- Do not suggest a coding block if it's not intended to be executed by the Computer_terminal.\n- The Computer_terminal cannot modify your code.\n- **Use 'print' function for the output when relevant**.\n- Check the execution result returned by the Computer_terminal.\n- Do not ask Computer_terminal to copy and paste the result.\n- If the result indicates there is an error, fix the error and output the code again. "
CODING_PROMPT class-attribute instance-attribute #
CODING_PROMPT = 'Does the following task need programming (i.e., access external API or tool by coding) to solve,\nor coding may help the following task become easier?\n\nTASK: {task}\n\nAnswer only YES or NO.\n'
AGENT_NAME_PROMPT class-attribute instance-attribute #
AGENT_NAME_PROMPT = '# Your task\nSuggest no more than {max_agents} experts with their name according to the following user requirement.\n\n## User requirement\n{task}\n\n# Task requirement\n- Expert\'s name should follow the format: [skill]_Expert.\n- Only reply the names of the experts, separated by ",".\n- If coding skills are required, they should be limited to Python and Shell.\nFor example: Python_Expert, Math_Expert, ... '
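AGENT_NAME_PROMPT asks the builder model to reply with expert names in `[skill]_Expert` format, separated by commas. A minimal sketch of recovering those names from such a reply (the reply string here is illustrative, not actual model output):

```python
# Illustrative builder-model reply following AGENT_NAME_PROMPT's format.
reply = "Python_Expert, Math_Expert, DataAnalysis_Expert"

# Split on commas and strip surrounding whitespace to get the names.
expert_names = [name.strip() for name in reply.split(",")]
```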
AGENT_SYS_MSG_PROMPT class-attribute instance-attribute #
AGENT_SYS_MSG_PROMPT = '# Your goal\n- According to the task and expert name, write a high-quality description for the expert by filling the given template.\n- Ensure that your description are clear and unambiguous, and include all necessary information.\n\n# Task\n{task}\n\n# Expert name\n{position}\n\n# Template\n{default_sys_msg}\n'
AGENT_DESCRIPTION_PROMPT class-attribute instance-attribute #
AGENT_DESCRIPTION_PROMPT = "# Your goal\nSummarize the following expert's description in a sentence.\n\n# Expert name\n{position}\n\n# Expert's description\n{sys_msg}\n"
AGENT_SEARCHING_PROMPT class-attribute instance-attribute #
AGENT_SEARCHING_PROMPT = '# Your goal\nConsidering the following task, what experts should be involved to the task?\n\n# TASK\n{task}\n\n# EXPERT LIST\n{agent_list}\n\n# Requirement\n- You should consider if the experts\' name and profile match the task.\n- Considering the effort, you should select less then {max_agents} experts; less is better.\n- Separate expert names by commas and use "_" instead of space. For example, Product_manager,Programmer\n- Only return the list of expert names.\n'
AGENT_SELECTION_PROMPT class-attribute instance-attribute #
AGENT_SELECTION_PROMPT = '# Your goal\nMatch roles in the role set to each expert in expert set.\n\n# Skill set\n{skills}\n\n# Expert pool (formatting with name: description)\n{expert_pool}\n\n# Answer format\n```json\n{{\n "skill_1 description": "expert_name: expert_description", // if there exists an expert that suitable for skill_1\n "skill_2 description": "None", // if there is no experts that suitable for skill_2\n ...\n}}\n```\n'
agent_model instance-attribute #
agent_model = agent_model if isinstance(agent_model, list) else [agent_model]
set_builder_model #
set_agent_model #
clear_agent #
Clear a specific agent by name.
PARAMETER | DESCRIPTION |
---|---|
agent_name | the name of the agent to clear. TYPE: |
recycle_endpoint | whether to recycle the endpoint server. If True, the endpoint is recycled when no remaining agent depends on it. |
Source code in autogen/agentchat/contrib/captainagent/agent_builder.py
clear_all_agents #
Clear all cached agents.
Source code in autogen/agentchat/contrib/captainagent/agent_builder.py
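A minimal sketch of clearing agents with the two methods above (the agent name is illustrative; `builder` is assumed to be an AgentBuilder instance that has already built agents):

```python
def reset_builder(builder, agent_name="Python_Expert"):
    """Remove one cached agent by name, then clear the rest.

    The agent name is a placeholder; recycle_endpoint=True frees the
    endpoint once no remaining agent depends on it.
    """
    builder.clear_agent(agent_name, recycle_endpoint=True)
    builder.clear_all_agents()
```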
build #
build(building_task, default_llm_config, coding=None, code_execution_config=None, use_oai_assistant=False, user_proxy=None, max_agents=None, **kwargs)
Auto build agents based on the building task.
PARAMETER | DESCRIPTION |
---|---|
building_task | instruction that helps the build manager (e.g., gpt-4) decide which agents should be built. TYPE: |
default_llm_config | specific configs for the LLM (e.g., config_list, seed, temperature, ...). |
coding | whether a user proxy (a code interpreter) should be added. |
code_execution_config | specific configs for the user proxy (e.g., last_n_messages, work_dir, ...). |
use_oai_assistant | whether to use the OpenAI Assistant API instead of self-constructed agents. |
user_proxy | a user proxy class used to replace the default user proxy. TYPE: |
max_agents | Maximum number of agents to create for the task. If None, uses the value from self.max_agents. TYPE: |
**kwargs | Additional arguments to pass to _build_agents. - agent_configs: Optional list of predefined agent configurations to use. TYPE: |
RETURNS | DESCRIPTION |
---|---|
agent_list | a list of agents. TYPE: |
cached_configs | cached configs. |
Source code in autogen/agentchat/contrib/captainagent/agent_builder.py
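A sketch of a typical `build()` call, following the signature above. It assumes autogen is installed and an OAI_CONFIG_LIST is available, which is not verified here; the model names and config values are illustrative.

```python
def build_team(building_task: str):
    """Sketch of AgentBuilder.build(); requires autogen and a valid
    OAI_CONFIG_LIST with working API credentials to actually run."""
    from autogen.agentchat.contrib.captainagent import AgentBuilder

    builder = AgentBuilder(
        config_file_or_env="OAI_CONFIG_LIST",
        builder_model=["gpt-4"],   # model(s) coordinating agent creation
        agent_model=["gpt-4"],     # model(s) for the generated agents
        max_agents=3,
    )
    # default_llm_config holds LLM settings such as temperature.
    agent_list, cached_configs = builder.build(
        building_task=building_task,
        default_llm_config={"temperature": 0},
        coding=True,  # add a user proxy that can execute code
    )
    return agent_list, cached_configs
```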
build_from_library #
build_from_library(building_task, library_path_or_json, default_llm_config, top_k=3, coding=None, code_execution_config=None, use_oai_assistant=False, embedding_model='all-mpnet-base-v2', user_proxy=None, **kwargs)
Build agents from a library. The library is a list of agent configs, each containing the name and system_message of an agent. A build manager decides which agents in the library should be involved in the task.
PARAMETER | DESCRIPTION |
---|---|
building_task | instruction that helps the build manager (e.g., gpt-4) decide which agents should be built. TYPE: |
library_path_or_json | path or JSON string config of the agent library. TYPE: |
default_llm_config | specific configs for the LLM (e.g., config_list, seed, temperature, ...). |
top_k | number of candidate agents to return from the library. TYPE: |
coding | whether a user proxy (a code interpreter) should be added. |
code_execution_config | specific configs for the user proxy (e.g., last_n_messages, work_dir, ...). |
use_oai_assistant | whether to use the OpenAI Assistant API instead of self-constructed agents. |
embedding_model | a Sentence-Transformers model used to compute embedding similarity when selecting agents from the library. For reference, chromadb uses "all-mpnet-base-v2" as its default. |
user_proxy | a user proxy class used to replace the default user proxy. TYPE: |
**kwargs | Additional arguments to pass to _build_agents. TYPE: |
RETURNS | DESCRIPTION |
---|---|
agent_list | a list of agents. TYPE: |
cached_configs | cached configs. |
Source code in autogen/agentchat/contrib/captainagent/agent_builder.py
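A sketch of a `build_from_library()` call, following the signature above. It assumes autogen and its embedding dependencies are installed and an OAI_CONFIG_LIST is available, which is not verified here; the library path and model names are placeholders.

```python
def build_team_from_library(building_task: str, library_path: str):
    """Sketch of AgentBuilder.build_from_library(); the library file is
    a list of agent configs with name and system_message fields."""
    from autogen.agentchat.contrib.captainagent import AgentBuilder

    builder = AgentBuilder(builder_model=["gpt-4"], agent_model=["gpt-4"])
    agent_list, cached_configs = builder.build_from_library(
        building_task=building_task,
        library_path_or_json=library_path,
        default_llm_config={"temperature": 0},
        top_k=3,  # retrieve up to 3 candidate agents from the library
    )
    return agent_list, cached_configs
```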
save #
Save building configs. If the filepath is not specified, this function creates a filename by hashing the building_task string with MD5, adds a "save_config_" prefix, and saves the config to that local path.
PARAMETER | DESCRIPTION |
---|---|
filepath | save path. |
RETURNS | DESCRIPTION |
---|---|
filepath | the path where the config was saved. |
Source code in autogen/agentchat/contrib/captainagent/agent_builder.py
load #
Load building configs and call the build function to complete building without calling an online LLM API.
PARAMETER | DESCRIPTION |
---|---|
filepath | path to the saved config file. |
config_json | JSON string of the saved config. |
use_oai_assistant | whether to use the OpenAI Assistant API instead of self-constructed agents. |
**kwargs | Additional arguments to pass to _build_agents: - code_execution_config (Optional[dict[str, Any]]): If provided, overrides the code execution configuration from the loaded config. TYPE: |
RETURNS | DESCRIPTION |
---|---|
agent_list | a list of agents. TYPE: |
cached_configs | cached configs. |
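`save()` and `load()` together let a built team be persisted and restored without repeating the builder-LLM calls. A sketch of that round trip, assuming `builder` is an AgentBuilder instance whose team has already been built (the filepath is illustrative):

```python
def save_and_reload(builder, filepath="./save_config_example.json"):
    """Persist the cached building configs, then rebuild the agents
    from disk. load() skips the online builder-LLM step entirely."""
    saved_path = builder.save(filepath)  # returns the path it wrote
    agent_list, cached_configs = builder.load(saved_path)
    return agent_list, cached_configs
```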