
Releases: microsoft/autogen

v0.4.3

22 Jan 16:14
da1c2bf

What's new

This is the first release since 0.4.0 with significant new features! We look forward to hearing feedback and suggestions from the community.

Chat completion model cache

One of the big missing features from 0.2 was the ability to seamlessly cache model client completions. This release adds ChatCompletionCache which can wrap any other ChatCompletionClient and cache completions.

There is a CacheStore interface to allow for easy implementation of new caching backends. The example below uses the DiskCacheStore implementation, backed by diskcache:

import asyncio
import tempfile

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.models.cache import ChatCompletionCache, CHAT_CACHE_VALUE_TYPE
from autogen_ext.cache_store.diskcache import DiskCacheStore
from diskcache import Cache


async def main():
    with tempfile.TemporaryDirectory() as tmpdirname:
        openai_model_client = OpenAIChatCompletionClient(model="gpt-4o")

        cache_store = DiskCacheStore[CHAT_CACHE_VALUE_TYPE](Cache(tmpdirname))
        cache_client = ChatCompletionCache(openai_model_client, cache_store)

        response = await cache_client.create([UserMessage(content="Hello, how are you?", source="user")])
        print(response)  # Should print response from OpenAI
        response = await cache_client.create([UserMessage(content="Hello, how are you?", source="user")])
        print(response)  # Should print cached response


asyncio.run(main())

ChatCompletionCache is not yet supported by the declarative component config; see the tracking issue for progress.

#4924 by @srjoglekar246
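The wrap-and-cache pattern described above can be illustrated with a small stand-alone sketch. This is a simplified stand-in, not the actual ChatCompletionCache implementation: the class names, the synchronous interface, and the get/set store methods here are assumptions for illustration. The wrapper keys a store by the serialized request and consults it before invoking the underlying client.

```python
import json


class InMemoryStore:
    """Toy stand-in for a cache store backend (the real interface may differ)."""

    def __init__(self):
        self._data = {}

    def get(self, key, default=None):
        return self._data.get(key, default)

    def set(self, key, value):
        self._data[key] = value


class CachingClient:
    """Wraps a 'client' callable and caches its completions by request content."""

    def __init__(self, client, store):
        self._client = client
        self._store = store
        self.calls = 0  # number of underlying client invocations, for demonstration

    def create(self, messages):
        # Serialize the request deterministically to use as a cache key.
        key = json.dumps(messages, sort_keys=True)
        cached = self._store.get(key)
        if cached is not None:
            return cached
        self.calls += 1
        response = self._client(messages)
        self._store.set(key, response)
        return response


def fake_model(messages):
    """Stands in for a real model client; echoes the last user message."""
    return "echo: " + messages[-1]["content"]


client = CachingClient(fake_model, InMemoryStore())
print(client.create([{"role": "user", "content": "Hello"}]))  # computed by the model
print(client.create([{"role": "user", "content": "Hello"}]))  # served from cache
print(client.calls)  # the underlying model was only called once
```

The real ChatCompletionCache follows the same shape but is async and keys on the full create arguments, so identical requests hit the cache while any change to the messages bypasses it.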

GraphRAG

This release adds support for GraphRAG as a tool that agents can call. You can find a sample showing how to use this integration here, along with docs for LocalSearchTool and GlobalSearchTool.

import asyncio
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.ui import Console
from autogen_ext.tools.graphrag import GlobalSearchTool
from autogen_agentchat.agents import AssistantAgent


async def main():
    # Initialize the OpenAI client
    openai_client = OpenAIChatCompletionClient(
        model="gpt-4o-mini",
    )

    # Set up global search tool
    global_tool = GlobalSearchTool.from_settings(settings_path="./settings.yaml")

    # Create assistant agent with the global search tool
    assistant_agent = AssistantAgent(
        name="search_assistant",
        tools=[global_tool],
        model_client=openai_client,
        system_message=(
            "You are a tool selector AI assistant using the GraphRAG framework. "
            "Your primary task is to determine the appropriate search tool to call based on the user's query. "
            "For broader, abstract questions requiring a comprehensive understanding of the dataset, call the 'global_search' function."
        ),
    )

    # Run a sample query
    query = "What is the overall sentiment of the community reports?"
    await Console(assistant_agent.run_stream(task=query))


if __name__ == "__main__":
    asyncio.run(main())

#4612 by @lspinheiro

Semantic Kernel model adapter

Semantic Kernel has an extensive collection of AI connectors. In this release we added support to adapt a Semantic Kernel AI Connector to an AutoGen ChatCompletionClient using the SKChatCompletionAdapter.

Currently this requires passing the kernel at create time, so it cannot yet be used with AssistantAgent directly. This will be fixed in a future release (#5144).

#4851 by @lspinheiro

AutoGen to Semantic Kernel tool adapter

We also added a tool adapter, but this time to allow AutoGen tools to be added to a Kernel, called KernelFunctionFromTool.

#4851 by @lspinheiro

Jupyter Code Executor

This release also brings forward Jupyter code executor functionality that we had in 0.2, as the JupyterCodeExecutor.

Please note that this currently only supports local execution and should be used with caution.

#4885 by @Leon0402

Memory

It's still early, but this release merges the interface for agent memory, which allows agents to enrich their context from a memory store and save information to it. The interface is defined in core, and AssistantAgent in AgentChat now accepts memory as a parameter. There is an initial example memory implementation which simply injects all memories as system messages for the agent. The intention is for the memory interface to be usable for both RAG and agent memory systems going forward.

#4438 by @victordibia, #5053 by @ekzhu
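The idea behind the example implementation — injecting stored memories as system messages ahead of the task — can be sketched in plain Python. The names here (ListMemory, enrich_context) are hypothetical stand-ins for illustration, not the actual autogen_core memory interface:

```python
class ListMemory:
    """Hypothetical minimal memory store: a flat list of text entries."""

    def __init__(self):
        self._entries = []

    def add(self, content: str) -> None:
        self._entries.append(content)

    def query(self) -> list:
        # A real implementation could filter by relevance (e.g., for RAG);
        # this sketch simply returns everything.
        return list(self._entries)


def enrich_context(memory: ListMemory, task_messages: list) -> list:
    """Prepend each memory entry as a system message before the task messages."""
    memory_messages = [{"role": "system", "content": m} for m in memory.query()]
    return memory_messages + task_messages


memory = ListMemory()
memory.add("The user prefers metric units.")
context = enrich_context(
    memory, [{"role": "user", "content": "How tall is Everest?"}]
)
print(context[0])  # {'role': 'system', 'content': 'The user prefers metric units.'}
```

A relevance-ranked query method in place of the return-everything query above is what would turn this same interface into a RAG-style memory.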

Declarative config

We're continuing to expand support for declarative configs throughout the framework. In this release, we've added support for termination conditions and base chat agents. Once this work is complete, you'll be able to configure an entire team of agents with a single config file and have it work seamlessly with AutoGen Studio. Stay tuned!
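To give a sense of the shape of a declarative config, here is a hypothetical serialized termination condition. The field names follow the general component config format, but the exact schema and values here are illustrative assumptions, not a guaranteed contract:

```json
{
  "provider": "autogen_agentchat.conditions.MaxMessageTermination",
  "component_type": "termination",
  "config": {
    "max_messages": 10
  }
}
```

The provider field names the class to instantiate and the config object carries its constructor arguments, which is what lets a single file describe a whole team declaratively.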

#4984, #5055 by @victordibia

Other

  • Add sources field to TextMentionTermination by @Leon0402 in #5106
  • Update gpt-4o model version to 2024-08-06 by @ekzhu in #5117

Bug fixes

  • Retry multiple times when M1 selects an invalid agent. Make agent sel… by @afourney in #5079
  • fix: normalize finish reason in CreateResult response by @ekzhu in #5085
  • Pass context between AssistantAgent for handoffs by @ekzhu in #5084
  • fix: ensure proper handling of structured output in OpenAI client and improve test coverage for structured output by @ekzhu in #5116
  • fix: use tool_calls field to detect tool calls in OpenAI client; add integration tests for OpenAI and Gemini by @ekzhu in #5122

Other changes

Read more

v0.4.2

16 Jan 00:07
  • Change async input strategy in order to remove unintentional and accidentally added GPL dependency (#5060)

Full Changelog: v0.4.1...v0.4.2

v0.4.1

13 Jan 23:50
cf8446b

What's Important

All Changes since v0.4.0

New Contributors

Full Changelog: v0.4.0...v0.4.1

v0.4.0

10 Jan 00:01
78ac9f8

What's Important

🎉 🎈 Our first stable release of v0.4! 🎈 🎉

To upgrade from v0.2, read the migration guide. For a basic setup:

pip install -U "autogen-agentchat" "autogen-ext[openai]"

You can refer to our updated README for more information about the new API.

Major Changes from v0.4.0.dev13

Change Log from v0.4.0.dev13: v0.4.0.dev13...v0.4.0

New Contributors to v0.4.0

❤️ Big thanks to all the contributors since the first preview version was open sourced. ❤️

Changes from v0.2.36

Read more

v0.4.0.dev13

30 Dec 22:32
fb1094d
Pre-release

What's new

  • An initial version of the migration guide is ready. Find it here! (#4765)
  • Model family is now available in the model client (#4856)

Breaking changes

  • Previously deprecated module paths have been removed (#4853)
  • SingleThreadedAgentRuntime.process_next is now blocking and has moved to be an internal API (#4855)

Fixes

Doc changes

  • Migration guide for 0.4 by @ekzhu in #4765
  • Clarify tool use in agent tutorial by @ekzhu in #4860
  • AgentChat tutorial update to include model context usage and langchain tool by @ekzhu in #4843
  • Add missing model context attribute by @Leon0402 in #4848

Other

New Contributors

Full Changelog: v0.4.0.dev12...v0.4.0.dev13

v0.4.0.dev12

27 Dec 20:09
d933b9a
Pre-release

Important Changes

  • run and run_stream now support a list of messages as task input.
  • Introduces the AgentEvent union type in AgentChat for all messages that are not meant to be consumed by other agents. Replace AgentMessage with the AgentEvent | ChatMessage union type in your code, e.g., in your custom selector function for SelectorGroupChat and in processing code for TaskResult.messages.
  • Introduces ToolCallSummaryMessage to ChatMessage for tool call results from agents. See the AssistantAgent documentation.
  • Introduces a ModelContext parameter for AssistantAgent, allowing use of BufferedChatCompletionContext to limit the context window size sent to the model.
  • Introduces ComponentConfig and adds a configuration loader for ChatCompletionClient. See Component Config.
  • Moved autogen_core.tools.PythonCodeExecutorTool to autogen_ext.tools.code_execution.PythonCodeExecutionTool.
  • Documentation updates.

Upcoming Changes

  • Deprecating @message_handler. Use @event or @rpc to annotate handlers instead. @message_handler will be kept with a deprecation warning until further notice. #4828
  • Token counting mechanism bug fixes #4719

New Contributors

Full Changelog: v0.4.0.dev11...v0.4.0.dev12

v0.2.40

15 Dec 06:11
3b4c017

What's Changed

New Contributors

Full Changelog: v0.2.39...v0.2.40

v0.4.0.dev11

12 Dec 06:21
6c1f638
Pre-release

Important Changes

  1. autogen_agentchat.agents.AssistantAgent behavior was updated in dev10: it no longer performs multiple rounds of tool calls -- only one round of tool calls, followed by an optional reflection step to provide a natural language response. Please see the API doc for AssistantAgent.
  2. Module renamings:
  • autogen_core.base --> autogen_core
  • autogen_core.application.SingleThreadedAgentRuntime --> autogen_core.SingleThreadedAgentRuntime.
  • autogen_core.application.logging --> autogen_core.logging.
  • autogen_core.components.* --> autogen_core.*, for models, model_context, tools, tool_agent and code_executor.
  • autogen_core.components.code_executor.LocalCommandLineCodeExecutor --> autogen_ext.code_executors.local.LocalCommandLineCodeExecutor
  • autogen_agentchat.task.Console --> autogen_agentchat.ui.Console.
  • autogen_agentchat.task.<termination condition> --> autogen_agentchat.conditions.<termination condition>.
  • autogen_ext.models gets divided into autogen_ext.models.openai, autogen_ext.models.replay.
  • autogen_ext.agents gets divided into separate submodules for each extension agent class.
  • autogen_ext.code_executors gets divided into separate submodules for each executor class.

New Contributors

Full Changelog: v0.4.0.dev9...v0.4.0.dev11

v0.2.39

25 Nov 21:49
ebb3e24

What's Changed

  • fix: GroupChatManager async run throws an exception if no eligible speaker by @leryor in #4283
  • Bugfix: Web surfer creating incomplete copy of messages by @Hedrekao in #4050

New Contributors

Full Changelog: v0.2.38...v0.2.39

v0.2.38

11 Nov 02:53
8a8fcd8

What's Changed

New Contributors

Full Changelog: v0.2.37...v0.2.38