How to Build a Multi-Agent AI App with AutoGen

Note

This notebook can be run on a Free Starter Workspace. To create a Free Starter Workspace, navigate to Start using the left nav. You can also use your existing Standard or Premium workspace with this Notebook.

Python Notebook Introduction

This Jupyter notebook demonstrates the use of various Python libraries for text processing, document loading, and vector embeddings. It also showcases the use of the OpenAI API for generating embeddings and SingleStoreDB for storing and retrieving documents.

The notebook is divided into several sections:

  1. Installation of Required Libraries: This section covers the installation of necessary libraries such as langchain_community, pyautogen, langchain_openai, langchain_text_splitters, and unstructured.

  2. Data Loading and Preparation: This section involves loading a markdown document from a URL and preparing it for further processing.

  3. Document Splitting and Embedding Generation: This section demonstrates how to split the loaded document into smaller parts and generate embeddings for each part using the OpenAI API.

  4. SingleStoreDB Setup: This section covers the setup of SingleStoreDB for storing and retrieving documents.

  5. Agent Setup and Group Chat Simulation: This section demonstrates the setup of various agents (like a boss, coder, product manager, and code reviewer) and simulates a group chat among them to solve a given problem.

  6. Chat Simulation: This section runs the chat simulation both without and with Retrieval-Augmented Generation (RAG).

Please ensure that you have the necessary API keys and environment variables set up before running this notebook.
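If you want to fail fast rather than partway through the notebook, a minimal check along these lines can run first (a sketch; the notebook itself sets OPENAI_API_KEY and SINGLESTOREDB_URL inline in later cells):

import os

# Warn early if the required credentials are not set yet
for var in ("OPENAI_API_KEY", "SINGLESTOREDB_URL"):
    if not os.environ.get(var):
        print(f"Warning: {var} is not set; set it before running the cells that use it.")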

In [1]:

# Check if the database is running on a shared tier
shared_tier_check = %sql show variables like 'is_shared_tier'

# If not on a shared tier, or if the shared tier is turned off, drop any existing database and create a new one
if not shared_tier_check or shared_tier_check[0][1] == 'OFF':
    %sql DROP DATABASE IF EXISTS autogen
    %sql CREATE DATABASE autogen

In [2]:

!pip install --quiet langchain_community pyautogen langchain_openai langchain_text_splitters unstructured

In [3]:

!pip install --quiet markdown

In [4]:

import requests

# Download an example markdown document (FLAML's Spark integration guide)
r = requests.get("https://raw.githubusercontent.com/microsoft/FLAML/main/website/docs/Examples/Integrate%20-%20Spark.md")
open('example.md', 'wb').write(r.content)
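To confirm the download succeeded, you can preview the start of the file (an optional sanity check, not part of the original notebook):

# Optional: preview the first few hundred characters of the downloaded document
with open('example.md', 'r') as f:
    print(f.read()[:300])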

In [5]:

from langchain_community.vectorstores import SingleStoreDB
from langchain_openai import OpenAIEmbeddings
from langchain_community.document_loaders import UnstructuredMarkdownLoader
from langchain_text_splitters import CharacterTextSplitter
from typing import Annotated, Dict, List
import os

# Load the markdown file downloaded above
loader = UnstructuredMarkdownLoader("./example.md")

# Replace with your own OpenAI API key
os.environ["OPENAI_API_KEY"] = "api-key"

data = loader.load()

# Split the document into chunks for embedding
text_splitter = CharacterTextSplitter()

docs = text_splitter.split_documents(data)

embeddings = OpenAIEmbeddings()

# Replace with your SingleStoreDB connection string
os.environ["SINGLESTOREDB_URL"] = "admin:pass@host:3306/db"

In [6]:

singlestore_db = SingleStoreDB.from_documents(
    docs,
    embeddings,
    table_name="notebook2",  # use a table with a custom name
)
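Before wiring the vector store into agents, it is worth confirming that retrieval returns sensible chunks (an optional sanity check, not part of the original notebook):

# Optional: verify that similarity search returns relevant chunks
hits = singlestore_db.similarity_search("How to use spark for parallel training in FLAML?", k=2)
for doc in hits:
    print(doc.page_content[:120], "...")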

In [7]:

!pip install --quiet "pyautogen[retrievechat]"

In [8]:

import autogen
from autogen.agentchat.contrib.retrieve_assistant_agent import RetrieveAssistantAgent
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent
from autogen import AssistantAgent, config_list_from_json

In [9]:

class SingleStoreRetrieveUserProxyAgent(RetrieveUserProxyAgent):
    def __init__(self, singlestore_db: SingleStoreDB, **kwargs):
        super().__init__(**kwargs)
        self.singlestore_db = singlestore_db

    def query_vector_db(
        self,
        query_texts: List[str],
        n_results: int = 10,
        search_string: str = "",
        **kwargs,
    ) -> Dict[str, List[List[str]]]:
        documents = []
        ids = []
        for query_index, query_text in enumerate(query_texts):
            searched_docs = self.singlestore_db.similarity_search(
                query=query_text,
                k=n_results,
            )
            # searched_docs is a list of documents; only their page_content is needed
            batch_documents = [doc.page_content for doc in searched_docs]
            documents.append(batch_documents)

            # Generate a unique ID for each document based on enumeration
            batch_ids = [f"{query_index}-{i}" for i in range(len(batch_documents))]
            ids.append(batch_ids)

        return {
            "ids": ids,
            "documents": documents,
        }

    def retrieve_docs(self, problem: str, n_results: int = 20, search_string: str = "", **kwargs):
        results = self.query_vector_db(
            query_texts=[problem],
            n_results=n_results,
            search_string=search_string,
            **kwargs,
        )

        self._results = results
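RetrieveUserProxyAgent consumes the stored results as parallel lists keyed by "ids" and "documents", one inner list per query. For a single query, query_vector_db therefore returns a dict shaped like the following sketch (values are illustrative):

# Illustrative only: the result shape for one query with three hits
example_result = {
    "ids": [["0-0", "0-1", "0-2"]],  # one list of IDs per query
    "documents": [["chunk 1 text", "chunk 2 text", "chunk 3 text"]],  # matching chunk texts
}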

In [10]:

import os

# Run generated code directly rather than inside a Docker container
os.environ["AUTOGEN_USE_DOCKER"] = "False"

In [11]:

llm_config = {
    "config_list": [{"model": "gpt-3.5-turbo", "api_key": os.environ["OPENAI_API_KEY"]}],
}


def termination_msg(x):
    # Treat any message ending in "TERMINATE" as the signal to stop the chat
    return isinstance(x, dict) and "TERMINATE" == str(x.get("content", ""))[-9:].upper()


boss = autogen.UserProxyAgent(
    name="Boss",
    is_termination_msg=termination_msg,
    human_input_mode="NEVER",
    code_execution_config=False,  # we don't want to execute code in this case.
    default_auto_reply="Reply `TERMINATE` if the task is done.",
    description="The boss who asks questions and gives tasks.",
)

boss_aid = SingleStoreRetrieveUserProxyAgent(
    name="Boss_Assistant",
    is_termination_msg=termination_msg,
    human_input_mode="NEVER",
    max_consecutive_auto_reply=3,
    retrieve_config={
        "task": "code",
    },
    code_execution_config=False,  # we don't want to execute code in this case.
    description="Assistant who has extra content retrieval power for solving difficult problems.",
    singlestore_db=singlestore_db,
)

coder = autogen.AssistantAgent(
    name="Senior_Python_Engineer",
    is_termination_msg=termination_msg,
    system_message="You are a senior python engineer. You provide python code to answer questions. Reply `TERMINATE` in the end when everything is done.",
    llm_config=llm_config,
    description="Senior Python Engineer who can write code to solve problems and answer questions.",
)

pm = autogen.AssistantAgent(
    name="Product_Manager",
    is_termination_msg=termination_msg,
    system_message="You are a product manager. Reply `TERMINATE` in the end when everything is done.",
    llm_config=llm_config,
    description="Product Manager who can design and plan the project.",
)

reviewer = autogen.AssistantAgent(
    name="Code_Reviewer",
    is_termination_msg=termination_msg,
    system_message="You are a code reviewer. Reply `TERMINATE` in the end when everything is done.",
    llm_config=llm_config,
    description="Code Reviewer who can review the code.",
)

PROBLEM = "How to use spark for parallel training in FLAML? Give me sample code."


def _reset_agents():
    boss.reset()
    boss_aid.reset()
    coder.reset()
    pm.reset()
    reviewer.reset()


def rag_chat():
    _reset_agents()
    groupchat = autogen.GroupChat(
        agents=[boss_aid, pm, coder, reviewer], messages=[], max_round=12, speaker_selection_method="round_robin"
    )
    manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

    # Start chatting with boss_aid, as this is the user proxy agent.
    boss_aid.initiate_chat(
        manager,
        problem=PROBLEM,
        n_results=3,
    )


def norag_chat():
    _reset_agents()
    groupchat = autogen.GroupChat(
        agents=[boss, pm, coder, reviewer],
        messages=[],
        max_round=12,
        speaker_selection_method="auto",
        allow_repeat_speaker=False,
    )
    manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

    # Start chatting with the boss, as this is the user proxy agent.
    boss.initiate_chat(
        manager,
        message=PROBLEM,
    )


def call_rag_chat():
    _reset_agents()

    # In this case, we have multiple user proxy agents and we don't initiate the chat
    # with the RAG user proxy agent. In order to use the RAG user proxy agent, we wrap
    # the RAG agents in a function and call it from the other agents.
    def retrieve_content(
        message: Annotated[
            str,
            "Refined message which keeps the original meaning and can be used to retrieve content for code generation and question answering.",
        ],
        n_results: Annotated[int, "number of results"] = 3,
    ) -> str:
        boss_aid.n_results = n_results  # Set the number of results to be retrieved.
        # Check if we need to update the context.
        update_context_case1, update_context_case2 = boss_aid._check_update_context(message)
        if (update_context_case1 or update_context_case2) and boss_aid.update_context:
            boss_aid.problem = message if not hasattr(boss_aid, "problem") else boss_aid.problem
            _, ret_msg = boss_aid._generate_retrieve_user_reply(message)
        else:
            ret_msg = boss_aid.generate_init_message(message, n_results=n_results)
        return ret_msg if ret_msg else message

    boss_aid.human_input_mode = "NEVER"  # Disable human input for boss_aid since it only retrieves content.

    for caller in [pm, coder, reviewer]:
        d_retrieve_content = caller.register_for_llm(
            description="retrieve content for code generation and question answering.", api_style="function"
        )(retrieve_content)

    for executor in [boss, pm]:
        executor.register_for_execution()(d_retrieve_content)

    groupchat = autogen.GroupChat(
        agents=[boss, pm, coder, reviewer],
        messages=[],
        max_round=12,
        speaker_selection_method="round_robin",
        allow_repeat_speaker=False,
    )

    manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

    # Start chatting with the boss, as this is the user proxy agent.
    boss.initiate_chat(
        manager,
        message=PROBLEM,
    )
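Before kicking off the group chats below, a quick retrieval check against the custom agent can catch configuration problems early (a minimal sketch, not part of the original notebook):

# Optional: confirm the custom agent retrieves chunks for the problem statement
results = boss_aid.query_vector_db(query_texts=[PROBLEM], n_results=2)
print(f"Retrieved {len(results['documents'][0])} chunks; IDs: {results['ids'][0]}")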

In [12]:

norag_chat()

In [13]:

rag_chat()

In [14]:

# Clean up: drop the database unless it is running on a shared tier
shared_tier_check = %sql show variables like 'is_shared_tier'
if not shared_tier_check or shared_tier_check[0][1] == 'OFF':
    %sql DROP DATABASE IF EXISTS autogen

Details


About this Template

Learn how to build a multi-agent group chat with RAG using AutoGen and SingleStore.


This Notebook can be run in Shared Tier, Standard, and Enterprise deployments.

Tags

starter, autogen, rag, multiagent, groupchat

License

This Notebook has been released under the Apache 2.0 open source license.
