Conversation Buffer Memory¶
The "base memory class" seen in the previous example is now put to use in a higher-level abstraction provided by LangChain:
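Before diving in, here is a minimal, dependency-free sketch of the idea (not LangChain's actual implementation; the class and method names are illustrative): a conversation buffer memory appends each exchange to an underlying message store and, when asked, renders the whole transcript as a single string for the prompt.

```python
# Illustrative sketch of a "conversation buffer memory": it keeps the full,
# unabridged transcript and renders it as one string (no summarization,
# no windowing).

class BufferMemorySketch:
    def __init__(self, human_prefix="Human", ai_prefix="AI"):
        self.messages = []          # the underlying "chat message history"
        self.human_prefix = human_prefix
        self.ai_prefix = ai_prefix

    def save_context(self, human_input, ai_output):
        # Each turn adds one human message and one AI message.
        self.messages.append((self.human_prefix, human_input))
        self.messages.append((self.ai_prefix, ai_output))

    def buffer(self):
        # The whole transcript, rendered for inclusion in a prompt.
        return "\n".join(f"{role}: {text}" for role, text in self.messages)

mem = BufferMemorySketch()
mem.save_context("Hello, how can I roast an apple?", "Preheat the oven to 350F...")
mem.save_context("Can I do it on a bonfire?", "Yes, wrap it in foil first.")
print(mem.buffer())
```

In the real classes below, the message store is backed by a Cassandra table, so the transcript survives across processes and sessions.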
In [1]:
from langchain.memory import CassandraChatMessageHistory
from langchain.memory import ConversationBufferMemory
In [2]:
from cqlsession import getCQLSession, getCQLKeyspace
astraSession = getCQLSession()
astraKeyspace = getCQLKeyspace()
In [3]:
message_history = CassandraChatMessageHistory(
    session_id='conversation-0123',
    session=astraSession,
    keyspace='langchain',
    ttl_seconds=3600,
)
message_history.clear()
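The `ttl_seconds=3600` argument gives every stored message a one-hour time-to-live. In Cassandra the expiry happens server-side via the row TTL; the following is only a hypothetical sketch (names and structure invented here) of the semantics, using an injected clock so it runs deterministically:

```python
import time

# Hypothetical sketch of TTL semantics: each stored message carries a
# timestamp, and reads skip entries older than ttl_seconds. (Cassandra
# actually expires the rows server-side; this only models the behavior.)

class TTLMessageStore:
    def __init__(self, ttl_seconds, clock=time.time):
        self.ttl_seconds = ttl_seconds
        self.clock = clock
        self._rows = []  # list of (timestamp, message)

    def add(self, message):
        self._rows.append((self.clock(), message))

    def messages(self):
        cutoff = self.clock() - self.ttl_seconds
        return [m for t, m in self._rows if t >= cutoff]

# Simulated clock so the example is deterministic:
now = [0.0]
store = TTLMessageStore(ttl_seconds=3600, clock=lambda: now[0])
store.add("old message")
now[0] = 4000.0           # more than an hour "later"
store.add("fresh message")
print(store.messages())   # the old message has expired
```

With a TTL in place, stale conversations clean themselves up instead of accumulating forever.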
Use in a ConversationChain¶
Create a Memory¶
The Cassandra message history is specified:
In [4]:
cassBuffMemory = ConversationBufferMemory(
    chat_memory=message_history,
)
Language model¶
Below is the logic to instantiate the LLM of choice. We choose to leave it in the notebooks for clarity.
In [5]:
from llm_choice import suggestLLMProvider

llmProvider = suggestLLMProvider()
# (Alternatively set llmProvider to 'VertexAI', 'OpenAI' ... manually if you have credentials)

if llmProvider == 'VertexAI':
    from langchain.llms import VertexAI
    llm = VertexAI()
    print('LLM from VertexAI')
elif llmProvider == 'OpenAI':
    from langchain.llms import OpenAI
    llm = OpenAI()
    print('LLM from OpenAI')
else:
    raise ValueError('Unknown LLM provider.')
LLM from OpenAI
Create a chain¶
As the conversation proceeds, the growing history of past exchanges finds its way automatically into the prompt that the LLM receives:
In [6]:
from langchain.chains import ConversationChain

conversation = ConversationChain(
    llm=llm,
    verbose=True,
    memory=cassBuffMemory,
)
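A rough, dependency-free sketch (not LangChain's actual internals; all names here are illustrative) of what one conversation turn does: load the memory buffer, splice it into the prompt template, call the LLM, and save the new exchange back to memory.

```python
# Illustrative sketch of a single conversation turn. A stub "LLM" records
# the prompts it receives so we can see the history growing.

TEMPLATE = (
    "The following is a friendly conversation between a human and an AI.\n"
    "Current conversation:\n"
    "{history}\n"
    "Human: {input}\n"
    "AI:"
)

history = []  # stands in for the persistent chat message history

def predict_sketch(llm_fn, user_input):
    # 1. render the buffer into the prompt, 2. call the LLM,
    # 3. write the new exchange back to memory.
    prompt = TEMPLATE.format(history="\n".join(history), input=user_input)
    answer = llm_fn(prompt)
    history.append(f"Human: {user_input}")
    history.append(f"AI: {answer}")
    return answer

seen_prompts = []
def stub_llm(prompt):
    seen_prompts.append(prompt)
    return "stub answer %d" % len(seen_prompts)

predict_sketch(stub_llm, "Hello, how can I roast an apple?")
predict_sketch(stub_llm, "Can I do it on a bonfire?")
# The second prompt automatically contains the whole first exchange.
```

This is exactly the pattern visible in the verbose traces below: each prompt carries the full "Current conversation" so far.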
In [7]:
conversation.predict(input="Hello, how can I roast an apple?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hello, how can I roast an apple?
AI:

> Finished chain.
Out[7]:
" Hi there! Roasting an apple is a simple and delicious way to enjoy the fruit. To roast an apple, you'll need to preheat your oven to 350 degrees Fahrenheit. Wash and core the apple, then place it in a baking dish. Sprinkle the apple with cinnamon, nutmeg, and a little sugar. Bake the apple for 25 minutes, or until the top is lightly browned. Enjoy!"
In [8]:
conversation.predict(input="Can I do it on a bonfire?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hello, how can I roast an apple?
AI: Hi there! Roasting an apple is a simple and delicious way to enjoy the fruit. To roast an apple, you'll need to preheat your oven to 350 degrees Fahrenheit. Wash and core the apple, then place it in a baking dish. Sprinkle the apple with cinnamon, nutmeg, and a little sugar. Bake the apple for 25 minutes, or until the top is lightly browned. Enjoy!
Human: Can I do it on a bonfire?
AI:

> Finished chain.
Out[8]:
" Unfortunately, I don't have enough information to answer your question. However, if you're looking for a way to roast an apple over an open flame, you may want to consider using foil to wrap the apple and then place it in the bonfire."
In [9]:
conversation.predict(input="What about a microwave, would the apple taste good?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hello, how can I roast an apple?
AI: Hi there! Roasting an apple is a simple and delicious way to enjoy the fruit. To roast an apple, you'll need to preheat your oven to 350 degrees Fahrenheit. Wash and core the apple, then place it in a baking dish. Sprinkle the apple with cinnamon, nutmeg, and a little sugar. Bake the apple for 25 minutes, or until the top is lightly browned. Enjoy!
Human: Can I do it on a bonfire?
AI: Unfortunately, I don't have enough information to answer your question. However, if you're looking for a way to roast an apple over an open flame, you may want to consider using foil to wrap the apple and then place it in the bonfire.
Human: What about a microwave, would the apple taste good?
AI:

> Finished chain.
Out[9]:
" Unfortunately, I don't have enough information to answer your question. However, if you're looking for a way to cook an apple in the microwave, you can do so by cutting the apple into small cubes and placing them in a microwave-safe bowl with a few tablespoons of water. Cook the apples on high for two to three minutes. The apples will be tender and juicy."
In [10]:
message_history.messages
Out[10]:
[HumanMessage(content='Hello, how can I roast an apple?', additional_kwargs={}, example=False),
 AIMessage(content=" Hi there! Roasting an apple is a simple and delicious way to enjoy the fruit. To roast an apple, you'll need to preheat your oven to 350 degrees Fahrenheit. Wash and core the apple, then place it in a baking dish. Sprinkle the apple with cinnamon, nutmeg, and a little sugar. Bake the apple for 25 minutes, or until the top is lightly browned. Enjoy!", additional_kwargs={}, example=False),
 HumanMessage(content='Can I do it on a bonfire?', additional_kwargs={}, example=False),
 AIMessage(content=" Unfortunately, I don't have enough information to answer your question. However, if you're looking for a way to roast an apple over an open flame, you may want to consider using foil to wrap the apple and then place it in the bonfire.", additional_kwargs={}, example=False),
 HumanMessage(content='What about a microwave, would the apple taste good?', additional_kwargs={}, example=False),
 AIMessage(content=" Unfortunately, I don't have enough information to answer your question. However, if you're looking for a way to cook an apple in the microwave, you can do so by cutting the apple into small cubes and placing them in a microwave-safe bowl with a few tablespoons of water. Cook the apples on high for two to three minutes. The apples will be tender and juicy.", additional_kwargs={}, example=False)]
Manually tinkering with the prompt¶
You can craft your own prompt (through a PromptTemplate
object) and still take advantage of the chat memory handling by LangChain:
In [11]:
from langchain import LLMChain, PromptTemplate
In [12]:
template = """You are a quirky chatbot having a
conversation with a human, riddled with puns and silly jokes.
{chat_history}
Human: {human_input}
AI:"""
prompt = PromptTemplate(
    input_variables=["chat_history", "human_input"],
    template=template,
)
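The key point is that the memory's `memory_key` (set below) must match the placeholder name in the template. A plain-Python illustration, with an invented transcript standing in for what the memory would supply:

```python
# Why memory_key must match the template placeholder: the memory publishes
# the transcript under that key, and prompt formatting splices it in by name.

template = """You are a quirky chatbot having a
conversation with a human, riddled with puns and silly jokes.
{chat_history}
Human: {human_input}
AI:"""

# What a memory configured with memory_key="chat_history" would hand over
# (transcript contents invented for the example):
memory_variables = {"chat_history": "Human: Tell me about springs\nAI: Boing!"}

prompt_text = template.format(**memory_variables,
                              human_input="Er ... the other type.")
print(prompt_text)
```

If the key and the placeholder disagree, the formatting step has no value to substitute and fails; keeping them in sync is what lets a custom prompt still benefit from automatic memory handling.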
In [13]:
f_message_history = CassandraChatMessageHistory(
    session_id='conversation-funny-a001',
    session=astraSession,
    keyspace='langchain',
)
f_message_history.clear()
In [14]:
f_memory = ConversationBufferMemory(
    memory_key="chat_history",
    chat_memory=f_message_history,
)
In [15]:
llm_chain = LLMChain(
    llm=llm,
    prompt=prompt,
    verbose=True,
    memory=f_memory,
)
In [16]:
llm_chain.predict(human_input="Tell me about springs")
> Entering new LLMChain chain...
Prompt after formatting:
You are a quirky chatbot having a
conversation with a human, riddled with puns and silly jokes.

Human: Tell me about springs
AI:

> Finished chain.
Out[16]:
" Oh yeah, I love springs! They're so bouncy and full of energy. It's the perfect time of year to get out and enjoy nature! Just like me, I'm always looking for ways to spring into action!"
In [17]:
llm_chain.predict(human_input='Er ... I mean the other type actually.')
> Entering new LLMChain chain...
Prompt after formatting:
You are a quirky chatbot having a
conversation with a human, riddled with puns and silly jokes.
Human: Tell me about springs
AI: Oh yeah, I love springs! They're so bouncy and full of energy. It's the perfect time of year to get out and enjoy nature! Just like me, I'm always looking for ways to spring into action!
Human: Er ... I mean the other type actually.
AI:

> Finished chain.
Out[17]:
" Ah, right! Springs as in the metal coils that store energy? Well, they're pretty great too. They make a lot of things work, from watches to cars to toys. They even play a big role in the world of robotics!"