A Deep Dive Into Vector Databases
Notebook
Required Installations
In [1]:
!pip install openai numpy pandas singlestoredb langchain==0.1.8 langchain-community==0.0.21 langchain-core==0.1.25 langchain-openai==0.0.6
Vector Embedding Example
In this example, we demonstrate a rule-based system that generates a vector embedding for a word. The embedding we generate contains 5 main features:
Length of word
Number of vowels in the word (normalized to the length of the word)
Whether the word starts with a vowel (1) or not (0)
Whether the word ends with a vowel (1) or not (0)
Percentage of consonants in the word
This is a simple implementation of a rule-based system that demonstrates the essence of what vector embedding models do. Real embedding models, however, utilize neural networks trained on vast datasets to learn key features, self-correcting through gradient descent.
In [2]:
def word_to_vector(word):
    # Define some basic rules for our vector components
    vector = [0] * 5  # Initialize a vector of 5 dimensions

    # Rule 1: Length of the word (normalized to a max of 10 characters for simplicity)
    vector[0] = len(word) / 10

    # Rule 2: Number of vowels in the word (normalized to the length of the word)
    vowels = 'aeiou'
    vector[1] = sum(1 for char in word if char in vowels) / len(word)

    # Rule 3: Whether the word starts with a vowel (1) or not (0)
    vector[2] = 1 if word[0] in vowels else 0

    # Rule 4: Whether the word ends with a vowel (1) or not (0)
    vector[3] = 1 if word[-1] in vowels else 0

    # Rule 5: Percentage of consonants in the word
    vector[4] = sum(1 for char in word if char not in vowels and char.isalpha()) / len(word)

    return vector

# Example usage
word = "example"
vector = word_to_vector(word)
print(f"Word: {word}\nVector: {vector}")
Vector Similarity Example
In this example, we demonstrate a way to determine the similarity between two vectors. There are many techniques for measuring the similarity between two vectors, but one of the most popular is cosine similarity: the dot product of the two vectors divided by the product of the vectors' norms (magnitudes).
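In symbols, for two vectors $A$ and $B$:

$$\text{similarity}(A, B) = \frac{A \cdot B}{\|A\| \, \|B\|}$$

A value of 1 means the vectors point in the same direction, 0 means they are orthogonal, and -1 means they point in opposite directions.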
This is just an example to show how vector databases search for similar vectors. The fundamental problem with a system like this is our rule-based embedding: it does not give us a semantic understanding of words, sentences, or paragraphs. Instead, it gives us a classification of a single word's structure.
In [3]:
import numpy as np

def cosine_similarity(vector_a, vector_b):
    # Calculate the dot product of the vectors
    dot_product = np.dot(vector_a, vector_b)

    # Calculate the norm (magnitude) of each vector
    norm_a = np.linalg.norm(vector_a)
    norm_b = np.linalg.norm(vector_b)

    # Calculate cosine similarity
    similarity = dot_product / (norm_a * norm_b)
    return similarity

# Example usage
word1 = "example"
word2 = "sample"
vector1 = word_to_vector(word1)
vector2 = word_to_vector(word2)

# Calculate and print cosine similarity
similarity_score = cosine_similarity(vector1, vector2)
print(f"Cosine similarity between '{word1}' and '{word2}': {similarity_score}")
Embedding Models
In order to generate semantic understanding of language within vectors, embedding models are required. Embedding models are trained on vast corpora of language data. Training starts by initializing the word embeddings with random vectors: each word in the vocabulary is assigned a vector of real numbers. A neural network trained on a large dataset then learns to predict a word from its context (the Continuous Bag of Words model) or to predict the context given a word (the Skip-Gram model). During training, the model adjusts the word vectors through gradient descent to minimize a loss function, often related to the likelihood of observing a word given its context (or vice versa).
Examples of embedding models include Word2Vec, GloVe, BERT, and OpenAI's text-embedding models.
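To make the training step concrete, here is a minimal sketch of learning Skip-Gram embeddings with gensim's Word2Vec (note that gensim is not installed by this notebook, and the three-sentence corpus is purely illustrative):

from gensim.models import Word2Vec

# A toy corpus: each document is a list of tokens
corpus = [
    ["the", "dog", "chased", "the", "ball"],
    ["the", "cat", "chased", "the", "mouse"],
    ["dogs", "and", "cats", "make", "good", "pets"],
]

# sg=1 selects Skip-Gram (predict context from a word); sg=0 would select CBOW
model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1)

print(model.wv["dog"])                    # the learned 50-dimensional vector
print(model.wv.similarity("dog", "cat"))  # cosine similarity of two words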
In [4]:
OPENAI_KEY = "INSERT OPENAI KEY"from openai import OpenAIclient = OpenAI(api_key=OPENAI_KEY)def openAIEmbeddings(input):response = client.embeddings.create(input="input",model="text-embedding-3-small")return response.data[0].embeddingprint(openAIEmbeddings("Golden Retreiver"))
As you can see, this is a huge vector! text-embedding-3-small returns 1,536 dimensions for this one input. This is why it is important to have good dimensionality reduction techniques during similarity searches.
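As an illustration, here is a minimal sketch of one common reduction technique, principal component analysis (PCA), applied to a batch of stand-in embeddings (it assumes scikit-learn is available, which this notebook's installs do not cover):

import numpy as np
from sklearn.decomposition import PCA

# Stand-in for 100 embeddings of 1,536 dimensions each
embedding_matrix = np.random.rand(100, 1536)

# Project the vectors down to 64 dimensions, keeping the directions of greatest variance
pca = PCA(n_components=64)
reduced = pca.fit_transform(embedding_matrix)

print(reduced.shape)  # (100, 64)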
Creating a vector database with SingleStoreDB
In the following code we create a vector database with SingleStoreDB. We utilize LangChain to chunk and split the raw text into documents and use the OpenAI embeddings model to generate the vector embeddings. We then take the raw documents and embeddings and create a table with the columns "docs" and "embeddings".
To test this out, we perform a similarity search based on a query and it returns the most similar document in the vector database.
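One note before running the cell: the LangChain SingleStoreDB integration picks up its connection details from the SINGLESTOREDB_URL environment variable (inside a SingleStore notebook this is set for you). If you are running elsewhere, set it yourself; the URL below is a placeholder:

import os

# Placeholder credentials; replace with your own workspace's user, password, host, and database
os.environ["SINGLESTOREDB_URL"] = "admin:password@host:3306/database"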
In [5]:
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores.singlestoredb import SingleStoreDB

# Load and process documents
loader = TextLoader("michael_jackson.txt")  # use your own document
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=2000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

# Generate embeddings with OpenAI's embedding model
embeddings = OpenAIEmbeddings(api_key=OPENAI_KEY)

# Create the vector database
vector_database = SingleStoreDB.from_documents(docs, embeddings, table_name="mjackson")  # create your own table

# Test it with a similarity search
query = "How old was Michael Jackson when he died?"
docs = vector_database.similarity_search(query)
print(docs[0].page_content)
Retrieval Augmented Generation System
RAG combines large language models with a retrieval mechanism to search a database for relevant information before generating responses. It utilizes real-world data from retrieved documents to ground responses, enhancing factual accuracy and reducing hallucinations. Documents are vectorized using embeddings and stored in a vector database for efficient retrieval. SingleStoreDB serves as a great vector database. The user query is converted into a vector, and a vector search is performed in the database to find documents relevant to that specific query. The system returns the documents with the highest relevance scores, which are then fed to the chatbot for generating informed responses.
In [6]:
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores.singlestoredb import SingleStoreDB
from openai import OpenAI

# Set up the OpenAI client
client = OpenAI(api_key=OPENAI_KEY)

# Load and process documents
loader = TextLoader("michael_jackson.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=2000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

# Generate embeddings and create a document search database
embeddings = OpenAIEmbeddings(api_key=OPENAI_KEY)
docsearch = SingleStoreDB.from_documents(docs, embeddings, table_name="mjackson")

# Chat loop
while True:
    # Get user input
    user_query = input("\nYou: ")

    # Check for exit command
    if user_query.lower() in ['quit', 'exit']:
        print("Exiting chatbot.")
        break

    # Perform similarity search
    docs = docsearch.similarity_search(user_query)
    if docs:
        context = docs[0].page_content

        # Generate a response with GPT-4, grounded in the retrieved context
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": "Context: " + context},
                {"role": "user", "content": user_query},
            ],
            stream=True,
            max_tokens=500,
        )

        # Stream the response to the console
        print("AI: ", end="")
        for chunk in response:
            if chunk.choices[0].delta.content is not None:
                print(chunk.choices[0].delta.content, end="")
    else:
        print("AI: Sorry, I couldn't find relevant information.")
Details
About this Template
Using SingleStoreDB as a vector database and vector database use cases.
This Notebook can be run in Standard and Enterprise deployments.
License
This Notebook has been released under the Apache 2.0 open source license.