
Searching all of Wikipedia

Note
This tutorial is meant for Standard & Premium Workspaces. You can't run this with a Free Starter Workspace due to restrictions on storage. Create a Workspace using + Group in the left nav and select Standard for this notebook. Gallery notebooks tagged with "Starter" are suitable to run on a Free Starter Workspace.
In this notebook, we implement semantic search and Retrieval-Augmented Generation (RAG) across Wikipedia's extensive database, using SingleStore's indexed ANN search capabilities.
We focus on a subset of Wikipedia, scraping 1,800 video game articles to obtain real OpenAI embeddings for about 40,000 paragraphs, supplemented by 10 million mock vectors for a scaled-up simulation. We then store these vectors in a SingleStore database, applying different indexing options to improve search efficiency, and show remarkable improvements in query response times, which drop below 100 milliseconds with indexing.
We also integrate this system with a RAG-based chat, where you can ask questions and retrieve contextual answers based on the video game information in the database. Additionally, we've shared the technical details and resources, including our Python code and data links, in this GitHub repository and an AWS S3 bucket (S3 URI: s3://wikipedia-video-game-data/video-game-embeddings(1).csv).
Part 0: Creating your workspace and database
For the purposes of this demo, we generate only 10 million vectors to search through. By our estimations, Wikipedia contains around 160 million paragraphs in total. A quick heuristic for your workspace sizing:
160 million vectors can be handled by an S-32 Workspace
20 million vectors can be handled by an S-4 Workspace
Capacity scales roughly linearly (about 5 million vectors per workspace size unit), so you can extrapolate the workspace size you require from the number of vectors you want to generate. For example, 40 million vectors would call for roughly an S-8.
In [1]:
%%sql
drop database if exists video_game_wikipedia;
create database video_game_wikipedia;
use video_game_wikipedia;

-- create our table (note the usage of the vector data type)
create table vecs(
    id bigint(20),
    url text default null,
    paragraph text default null,
    v vector(1536) not null,
    shard key(id),
    key(id) using hash,
    fulltext (paragraph)
);
Part 1: Generating the Mock Vectors
A vecs table is created with fields for the ID, URL, paragraph text, and the embedding vector.
Then, we define several functions:
randbetween: generates random floats in a specified range
gen_vector: creates a vector of given length filled with random values
normalize: adjusts a vector to unit length
norm1536: normalizes a vector of dimension 1536
nrandv1536: generates a normalized, random vector of dimension 1536
Finally, we populate vecs with 10,000,000 rows of mock vectors.
In [2]:
%%sql
-- generates random floats in a specified range
create or replace function randbetween(a float, b float) returns float
as
begin
    return (rand()*(b - a) + a);
end;
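If you want to spot-check randbetween, a one-line query (ours, not part of the original notebook) will do:

%%sql
-- each call should return a different float between -1 and 1
select randbetween(-1, 1);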
In [3]:
%%sql
-- set the sql_mode so that the following function returns the expected result
set sql_mode = pipes_as_concat;

-- creates a vector of given length filled with random values
create or replace function gen_vector(length int) returns text as
declare s text = "[";
begin
    if length < 2 then
        raise user_exception("length too short: " || length);
    end if;

    for i in 1..length-1 loop
        s = s || randbetween(-1,1) || ",";
    end loop;
    s = s || randbetween(-1,1) || "]";
    return s;
end;
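To see what the generator produces without printing 1,536 numbers, you can ask for a short vector (our quick check, not in the original notebook); the exact values will differ on every call:

%%sql
-- returns a string such as "[0.42,-0.87,0.11,0.95]"
select gen_vector(4);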
In [4]:
%%sql
-- adjusts a vector to unit length
create or replace function normalize(v blob) returns blob as
declare
    squares blob = vector_mul(v,v);
    length float = sqrt(vector_elements_sum(squares));
begin
    return scalar_vector_mul(1/length, v);
end;
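As a small worked example (added for illustration): the vector [3, 4] has length 5, so normalizing it should yield approximately [0.6, 0.8]. json_array_pack and json_array_unpack convert between a JSON array and the packed binary format the function expects:

%%sql
-- expected output: approximately [0.6, 0.8]
select json_array_unpack(normalize(json_array_pack('[3,4]')));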
In [5]:
%%sql
-- normalizes a vector of dimension 1536
create or replace function norm1536(v vector(1536)) returns vector(1536) as
begin
    return normalize(v) :> vector(1536);
end;
In [6]:
%%sql
-- generates a normalized, random vector of dimension 1536
create or replace function nrandv1536() returns vector(1536) as
begin
    return norm1536(gen_vector(1536));
end;
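Before generating millions of rows, it is worth confirming that nrandv1536 really produces unit-length vectors. A minimal sanity check (ours), reusing the same arithmetic as normalize:

%%sql
-- the length of a normalized vector should be ~1.0 (up to floating-point error)
set @x = nrandv1536() :> blob;
select sqrt(vector_elements_sum(vector_mul(@x, @x))) as vec_length;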
In [7]:
%%sql
-- procedure to populate `vecs` with `num_rows` vectors
create or replace procedure insert_vectors(num_rows bigint) as
declare c int;
begin
    select count(*) into c from vecs;
    loop
        insert into vecs (id, v)
            select id + (select max(id) from vecs), nrandv1536()
            from vecs
            where id <= 128 * 1024; /* chunk size 128K so we can see progress */
        select count(*) into c from vecs;
        if c >= num_rows then
            exit;
        end if;
    end loop;
end;
In [8]:
%%sql
-- run the procedure to populate `vecs` with 10,000,000 vectors
-- this will take around 20 min
insert into vecs (id, v) values (1, nrandv1536());
call insert_vectors(10000000);
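Because the procedure inserts in chunks of roughly 128K rows, you can watch it make progress by running a count (our suggestion, not an original cell) from another cell while it executes:

%%sql
-- optional progress check while insert_vectors is running
select format(count(*), 0) as rows_so_far from vecs;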
As a quick aside, if you want to generate the full 160 million vectors, you simply change num_rows to 160,000,000: call insert_vectors(160000000);
Part 2: Getting the Wikipedia video game data
We will use a SingleStore pipeline named wiki_pipeline to import data from an S3 bucket into vecs. The pipeline is configured to load data from a CSV file located at s3://wikipedia-video-game-data/video-game-embeddings(1).csv. Since the S3 bucket is open, the credentials section is left empty.
In [9]:
%%sql
-- since the bucket is open, you can leave the credentials clause as it is
create or replace pipeline `wiki_pipeline` as
load data S3 's3://wikipedia-video-game-data/video-game-embeddings(1).csv'
config '{"region":"us-west-1"}'
credentials '{"aws_access_key_id": "",
              "aws_secret_access_key": ""}'
skip duplicate key errors
into table `vecs`
format csv
fields terminated by ','
enclosed by '"'
lines terminated by '\r\n';
In [10]:
%%sql
-- start the pipeline!
start pipeline `wiki_pipeline`;
In [11]:
%%sql
-- monitor the pipeline!
select DATABASE_NAME, PIPELINE_NAME, BATCH_ID, BATCH_STATE, START_TIME, ROWS_STREAMED, ROWS_PER_SEC
from information_schema.PIPELINES_BATCHES_SUMMARY
order by BATCH_ID;
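If a batch shows a Failed state, per-file error details are recorded in information_schema as well; querying PIPELINES_ERRORS (a standard SingleStore system view) is a quick way to diagnose load problems:

%%sql
-- inspect any errors recorded for the pipeline
select * from information_schema.PIPELINES_ERRORS;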
Part 3: Building the vector indexes
Now we have all the data in our table vecs. Let's go ahead and build our vector indexes. SingleStore gives us many options for our index, with many tunable parameters; here we create three indexes with default parameters: AUTO, IVF_FLAT, and IVF_PQ.
In [12]:
%%sql
alter table vecs add vector index auto (v) INDEX_OPTIONS '{"index_type":"AUTO"}';
In [13]:
%%sql
alter table vecs add vector index ivf_flat (v) INDEX_OPTIONS '{"index_type":"IVF_FLAT"}';
In [14]:
%%sql
alter table vecs add vector index ivf_pq (v) INDEX_OPTIONS '{"index_type":"IVF_PQ"}';
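As noted above, the indexes accept tunable parameters through INDEX_OPTIONS. As a purely illustrative sketch (we did not run this; treat the nlist parameter as an assumption and check the SingleStore docs for the options your version supports), a tuned IVF index might look like:

%%sql
-- hypothetical: an IVF_FLAT index with an explicit cluster count
alter table vecs add vector index ivf_flat_tuned (v)
    INDEX_OPTIONS '{"index_type":"IVF_FLAT", "nlist":1024}';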
Part 4: Testing our indexes
Now that we have indexed our 10M-vector dataset, let's run some queries to test index performance!
We have chosen a test vector whose paragraph is about Nintendo's Rad Racer video game. We will compare the performance of an exact k-nearest-neighbor (kNN) search against searches using our ANN indexes.
As we will see, we get an order-of-magnitude improvement when using an index compared to the exact kNN search!
In [15]:
%%sql
set @qv = (select v from vecs where id = 1125899906845489);

-- NO INDEX: exact kNN search
select paragraph, v <*> @qv as sim
from vecs
order by sim use index () desc
limit 5;
In [16]:
%%sql
set @qv = (select v from vecs where id = 1125899906845489);

-- AUTO index
select paragraph, v <*> @qv as sim
from vecs
order by sim use index (auto) desc
limit 5;
In [17]:
%%sql
set @qv = (select v from vecs where id = 1125899906845489);

-- IVF_FLAT
select paragraph, v <*> @qv as sim
from vecs
order by sim use index (ivf_flat) desc
limit 5;
In [18]:
%%sql
set @qv = (select v from vecs where id = 1125899906845489);

-- IVF_PQ
select paragraph, v <*> @qv as sim
from vecs
order by sim use index (ivf_pq) desc
limit 5;
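If you also want to quantify how closely an ANN index tracks the exact search, one rough check (ours, assuming @qv is still set from the cell above) is to count how many of the exact top 20 ids also appear in the index's top 20:

%%sql
-- rough recall check: overlap between exact and IVF_PQ top-20 results
with exact as (
    select id, v <*> @qv as sim from vecs order by sim use index () desc limit 20
),
approx as (
    select id, v <*> @qv as sim from vecs order by sim use index (ivf_pq) desc limit 20
)
select count(*) as overlap_out_of_20
from exact join approx on exact.id = approx.id;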
Part 5: Hybrid Search in SingleStore
Let us now see how we can implement a "hybrid search" in SingleStore! This is a single query that combines two powerful tools: a full-text search and a semantic search. In the query below, the two scores are blended with weights of 0.3 (full-text) and 0.7 (vector) to produce the final ranking.
In [19]:
%%sql

-- set the query vector
set @v_mario = (select v
                from vecs where url = "https://en.wikipedia.org/wiki/Super_Mario_Kart"
                order by id
                limit 1);

-- building the hybrid search
with fts as (
    select id, paragraph, match (paragraph) against ('Mario Kart') as score
    from vecs
    where match (paragraph) against ('Mario Kart')
    order by score desc
    limit 200
),
vs as (
    select id, paragraph, v <*> @v_mario as score
    from vecs
    order by score use index (auto) desc
    limit 200
)
select vs.id,
       vs.paragraph,
       .3 * ifnull(fts.score, 0) + .7 * vs.score as hybrid_score,
       vs.score as vec_score,
       ifnull(fts.score, 0) as ft_score
from fts full outer join vs
    on fts.id = vs.id
order by hybrid_score desc
limit 5;
Part 6: Chatting with the Video Game data!
search_wiki_page function:
Conducts a semantic search in the database.
Uses embeddings from get_embedding for the query input.
Finds the top k paragraphs in the vecs table, ranked by similarity to the query.
Measures and prints the search execution time.
ask_wiki_page function:
Uses the results from search_wiki_page as input for the chatbot.
Generates a prompt for an OpenAI GPT model-based chatbot and returns its answer.
In [20]:
!pip3 install openai --quiet
In [21]:
import sqlalchemy as sa
from openai import OpenAI
import getpass
import os
import time
import json
In [22]:
# OpenAI connection
OPENAI_API_KEY = os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key: ')
client = OpenAI()
EMBEDDING_MODEL = 'text-embedding-ada-002'
GPT_MODEL = 'gpt-3.5-turbo'

# SingleStore connection (`connection_url` is provided by the SingleStore notebook environment)
engine = sa.create_engine(connection_url)
connection = engine.connect()
In [23]:
def get_embedding(text, model=EMBEDDING_MODEL):
    '''Generates the OpenAI embedding for an input `text`.'''
    if isinstance(text, str):
        response = client.embeddings.create(input=[text], model=model)
        return json.dumps(response.data[0].embedding)
In [24]:
def search_wiki_page(query, limit=5):
    '''Returns the top `limit` matches to the query, ordered by similarity.'''
    query_embedding_vec = get_embedding(query)
    statement = sa.text(
        '''select paragraph, v <*> :query_embedding :> vector(1536) as similarity
           from vecs
           order by similarity use index (auto) desc
           limit :limit;'''
    )
    print("Searching for matches...")
    start_time = time.time()
    results = connection.execute(statement, {"query_embedding": query_embedding_vec, "limit": limit})
    end_time = time.time()
    execution_time = end_time - start_time
    print(f"Search complete in {execution_time} seconds.")
    results_as_dict = results.fetchall()
    return results_as_dict
In [25]:
def ask_wiki_page(query, limit=5, temp=0.0):
    '''Uses RAG to answer a question from the video game data.'''
    results = search_wiki_page(query, limit)
    print("Asking Chatbot...")
    prompt = f'''Excerpt from the conversation history:
    {results}
    Question: {query}

    Based on the conversation history, try to provide the most accurate answer to the question.
    Consider the details mentioned in the conversation history to formulate a response that is as
    helpful and precise as possible. Please provide links to WIKIPEDIA ARTICLES TO LOOK AT FOR MORE INFORMATION.

    Most importantly, IF THE INFORMATION IS NOT PRESENT IN THE CONVERSATION HISTORY, DO NOT MAKE UP AN ANSWER.'''
    response = client.chat.completions.create(
        model=GPT_MODEL,
        messages=[
            {"role": "system", "content": "You are a helpful assistant who is answering questions about an article."},
            {"role": "user", "content": prompt}
        ],
        temperature=temp
    )
    response_message = response.choices[0].message.content
    return response_message
In [26]:
query = input('Ask me a question about video games!')
ask_wiki_page(query)

Details
About this Template
Simulate searching through 10 million paragraphs in Wikipedia for information about video games, and generate answers using RAG!
This Notebook can be run in Standard and Enterprise deployments.
License
This Notebook has been released under the Apache 2.0 open source license.