Image Matching with SQL

Note

This notebook can be run on a Free Starter Workspace. To create a Free Starter Workspace, navigate to Start using the left nav. You can also use your existing Standard or Premium workspace with this notebook.

SingleStoreDB can supercharge your apps with AI!

In this notebook, we demonstrate how to use the dot_product function (for cosine similarity) to find a matching image of a celebrity from among 7,000 records in just 3 milliseconds!

Efficient retrieval of high-dimensional vectors and handling of large-scale vector similarity matching workloads are made possible by SingleStore’s distributed architecture and efficient low-level execution. SingleStoreDB powers many AI applications including face matching, product photo matching, object recognition, text similarity matching, and sentiment analysis.
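
For reference, DOT_PRODUCT on unit-length (L2-normalized) vectors is exactly cosine similarity. Here is a minimal illustration (runnable once you are connected to a database) using two hand-made unit vectors packed with JSON_ARRAY_PACK; these example vectors are not part of the dataset used below:

SELECT DOT_PRODUCT(
    JSON_ARRAY_PACK('[1.0, 0.0, 0.0]'),
    JSON_ARRAY_PACK('[0.6, 0.8, 0.0]')
) AS cosine_similarity;  -- both vectors have length 1, so this returns their cosine similarity, 0.6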

1. Create a workspace in your workspace group

S-00 is sufficient.

Action Required

If you have a Free Starter Workspace deployed already, select the database from the drop-down menu at the top of this notebook. It updates the connection_url to connect to that database.

2. Create a Database named image_recognition

The code below will drop the current image_recognition database and create a fresh one.

In [1]:

shared_tier_check = %sql show variables like 'is_shared_tier'
if not shared_tier_check or shared_tier_check[0][1] == 'OFF':
    %sql DROP DATABASE IF EXISTS image_recognition;
    %sql CREATE DATABASE image_recognition;

Action Required

Make sure to select the image_recognition database from the drop-down menu at the top of this notebook. It updates the connection_url, which is used by the %%sql magic command and SQLAlchemy to make connections to the selected database.

3. Install and import the following libraries

This will take approximately 40 seconds. We are using the --quiet option of pip here to keep the log messages from filling the output. You can remove that option if you want to see the installation process.

You may see messages printed about not being able to find cuda drivers or TensorRT. These can be ignored.

In [2]:

!pip3 install boto3 matplotlib tensorflow opencv-python-headless --quiet
import json
import os
import random
import urllib.request
import boto3
import cv2
import botocore.exceptions
import ipywidgets as widgets
import tensorflow.compat.v1 as tf
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import requests
import singlestoredb as s2
from botocore import UNSIGNED
from botocore.client import Config
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
tf.disable_v2_behavior()

4. Create a table of images of people

The table will contain two columns: 1) the filename of the image and 2) the vector embedding of the image as a blob containing an array of 32-bit floats.

In [3]:

%%sql
CREATE TABLE people /* Creating table for sample data. */ (
    filename VARCHAR(255),
    vector BLOB,
    SHARD(filename)
);
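
If you want to see how a vector is stored in the BLOB column, here is an optional sanity check using a hypothetical filename: JSON_ARRAY_PACK converts a JSON array of floats into the packed 32-bit float blob, and JSON_ARRAY_UNPACK converts it back for display. The temporary row is removed again at the end.

%%sql
INSERT INTO people VALUES ('example/not_a_real_file.jpg', JSON_ARRAY_PACK('[0.1, 0.2, 0.3]'));
SELECT filename, JSON_ARRAY_UNPACK(vector) AS vector_json FROM people WHERE filename = 'example/not_a_real_file.jpg';
DELETE FROM people WHERE filename = 'example/not_a_real_file.jpg';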

5. Import our sample dataset into the table

This dataset has 7,000 vector embeddings of celebrities!

Note that we are using the converters= parameter of pd.read_csv to parse the text as a JSON array and convert it to a numpy array for the resulting DataFrame column.

In [4]:

url = 'https://raw.githubusercontent.com/singlestore-labs/singlestoredb-samples/main/' + \
      'Tutorials/Face%20matching/celebrity_data.sql'

In [5]:

def json_to_numpy_array(x: str | None) -> np.ndarray | None:
    """Convert JSON array string to numpy array."""
    return np.array(json.loads(x), dtype='f4') if x else None


# Read data into DataFrame
df = pd.read_csv(url, sep='"', usecols=[1, 3], names=['filename', 'vector'],
                 converters=dict(vector=json_to_numpy_array))

# Create database connection
conn = s2.create_engine().connect()

# Upload DataFrame
df.to_sql('people', con=conn, index=False, if_exists='append')
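
Optionally, you can verify the load with a quick check of the row count and the size of one stored embedding (each 32-bit float occupies 4 bytes, so the byte length divided by 4 gives the embedding dimension):

row_count = %sql SELECT COUNT(*) AS n FROM people;
vector_bytes = %sql SELECT LENGTH(vector) AS num_bytes FROM people LIMIT 1;
print(row_count)
print(vector_bytes)  # num_bytes / 4 = embedding dimension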

6. Run our image matching algorithm using just 2 lines of SQL

In this example, we use an image of Adam Sandler and find the 5 closest images to it in our database. We use the dot_product function to measure the cosine similarity of each vector in the database to the input image.

In [6]:

%%sql
SET @v = (SELECT vector FROM people WHERE filename = "Adam_Sandler/Adam_Sandler_0003.jpg");
SELECT filename, DOT_PRODUCT(vector, @v) AS score FROM people ORDER BY score DESC LIMIT 5;
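
DOT_PRODUCT acts as cosine similarity here because these FaceNet embeddings are L2-normalized. If you want to confirm that for the query vector, a small optional check like the following should return a value very close to 1.0:

%%sql
SET @v = (SELECT vector FROM people WHERE filename = "Adam_Sandler/Adam_Sandler_0003.jpg");
SELECT SQRT(DOT_PRODUCT(@v, @v)) AS l2_norm;  -- ~1.0 for a unit-length embedding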

7. Pick an image of a celebrity and see which images matched closest to it!

  1. Run the code cell

  2. Pick a celebrity picture

  3. Wait for the match!

In [7]:

s3 = boto3.resource('s3', region_name='us-east-1', config=Config(signature_version=UNSIGNED))
bucket = s3.Bucket('studiotutorials')
prefix = 'face_matching/'

peoplenames = %sql SELECT filename FROM people ORDER BY filename;
names = [x[0] for x in peoplenames]

out = widgets.Output(layout={'border': '1px solid black'})


def on_value_change(change: widgets.Output) -> None:
    """Handle a value change event on a drop-down menu."""
    with out:
        out.clear_output()
        selected_name = change.new
        countdb = %sql SELECT COUNT(*) FROM people WHERE filename = '{{selected_name}}';
        if int(countdb[-1][0]) > 0:
            %sql SET @v = (SELECT vector FROM people WHERE filename = '{{selected_name}}');
            result = %sql SELECT filename, DOT_PRODUCT(vector, @v) AS score FROM people ORDER BY score DESC LIMIT 5;
            original = "original.jpg"
            images = []
            matches = []

            # Download the selected image; fall back to a placeholder if it is missing
            try:
                bucket.download_file(prefix + selected_name, original)
            except botocore.exceptions.ClientError as e:
                if e.response['Error']['Code'] == "404":
                    bucket.download_file(prefix + "error.jpg", original)
                else:
                    raise
            images.append(original)

            # Download each of the 5 closest matches
            cnt = 0
            for res in result:
                print(res)
                temp_file = "match" + str(cnt) + ".jpg"
                images.append(temp_file)
                matches.append(res[1])
                try:
                    bucket.download_file(prefix + res[0], temp_file)
                except botocore.exceptions.ClientError as e:
                    if e.response['Error']['Code'] == "404":
                        bucket.download_file(prefix + "error.jpg", temp_file)
                    else:
                        raise
                cnt += 1

            # Plot the original image next to its matches
            fig, axes = plt.subplots(nrows=1, ncols=6, figsize=(40, 40))
            for i in range(6):
                axes[i].imshow(plt.imread(images[i]))
                axes[i].set_xticks([])
                axes[i].set_yticks([])
                axes[i].set_xlabel('')
                axes[i].set_ylabel('')
                if i == 0:
                    axes[i].set_title("Original Image", fontsize=14)
                else:
                    axes[i].set_title("Match " + str(i) + ". Score: " + str(matches[i-1]), fontsize=14)
            plt.show()
        else:
            print("No match for this image as it was not inserted into the People Table")


dropdown = widgets.Dropdown(
    options=names,
    description='Select an Image:',
    placeholder='Select an Image!',
    style={'description_width': 'initial'},
    layout={'width': 'max-content'},
)

display(dropdown)
dropdown.observe(on_value_change, names='value')
display(out)

8. See which celebrity you look most like!

In this step, you'll need to upload a picture of yourself. Note that your image MUST be at least 160x160 pixels. Head-shots and zoomed-in photos work better, since we don't preprocess the image to isolate the face! We only have 7,000 pictures, so matching might be limited.

  1. Run the code cell

  2. Upload your picture

  3. Wait for the match!

A score below 0.6 indicates a weak match.
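
If you are unsure whether your picture is large enough, an optional pre-check like the one below (assuming a hypothetical local file named my_photo.jpg) uses the already-imported cv2 to report the size and upscale the image when either side is under 160 pixels:

img = cv2.imread('my_photo.jpg')  # hypothetical local file to check before uploading
h, w = img.shape[:2]
print(f'Image size: {w}x{h}')
if min(h, w) < 160:
    # Scale up so the shorter side reaches 160 pixels
    scale = 160.0 / min(h, w)
    new_w, new_h = max(160, round(w * scale)), max(160, round(h * scale))
    img = cv2.resize(img, (new_w, new_h))
    cv2.imwrite('my_photo_resized.jpg', img)
    print(f'Upscaled to {new_w}x{new_h}; upload my_photo_resized.jpg instead')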

In [8]:

def prewhiten(x: np.ndarray) -> np.ndarray:
    """Prewhiten image data."""
    mean = np.mean(x)
    std = np.std(x)
    std_adj = np.maximum(std, 1.0 / np.sqrt(x.size))
    y = np.multiply(np.subtract(x, mean), 1 / std_adj)
    return y


def crop(image: np.ndarray, random_crop: bool, image_size: int) -> np.ndarray:
    """Crop an image to a given size."""
    if image.shape[1] > image_size:
        sz1 = int(image.shape[1] // 2)
        sz2 = int(image_size // 2)
        if random_crop:
            diff = sz1 - sz2
            (h, v) = (np.random.randint(-diff, diff + 1), np.random.randint(-diff, diff + 1))
        else:
            (h, v) = (0, 0)
        image = image[(sz1 - sz2 + v):(sz1 + sz2 + v), (sz1 - sz2 + h):(sz1 + sz2 + h), :]
    return image


def flip(image: np.ndarray, random_flip: bool) -> np.ndarray:
    """Flip the image data left-to-right."""
    if random_flip and np.random.choice([True, False]):
        image = np.fliplr(image)
    return image


def load_data(
    image_paths: list[str],
    do_random_crop: bool,
    do_random_flip: bool,
    image_size: int,
    do_prewhiten: bool = True,
) -> np.ndarray:
    """Load images from disk, then prewhiten, crop, and optionally flip them."""
    nrof_samples = len(image_paths)
    images = np.zeros((nrof_samples, image_size, image_size, 3))
    for i in range(nrof_samples):
        img = cv2.imread(image_paths[i])
        if do_prewhiten:
            img = prewhiten(img)
        img = crop(img, do_random_crop, image_size)
        img = flip(img, do_random_flip)
        images[i, :, :, :] = img
    return images
new_out = widgets.Output(layout={'border': '1px solid black'})

s3 = boto3.resource('s3', region_name='us-east-1', config=Config(signature_version=UNSIGNED))
bucket = s3.Bucket('studiotutorials')
prefix = 'face_matching/'
names = []

local_folder = './face_matching_models'
if not os.path.exists(local_folder):
    os.makedirs(local_folder)

s3 = boto3.client('s3', region_name='us-east-1', config=Config(signature_version=UNSIGNED))
s3.download_file('studiotutorials', 'face_matching_models/20170512-110547.pb',
                 os.path.join(local_folder, '20170512-110547.pb'))

pb_file_path = './face_matching_models/20170512-110547.pb'

# Load the .pb file into a graph
with tf.io.gfile.GFile(pb_file_path, 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())
def handle_upload(change: widgets.Output) -> None:
    """Handle a file upload: embed the image, match it, and display the results."""
    with new_out:
        new_out.clear_output()
        new_file_name = ''

        # Get the uploaded file
        uploaded_file = change.new
        if uploaded_file[0]['name'].lower().endswith(('.png', '.jpg', '.jpeg')):
            # Save the uploaded file locally under a unique name
            file_name = uploaded_file[0]['name']
            random_number = random.randint(1, 100000000)
            new_file_name = f"{file_name.split('.')[0]}_{random_number}.{file_name.split('.')[-1]}"
            file_content = uploaded_file[0]['content']
            with open(new_file_name, 'wb') as f:
                f.write(file_content)

            # Compute the embedding and insert it into the people table
            with tf.compat.v1.Session() as sess:
                sess.graph.as_default()
                tf.import_graph_def(graph_def, name='')
                images_placeholder = sess.graph.get_tensor_by_name("input:0")
                embeddings = sess.graph.get_tensor_by_name("embeddings:0")
                phase_train_placeholder = tf.get_default_graph().get_tensor_by_name("phase_train:0")
                phase_train = False
                img = load_data([new_file_name], False, False, 160)
                feed_dict = {
                    images_placeholder: img,
                    phase_train_placeholder: phase_train,
                }
                embeddings_ = sess.run(embeddings, feed_dict=feed_dict)
                embeddings_list = [float(x) for x in embeddings_[0]]
                embeddings_json = json.dumps(embeddings_list)
                %sql INSERT INTO people VALUES ('{{new_file_name}}', JSON_ARRAY_PACK_F32("{{embeddings_json}}"));
        else:
            print("Upload a .png, .jpg or .jpeg image")

        num_matches = 5
        countdb = %sql SELECT COUNT(*) FROM people WHERE filename = '{{new_file_name}}';
        if int(countdb[-1][0]) > 0:
            %sql SET @v = (SELECT vector FROM people WHERE filename = '{{new_file_name}}');
            result = %sql SELECT filename, DOT_PRODUCT(vector, @v) AS score FROM people ORDER BY score DESC LIMIT 5;
            images = []
            matches = []
            images.append(new_file_name)

            # Download each match; the first match is the uploaded image itself
            cnt = 0
            for res in result:
                print(res)
                if cnt == 0:
                    temp_file = new_file_name
                else:
                    temp_file = "match" + str(cnt) + ".jpg"
                    try:
                        bucket.download_file(prefix + res[0], temp_file)
                    except botocore.exceptions.ClientError as e:
                        if e.response['Error']['Code'] == "404":
                            bucket.download_file(prefix + "error.jpg", temp_file)
                        else:
                            raise
                images.append(temp_file)
                matches.append(res[1])
                cnt += 1

            # Plot the uploaded image next to its matches, then remove the temporary row
            fig, axes = plt.subplots(nrows=1, ncols=num_matches+1, figsize=(40, 40))
            %sql DELETE FROM people WHERE filename = '{{new_file_name}}';
            for i in range(num_matches+1):
                axes[i].imshow(plt.imread(images[i]))
                axes[i].set_xticks([])
                axes[i].set_yticks([])
                axes[i].set_xlabel('')
                axes[i].set_ylabel('')
                if i == 0:
                    axes[i].set_title("Original Image", fontsize=14)
                else:
                    axes[i].set_title("Match " + str(i) + ". Score: " + str(matches[i-1]), fontsize=14)
            plt.show()
        else:
            print("No match for this image as it was not inserted into the People Database")


upload_button = widgets.FileUpload()
display(upload_button)
upload_button.observe(handle_upload, names='value')
display(new_out)

9. Clean up

Action Required

If you created a new database in your Standard or Premium Workspace, you can drop the database by running the cell below. Note: this will not drop your database for Free Starter Workspaces. To drop a Free Starter Workspace, terminate the Workspace using the UI.

In [9]:

shared_tier_check = %sql show variables like 'is_shared_tier'
if not shared_tier_check or shared_tier_check[0][1] == 'OFF':
    %sql DROP DATABASE IF EXISTS image_recognition;

Details


About this Template

Facial recognition using the dot_product function on vectors stored in SingleStoreDB.

This Notebook can be run in Shared Tier, Standard and Enterprise deployments.

Tags

starter, vectordb, genai, facenet

License

This Notebook has been released under the Apache 2.0 open source license.