Pyversity is a fast, lightweight Python library designed to improve the diversity of retrieval results. Retrieval often returns items that are very similar to one another, leading to redundancy. Pyversity efficiently re-ranks these results to surface items that are relevant but less redundant.
It provides a clear, unified API for several popular diversification strategies, including Maximal Marginal Relevance (MMR), Max-Sum-Diversification (MSD), Determinantal Point Processes (DPP), and Cover. Its only dependency is NumPy, making it very lightweight.
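To give a sense of the API shape before the full walkthrough, here is a minimal sketch using random placeholder vectors (illustrative data only, assuming pyversity is installed as in the setup step below; the real example later uses OpenAI embeddings):

import numpy as np
from pyversity import diversify, Strategy

emb = np.random.rand(50, 8)   # placeholder candidate embeddings
rel = np.random.rand(50)      # placeholder relevance scores

picked = diversify(embeddings=emb, scores=rel, k=5,
                   strategy=Strategy.MMR, diversity=0.5)
print(picked.indices)         # re-ranked, diversified item indices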
In this tutorial, we'll focus on the MMR and MSD strategies using a practical example.
Diversification in retrieval is necessary because traditional ranking methods, which prioritize only relevance to the user query, often produce a set of top results that are highly redundant or near-duplicates.
This high similarity creates a poor user experience by limiting exploration and wasting screen space on nearly identical items. Diversification strategies address this by balancing relevance with variety, ensuring that each newly selected item introduces novel information not already present in the ranked list.
This approach is critical across various domains: in e-commerce, it shows different product styles; in news search, it surfaces different viewpoints or sources; and in RAG/LLM contexts, it prevents feeding the model repetitive, near-duplicate text passages, improving the quality of the overall response.

pip install openai numpy pyversity scikit-learn
import os
from openai import OpenAI
from getpass import getpass
os.environ['OPENAI_API_KEY'] = getpass('Enter OpenAI API Key: ')
client = OpenAI()
In this step, we're simulating the kind of search results you might retrieve from a vector database (like Pinecone, Weaviate, or FAISS) after performing a semantic search for a query such as "Smart and loyal dogs for family."
These results intentionally contain redundant entries: multiple mentions of similar breeds like Golden Retrievers, Labradors, and German Shepherds, each described with overlapping traits such as loyalty, intelligence, and family-friendliness.
This redundancy mirrors what often happens in real-world retrieval systems, where highly similar items receive high similarity scores. We'll use this dataset to demonstrate how diversification strategies can reduce repetition and produce a more balanced, varied set of search results.
import numpy as np
search_results = [
    "The Golden Retriever is the perfect family companion, known for its loyalty and gentle nature.",
    "A Labrador Retriever is highly intelligent, eager to please, and makes an excellent companion for active families.",
    "Golden Retrievers are highly intelligent and trainable, making them ideal for first-time owners.",
    "The highly loyal Labrador is consistently ranked number one for US family pets due to its stable temperament.",
    "Loyalty and patience define the Golden Retriever, one of the top family dogs globally and easily trainable.",
    "For a smart, stable, and affectionate family dog, the Labrador is an excellent choice, known for its eagerness to please.",
    "German Shepherds are famous for their unwavering loyalty and are highly intelligent working dogs, excelling in obedience.",
    "A highly trainable and loyal companion, the German Shepherd excels in family protection roles and service work.",
    "The Standard Poodle is an exceptionally smart, athletic, and surprisingly loyal dog that is also hypoallergenic.",
    "Poodles are known for their high intelligence, often exceeding other breeds in advanced obedience training.",
    "For herding and smarts, the Border Collie is the top choice, recognized as the world's most intelligent dog breed.",
    "The Dachshund is a small, playful dog with a distinctive long body, originally bred in Germany for badger hunting.",
    "French Bulldogs are small, low-energy city dogs, known for their easy-going temperament and comical bat ears.",
    "Siberian Huskies are energetic, friendly, and need significant cold weather exercise due to their running history.",
    "The Beagle is a gentle, curious hound known for its excellent sense of smell and a distinctive baying bark.",
    "The Great Dane is a very large, gentle giant breed; despite its size, it's known to be a low-energy house dog.",
    "The Australian Shepherd (Aussie) is a medium-sized herding dog, prized for its beautiful coat and sharp intellect."
]
def get_embeddings(texts):
    """Fetches embeddings from the OpenAI API."""
    print("Fetching embeddings from OpenAI...")
    response = client.embeddings.create(
        model="text-embedding-3-small",
        input=texts
    )
    return np.array([item.embedding for item in response.data])

embeddings = get_embeddings(search_results)
print(f"Embeddings shape: {embeddings.shape}")
In this step, we calculate how closely each search result matches the user's query using cosine similarity between their vector embeddings. This produces a ranked list of results based purely on semantic relevance, showing which texts are closest in meaning to the query. Essentially, it simulates what a search engine or retrieval system would return before applying any diversification strategies, often resulting in several highly similar or redundant entries at the top.
from sklearn.metrics.pairwise import cosine_similarity

query_text = "Smart and loyal dogs for family"
query_embedding = get_embeddings([query_text])[0]
scores = cosine_similarity(query_embedding.reshape(1, -1), embeddings)[0]

print("\n--- Initial Relevance-Only Ranking (Top 5) ---")
initial_ranking_indices = np.argsort(scores)[::-1]  # Sort descending
for i in initial_ranking_indices[:5]:
    print(f"Score: {scores[i]:.4f} | Result: {search_results[i]}")


As seen in the output above, the top results are dominated by multiple mentions of Labradors and Golden Retrievers, each described with similar traits like loyalty, intelligence, and family-friendliness. This is typical of a relevance-only retrieval system: the top results are semantically similar but often redundant, offering little diversity in content. While these results are all relevant to the query, they lack variety, making them less useful for users who want a broader overview of different breeds or perspectives.
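To quantify this redundancy, we can look at how similar the top five results are to each other (a quick diagnostic added for illustration; it is not part of the tutorial's core pipeline):

# Diagnostic: average pairwise similarity among the top-5 relevance-only results.
# Values close to 1.0 indicate heavy redundancy.
top5 = initial_ranking_indices[:5]
pairwise = cosine_similarity(embeddings[top5])
n = len(top5)
avg_offdiag = (pairwise.sum() - n) / (n * (n - 1))   # exclude the diagonal of 1s
print(f"Average pairwise similarity of top-5: {avg_offdiag:.4f}")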
MMR works by striking a balance between relevance and diversity. Instead of simply picking the results most relevant to the query, it progressively selects items that are still relevant but not too similar to what has already been chosen.
In simpler terms, imagine you're building a list of dog breeds for "smart and loyal family dogs." The first result might be a Labrador, which is highly relevant. For the next pick, MMR avoids choosing another Labrador description and instead selects something like a Golden Retriever or German Shepherd.
This way, MMR ensures your final results are both useful and varied, reducing repetition while keeping everything closely related to what the user actually searched for.
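To make the mechanics concrete, here is a minimal from-scratch sketch of the standard MMR greedy loop (the textbook formulation, shown for illustration only; it is not necessarily how pyversity implements it internally):

# Minimal MMR sketch (textbook formulation; not pyversity's internals).
# Each step picks the candidate maximizing a blend of relevance and novelty:
#   (1 - lam) * relevance(i) - lam * max similarity of i to items already picked.
def mmr_select(embeddings, scores, k=5, lam=0.5):
    sim = cosine_similarity(embeddings)       # pairwise item-item similarity
    selected = [int(np.argmax(scores))]       # seed with the most relevant item
    candidates = set(range(len(scores))) - set(selected)
    while len(selected) < k and candidates:
        best = max(candidates,
                   key=lambda i: (1 - lam) * scores[i] - lam * sim[i, selected].max())
        selected.append(best)
        candidates.remove(best)
    return selected

print("From-scratch MMR picks:", mmr_select(embeddings, scores, k=5, lam=0.5))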
from pyversity import diversify, Strategy

# MMR: focuses on novelty against already-picked items.
mmr_result = diversify(
    embeddings=embeddings,
    scores=scores,
    k=5,
    strategy=Strategy.MMR,
    diversity=0.5  # 0.0 is pure relevance, 1.0 is pure diversity
)

print("\n\n--- Diversified Ranking using MMR (Top 5) ---")
for rank, idx in enumerate(mmr_result.indices):
    print(f"Rank {rank+1} (Original Index {idx}): {search_results[idx]}")


After applying the MMR (Maximal Marginal Relevance) strategy, the results are noticeably more diverse. While top-ranked items like the Labrador and German Shepherd remain highly relevant to the query, the following entries include different breeds such as Siberian Huskies and French Bulldogs. This shows how MMR reduces redundancy by avoiding multiple similar results: it balances relevance and variety, giving users a broader and more informative set of results that still stays on topic.
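Because the diversity parameter controls the relevance/diversity trade-off, it is worth sweeping a few values to see how the ranking shifts (a small illustrative experiment using the same call as above):

# Illustrative sweep: diversity=0.0 should reproduce the relevance-only ranking,
# while higher values trade relevance for more varied picks.
for d in (0.0, 0.5, 0.9):
    res = diversify(embeddings=embeddings, scores=scores, k=5,
                    strategy=Strategy.MMR, diversity=d)
    print(f"diversity={d}: {list(res.indices)}")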
The MSD (Max Sum of Distances) strategy focuses on selecting results that are not only relevant to the query but also as different from one another as possible. Instead of considering similarity to previously picked items one at a time (as MMR does), MSD looks at the overall spread of the selected results.
In simpler terms, it tries to pick results that cover a wider range of ideas or topics, ensuring strong diversity across the entire set. So, for the same dog example, MSD might include breeds like Labrador, German Shepherd, Beagle, and Husky, each distinct in type and temperament, to give a broader, well-rounded view of "smart and loyal family dogs."
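For intuition, here is a minimal greedy sketch of the max-sum-of-distances idea (again illustrative only; pyversity's actual algorithm may differ):

# Minimal greedy MSD sketch (illustrative; not pyversity's internals).
# Each step picks the candidate maximizing a blend of relevance and its
# total distance to everything already selected.
def msd_select(embeddings, scores, k=5, lam=0.5):
    dist = 1.0 - cosine_similarity(embeddings)   # cosine distance between items
    selected = [int(np.argmax(scores))]
    candidates = set(range(len(scores))) - set(selected)
    while len(selected) < k and candidates:
        best = max(candidates,
                   key=lambda i: (1 - lam) * scores[i] + lam * dist[i, selected].sum())
        selected.append(best)
        candidates.remove(best)
    return selected

print("From-scratch MSD picks:", msd_select(embeddings, scores, k=5, lam=0.5))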
# MSD: focuses on strong spread/distance across all candidates.
msd_result = diversify(
    embeddings=embeddings,
    scores=scores,
    k=5,
    strategy=Strategy.MSD,
    diversity=0.5
)

print("\n\n--- Diversified Ranking using MSD (Top 5) ---")
for rank, idx in enumerate(msd_result.indices):
    print(f"Rank {rank+1} (Original Index {idx}): {search_results[idx]}")


The results produced by the MSD (Max Sum of Distances) strategy show a strong focus on variety and coverage. While the Labrador and German Shepherd remain relevant to the query, the inclusion of breeds like the French Bulldog, Siberian Husky, and Dachshund highlights MSD's tendency to select results that are distinct from one another.
This approach ensures that users see a broader mix of options rather than closely related or repetitive entries. In essence, MSD emphasizes maximum diversity across the entire result set, offering a wider perspective while still maintaining overall relevance to the search intent.
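As a final check, we can compare which items the two strategies agreed on (a small illustrative comparison using the results computed above):

# Compare MMR and MSD selections (illustrative).
mmr_picks = set(int(i) for i in mmr_result.indices)
msd_picks = set(int(i) for i in msd_result.indices)
print("Picked by both:", sorted(mmr_picks & msd_picks))
print("MMR only:", sorted(mmr_picks - msd_picks))
print("MSD only:", sorted(msd_picks - mmr_picks))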

