Enterprise AI
October 11, 2024

Semantic Chunking for RAG: Better Context, Better Results

Explore how semantic chunking enhances RAG systems by improving context, precision, and performance through optimized chunking strategies and advanced tools.
This post is a part of our series on making your data AI-ready. Read the previous post for a more high-level overview of chunking documents for RAG.

Key Takeaways

  • Semantic chunking improves Retrieval Augmented Generation (RAG) system performance by splitting documents into meaningful, contextually relevant chunks.
  • Optimized chunk size ensures that each chunk captures a clear, single idea, preventing information dilution and improving response accuracy.
  • Advanced tools like Unstructured AI help automate chunking strategy, handle complex documents, and integrate non-textual elements.

Intro to Chunking for RAG

Chunking involves splitting a document into smaller pieces—or chunks—that can be embedded and stored as vectors. These vectors are later retrieved by the Retrieval Augmented Generation (RAG) system to provide context when responding to user queries.

There are several different chunking strategies:

  • Fixed-size chunking
  • Recursive chunking
  • Document-specific chunking
  • Semantic chunking
  • Agentic chunking

But not all chunking is created equal. A common yet simplistic approach is to chunk documents by arbitrary size—splitting text after a fixed number of tokens, characters, or words.

While straightforward, this method can result in chunks that cut off important context, such as splitting a sentence or paragraph mid-thought. This leads to RAG systems generating responses based on fragmented or incomplete information. That’s where the semantic chunking method comes into play.
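To make the problem concrete, here is a minimal sketch of naive fixed-size chunking (character-based for simplicity; real pipelines usually count tokens). Note how the cut points ignore sentence boundaries entirely:

```python
def fixed_size_chunks(text: str, size: int = 40) -> list[str]:
    """Naive fixed-size chunking: cut the text every `size` characters,
    with no regard for sentence or paragraph boundaries."""
    return [text[i:i + size] for i in range(0, len(text), size)]

doc = ("Semantic chunking preserves meaning. "
       "Fixed-size chunking can split a sentence mid-thought.")
chunks = fixed_size_chunks(doc)
# The first cut falls mid-word, fragmenting the second sentence.
```

Each resulting chunk carries a fragment rather than a complete thought, which is exactly what degrades retrieval quality downstream.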

We have already written in more detail about what chunking documents for RAG is and how to do it, so in this article, we will dive deeper into the semantic chunking strategy.

So, What Exactly Is Semantic Chunking?

Semantic chunking, sometimes called intelligent chunking, focuses on preserving the document's meaning and structure. Instead of using a fixed chunk size, it strategically divides the document at meaningful breakpoints—like paragraphs, sentences, or thematically linked sections. 

This ensures that each chunk maintains a semantically coherent idea, making it far more useful when a RAG model uses it to generate responses or retrieve relevant information. 

By embedding entire concepts instead of fragments, semantic chunking strengthens the context a RAG system uses, resulting in more precise and coherent outputs.

"Semantic chunking focuses on embedding conciseness, meaning that each chunk is optimized to capture a single, clear meaning."

For instance, a single chunk might cover an entire explanation of a concept rather than splitting it into separate pieces. This ensures that each concept is represented in its own chunk, avoiding the issue of conveying multiple meanings within a single chunk. Combining unrelated ideas in one chunk can dilute the effectiveness of the RAG model’s embeddings. 

When a RAG system retrieves a semantically chunked piece of information, it pulls a complete thought, reducing the chance of fragmented or unclear responses. By contrast, a naïve chunking approach might leave the system trying to interpret chunks that contain multiple, sometimes unrelated, meanings.

The Benefits of Semantic Chunking

The advantages of this chunking strategy go beyond simply keeping chunks intact—it enhances the overall performance and accuracy of the entire RAG system. Below are some of the key benefits:


1. Coherent Context for Responses

One of the greatest benefits of semantic chunking is the coherence it brings to the RAG system’s context. 

When a user asks a question, the system pulls chunks of information to formulate its answer. If these chunks are incomplete or nonsensical, the model’s answer will reflect that confusion. 

By using semantically coherent chunks, the system retrieves complete thoughts and gives more accurate responses.

2. Better Alignment with User Queries

Users often query a system with questions that require specific, detailed responses. With arbitrary chunking, the RAG model might pull random sections of a document that aren’t closely related to the user’s intent. 

Semantic chunking solves this problem by providing chunks that more effectively match the user’s query, as the chunks are built around full concepts and thematic elements.

3. Enhanced Embedding Precision

Semantic chunking improves embedding precision by capturing a single, clear meaning within each chunk. This prevents the dilution of meaning that happens when multiple topics are crammed into one chunk, leading to more accurate and useful embeddings for the RAG system.

4. Optimized Handling of Large Documents

Large documents can challenge RAG systems due to the limited context window. Semantic chunking helps maximize this space by ensuring the system focuses on coherent, relevant sections rather than incomplete fragments or noise. This allows the RAG system to process longer documents more efficiently and deliver better results.

5. Reduced Noise and Improved Context

By focusing on splitting documents at semantically appropriate points, semantic chunking significantly enhances the quality of the context that the RAG system can retrieve. 

For example, if a document is chunked at paragraph boundaries or after logical sections, the RAG system can pull far more relevant information to the user's query. This avoids issues of mid-sentence or mid-paragraph splitting, which can lead to fragmented responses.

6. Optimized Chunk Size for Performance

Finding the optimum chunk size is crucial for ensuring that each chunk carries a single, clear meaning. 

Overly large chunks tend to introduce multiple meanings, which can confuse the RAG system. 

By using smaller, optimized chunks, the system retrieves only the most relevant and concise pieces of information, improving accuracy and efficiency.

7. Improved Interpretability and Debugging

Semantically coherent chunks are easier to interpret and debug. If a system retrieves poor-quality information, it’s simpler to trace where the problem originated. Since each chunk is a coherent unit, you can quickly identify which chunk caused an issue, making it easier to optimize the system.

8. Reduced Computational Load

Arbitrary chunking can lead to the model processing an excessive amount of unnecessary data, increasing the computational load without delivering better results. On the other hand, semantic chunking reduces this load by focusing on the most relevant information in each chunk. 

This improves efficiency, allowing your system to process documents faster and with fewer resources, while maintaining high-quality output.

9. Improved RAG System Responsiveness

The overall responsiveness of a RAG system is closely tied to how well it retrieves context. Semantic chunking allows the system to pull concise and meaningful chunks, ensuring that responses are not only quicker but also more accurate. 

10. Flexibility in Handling Complex Queries

One of the major challenges in RAG is addressing complex queries that require the system to pull from multiple sections of a document. With semantic chunking, each chunk is already a coherent unit, making it easier for the system to match the query with the most relevant information. 

This significantly improves the precision of answers, especially when dealing with user queries that span multiple concepts.

When Should You Use Semantic Chunking?

Not every document requires semantic chunking, but knowing when to use it can be a game-changer for the performance of your RAG system. 

If you’re working with documents that have varied structures—like long reports, research papers, or user manuals—semantic chunking can vastly improve the quality of information retrieval.

Here are a few scenarios when you should consider semantic chunking:

Complex Documents

Documents with a mix of text, headings, tables, and charts particularly benefit from semantic chunking. In these cases, it's important to group related information, not just by sentence or paragraph but also by their logical connection. 

For example, grouping a heading, its subheading, and the supporting paragraphs into one semantically coherent chunk ensures that a RAG system understands them as a unit rather than as unrelated fragments.

Unstructured AI automates this process by converting these elements into text and integrating them meaningfully into chunks. 

This allows the RAG system to access this rich information and provide more detailed, context-aware answers to user queries.

The result is text grouped into semantically coherent units, rather than into equal-sized chunks.

Long Documents with Multiple Themes

Imagine working with a book or a large technical document. Arbitrary chunking based on size will result in awkward breaks, often cutting off sentences or thoughts that continue in the next chunk. This hinders the RAG model's ability to answer user queries effectively, as it may pull from incomplete information. 

Semantic chunking splits larger documents based on logical sections, ensuring that each chunk fully represents the idea it is meant to convey.

Dense Documents

Similar to long documents, dense documents often present several ideas or topics, even within a single section. Using a naïve chunking strategy risks embedding multiple ideas into one chunk, which leads to a diluted understanding of the text. 

Newspaper articles are a perfect example. A single section might contain multiple meanings or shifts in the narrative, making it important to chunk based on meaning rather than arbitrary size.

When Precision is Key

In applications where precision is vital—like legal or medical document analysis—semantic chunking ensures that the RAG system retrieves and interprets information correctly. 

A poorly chunked document may cause the model to provide an incomplete or misleading response, which is especially problematic in highly regulated industries like banking.

Ongoing Optimization Efforts

Semantic chunking is also essential when you're fine-tuning your RAG system for optimal performance. As you continue to optimize retrieval and generation processes, keeping chunks meaningful ensures that each model call retrieves highly relevant and contextually accurate information. 

This is crucial as RAG systems evolve and integrate more sophisticated methods for interacting with large language models (LLMs).

When Embedding Efficiency is Critical

As mentioned, semantic chunking offers a clear advantage in scenarios where computational load and system efficiency are important.

Since semantically coherent chunks are more concise and relevant, they reduce the strain on the embedding model, allowing for faster and more accurate information retrieval. By embedding only what’s necessary, the system reduces its overall computational load without sacrificing the quality of responses.

How Semantic Chunking Works

Performing semantic chunking isn’t about setting a simple size parameter—it requires a thoughtful approach to understanding the document’s structure and content. We suggest the following steps for semantic chunking, along with some tools that can help.

Step 1: Document Preparation

Before chunking can begin, it’s essential to prepare the document. This involves:

  • Cleaning and preprocessing the text to remove unnecessary formatting, such as extra spaces, line breaks, or non-relevant content.
  • Identifying the document structure, such as headings, paragraphs, and sections, to understand how the content is organized and where logical breakpoints might be.
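A minimal sketch of this preparation step is below; the regexes and the heading heuristic are illustrative assumptions, not a fixed recipe:

```python
import re

def preprocess(raw: str) -> str:
    """Collapse runs of spaces/tabs and excess blank lines."""
    text = re.sub(r"[ \t]+", " ", raw)       # squeeze horizontal whitespace
    text = re.sub(r"\n{3,}", "\n\n", text)   # cap blank lines at one
    return text.strip()

def find_headings(text: str) -> list[str]:
    """Rough structure pass: treat '#'-prefixed or short ALL-CAPS
    lines as likely headings (a simple, swappable heuristic)."""
    return [ln for ln in text.splitlines()
            if ln.startswith("#") or (ln.isupper() and 0 < len(ln) < 60)]
```

Real preprocessors are document-format aware (PDF, HTML, DOCX), but the goal is the same: clean text plus a map of its structure.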

Step 2: Chunking Strategy Selection

Choose the right chunking strategy based on your use case:

  • Sentence-based chunking: Ideal for smaller, granular chunks that maintain sentence-level context.
  • Paragraph-based chunking: Maintains coherence at a higher level and is useful for longer thoughts or sections.
  • Fixed-size with semantic awareness: Divides the document into equally sized chunks but ensures the breakpoints align with semantic meaning.
  • Hybrid approaches: Combine the above strategies based on document complexity.

Step 3: Document Analysis for Breakpoints

Analyze the document to identify natural breakpoints—logical points for splitting the text:

  • Section headings
  • Paragraph boundaries
  • Topical shifts

This step ensures that each chunk reflects a coherent concept rather than arbitrary slices of text. It’s crucial to preserve context between chunks to maintain the integrity of the information.

Step 4: Chunk Creation

Once you identify the breakpoints, create the chunks:

  • Split the text at these boundaries to ensure the chunks are coherent and represent complete thoughts or sentences.
  • Each chunk should maintain relevant context and stay thematically consistent.
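Steps 3 and 4 can be sketched together. The snippet below uses heading lines as section breaks—a simplified assumption; production systems also split on paragraph boundaries and detected topic shifts:

```python
def split_at_headings(text: str) -> list[str]:
    """Split a document at heading lines, keeping each heading
    together with the body text that follows it."""
    chunks, current = [], []
    for line in text.splitlines():
        if line.startswith("#") and current:  # new section begins
            chunks.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current).strip())
    return chunks

doc = "# Intro\nWhat chunking is.\n# Benefits\nWhy it helps."
sections = split_at_headings(doc)
```

Each resulting chunk is a heading plus its supporting text—a complete thought, not an arbitrary slice.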

Step 5: Chunk Overlap Implementation for Context Preservation

Add a 10-20% chunk overlap, repeating part of the end of one chunk at the beginning of the next. This technique:

  • Ensures that contextual continuity is preserved across chunk boundaries.
  • Prevents fragmentation of important information, particularly in longer or more complex documents.
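A word-level sketch of chunk overlap follows; the 15% ratio is just one example within the 10-20% range above:

```python
def add_overlap(chunks: list[str], ratio: float = 0.15) -> list[str]:
    """Prepend the tail of each chunk to the start of the next,
    so context carries across chunk boundaries."""
    out = [chunks[0]]
    for prev, cur in zip(chunks, chunks[1:]):
        words = prev.split()
        tail_len = max(1, int(len(words) * ratio))
        out.append(" ".join(words[-tail_len:]) + " " + cur)
    return out

chunks = ["one two three four five six seven eight nine ten", "eleven twelve"]
overlapped = add_overlap(chunks)
```

In practice you would overlap at the token level of your embedding model, but the mechanism is the same.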

Step 6: Metadata Attachment

Attach metadata to each chunk to facilitate retrieval and management. This includes:

  • Identifiers to track the chunk’s origin.
  • Position information—to know where the chunk is located in the document.
  • A reference to the parent document for clarity and organization.
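One way to carry this metadata is a small record type; the field names here are illustrative, not a required schema:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str        # the chunk content itself
    chunk_id: str    # identifier tracking the chunk's origin
    position: int    # where the chunk sits in the document
    parent_doc: str  # reference back to the parent document

def attach_metadata(texts: list[str], doc_id: str) -> list[Chunk]:
    """Wrap each chunk of text with its identifier, position,
    and a reference to the parent document."""
    return [Chunk(text=t, chunk_id=f"{doc_id}-{i}", position=i, parent_doc=doc_id)
            for i, t in enumerate(texts)]
```

Most vector stores accept such metadata alongside each vector, which makes retrieved chunks traceable back to their source.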

Step 7: Optimizing Desired Chunk Size

Finding the right chunk size is critical for effective semantic chunking:

  • Smaller chunks often perform better as they avoid embedding multiple meanings into a single vector.
  • The goal is to find the optimum chunk size where a clear, single meaning is maintained, balancing granularity with coherence. Testing and fine-tuning based on the document type may be required.

Step 8: Embedding the Chunks

Once you split the document into coherent, semantically rich chunks, generate embeddings for each chunk using a suitable model (e.g., OpenAI’s text-embedding-ada-002).

  • Ensure that the right chunking strategy aligns with the embedding model to facilitate more precise retrieval in RAG systems.
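To keep the example self-contained, the snippet below substitutes a toy bag-of-words vector for a real embedding model such as text-embedding-ada-002; in practice you would call your embedding provider at this point:

```python
from collections import Counter

def toy_embed(text: str, vocab: list[str]) -> list[int]:
    """Stand-in for a real embedding model: a bag-of-words count
    vector over a fixed vocabulary. Swap in a real model in production."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

vocab = ["semantic", "chunking", "rag"]
vectors = [toy_embed(c, vocab) for c in
           ["Semantic chunking helps RAG", "RAG retrieves RAG context"]]
```

The point of the sketch: each chunk becomes one vector, so a chunk that mixes topics produces a muddier vector than one built around a single idea.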

Step 9: Validation and Adjustment

After creating the chunks, validate their quality:

  • Review the chunks for semantic integrity—each should convey a complete idea.
  • Adjust the chunking strategy if necessary to ensure accuracy and relevance. If any chunks are incomplete or non-cohesive, refine the chunking process.

This way, you can ensure that the resulting chunks are meaningful and optimized for retrieval in systems like RAG.

Using the Right Tools

Tools like Unstructured AI offer a powerful way to perform semantic chunking by converting unstructured documents into coherent chunks ready for RAG systems. These tools can also handle non-textual elements, ensuring that every part of a document is accounted for during chunking and embedding. 

Additionally, platforms like Pinecone provide storage and retrieval capabilities that integrate seamlessly with semantic chunking methods.

Other tools for semantic chunking include:

  • LangChain
  • Azure AI Document Intelligence
  • LlamaIndex
  • Semantic Text Splitter libraries
  • Hugging Face Transformers

Advanced Chunking Techniques

Chunking documents for RAG also involves several advanced methods, each tailored to different types of content and performance goals. These strategies include:

  • Statistical chunking
  • Consecutive chunking
  • Cumulative chunking

1. Statistical Chunking

This approach adjusts chunk boundaries based on semantic differences in the text. Instead of using fixed sizes, it splits the document when the content's meaning shifts. Some sub-methods include:

  • Percentile-based chunking: Splits occur when differences between sentences exceed a set percentile, adjusting to the document’s natural breaks.
  • Standard deviation-based chunking: Chunks form when semantic differences go beyond a certain number of standard deviations, isolating major content shifts.
  • Interquartile-based chunking: This method splits text using the interquartile range, focusing on significant differences while ignoring minor variations.
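A sketch of the percentile-based variant is below. The distances would come from comparing consecutive sentence embeddings; here they are supplied directly, and the nearest-rank percentile is one of several reasonable definitions:

```python
def percentile_threshold(values: list[float], pct: float) -> float:
    """Nearest-rank percentile of a list of values."""
    ranked = sorted(values)
    idx = min(len(ranked) - 1, int(round(pct / 100 * (len(ranked) - 1))))
    return ranked[idx]

def percentile_breaks(distances: list[float], pct: float = 90) -> list[int]:
    """Indices where the semantic distance between consecutive
    sentences reaches the given percentile -- candidate split points."""
    thresh = percentile_threshold(distances, pct)
    return [i for i, d in enumerate(distances) if d >= thresh]

# Distance between sentence i and i+1; the spike at index 3 marks a topic shift.
breaks = percentile_breaks([0.10, 0.20, 0.15, 0.90, 0.10])
```

Lowering the percentile produces more, smaller chunks; raising it splits only on the sharpest topic shifts.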

If you are interested in the technical details (with code), see the detailed LangChain guide on splitting text based on semantic similarity.

2. Consecutive Chunking

This method is simpler and splits documents into sequential chunks without overlap. It works best for texts where each section holds independent meaning and doesn’t require complex chunking techniques.

3. Cumulative Chunking

Cumulative chunking gathers text into chunks until it reaches a semantic threshold. This method is ideal for complex documents where text sections build on each other, ensuring each chunk represents a complete idea without prematurely cutting off important content.
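A simplified sketch of cumulative chunking follows; the word-overlap similarity is a stand-in for embedding similarity, and the 0.5 threshold is an assumption you would tune:

```python
def word_overlap(a: str, b: str) -> float:
    """Toy similarity: Jaccard overlap of word sets (stand-in for embeddings)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def cumulative_chunks(sentences: list[str], threshold: float = 0.5) -> list[str]:
    """Grow a chunk sentence by sentence until similarity to the
    accumulated text drops below the threshold, then start a new chunk."""
    chunks, current = [], [sentences[0]]
    for s in sentences[1:]:
        if word_overlap(" ".join(current), s) >= threshold:
            current.append(s)   # sentence continues the running idea
        else:
            chunks.append(" ".join(current))
            current = [s]       # topic shift: begin a new chunk
    chunks.append(" ".join(current))
    return chunks

sentences = ["cats like milk", "cats like sun", "rockets burn fuel"]
result = cumulative_chunks(sentences)
```

Because each new sentence is compared against everything accumulated so far, the chunk grows until the topic genuinely changes.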

Conclusion

Semantic chunking enables RAG systems to perform more effectively. It ensures that the chunks of information are meaningful and contextually relevant. By focusing on concise embeddings, optimizing chunk sizes, and cutting out noise, it significantly enhances response quality. 

This method groups similar information into coherent segments, giving LLMs focused inputs that improve their ability to process and understand language. When used effectively, semantic chunking sharpens system accuracy, speeds up responses, and delivers more reliable answers to complex queries.

Optimize Your RAG System with Semantic Chunking

Looking to improve the performance of your RAG system with advanced chunking techniques? Schedule a free 30-minute call with our experts to see how Unstructured AI can help you automate this process. 
