Embeddings & Vector Databases
Transform text into semantic vectors and build intelligent search, recommendations, and RAG applications with your own data.
Numerical representations of text that capture semantic meaning and enable intelligent applications.
Text embeddings are high-dimensional numerical vectors that capture the semantic meaning of text. Words, sentences, or entire documents are transformed into arrays of floating-point numbers where similar meanings are represented by vectors that are close together in the vector space.
Using OpenAI embedding models, you can generate these vectors from any text input. Once generated, embeddings enable powerful capabilities like similarity search, clustering, and retrieval-augmented generation (RAG) — all from within your Delphi application.
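Closeness in the vector space is usually measured with cosine similarity. The helper below is a minimal, self-contained Delphi sketch of that calculation; it is plain Pascal and not part of the sgcWebSockets API:

```pascal
// Cosine similarity: values near 1.0 mean the vectors point the same
// way (similar meaning), values near 0 mean unrelated content.
function CosineSimilarity(const A, B: TArray<Double>): Double;
var
  i: Integer;
  Dot, NormA, NormB: Double;
begin
  Assert(Length(A) = Length(B), 'Vectors must have equal dimensions');
  Dot := 0; NormA := 0; NormB := 0;
  for i := 0 to High(A) do
  begin
    Dot := Dot + A[i] * B[i];
    NormA := NormA + Sqr(A[i]);
    NormB := NormB + Sqr(B[i]);
  end;
  Result := Dot / (Sqrt(NormA) * Sqrt(NormB));
end;
```

Ranking candidate texts by this score against a query embedding is semantic search in miniature; a vector database performs the same comparison server-side across millions of vectors.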
Store, query, and manage embedding vectors at scale with native Pinecone support.
sgcWebSockets includes a native Delphi component for the Pinecone vector database. Store millions of embedding vectors and perform lightning-fast similarity searches with metadata filtering. No external dependencies or Python bridges required.
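The storage step can be sketched as follows. The component, property, and method names below (sgcAIPinecone1, IndexName, Upsert) are assumptions for illustration only; consult the sgcWebSockets component reference for the actual Pinecone API:

```pascal
// Hypothetical sketch — names below are illustrative assumptions,
// not the verified sgcWebSockets Pinecone API.
procedure TForm1.StoreEmbedding(const AId: string;
  const AVector: TArray<Double>);
begin
  sgcAIPinecone1.ApiKey := 'your-pinecone-api-key';
  sgcAIPinecone1.IndexName := 'my-documents';
  // Store the vector together with metadata for later filtering
  sgcAIPinecone1.Upsert(AId, AVector, '{"source": "manual"}');
end;
```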
Embeddings unlock powerful AI capabilities for your applications.
Search by meaning, not just keywords. Find relevant content even when a query uses different wording, synonyms, or phrasing than the original text.
Ground AI responses in your own data. Retrieve relevant documents before generating answers to reduce hallucinations and improve accuracy.
Suggest similar content, products, or documents based on semantic similarity. Build personalized recommendation engines powered by vector search.
Automatically categorize text into predefined groups using vector similarity. Classify support tickets, documents, or user feedback without training custom ML models.
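The classification use case can be sketched as a nearest-centroid lookup, assuming each category already has a reference embedding. This is plain Delphi with no library dependencies:

```pascal
// Pick the label whose reference embedding has the highest cosine
// similarity to the input embedding.
function ClassifyByEmbedding(const Input: TArray<Double>;
  const Labels: TArray<string>;
  const Refs: TArray<TArray<Double>>): string;
var
  i, j: Integer;
  Dot, NormA, NormB, Score, Best: Double;
begin
  Best := -2; // cosine similarity is always >= -1
  Result := '';
  for i := 0 to High(Labels) do
  begin
    Dot := 0; NormA := 0; NormB := 0;
    for j := 0 to High(Input) do
    begin
      Dot := Dot + Input[j] * Refs[i][j];
      NormA := NormA + Sqr(Input[j]);
      NormB := NormB + Sqr(Refs[i][j]);
    end;
    Score := Dot / (Sqrt(NormA) * Sqrt(NormB));
    if Score > Best then
    begin
      Best := Score;
      Result := Labels[i];
    end;
  end;
end;
```

For a few dozen categories this runs locally in microseconds; at larger scale the same comparison is better delegated to a vector database query.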
Generate embeddings for semantic search and RAG applications with just a few lines of Delphi code.
procedure TForm1.GenerateEmbedding;
begin
  sgcAIOpenAI1.ApiKey := 'sk-your-api-key';
  sgcAIOpenAI1.Embeddings.Model := 'text-embedding-3-small';
  sgcAIOpenAI1.Embeddings.Input := 'WebSocket real-time communication';
  sgcAIOpenAI1.Embeddings.CreateEmbeddings;
end;

procedure TForm1.sgcAIOpenAI1EmbeddingsResponse(Sender: TObject;
  const Response: TsgcAIOpenAIEmbeddingsResponse);
begin
  // Response.Data[0].Embedding holds the vector; log its size here,
  // then store the vector in Pinecone for semantic search
  Memo1.Lines.Add('Dimensions: ' + IntToStr(Length(Response.Data[0].Embedding)));
end;
Combine embeddings with Pinecone vector storage to build Retrieval Augmented Generation (RAG) pipelines. Index your documents as vectors, then retrieve the most relevant context before sending queries to an LLM — reducing hallucinations and grounding responses in your own data.
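A hedged outline of that pipeline in Delphi. Only the embeddings calls mirror the example above; the Pinecone query and chat steps use assumed names (QueryByVector, TopK) for illustration, not the verified sgcWebSockets API:

```pascal
// Hypothetical RAG pipeline sketch.
procedure TForm1.AskWithContext(const AQuestion: string);
begin
  // 1. Embed the user question (same pattern as the example above)
  sgcAIOpenAI1.Embeddings.Model := 'text-embedding-3-small';
  sgcAIOpenAI1.Embeddings.Input := AQuestion;
  sgcAIOpenAI1.Embeddings.CreateEmbeddings;
  // 2. In the embeddings response handler, query Pinecone for the
  //    nearest stored document vectors (assumed method name):
  //    Context := sgcAIPinecone1.QueryByVector(Vector, {TopK=}3);
  // 3. Prepend the retrieved passages to the prompt and send it to
  //    the chat model, grounding the answer in your own data.
end;
```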
Deep-link to the component reference, grab the ready-to-run demo project, and download the trial.