eSeGeCe software
sgcWebSockets 2026.4.0 introduces a major expansion of the OpenAI API integration, bringing full support for the new Responses API (the official replacement for the deprecated Assistants API), Audio Speech text-to-speech, Fine-Tuning Jobs management, the Batch API for asynchronous bulk processing, the Uploads API for large file handling, and modernized Chat Completions with tool calling and structured output support. This article covers every new method, its parameters, and practical Delphi code examples.
The Responses API is the most significant addition in this release. It replaces the deprecated Assistants API with a streamlined, stateless interface exposed through the /responses endpoint. Unlike Assistants, each Responses API call is a single round-trip that can include tool definitions, file search, web search, and structured output, all in one request. All methods are available on the TsgcHTTP_API_OpenAI component.
| Method | Description | Endpoint |
|---|---|---|
| CreateResponse | Creates a new model response. Accepts a model identifier and input text (or a structured input array). Returns the model's generated output, including any tool calls. | POST /responses |
| RetrieveResponse | Retrieves a previously created response by its unique ID. Useful for polling or auditing completed responses. | GET /responses/{response_id} |
| DeleteResponse | Permanently deletes a stored response. Only applicable when store: true was used during creation. | DELETE /responses/{response_id} |
| CancelResponse | Cancels an in-progress response. Applicable to responses created in background mode. | POST /responses/{response_id}/cancel |
| ListInputItems | Lists the input items associated with a response. Useful for inspecting the conversation context that was sent to the model. | GET /responses/{response_id}/input_items |
```pascal
var
  OpenAI: TsgcHTTP_API_OpenAI;
  vResponse: String;
begin
  OpenAI := TsgcHTTP_API_OpenAI.Create(nil);
  try
    OpenAI.OpenAIOptions.ApiKey := 'sk-your-api-key';

    // Create a simple response
    vResponse := OpenAI._CreateResponse('gpt-4o', 'Explain quantum computing');
    ShowMessage(vResponse);

    // Retrieve a previously created response
    vResponse := OpenAI._RetrieveResponse('resp_abc123');
    ShowMessage(vResponse);

    // Delete a stored response
    OpenAI._DeleteResponse('resp_abc123');
  finally
    OpenAI.Free;
  end;
end;
```
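For reference, a _CreateResponse call like the one above corresponds to a JSON request body along these lines. Field names follow the public OpenAI Responses API; the exact payload the component builds internally is an implementation detail, and the instructions text here is illustrative:

```json
{
  "model": "gpt-4o",
  "input": "Explain quantum computing",
  "instructions": "Answer in two short paragraphs.",
  "store": true
}
```

Setting store to true keeps the response server-side, which is what later makes RetrieveResponse and DeleteResponse applicable to it.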
The Audio Speech API provides text-to-speech capabilities using OpenAI's TTS models. It supports two model tiers: tts-1 for low-latency streaming use cases and tts-1-hd for higher-quality output. Six built-in voices are available: alloy, echo, fable, onyx, nova, and shimmer. The output can be returned in mp3, opus, aac, flac, wav, or pcm format.
| Method | Description | Endpoint |
|---|---|---|
| CreateSpeech | Generates audio speech from the provided text input using the specified model and voice. Returns the audio content as a binary stream. | POST /audio/speech |
```pascal
var
  OpenAI: TsgcHTTP_API_OpenAI;
  oStream: TFileStream;
begin
  OpenAI := TsgcHTTP_API_OpenAI.Create(nil);
  try
    OpenAI.OpenAIOptions.ApiKey := 'sk-your-api-key';

    // Generate speech using the 'alloy' voice
    oStream := TFileStream.Create('speech.mp3', fmCreate);
    try
      OpenAI._CreateSpeech('tts-1', 'Hello world', 'alloy', oStream);
    finally
      oStream.Free;
    end;

    // Generate high-definition speech with the 'nova' voice
    oStream := TFileStream.Create('speech_hd.mp3', fmCreate);
    try
      OpenAI._CreateSpeech('tts-1-hd', 'Welcome to sgcWebSockets.', 'nova', oStream);
    finally
      oStream.Free;
    end;
  finally
    OpenAI.Free;
  end;
end;
```
The tts-1 model is optimized for real-time, low-latency applications, while tts-1-hd provides higher audio fidelity at the cost of slightly increased latency. Choose based on your application requirements.
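For reference, the /audio/speech endpoint expects a JSON body like the following, per the public OpenAI API; the component assembles this from the _CreateSpeech parameters:

```json
{
  "model": "tts-1",
  "input": "Hello world",
  "voice": "alloy",
  "response_format": "mp3"
}
```

The response_format field is optional and defaults to mp3; the other formats listed above (opus, aac, flac, wav, pcm) can be substituted here.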
The Fine-Tuning Jobs API replaces the deprecated /fine-tunes
endpoint with the new /fine_tuning/jobs endpoint.
It provides full lifecycle management for fine-tuning operations: creating jobs, listing active and completed jobs,
retrieving details, cancelling in-progress jobs, and streaming training events. This API supports fine-tuning
of models like gpt-4o-mini-2024-07-18 using your own training data.
| Method | Description | Endpoint |
|---|---|---|
| CreateFineTuningJob | Creates a new fine-tuning job using a specified base model and a previously uploaded training file. Returns the job object with its ID and status. | POST /fine_tuning/jobs |
| ListFineTuningJobs | Lists all fine-tuning jobs for the organization, with support for pagination. Returns jobs sorted by creation date. | GET /fine_tuning/jobs |
| RetrieveFineTuningJob | Retrieves detailed information about a specific fine-tuning job, including status, hyperparameters, and result files. | GET /fine_tuning/jobs/{job_id} |
| CancelFineTuningJob | Cancels an in-progress fine-tuning job. The job status changes to "cancelled" and no further training occurs. | POST /fine_tuning/jobs/{job_id}/cancel |
| ListFineTuningJobEvents | Lists status events for a fine-tuning job, including training loss, validation metrics, and completion status. Supports pagination. | GET /fine_tuning/jobs/{job_id}/events |
```pascal
var
  OpenAI: TsgcHTTP_API_OpenAI;
  vResponse: String;
begin
  OpenAI := TsgcHTTP_API_OpenAI.Create(nil);
  try
    OpenAI.OpenAIOptions.ApiKey := 'sk-your-api-key';

    // Create a fine-tuning job
    vResponse := OpenAI._CreateFineTuningJob('gpt-4o-mini-2024-07-18', 'file-abc123');
    ShowMessage(vResponse);

    // List all fine-tuning jobs
    vResponse := OpenAI._ListFineTuningJobs;
    ShowMessage(vResponse);

    // Retrieve a specific job
    vResponse := OpenAI._RetrieveFineTuningJob('ftjob-xyz789');
    ShowMessage(vResponse);

    // List events for a job
    vResponse := OpenAI._ListFineTuningJobEvents('ftjob-xyz789');
    ShowMessage(vResponse);

    // Cancel an in-progress job
    OpenAI._CancelFineTuningJob('ftjob-xyz789');
  finally
    OpenAI.Free;
  end;
end;
```
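The training file referenced by _CreateFineTuningJob must use the chat-format JSONL layout defined by the OpenAI fine-tuning API: one JSON object per line, each containing a messages array. A minimal example line (the conversation content is illustrative):

```json
{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What is sgcWebSockets?"}, {"role": "assistant", "content": "sgcWebSockets is a Delphi library for WebSocket and HTTP APIs."}]}
```

The file is uploaded in advance via the Files API with purpose fine-tune, and the resulting file ID (file-abc123 above) is what the job creation call receives.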
The Chat Completions API in sgcWebSockets 2026.4.0 has been modernized with several new request properties and response fields. These additions bring full support for tool/function calling, structured JSON output, deterministic generation via seeds, and parallel tool execution.
| Property | Description | Endpoint |
|---|---|---|
| Tools | Defines a list of tools (functions) the model may call. Each tool includes a name, description, and JSON Schema for its parameters. | POST /chat/completions |
| ToolChoice | Controls how the model selects tools. Options: auto, none, required, or a specific function name. | POST /chat/completions |
| ResponseFormat | Specifies the output format. Use json_object for guaranteed JSON output or json_schema for structured output conforming to a provided schema. | POST /chat/completions |
| Seed | An integer seed for deterministic sampling. When the same seed and parameters are used, the model attempts to produce the same output. | POST /chat/completions |
| MaxCompletionTokens | Sets an upper bound on the number of tokens the model can generate in the response. Replaces the older max_tokens parameter. | POST /chat/completions |
| ParallelToolCalls | When enabled, the model may issue multiple tool calls in a single response, allowing parallel execution on the client side. | POST /chat/completions |
| StreamOptions | Configuration for streaming responses. Includes options like include_usage to receive token usage statistics in the final streamed chunk. | POST /chat/completions |
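As a reference for the Tools and ToolChoice properties, a function tool is described in the standard OpenAI format by a name, a description, and a JSON Schema for its parameters. The get_weather function below is a made-up example:

```json
{
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Returns the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {
            "location": { "type": "string", "description": "City name" }
          },
          "required": ["location"]
        }
      }
    }
  ],
  "tool_choice": "auto"
}
```

With tool_choice set to auto the model decides whether to call the function; required forces a tool call, and none disables tools for that request.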
| Field | Description | Endpoint |
|---|---|---|
| ToolCalls | Array of tool call objects in the assistant message. Each contains an ID, function name, and arguments for client-side execution. | POST /chat/completions |
| Refusal | Contains the model's refusal message when it declines to fulfill a request due to safety or content policy constraints. | POST /chat/completions |
| SystemFingerprint | A fingerprint representing the backend configuration used to generate the response. Useful for verifying deterministic output when using Seed. | POST /chat/completions |
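When the model decides to call a tool, the assistant message in the response carries a tool_calls array in the standard OpenAI shape, and the client is expected to execute the function and send the result back in a follow-up message. An illustrative fragment (IDs and names are placeholders):

```json
{
  "role": "assistant",
  "content": null,
  "tool_calls": [
    {
      "id": "call_abc123",
      "type": "function",
      "function": {
        "name": "get_weather",
        "arguments": "{\"location\": \"Madrid\"}"
      }
    }
  ]
}
```

Note that arguments is a JSON-encoded string, not a nested object, so it must be parsed before use.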
```pascal
var
  OpenAI: TsgcHTTP_API_OpenAI;
  vResponse: String;
begin
  OpenAI := TsgcHTTP_API_OpenAI.Create(nil);
  try
    OpenAI.OpenAIOptions.ApiKey := 'sk-your-api-key';

    // Configure Chat Completions with the new properties
    OpenAI.ChatCompletions.Model := 'gpt-4o';
    OpenAI.ChatCompletions.MaxCompletionTokens := 1024;
    OpenAI.ChatCompletions.Seed := 42;
    OpenAI.ChatCompletions.ParallelToolCalls := True;
    OpenAI.ChatCompletions.ResponseFormat := 'json_object';

    // Add a user message and create the completion
    OpenAI.ChatCompletions.AddMessage('user', 'List 3 benefits of Delphi in JSON format.');
    vResponse := OpenAI._CreateChatCompletion;
    ShowMessage(vResponse);
  finally
    OpenAI.Free;
  end;
end;
```
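When stricter guarantees than json_object are needed, the json_schema response format pins the output to a schema you provide. In the public OpenAI API, the corresponding response_format object looks like this (the schema contents are illustrative):

```json
{
  "type": "json_schema",
  "json_schema": {
    "name": "delphi_benefits",
    "strict": true,
    "schema": {
      "type": "object",
      "properties": {
        "benefits": {
          "type": "array",
          "items": { "type": "string" }
        }
      },
      "required": ["benefits"],
      "additionalProperties": false
    }
  }
}
```

With strict set to true, the model's output is constrained to match the schema exactly rather than merely being valid JSON.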
The Batch API allows you to send large groups of API requests for asynchronous processing. This is ideal for
workloads that do not require immediate responses, such as bulk classification, embedding generation, or
large-scale content moderation. Batch requests typically complete within 24 hours and offer a 50% cost
reduction compared to synchronous API calls. All batch methods are available on the
TsgcHTTP_API_OpenAI component through the
/batches endpoint.
| Method | Description | Endpoint |
|---|---|---|
| CreateBatch | Creates a new batch job from a previously uploaded JSONL file containing API requests. Requires the input file ID and target endpoint. | POST /batches |
| RetrieveBatch | Retrieves the current status and details of a batch job, including progress counts and output file references. | GET /batches/{batch_id} |
| ListBatches | Lists all batch jobs for the organization. Supports pagination through after and limit parameters. | GET /batches |
| CancelBatch | Cancels an in-progress batch job. Already completed requests within the batch are not affected. | POST /batches/{batch_id}/cancel |
```pascal
var
  OpenAI: TsgcHTTP_API_OpenAI;
  vResponse: String;
begin
  OpenAI := TsgcHTTP_API_OpenAI.Create(nil);
  try
    OpenAI.OpenAIOptions.ApiKey := 'sk-your-api-key';

    // Create a batch job targeting chat completions
    vResponse := OpenAI._CreateBatch('file-abc123', '/v1/chat/completions');
    ShowMessage(vResponse);

    // Check batch status
    vResponse := OpenAI._RetrieveBatch('batch_xyz789');
    ShowMessage(vResponse);

    // List all batches
    vResponse := OpenAI._ListBatches;
    ShowMessage(vResponse);

    // Cancel a batch if needed
    OpenAI._CancelBatch('batch_xyz789');
  finally
    OpenAI.Free;
  end;
end;
```
The input file passed to CreateBatch must be a JSONL file uploaded via the Files API with purpose batch. Each line in the file represents a single API request with a custom ID, method, URL, and body.
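A single line of such a batch input file looks like this in the standard OpenAI batch format (the custom_id and body contents are illustrative):

```json
{"custom_id": "request-1", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Classify this review: great product"}]}}
```

The custom_id is echoed back in the output file, which is how each result is matched to its originating request once the batch completes.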
The Uploads API enables uploading large files in multiple parts, which is essential when working with files that
exceed the single-request upload limit (typically 512 MB). The workflow involves creating an upload session,
adding parts sequentially, and then completing the upload to receive a
File object that can be used with
other API endpoints. All methods are available on the
TsgcHTTP_API_OpenAI component through the
/uploads endpoint.
| Method | Description | Endpoint |
|---|---|---|
| CreateUpload | Initiates a new multipart upload session. Requires the filename, purpose, total byte count, and MIME type. Returns an upload object with a unique ID. | POST /uploads |
| AddUploadPart | Adds a chunk of file data to an in-progress upload. Parts must be added sequentially, and each part returns a part ID needed for completion. | POST /uploads/{upload_id}/parts |
| CompleteUpload | Completes the multipart upload by providing the ordered list of part IDs. Returns the final File object that can be used with other APIs. | POST /uploads/{upload_id}/complete |
| CancelUpload | Cancels an in-progress upload session. Any uploaded parts are discarded and the upload ID becomes invalid. | POST /uploads/{upload_id}/cancel |
```pascal
var
  OpenAI: TsgcHTTP_API_OpenAI;
  vUploadResponse: String;
  vPartResponse: String;
  vCompleteResponse: String;
begin
  OpenAI := TsgcHTTP_API_OpenAI.Create(nil);
  try
    OpenAI.OpenAIOptions.ApiKey := 'sk-your-api-key';

    // Step 1: Create an upload session (100 MB total, JSONL for fine-tuning)
    vUploadResponse := OpenAI._CreateUpload(
      'training_data.jsonl', 'fine-tune', 104857600, 'application/jsonl');
    ShowMessage(vUploadResponse);

    // Step 2: Add file parts sequentially
    vPartResponse := OpenAI._AddUploadPart('upload_abc123', 'C:\data\part1.jsonl');
    ShowMessage(vPartResponse);
    vPartResponse := OpenAI._AddUploadPart('upload_abc123', 'C:\data\part2.jsonl');
    ShowMessage(vPartResponse);

    // Step 3: Complete the upload with the ordered part IDs
    vCompleteResponse := OpenAI._CompleteUpload('upload_abc123',
      '["part_def456", "part_ghi789"]');
    ShowMessage(vCompleteResponse);
  finally
    OpenAI.Free;
  end;
end;
```
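For reference, the completion step maps to a simple JSON body on the public /uploads/{upload_id}/complete endpoint, listing the part IDs in the order the parts were added (the IDs here are the placeholders from the example above):

```json
{
  "part_ids": ["part_def456", "part_ghi789"]
}
```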