OpenAI Delphi API (1 / 5)

Starting with sgcWebSockets 2023.3.0, the OpenAI API is fully supported.

The OpenAI API can be applied to virtually any task that involves understanding or generating natural language, code, or images. OpenAI offers a spectrum of models with different levels of power suitable for different tasks, as well as the ability to fine-tune your own custom models. These models can be used for everything from content generation to semantic search and classification.

Authentication

The OpenAI API uses API keys for authentication. Visit your API Keys page to retrieve the API key you'll use in your requests.

Remember that your API key is a secret! Do not share it with others or expose it in any client-side code (browsers, apps). Production requests must be routed through your own backend server where your API key can be securely loaded from an environment variable or key management service.

This API Key must be configured in the OpenAIOptions.ApiKey property of the component. If your account belongs to multiple organizations, you can optionally set the organization to use in the OpenAIOptions.Organization property.
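As a minimal configuration sketch (the component class name and procedure are assumptions for illustration; only the OpenAIOptions properties are documented above, so check your sgcWebSockets installation for the exact identifiers):

```pascal
// Hypothetical sketch: TsgcHTTPAPI_Client_OpenAI is an assumed class name.
procedure ConfigureOpenAI(Client: TsgcHTTPAPI_Client_OpenAI);
begin
  // Load the secret from the environment instead of hard-coding it.
  Client.OpenAIOptions.ApiKey := GetEnvironmentVariable('OPENAI_API_KEY');
  // Optional: only needed when the account belongs to several organizations.
  Client.OpenAIOptions.Organization := GetEnvironmentVariable('OPENAI_ORGANIZATION');
end;
```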


OpenAI Models

Once the API Key is configured, the following functions are available to interact with the OpenAI API.

Models

List and describe the various models available in the API.

  • GetModels: Lists the currently available models, and provides basic information about each one such as the owner and availability.
  • GetModel: Retrieves a model instance, providing basic information about the model such as the owner and permissioning.
    • Model: The ID of the model to use for this request
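A usage sketch for the two model functions (the return type is assumed here to be the raw JSON response as a string, and the model id is illustrative):

```pascal
var
  AllModels, OneModel: string;
begin
  AllModels := Client.GetModels;                 // list every available model
  OneModel  := Client.GetModel('gpt-3.5-turbo'); // details for a single model
end;
```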

Completions

Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.

  • CreateCompletion: Creates a completion for the provided prompt and parameters
    • Model: ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
  • Prompt: The prompt to generate completions for.
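A sketch with the two documented parameters (the parameter order, string return type and model id are assumptions):

```pascal
var
  Response: string;
begin
  Response := Client.CreateCompletion('text-davinci-003',
    'Write a one-line greeting for a Delphi developer.');
end;
```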

Chat

Given a chat conversation, the model will return a chat completion response.

  • Model: ID of the model to use. Call GetModels to get a list of all models supported by the Chat API.
  • Message: The message to generate chat completions for.
  • Role: The role of the message author; user by default, other options are system and assistant.
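A hedged sketch of a chat call. The method name CreateChatCompletion is an assumption based on the naming of the other endpoints, since the list above only documents the Model, Message and Role parameters:

```pascal
var
  Response: string;
begin
  // Role is omitted here, so the default user role applies.
  Response := Client.CreateChatCompletion('gpt-3.5-turbo',
    'Explain what a WebSocket is in one sentence.');
end;
```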

Edits

Given a prompt and an instruction, the model will return an edited version of the prompt.

  • CreateEdit: Creates a new edit for the provided input, instruction, and parameters.
    • Model: ID of the model to use. You can use the text-davinci-edit-001 or code-davinci-edit-001 model with this endpoint.
    • Instruction: The instruction that tells the model how to edit the prompt.
    • Input: (optional) The input text to use as a starting point for the edit.
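A sketch of an edit request using one of the two models named above (parameter order and string return type are assumptions):

```pascal
var
  Response: string;
begin
  Response := Client.CreateEdit('text-davinci-edit-001',
    'Fix the spelling mistakes',     // Instruction
    'Delhpi is a grate language');   // Input: optional starting text
end;
```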

Images

Given a prompt and/or an input image, the model will generate a new image.

  • CreateImage: Creates an image given a prompt.
    • Prompt: A text description of the desired image(s). The maximum length is 1000 characters.
  • CreateImageEdit: Creates an edited or extended image given an original image and a prompt.
    • Image: The image to edit. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask.
    • Prompt: A text description of the desired image(s). The maximum length is 1000 characters.
  • CreateImageVariations: Creates a variation of a given image.
    • Image: The image to use as the basis for the variation(s). Must be a valid PNG file, less than 4MB, and square.
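A sketch of image generation and variation (file path and return type are illustrative assumptions; note the PNG, size and squareness constraints above):

```pascal
var
  Response: string;
begin
  // Text-to-image from a prompt (maximum 1000 characters).
  Response := Client.CreateImage('A watercolor painting of a lighthouse');
  // Variation of an existing square PNG under 4MB.
  Response := Client.CreateImageVariations('C:\images\lighthouse.png');
end;
```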

Embeddings

Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.

  • CreateEmbeddings: Creates an embedding vector representing the input text.
    • Model: ID of the model to use.
    • Input: Input text to get embeddings for.
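A sketch of an embeddings request (the model id is an example and the string return type is an assumption):

```pascal
var
  Response: string;
begin
  Response := Client.CreateEmbeddings('text-embedding-ada-002',
    'sgcWebSockets is a Delphi WebSocket library.');
end;
```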

Audio

Turn Audio into Text.

  • CreateTranscriptionFromFile: Transcribes audio into the input language from a filename
    • Model: ID of the model to use. Only whisper-1 is currently available.
    • Filename: The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
  • CreateTranscription: Records audio for X seconds and transcribes it.
    • Model: ID of the model to use. Only whisper-1 is currently available.
    • Time: Recording time in milliseconds; 10 seconds by default.
  • CreateTranslationFromFile: Translates audio into English.
    • Model: ID of the model to use. Only whisper-1 is currently available.
    • Filename: The audio file to translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
  • CreateTranslation: Records audio for X seconds and translates it.
    • Model: ID of the model to use. Only whisper-1 is currently available.
    • Time: Recording time in milliseconds; 10 seconds by default.
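A sketch of the two transcription variants (the file path is illustrative; parameter order and return type are assumptions):

```pascal
var
  Response: string;
begin
  // Transcribe an existing audio file with the whisper-1 model.
  Response := Client.CreateTranscriptionFromFile('whisper-1', 'C:\audio\meeting.mp3');
  // Record from the microphone for 10 seconds (10000 ms) and transcribe.
  Response := Client.CreateTranscription('whisper-1', 10000);
end;
```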

Files

Files are used to upload documents that can be used with features like Fine-tuning.

  • ListFiles: Returns a list of files that belong to the user's organization.
  • UploadFile: Upload a file that contains document(s) to be used across various endpoints/features. Currently, the size of all the files uploaded by one organization can be up to 1 GB.
    • Filename: Name of the JSON Lines file to be uploaded. If the purpose is set to "fine-tune", each line is a JSON record with "prompt" and "completion" fields representing your training examples.
    • Purpose: The intended purpose of the uploaded documents. Use "fine-tune" for Fine-tuning.
  • DeleteFile: Delete a file.
    • FileId: The ID of the file to use for this request
  • RetrieveFile: Returns information about a specific file.
    • FileId: The ID of the file to use for this request
  • RetrieveFileContent: Returns the contents of the specified file
    • FileId: The ID of the file to use for this request.
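A sketch of a typical file workflow (the file id shown is purely illustrative, and string return types are assumptions):

```pascal
var
  Response: string;
begin
  // Upload a JSON Lines file for fine-tuning.
  Response := Client.UploadFile('C:\data\training.jsonl', 'fine-tune');
  // List, inspect and delete by file id.
  Response := Client.ListFiles;
  Response := Client.RetrieveFile('file-abc123');
  Response := Client.DeleteFile('file-abc123');
end;
```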

Fine-Tunes

Manage fine-tuning jobs to tailor a model to your specific training data.

  • CreateFineTune: Creates a job that fine-tunes a specified model from a given dataset. Response includes details of the enqueued job including job status and the name of the fine-tuned models once complete.
    • TrainingFile: The ID of an uploaded file that contains training data.
  • ListFineTunes: List your organization's fine-tuning jobs
  • RetrieveFineTune: Gets info about the fine-tune job.
    • FineTuneId: The ID of the fine-tune job
  • CancelFineTune: Immediately cancel a fine-tune job.
    • FineTuneId: The ID of the fine-tune job
  • ListFineTuneEvents: Get fine-grained status updates for a fine-tune job.
    • FineTuneId: The ID of the fine-tune job
  • DeleteFineTuneModel: Delete a fine-tuned model. You must have the Owner role in your organization.
    • Model: The model to delete.
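A sketch of a fine-tuning round trip (the file and job ids are illustrative assumptions):

```pascal
var
  Response: string;
begin
  // Start a fine-tune job from a previously uploaded training file.
  Response := Client.CreateFineTune('file-abc123');
  // Check the job status and inspect its event log.
  Response := Client.RetrieveFineTune('ft-xyz789');
  Response := Client.ListFineTuneEvents('ft-xyz789');
end;
```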

Moderations

Given an input text, the model outputs whether it classifies the text as violating OpenAI's content policy.

  • CreateModeration: Classifies if text violates OpenAI's Content Policy
    • Input: The input text to classify
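A sketch of a moderation request (string return type is an assumption):

```pascal
var
  Response: string;
begin
  Response := Client.CreateModeration('Sample text to check against the policy.');
end;
```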

OpenAI Examples

Find below some examples of applications built in Delphi using the OpenAI API.

1. ChatGPT Delphi Client

2. OpenAI Transcription Delphi Client

3. Translate OpenAI Delphi Client

4. Image Generator OpenAI Delphi Client


Find below a sample OpenAI API client built for Windows using the Delphi sgcWebSockets library, which shows the main methods of the API.

Download: sgcOpenAI (2.8 MB)