TsgcHTTP_API_OpenAI › Methods › CreateModeration
Classifies text against OpenAI's content policy to detect potentially harmful or unsafe content
function CreateModeration(const aRequest : TsgcOpenAIClass_Request_Moderation) : TsgcOpenAIClass_Response_Moderation;
| Name | Type | Description |
|---|---|---|
| aRequest | const TsgcOpenAIClass_Request_Moderation | Moderation request specifying the model and the input text(s) to evaluate |
Moderation response containing per-category boolean flags, numeric scores, and an overall Flagged indicator (TsgcOpenAIClass_Response_Moderation)
Calls the POST /v1/moderations endpoint. The moderation model assesses categories such as hate, harassment, self-harm, sexual content, and violence, returning a boolean flag and a numeric score for each category. Typical models are omni-moderation-latest and text-moderation-latest. Use this method to screen user-generated content before forwarding it to the completion or chat endpoints.
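For orientation, the underlying /v1/moderations response that this method wraps looks roughly like the abbreviated sketch below (scores are illustrative, and the full response documented by OpenAI contains additional sub-categories such as hate/threatening and sexual/minors):

```json
{
  "id": "modr-...",
  "model": "omni-moderation-latest",
  "results": [
    {
      "flagged": false,
      "categories": {
        "hate": false,
        "harassment": false,
        "self-harm": false,
        "sexual": false,
        "violence": false
      },
      "category_scores": {
        "hate": 0.0001,
        "harassment": 0.0002,
        "self-harm": 0.0001,
        "sexual": 0.0001,
        "violence": 0.0003
      }
    }
  ]
}
```

The Results property of TsgcOpenAIClass_Response_Moderation corresponds to the results array, with Results[0].Flagged mapping to the top-level flagged field.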
var
  oRequest: TsgcOpenAIClass_Request_Moderation;
  oResponse: TsgcOpenAIClass_Response_Moderation;
begin
  oRequest := TsgcOpenAIClass_Request_Moderation.Create;
  oRequest.Model := 'omni-moderation-latest';
  oRequest.Input.Add('I want to learn Delphi programming');
  oResponse := oAPI.CreateModeration(oRequest);
  ShowMessage(BoolToStr(oResponse.Results[0].Flagged, True));
end;