TsgcWSAPIClient_MCPEvents › OnMCPSamplingCreateMessage

OnMCPSamplingCreateMessage Event

Fires when the server asks the client to sample an LLM (sampling/createMessage).

Syntax

property OnMCPSamplingCreateMessage: TsgcAI_MCP_Client_OnSamplingCreateMessageEvent;
// TsgcAI_MCP_Client_OnSamplingCreateMessageEvent = procedure(Sender: TObject; const aRequest: TsgcAI_MCP_Request_SamplingCreateMessage; const aResponse: TsgcAI_MCP_Response_SamplingCreateMessage) of object;
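
The handler is assigned on the MCP events component before connecting; a minimal sketch (the instance name MCPEvents is an assumption, not from this page):

// MCPEvents is assumed to be a TsgcWSAPIClient_MCPEvents instance.
MCPEvents.OnMCPSamplingCreateMessage := MCPSamplingCreateMessage;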

Remarks

Server-initiated event: the server wants the host application to run an LLM completion on its behalf. Read aRequest.Params.Messages, aRequest.Params.ModelPreferences, aRequest.Params.SystemPrompt and aRequest.Params.MaxTokens; ideally, present the request to the user for approval first. Then invoke your preferred LLM (OpenAI, Anthropic, Gemini, a local model, etc.) and fill aResponse.Result with the selected model, stop reason, role and content. When the handler returns, the component forwards the populated response to the server.
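
The approval and model-selection steps above can be sketched as follows; ConfirmSamplingDialog and SelectModelFromHints are hypothetical host-side helpers, not part of the component:

// Ask the user before sampling on the server's behalf.
if not ConfirmSamplingDialog(aRequest.Params.SystemPrompt) then
  Exit; // rejected: leave the response unfilled per your host's refusal policy
// Honour the server's model hints when choosing which LLM to call.
vModel := SelectModelFromHints(aRequest.Params.ModelPreferences);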

Example

procedure TMainForm.MCPSamplingCreateMessage(Sender: TObject;
  const aRequest: TsgcAI_MCP_Request_SamplingCreateMessage;
  const aResponse: TsgcAI_MCP_Response_SamplingCreateMessage);
var
  vReply: string;
begin
  // Run the completion with your own LLM client (MyLLMClient is a placeholder).
  vReply := MyLLMClient.Complete(aRequest.Params.Messages,
    aRequest.Params.MaxTokens);
  // Fill the result; the component sends it to the server when the handler returns.
  aResponse.Result.Model := 'claude-opus-4-7';
  aResponse.Result.Role := 'assistant';
  aResponse.Result.StopReason := 'endTurn';
  aResponse.Result.Content.&Type := 'text'; // &-prefix escapes the reserved word Type
  aResponse.Result.Content.Text := vReply;
end;

Back to Events