POST /api/ai/content/v4/{modelType}/{modelId}
curl --request POST \
  --url https://api.worqhat.com/api/ai/content/v4/{modelType}/{modelId} \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "training_data": "You will be provided with customer service queries. Classify each query into a  primary and secondary category. Provide JSON output with keys: `primary` and `secondary`.\n",
  "response_type": "json",
  "question": "How do I reset my password?",
  "conversation_id": "conv_1724236791746",
  "preserve_history": false,
  "conversation_history": [
    {
      "user_prompt": "I need help with my bill.",
      "model_response": "Sure! Can you clarify what specific issue you'\''re facing?"
    }
  ]
}'
{
  "Text Output": {
    "value": {
      "content": "Hi there!  My name is Alex, and I'm happy to help you with anything you need for your tour.  What can I do for you today?",
      "processingTime": 3105.397454,
      "processingId": "8aa97481-20f9-48f4-a12d-d02eb6c1d62a",
      "processingCount": 84,
      "conversation_id": "conv_1724236791746",
      "model": "aicon-v4-alpha-160824"
    }
  },
  "JSON Structured Output": {
    "value": {
      "content": {
        "response": "Hi there!  My name is Alex, and I'm delighted to assist you with your travel needs. What can I help you with today?"
      },
      "processingTime": 2946.176112,
      "processingId": "5944ab45-8bd7-4151-853d-da6cf315d617",
      "processingCount": 90,
      "conversation_id": "conv_1724237541817",
      "model": "aicon-v4-alpha-160824"
    }
  },
  "Streaming Data": {
    "value": {
      "data": [
        {
          "content": "Hi",
          "model": "aicon-v4-alpha-160824",
          "timestamp": 1724240278523,
          "processing_id": "bc4cc3b9-d000-4866-83ad-ad71abf10c8f",
          "conversation_id": "conv_1724240276193"
        },
        {
          "content": " there! My name is Alex, and I'm excited to be your tour",
          "model": "aicon-v4-alpha-160824",
          "timestamp": 1724240278630,
          "processing_id": "bc4cc3b9-d000-4866-83ad-ad71abf10c8f",
          "conversation_id": "conv_1724240276193"
        },
        {
          "content": " guide today. What can I help you with? \n",
          "model": "aicon-v4-alpha-160824",
          "timestamp": 1724240278708,
          "processing_id": "bc4cc3b9-d000-4866-83ad-ad71abf10c8f",
          "conversation_id": "conv_1724240276193"
        },
        {
          "content": "",
          "model": "aicon-v4-alpha-160824",
          "timestamp": 1724240278712,
          "processing_id": "bc4cc3b9-d000-4866-83ad-ad71abf10c8f",
          "conversation_id": "conv_1724240276193"
        },
        {
          "content": "",
          "model": "aicon-v4-alpha-160824",
          "finishReason": "stop",
          "timestamp": 1724240278716,
          "processing_id": "bc4cc3b9-d000-4866-83ad-ad71abf10c8f",
          "conversation_id": "conv_1724240276193",
          "wordCount": 77
        }
      ]
    }
  }
}

Generate from Multimodal Input

The WorqHat AiCon V4 Custom Model API allows you to generate text from multimodal inputs. You can choose from several base models, each with its own strengths:

  • aicon-v4-nano-160824 (fine-tuning included): Ideal for quick and efficient text generation, especially when resources are limited.
  • aicon-v4-large-160824 (fine-tuning included): Our smartest and most accurate model, suitable for a wide range of text generation tasks, including complex reasoning and mathematical solutions.

Choose Your Response Style: You can either stream the response (stream_data = true) or wait for the entire result to be generated (stream_data = false).

Benefit of streaming: faster interactions. You receive partial results as they are generated, allowing you to start working with the text immediately (see the streaming sketch after the example below).


Example:

// Collect the files selected in an <input type="file"> element.
const files = document.querySelector('input[type="file"]').files;

const myHeaders = new Headers();
myHeaders.append('Authorization', 'Bearer sk-02e44************');

// Multimodal requests are sent as multipart form data.
const formdata = new FormData();
formdata.append('question', 'what are the images about');
formdata.append(
  'training_data',
  'You are Alex and you are one of the best tour guides. Answer everything while starting with your name.'
);

// Attach every selected file to the request.
for (const file of files) {
  formdata.append('files', file);
}

// Stream partial results as they are generated.
formdata.append('stream_data', 'true');

const requestOptions = {
  method: 'POST',
  headers: myHeaders,
  body: formdata,
  redirect: 'follow',
};

fetch('https://api.worqhat.com/api/ai/content/v4/{modelType}/{modelId}', requestOptions)
  .then(response => response.text())
  .then(result => {
    console.log(result);
    alert('Response received: ' + result);
  })
  .catch(error => {
    console.log('error', error);
    alert('Error: ' + error);
  });
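
Because the example above reads the body with response.text(), it still waits for the full response before logging it. To consume partial results as they arrive when stream_data is 'true', you can read the body incrementally. The sketch below reuses the requestOptions object from the example and assumes each server-sent event carries one of the JSON chunks shown in the Streaming Data example, with the stream terminating at [DONE] as described under the stream_data parameter below:

// Minimal streaming reader (sketch). Assumes stream_data was appended as 'true'
// and the server sends data-only server-sent events that end with [DONE].
async function streamResponse(requestOptions) {
  const response = await fetch(
    'https://api.worqhat.com/api/ai/content/v4/{modelType}/{modelId}',
    requestOptions
  );

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;

    buffer += decoder.decode(value, { stream: true });

    // SSE events are separated by blank lines; payloads sit on "data:" lines.
    const events = buffer.split('\n\n');
    buffer = events.pop(); // keep any partial event for the next chunk

    for (const event of events) {
      if (!event.startsWith('data:')) continue; // skip anything that is not a data line
      const data = event.slice('data:'.length).trim();
      if (data === '[DONE]') return;            // end of the stream
      if (!data) continue;
      const chunk = JSON.parse(data);           // e.g. { content, model, timestamp, ... }
      console.log(chunk.content);               // work with the partial text immediately
    }
  }
}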

Key Points:

  • Replace the placeholder API key (sk-02e44************) with your actual WorqHat AiCon V4 API key.
  • Adjust the modelType and modelId path parameters to select the desired custom model.
  • Modify the question parameter to provide your text input.

Try out other capabilities of the WorqHat API:

  • Build multiturn conversations or chat applications by having the model maintain history by default (see the sketch after this list).
  • Fetch real-world information and data using the Alpha Models.
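
To build a multi-turn conversation, the simplest approach is to reuse the conversation_id returned by the first response, as recommended for the conversation_id parameter below. This sketch mirrors the JSON request body from the curl example at the top of this page; the training data, questions, and YOUR_API_KEY placeholder are illustrative only:

// Multi-turn sketch: omit conversation_id on the first turn and reuse the one
// returned by the API on every later turn so earlier messages stay in context.
async function ask(question, conversationId) {
  const body = {
    training_data:
      'You answer billing questions. Provide JSON output with a single key: `answer`.',
    response_type: 'json',
    question,
  };
  if (conversationId) body.conversation_id = conversationId;

  const response = await fetch(
    'https://api.worqhat.com/api/ai/content/v4/{modelType}/{modelId}',
    {
      method: 'POST',
      headers: {
        Authorization: 'Bearer YOUR_API_KEY',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(body),
    }
  );
  return response.json();
}

async function demo() {
  // First turn: the API generates a conversation_id for us.
  const first = await ask('I need help with my bill.');
  // Follow-up turn: the same conversation_id keeps the earlier turn in context.
  const followUp = await ask('Can you break that down for me?', first.conversation_id);
  console.log(followUp.content);
}

demo();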

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Path Parameters

modelType
enum<string>
default:nano
required

The type of the base model that was used to train your custom model.

Available options: nano, large
modelId
string
required

The unique ID of the custom model that has been created.
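
Both path parameters are substituted directly into the endpoint URL. A small illustration, with placeholder values:

// Build the request URL from the two path parameters. The values below are
// placeholders; use your own base model type (nano or large) and custom model ID.
const modelType = 'nano'; // one of: nano, large
const modelId = 'your-custom-model-id';
const url = `https://api.worqhat.com/api/ai/content/v4/${modelType}/${modelId}`;
// => https://api.worqhat.com/api/ai/content/v4/nano/your-custom-model-id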

Body

question
string
required

The message or question you want to pass to the model.

model
string
required

ID of the model to use. See the model endpoint compatibility table for details.

randomness
number
required

The amount of prediction randomness to use, between 0 and 1. Higher values (e.g., 0.8) make the output more random, while lower values (e.g., 0.2) make it more deterministic.

Required range: 0 <= x <= 1
stream_data
boolean
required

If set to true, partial message deltas will be sent to reduce waiting time. Tokens will be streamed as data-only server-sent events, terminating with [DONE].

training_data
string
required

Allows passing system messages or training data to influence model responses. Supports a fixed context window of up to 750K, with no additional limitations on input.

response_type
string
required

Specifies the output format. Setting response_type: json ensures the model adheres to JSON format. Without proper instruction in training_data, the model may generate an unending whitespace stream.
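
For example, a request body can pair response_type: json with training_data that names the expected keys (this mirrors the curl example above). The assumption that the returned content field then holds the structured object itself follows the JSON Structured Output example near the top of this page:

// Pair response_type "json" with training_data that spells out the expected keys,
// so the model knows exactly what structure to produce.
const body = {
  training_data:
    'You will be provided with customer service queries. Classify each query into a primary and secondary category. Provide JSON output with keys: `primary` and `secondary`.',
  response_type: 'json',
  question: 'How do I reset my password?',
};

// After sending the request and parsing the response (as in the examples above),
// result.content is expected to be the structured object, e.g. { primary, secondary }.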

preserve_history
boolean
required

Enables manually maintaining conversation history instead of using conversation_id.

conversation_history
object[]
required

An array of past conversation records (user_prompt and model_response pairs) used to maintain history manually.

conversation_id
string

A unique identifier for each conversation. If omitted, the model generates one. It is recommended to reuse the conversation_id returned in the first response.

Response

200
application/json
Text Generated Successfully
content
string

The generated response text.

processing_count
integer

Usage count for billing.

processing_time
number

Time taken (in milliseconds) to process the request.

processing_id
string

Unique identifier for the process (for tracking support requests).

conversation_id
string

The conversation ID for multi-turn chat applications.

model
string

The model used for processing.
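
As a small usage sketch, these fields can be read straight off the parsed response. The camelCase property names below are taken from the example responses near the top of this page:

// Log the documented fields from a non-streaming response.
async function logResult(response) {
  const result = await response.json();
  console.log(result.content);          // the generated response text (or object, for JSON output)
  console.log(result.processingTime);   // time taken in milliseconds
  console.log(result.processingId);     // keep this for support requests
  console.log(result.processingCount);  // usage count for billing
  console.log(result.conversation_id);  // reuse for multi-turn conversations
  console.log(result.model);            // model used for processing
}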