Text Only Conversational
POST
/api/ai/content/v4

- Set up a new or existing WorqHat Workspace: When you start, we'll add a $5 credit to your account, valid for the next 14 days, to help you get started with your experimentation.
- Create a New API Key: Head to the AI Studio Page and generate a new API key. Remember to whitelist your Localhost or production domain for security purposes.
- Add the API Key Reference: Include your API key in your application file. For detailed instructions and guidance, visit the [Quickstart](apidog://link/pages/605330) page.
Generate from Text-Only Input
WorqHat AiCon V4 API allows you to generate text using only text input. You can choose from a variety of models, each with its own strengths:

- `aicon-v4-nano-160824` (finetuned included): Ideal for quick and efficient text generation, especially when resources are limited.
- `aicon-v4-large-160824` (finetuned included): Our smartest and highest-accuracy model, suitable for a wide range of text generation tasks, complex thought and reasoning, and mathematical solutions.
- `aicon-v4-alpha-160824`: An improved version of the `aicon-v4-nano-160824` model that is trained on live data, providing up-to-date insights and knowledge. It's ideal for tasks that require real-time information, such as news analysis, market research, or staying informed on current events.
Choose Your Response Style:

You have the option to either stream the response (`stream_data = true`) or wait for the entire result to be generated (`stream_data = false`).
Example (streaming):

```javascript
const myHeaders = new Headers();
myHeaders.append("Content-Type", "application/json");
myHeaders.append("Authorization", "Bearer sk-02e44d********");

const raw = JSON.stringify({
  "question": "Your input prompt",
  "model": "aicon-v4-nano-160824",
  "randomness": 0.5,
  "stream_data": true,
  "training_data": "Add your training data or system messages",
  "response_type": "text"
});

const requestOptions = {
  method: "POST",
  headers: myHeaders,
  body: raw,
  redirect: "follow"
};

// With stream_data set to true, read the body incrementally so partial
// output is logged as it arrives instead of waiting for the full response.
fetch("https://api.worqhat.com/api/ai/content/v4", requestOptions)
  .then(async (response) => {
    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      console.log(decoder.decode(value, { stream: true }));
    }
  })
  .catch((error) => console.log("error", error));
```
Example (non-streaming):

```javascript
const myHeaders = new Headers();
myHeaders.append("Content-Type", "application/json");
myHeaders.append("Authorization", "Bearer sk-02e44d********");

const raw = JSON.stringify({
  "question": "Your input prompt",
  "model": "aicon-v4-nano-160824",
  "randomness": 0.5,
  "stream_data": false,
  "training_data": "Add your training data or system messages",
  "response_type": "text"
});

const requestOptions = {
  method: "POST",
  headers: myHeaders,
  body: raw,
  redirect: "follow"
};

fetch("https://api.worqhat.com/api/ai/content/v4", requestOptions)
  .then((response) => response.text())
  .then((result) => console.log(result))
  .catch((error) => console.log("error", error));
```
Key Points:

- Replace `YOUR_API_KEY` with your actual WorqHat AiCon V4 API key.
- Adjust the `model` parameter to select the desired model.
- Modify the `question` field to provide your text input.
Try out other Capabilities of the WorqHat API

- Build multiturn conversations or chat applications by having the model maintain history by default
- Generate text or code outputs from multimodal inputs (including text, images, PDFs, video, and audio)
- Fetch real-world information and data using the Alpha models
Request
`question`: The message or question you want to pass to the model.

`model`: ID of the model to use. See the model endpoint compatibility table for details on which models work and are currently supported.
`randomness`: The amount of prediction randomness to use, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. This is a system combination of the values used internally and is balanced between nucleus sampling and temperature sampling.
`stream_data`: If set, partial message deltas will be sent to reduce the waiting time. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message.
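As a sketch of how those server-sent events could be assembled client-side, the helper below collects the text of `data:` lines until the `data: [DONE]` sentinel described above. The helper name and the assumption that each event's payload is plain delta text are illustrative, not part of the API.

```javascript
// Collect text deltas from a buffer of server-sent events.
// Each event arrives as a "data: ..." line; "data: [DONE]" ends the stream.
function collectSseText(buffer) {
  let text = "";
  for (const line of buffer.split("\n")) {
    if (!line.startsWith("data:")) continue;
    const payload = line.slice("data:".length).replace(/^ /, "");
    if (payload === "[DONE]") break;
    text += payload;
  }
  return text;
}
```

In a real stream you would call this on the decoded chunks as they arrive rather than on a single buffer.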
`training_data`: With a fixed 750K context window and no limitations on the input, you can pass any sort of training data or system messages that you want the model to follow when answering your questions.
Example:
You will be provided with customer service queries. Classify each query into a primary category and a secondary category. Provide your output in JSON format with the keys: primary and secondary.
Primary categories: Billing, Technical Support, Account Management, or General Inquiry.
Billing secondary categories:
- Unsubscribe or upgrade
- Add a payment method
- Explanation for charge
- Dispute a charge
Technical Support secondary categories:
- Troubleshooting
- Device compatibility
- Software updates
Account Management secondary categories:
- Password reset
- Update personal information
- Close account
- Account security
General Inquiry secondary categories:
- Product information
- Pricing
- Feedback
- Speak to a human
An object specifying the format that the model must output. Compatible with all AiCon V4 models.
Setting to { "response_type": "json" } will enable the model to send back structured outputs which ensures the model will match your supplied JSON schema in the message.
Important: when using JSON mode, you must also instruct the model to produce JSON yourself via the training_data or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="DONE", which indicates that the generation exceeded the maximum tokens or the conversation exceeded the max context length; in that case you will be billed for the whitespace.
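Following the warning above, a JSON-mode request body should carry the JSON instruction in `training_data` as well as setting `response_type`. The sketch below builds such a body; the prompt wording and helper name are illustrative assumptions, not part of the API.

```javascript
// Sketch of a JSON-mode request body. Note that training_data itself
// instructs the model to reply in JSON, as required when
// response_type is set to "json".
function buildJsonModeBody(question) {
  return JSON.stringify({
    question: question,
    model: "aicon-v4-nano-160824",
    randomness: 0.2,
    stream_data: false,
    training_data:
      "Classify the query and reply ONLY with JSON using the keys: primary and secondary.",
    response_type: "json",
  });
}
```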
Every interaction with the Language Model is associated with a Conversation ID. To let the model maintain a history of your conversations by default, you can pass the conversation ID. You can define a conversation ID on your own, or you can simply reuse the conversation ID returned in the response to the first conversation. It's preferred that you let the model generate the conversation ID on its own and then use it to maintain the conversation going forward.
This is used in case you want to set a conversation history on your own. In this scenario, you don't need to pass the Conversation ID; you just need to set `preserve_history` to `true` and pass the conversation as an array of objects, where each object's key is the user's question or prompt and its value is the model's answer. This lets you keep track of past conversations manually.
```json
{
  "question": "hiii there",
  "model": "aicon-v4-nano-160824",
  "randomness": 0.5,
  "stream_data": false,
  "training_data": "You are alex and you are one of the best Tour Guides. answer everything while starting with your name",
  "response_type": "text",
  "conversation_id": "conv_1724236791746",
  "preserve_history": true,
  "conversation_history": [
    {
      "What is the capital of India?": "New Delhi"
    },
    {
      "What is the capital of USA?": "Washington DC"
    }
  ]
}
```
Request samples
Responses
`content`: The response output of the processing.

`processingTime`: The amount of time in milliseconds it took to complete the request.

`processingId`: A unique identifier for the server process. This helps us track support requests and complaints.

`processingCount`: Usage statistics for the request. This is what is used for billing.

`conversation_id`: A unique identifier for the message. You can use this conversation ID later when building multiturn chat applications; it is what keeps track of your past conversation. You can use it to keep continuing your requests one after the other without the hassle of maintaining the conversation history on your own.

`model`: The model used for the process.
```json
{
  "content": "Hi there! My name is Alex, and I'm happy to help you with anything you need for your tour. What can I do for you today? \n",
  "processingTime": 3105.397454,
  "processingId": "8aa97481-20f9-48f4-a12d-d02eb6c1d62a",
  "processingCount": 84,
  "conversation_id": "conv_1724236791746",
  "model": "aicon-v4-nano-160824"
}
```
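To continue a multiturn chat, the `conversation_id` from a response like the one above can be carried into the next request body. A minimal sketch (the helper name and default parameter values are illustrative; the network call itself is omitted):

```javascript
// Build the next turn's request body by reusing the conversation_id
// returned in a previous response, so the server maintains the history.
function buildFollowUp(previousResponse, nextQuestion) {
  return {
    question: nextQuestion,
    model: previousResponse.model,
    randomness: 0.5,
    stream_data: false,
    response_type: "text",
    conversation_id: previousResponse.conversation_id,
  };
}
```

The resulting object can be passed through `JSON.stringify` as the body of the next POST to the same endpoint.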