Multimodal Input
POST
/api/ai/content/v4
- Set up a new or existing WorqHat Workspace: When you start, we'll add a $5 credit to your account, valid for the next 14 days, to help you get started with your experimentation.
- Create a New API Key: Head to the AI Studio Page and generate a new API key. Remember to whitelist your Localhost or production domain for security purposes.
- Add the API Key Reference: Include your API key in your application file. For detailed instructions and guidance, visit the [Quickstart](apidog://link/pages/605330) page.
Generate from Multimodal Inputs
WorqHat AiCon V4 API allows you to generate text from multimodal inputs, combining a text question with files such as images, videos, PDFs, and audio. You can choose from a variety of models, each with its own strengths:
- aicon-v4-nano-160824 (finetuned included): Ideal for quick and efficient text generation, especially when resources are limited.
- aicon-v4-large-160824 (finetuned included): Our smartest and highest-accuracy model, suitable for a wide range of text generation tasks, complex thought and reasoning, and mathematical solutions.
- aicon-v4-alpha-160824: An improved version of the aicon-v4-nano-160824 model that is trained on live data, providing up-to-date insights and knowledge. It's ideal for tasks that require real-time information, such as news analysis, market research, or staying informed on current events.
Choose Your Response Style:
You have the option to either stream the response (stream_data = true) or wait for the entire result to be generated (stream_data = false).
Example (streaming response, stream_data = true):
// Collect the files selected in the page's file input
var files = document.querySelector('input[type="file"]').files;

// Authenticate with your WorqHat API key
var myHeaders = new Headers();
myHeaders.append('Authorization', 'Bearer sk-02e44************');

// Build the multipart form body: question, training_data, files, model, stream_data
var formdata = new FormData();
formdata.append('question', 'what are the images about');
formdata.append(
  'training_data',
  'You are alex and you are one of the best Tour Guides. answer everything while starting with your name'
);
for (const file of files) {
  formdata.append('files', file);
}
formdata.append('model', 'aicon-v4-nano-160824');
formdata.append('stream_data', 'true');

var requestOptions = {
  method: 'POST',
  headers: myHeaders,
  body: formdata,
  redirect: 'follow',
};

fetch('https://api.worqhat.com/api/ai/content/v4', requestOptions)
  .then(response => response.text())
  .then(result => {
    console.log(result);
    alert('Response received: ' + result);
  })
  .catch(error => {
    console.log('error', error);
    alert('Error: ' + error);
  });
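With stream_data set to 'true', the endpoint sends partial message deltas as data-only server-sent events (see the stream_data parameter below). The example above still waits for the full body via response.text(); the following is a minimal sketch of reading the chunks incrementally with the Fetch Streams API, reusing requestOptions from the example. The line splitting and the data: [DONE] terminator handling are assumptions based on the stream_data description, not an official client.
// Minimal sketch: consume the streamed response incrementally instead of buffering it.
// Assumes SSE-style "data: ..." chunks terminated by "data: [DONE]", as described for stream_data.
fetch('https://api.worqhat.com/api/ai/content/v4', requestOptions)
  .then(response => {
    const reader = response.body.getReader();
    const decoder = new TextDecoder();

    function readChunk() {
      return reader.read().then(({ done, value }) => {
        if (done) return;
        const chunk = decoder.decode(value, { stream: true });
        for (const line of chunk.split('\n')) {
          if (!line.startsWith('data:')) continue;
          const payload = line.slice(5).trim();
          if (payload === '[DONE]') return; // end of stream
          console.log('delta:', payload);   // append each delta to your UI as it arrives
        }
        return readChunk();
      });
    }
    return readChunk();
  })
  .catch(error => console.log('error', error));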
Example (full response, stream_data = false):
// Same request as above, but stream_data is set to 'false' so the API returns
// the complete JSON response (see Responses below) once generation finishes.
var files = document.querySelector('input[type="file"]').files;

var myHeaders = new Headers();
myHeaders.append('Authorization', 'Bearer sk-02e44************');

var formdata = new FormData();
formdata.append('question', 'what are the images about');
formdata.append(
  'training_data',
  'You are alex and you are one of the best Tour Guides. answer everything while starting with your name'
);
for (const file of files) {
  formdata.append('files', file);
}
formdata.append('model', 'aicon-v4-nano-160824');
formdata.append('stream_data', 'false');

var requestOptions = {
  method: 'POST',
  headers: myHeaders,
  body: formdata,
  redirect: 'follow',
};

fetch('https://api.worqhat.com/api/ai/content/v4', requestOptions)
  .then(response => response.text())
  .then(result => {
    console.log(result);
    alert('Response received: ' + result);
  })
  .catch(error => {
    console.log('error', error);
    alert('Error: ' + error);
  });
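The browser snippets above read files from an input[type="file"] element. If you call the endpoint from a server instead, a similar request can be built with the fetch, FormData, and Blob globals available in Node.js 18+. This is a sketch under that assumption; the file path and the API key placeholder are illustrative only.
// Sketch: the same multipart request from Node.js 18+ (global fetch, FormData, Blob).
// Run as an ES module so top-level await is available.
import { readFile } from 'node:fs/promises';

const formdata = new FormData();
formdata.append('question', 'what are the images about');
formdata.append(
  'training_data',
  'You are alex and you are one of the best Tour Guides. answer everything while starting with your name'
);
// Read a local file and attach it; the path and filename are placeholders.
const imageBuffer = await readFile('./tour-photo.jpg');
formdata.append('files', new Blob([imageBuffer]), 'tour-photo.jpg');
formdata.append('model', 'aicon-v4-nano-160824');
formdata.append('stream_data', 'false');

const response = await fetch('https://api.worqhat.com/api/ai/content/v4', {
  method: 'POST',
  headers: { Authorization: 'Bearer YOUR_API_KEY' },
  body: formdata,
});
console.log(await response.text());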
Key Points:
- Replace YOUR_API_KEY with your actual WorqHat AiCon V4 API key.
- Adjust the model parameter to select the desired model.
- Modify the question parameter to provide your text input.
Try out other Capabilities of the WorqHat API
- Build multiturn conversations or chat applications by having the model maintain history by default: Text Only Conversational
- Fetch real-world information and data using the Alpha Models: Alpha Models (Text Only)
Request
question: The message or question you want to pass to the model.
model: ID of the model to use. See the model endpoint compatibility table for details on which models work and are currently supported.
files: Files that you want to upload. You can send Images, Videos, PDFs, and Audio files. It has to be sent as an array.
training_data: With a fixed 750K context window and no limitations on the input, you can pass any sort of training data or system messages that you want the model to follow when answering your questions.
Example:
You will be provided with customer service queries. Classify each query into a primary category and a secondary category. Provide your output in json format with the keys: primary and secondary.
Primary categories: Billing, Technical Support, Account Management, or General Inquiry.
Billing secondary categories:
- Unsubscribe or upgrade
- Add a payment method
- Explanation for charge
- Dispute a charge
Technical Support secondary categories:
- Troubleshooting
- Device compatibility
- Software updates
Account Management secondary categories:
- Password reset
- Update personal information
- Close account
- Account security
General Inquiry secondary categories:
- Product information
- Pricing
- Feedback
- Speak to a human
stream_data: If set, partial message deltas will be sent to reduce the waiting time. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.
response_type: An object specifying the format that the model must output. Compatible with all AiCon V4 models. Setting to { "response_type": "json" } enables structured outputs and ensures the model will match your supplied JSON schema in the message. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via the training_data or a user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request, and you will be billed for the whitespace. Also note that the message content may be partially cut off if finish_reason="DONE", which indicates the generation exceeded the maximum tokens or the conversation exceeded the max context length.
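For example, JSON mode can be paired with the classification instructions shown above by setting the response format alongside a training_data message that explicitly asks for JSON. The 'response_type' form field name used here is an assumption based on the object key in the description; the rest mirrors the request samples.
// Sketch: JSON mode combined with training_data that explicitly requests JSON output.
// The 'response_type' form field name is an assumption, not confirmed by this page.
var formdata = new FormData();
formdata.append('question', 'I was charged twice this month, can you explain why?');
formdata.append(
  'training_data',
  'You will be provided with customer service queries. Classify each query into a primary ' +
    'category and a secondary category. Provide your output in json format with the keys: ' +
    'primary and secondary.'
);
formdata.append('model', 'aicon-v4-nano-160824');
formdata.append('stream_data', 'false');
formdata.append('response_type', 'json'); // ask the API for structured JSON output

fetch('https://api.worqhat.com/api/ai/content/v4', {
  method: 'POST',
  headers: new Headers({ Authorization: 'Bearer YOUR_API_KEY' }),
  body: formdata,
})
  .then(response => response.text())
  .then(result => console.log(result))
  .catch(error => console.log('error', error));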
Responses
content: The response output of the processing.
processingTime: The amount of time in milliseconds it took to complete the request.
processingId: A unique identifier for the server process. This helps us track support requests and complaints.
processingCount: Usage statistics for the request. This is what is used for the billing.
conversation_id: A unique identifier for the message. You can use this conversation id later when building multiturn chat applications. This conversation id is what keeps track of your past conversation. You can use this to keep continuing your requests one after the other without the hassle of maintaining the conversation history on your own (see the sketch after the sample response below).
model: The model used for the process.
{
"content": "Hi there! My name is Alex, and I'm happy to help you with anything you need for your tour. What can I do for you today? \n",
"processingTime": 3105.397454,
"processingId": "8aa97481-20f9-48f4-a12d-d02eb6c1d62a",
"processingCount": 84,
"conversation_id": "conv_1724236791746",
"model": "aicon-v4-nano-160824"
}
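A sketch of how the sample response above could be consumed on the client: parse the JSON, show the content, and keep the conversation_id for a follow-up request. It reuses requestOptions from the non-streaming example; how the conversation_id is passed back on later requests is covered on the Text Only Conversational page, so that step is omitted here.
// Sketch: parse a non-streamed response and keep the conversation_id for later turns.
fetch('https://api.worqhat.com/api/ai/content/v4', requestOptions)
  .then(response => response.json())
  .then(result => {
    console.log(result.content);         // "Hi there! My name is Alex, ..."
    console.log(result.processingCount); // usage counted for billing
    const conversationId = result.conversation_id;
    // Store conversationId and reuse it for follow-up questions so the model
    // keeps the history for you (see the Text Only Conversational page).
    localStorage.setItem('worqhat_conversation_id', conversationId);
  })
  .catch(error => console.log('error', error));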