Text Only Conversational
The WorqHat AiCon V4 API allows you to generate text using only text input.
Generate from Text-Only Input
You can choose from a variety of models, each with its own strengths:
- aicon-v4-nano-160824 (finetuned included): Ideal for quick and efficient text generation, especially when resources are limited.
- aicon-v4-large-160824 (finetuned included): Our smartest and highest-accuracy model, suitable for a wide range of text generation tasks, complex reasoning, and mathematical solutions.
- aicon-v4-alpha-160824: An improved version of the aicon-v4-nano-160824 model that is trained on live data, providing up-to-date insights and knowledge. It’s ideal for tasks that require real-time information, such as news analysis, market research, or staying informed on current events.
Choose Your Response Style:
You can either stream the response (stream_data = true) or wait for the entire result to be generated (stream_data = false).
Benefit of streaming: faster interactions. You receive partial results as they are generated, allowing you to start working with the text immediately; a sketch of consuming the stream follows the example below.
Example:
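A minimal sketch of a non-streaming request in Python using the requests library is shown below. The endpoint URL and the body field names (question, model, randomness, stream_data) are assumptions inferred from the parameter descriptions in the Body section, so confirm the exact values against your WorqHat dashboard.

```python
import requests

# Assumed endpoint for AiCon V4 text generation -- confirm against your dashboard.
API_URL = "https://api.worqhat.com/api/ai/content/v4"
API_KEY = "YOUR_API_KEY"  # replace with your actual WorqHat AiCon V4 API key

payload = {
    # Field names are assumptions inferred from the Body parameter descriptions.
    "question": "Summarise the benefits of renewable energy in three sentences.",
    "model": "aicon-v4-nano-160824",  # or aicon-v4-large-160824 / aicon-v4-alpha-160824
    "randomness": 0.4,                # 0 = focused and deterministic, 1 = most random
    "stream_data": False,             # wait for the entire result in one response
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```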
Key Points:
- Replace YOUR_API_KEY with your actual WorqHat AiCon V4 API key.
- Adjust the model parameter to select the desired model.
- Modify the prompt to provide your text input.
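If you set stream_data to true, partial message deltas arrive as data-only server-sent events terminated by a data: [DONE] message. A rough sketch of consuming that stream, under the same assumptions as the example above:

```python
import requests

API_URL = "https://api.worqhat.com/api/ai/content/v4"  # assumed endpoint
API_KEY = "YOUR_API_KEY"

payload = {
    "question": "Write a short poem about the sea.",
    "model": "aicon-v4-nano-160824",
    "stream_data": True,  # request server-sent events instead of a single response
}

with requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    stream=True,
    timeout=60,
) as response:
    response.raise_for_status()
    for line in response.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data:"):
            continue  # skip keep-alives and any non-data lines
        chunk = line[len("data:"):].strip()
        if chunk == "[DONE]":
            break  # the stream is terminated by a data: [DONE] message
        # The exact delta payload shape is not documented here; print it raw to inspect.
        print(chunk)
```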
Try out other Capabilities of the WorqHat API
- Build Multiturn Conversations or Chat Applications by having the model maintain history by default (see the sketch after this list)
- Generate text or code outputs from multimodal inputs (including text, images, PDFs, video, and audio)
- Fetch real world information and data using the Alpha Models
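As a sketch of the multiturn capability above: each response carries a conversation id, and sending it back with the next request lets the model continue with the earlier context instead of you resending the history yourself. The preserve_history and conversation_id field names below are hypothetical, as are the endpoint and the response key.

```python
import requests

API_URL = "https://api.worqhat.com/api/ai/content/v4"  # assumed endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def ask(question: str, conversation_id: str | None = None) -> dict:
    """Send one turn; pass the previous conversation id to continue the same chat."""
    payload = {
        "question": question,
        "model": "aicon-v4-large-160824",
        "preserve_history": True,  # hypothetical flag for maintaining history
    }
    if conversation_id:
        payload["conversation_id"] = conversation_id  # hypothetical field name
    resp = requests.post(API_URL, json=payload, headers=HEADERS, timeout=60)
    resp.raise_for_status()
    return resp.json()

first = ask("Who wrote The Time Machine?")
# The key holding the returned conversation id is an assumption; see the Response section.
follow_up = ask("What else did they write?", first.get("conversation_id"))
print(follow_up)
```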
Authorizations
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
Body
The message or question you want to pass to the model.
ID of the model to use. See the model endpoint compatibility table for details on which models work and are currently supported.
The amount of randomness to use in the model's predictions, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
If set, partial message deltas will be sent as data-only server-sent events, with the stream terminated by a data: [DONE] message.
You can pass any sort of training data or system messages that you want the model to follow when answering your questions.
Specifies the format that the model must output.
Used as per requirement.
Set to true to maintain conversation history.
Response
The output generated for the request.
The amount of time in milliseconds it took to complete the request.
A unique identifier for the server process. This helps us track support requests and complaints.
Usage statistics for the request. This is what is used for billing.
A unique identifier for the message. You can reuse this conversation id later when building multiturn chat applications: it keeps track of your past conversation, letting you continue your requests one after another without the hassle of maintaining the conversation history on your own.
The model used for the process.
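To show how the response fields described above might be read in code, here is a short sketch. The JSON key names (content, processing_time, processing_id, usage, conversation_id, model) are assumptions based on the descriptions, so inspect a real response before relying on them.

```python
def summarize_response(result: dict) -> None:
    """Print the response fields described above; key names are assumptions."""
    print("Output:", result.get("content"))
    print("Processing time (ms):", result.get("processing_time"))
    print("Server process id:", result.get("processing_id"))
    print("Usage (billing):", result.get("usage"))
    print("Conversation id:", result.get("conversation_id"))
    print("Model:", result.get("model"))
```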