Finetuned Models (Text Only)
WorqHat AiCon V4 Custom Model API allows you to generate text using only text input.
Generate from Text-Only Input
You can choose from a variety of base models, each with its own strengths:
- `aicon-v4-nano-160824` (finetuned included): Ideal for quick and efficient text generation, especially when resources are limited.
- `aicon-v4-large-160824` (finetuned included): Our smartest and highest-accuracy model, suitable for a wide range of text generation tasks, including complex reasoning and mathematical solutions.
Choose Your Response Style:
You can either stream the response (`stream_data = true`) or wait for the entire result to be generated (`stream_data = false`).
Benefit of streaming: faster interactions. You receive partial results as they are generated, allowing you to start working with the text immediately.
Example:
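A minimal sketch of a request to this endpoint, assuming a hypothetical endpoint URL and the `question`, `randomness`, and `stream_data` body fields described below; check your WorqHat dashboard for the exact URL of your custom model:

```python
import json

def build_generation_request(api_key, model_type, model_id, question,
                             stream_data=False, randomness=0.2):
    """Assemble the URL, headers, and body for a text-generation request.

    The URL path here is an illustrative assumption, not the documented
    endpoint; the path parameters (base model type and custom model ID)
    follow this page's Path Parameters section.
    """
    url = f"https://api.worqhat.com/api/ai/content/v4/{model_type}/{model_id}"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "question": question,        # the message you want to pass to the model
        "randomness": randomness,    # between 0 and 1
        "stream_data": stream_data,  # true to stream partial results
    }
    return url, headers, json.dumps(payload)

url, headers, body = build_generation_request(
    "YOUR_API_KEY", "large", "your-custom-model-id",
    "Summarize the benefits of fine-tuning.")
# Send with any HTTP client, e.g.:
#   import requests
#   resp = requests.post(url, headers=headers, data=body)
```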
Key Points:
- Replace `YOUR_API_KEY` with your actual WorqHat AiCon V4 API key.
- Adjust the `model` parameter to select the desired model.
- Modify the `prompt` to provide your text input.
Try out other Capabilities of the WorqHat API
- Build Multiturn Conversations or Chat Applications by having the model maintain history by default
- Generate text or code outputs from multimodal inputs (including text, images, PDFs, video, and audio)
- Fetch real-world information and data using the Alpha Models (Text Only)
Authorizations
Bearer authentication header of the form `Bearer <token>`, where `<token>` is your auth token.
Path Parameters
The type of the base model that you have used to train the data. Available options: `nano`, `large`.
The unique ID of the custom model that has been created.
Body
The message or question you want to pass to the model.
ID of the model to use. See the model endpoint compatibility table for details.
Controls prediction randomness, between 0 and 1. Higher values (e.g., 0.8) make the output more random, while lower values (e.g., 0.2) make it more deterministic.
Required range: `0 <= x <= 1`
If set to `true`, partial message deltas will be sent to reduce waiting time. Tokens will be streamed as data-only server-sent events, terminating with `[DONE]`.
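When streaming, the client reads `data:` lines until the `[DONE]` sentinel. A minimal sketch of consuming such a stream, assuming each event carries its text delta in a `content` field (the field name is an assumption, not documented on this page):

```python
import json

def parse_sse_stream(lines):
    """Collect message deltas from a data-only server-sent-event stream.

    Each event line looks like 'data: {...}' and the stream terminates
    with 'data: [DONE]'. The 'content' delta field is an assumption
    for illustration.
    """
    chunks = []
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data:"):
            continue  # ignore blank lines and comments
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break  # end-of-stream sentinel
        event = json.loads(data)
        chunks.append(event.get("content", ""))
    return "".join(chunks)

# Simulated stream for illustration:
stream = [
    'data: {"content": "Hello"}',
    'data: {"content": ", world"}',
    "data: [DONE]",
]
print(parse_sse_stream(stream))  # Hello, world
```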
Allows passing system messages or training data to influence model responses. Supports up to a 750K fixed context window without limitations on input.
Specifies the output format. Setting `response_type: json` ensures the model adheres to JSON format. Without proper instruction in `training_data`, the model may generate an unending whitespace stream.
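A sketch of a request body pairing `response_type: json` with an explicit formatting instruction in `training_data` (the surrounding field names follow this page; the instruction wording is illustrative):

```python
# Pair response_type with explicit formatting guidance in training_data;
# without it, the model may emit an unending whitespace stream.
payload = {
    "question": "List three fruits.",
    "response_type": "json",
    "training_data": (
        "Always respond with a single JSON object of the form "
        '{"items": [...]}.'
    ),
}
```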
Enables manually maintaining conversation history instead of using `conversation_id`.
Manually maintain past conversation records.
A unique identifier for each conversation. If omitted, the model generates one. Recommended to use the conversation ID returned in the first response.
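The two history mechanisms above can be sketched as follows, assuming the `question`, `conversation_id`, `preserve_history`, and `conversation_history` field names from this page; the history-record shape is an assumption:

```python
def next_turn_payload(question, conversation_id=None, history=None):
    """Build a follow-up request body for a multi-turn chat.

    Prefer passing the conversation_id returned in the first response;
    alternatively, maintain past records yourself with preserve_history
    plus conversation_history.
    """
    payload = {"question": question}
    if conversation_id:
        payload["conversation_id"] = conversation_id
    elif history:
        payload["preserve_history"] = True
        payload["conversation_history"] = history
    return payload

first = next_turn_payload("What is fine-tuning?")
# ...send `first`, then read the conversation ID from the response...
followup = next_turn_payload("Give an example.", conversation_id="conv-123")
```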
Response
The generated response text.
Usage count for billing.
Time taken (in milliseconds) to process the request.
Unique identifier for the process (for tracking support requests).
The conversation ID for multi-turn chat applications.
The model used for processing.