Alpha Model (Text Only)
POST /api/ai/content/v4

- Set up a new or existing WorqHat Workspace: When you start, we'll add a $5 credit to your account, valid for the next 14 days, to help you get started with your experimentation.
- Create a New API Key: Head to the AI Studio Page and generate a new API key. Remember to whitelist your Localhost or production domain for security purposes.
- Add the API Key Reference: Include your API key in your application file. For detailed instructions and guidance, visit the [Quickstart](apidog://link/pages/605330) page.
What is the Alpha Model?
Here's a closer look at how AiCon Alpha operates:
Foundation on AiCon v3 Models: AiCon Alpha extends the capabilities of our AiCon v3 models, leveraging their advanced architecture to deliver enhanced performance and understanding.
In-house Search Technology: We've developed a proprietary search, indexing, and crawling infrastructure that allows AiCon Alpha to access the most relevant and current information. Our search index is expansive, regularly updated, and employs advanced ranking algorithms to highlight high-quality, authentic content. This ensures that AiCon Alpha can offer responses enriched with the latest data, thanks to website excerpts or "snippets" drawn directly from our search results.
Fine-tuning for Real-time Relevance: The AiCon Alpha models undergo rigorous fine-tuning, utilizing carefully selected, high-quality training sets prepared by our in-house data specialists. This process is aimed at optimizing the models' performance across various metrics, including accuracy, relevance, and timeliness. Regular updates and fine-tuning sessions ensure that AiCon Alpha remains at the forefront of AI-driven, internet-aware response generation.
By integrating these elements, AiCon Alpha stands out as a dynamic, internet-aware model capable of addressing queries with up-to-the-minute accuracy, such as recent event outcomes or the latest developments in a given field. This positions AiCon Alpha as a cutting-edge tool for accessing and leveraging internet-based knowledge.
Fine-tuning is a critical aspect of our alpha models, enabling them to effectively utilize snippets to inform their responses. Through the meticulous curation of high-quality, diverse, and extensive training sets by our in-house data contractors, we aim to achieve superior performance across various metrics such as helpfulness, factuality, and freshness. Our commitment to excellence involves regular fine-tuning of our models to ensure continuous enhancement of their performance.
Use Cases
Our AiCon Alpha models are versatile and can be applied to a wide range of use cases, including but not limited to:
- Recent Events and Up-to-Date Information:
Question: How many students appeared for JEE in 2024?
Answer: In 2024, a total of 12,25,529 students appeared for the JEE Main exam. This number represents the students who participated in the NTA JEE Mains 2024 examination. Additionally, the total number of registered candidates for JEE Main 2024 was 12,31,874 students. The exam dates for the January session of JEE Main 2024 were held on January 27, 29, 30, 31, and February 1. However, the authorities revised the exam dates for the second session of JEE Main 2024, which will take place from April 4 to 15.
Furthermore, the National Testing Agency (NTA) recorded an attendance percentage of 95.8% out of the total number of registered candidates for the B.Tech paper in JEE Main 2024. The exam was conducted in multiple languages to accommodate candidates across different regions, including English, Hindi, Assamese, Bengali, Gujarati, Kannada, Malayalam, Marathi, Odia, Punjabi, Tamil, Telugu, and Urdu.
Moreover, the results of the JEE Main 2024 examination revealed that a total of 11,670,036 candidates appeared for the paper 1 and paper 2 examinations. Female representation in the exam slightly improved, accounting for over 33% of the total candidates for JEE Main 2024, an increase from 30.8% in the previous year. The exam also saw a significant number of registrations from candidates identifying as the third gender.
It is important to note that the JEE Main exam results are normalized across multi-session papers based on the relative performance of all candidates who appeared for the examination in one session.
- Detailed Inquiries from Webpages:
Question: https://blog.worqhat.com/building-dreams-going-past-horizons-b1203cfb539b how do I apply for the fund?
Answer: To apply for the WorqHat Startup Fund mentioned in the blog post "Building Dreams — Going Past Horizons," you can follow these steps:
- Visit the WorqHat website at https://worqhat.com/startups.
- On the webpage, look for information related to the WorqHat Startup Fund application process or a specific section dedicated to startups seeking funding.
- Fill out any application forms or provide the required information as per the guidelines provided on the website.
- If there is an email address mentioned in the blog post, startups@worqhat.com, you can reach out directly to inquire about the application process and express your interest in applying for the fund.
- Make sure to highlight your startup's innovative solutions in sectors such as healthcare, legal tech, education, energy, infrastructure, or scientific research, as mentioned in the blog post.
- Showcase how your startup aligns with the vision of leveraging AI technology for global betterment and how you plan to utilize the funding support from WorqHat.
- Provide any additional details or documentation requested during the application process to strengthen your application.
By following these steps and reaching out to WorqHat through the provided channels, you can begin the application process for the WorqHat Startup Fund and potentially access the support and resources offered to early-stage startups.
- Summarizing & QnA on YouTube Videos:
Question: https://www.youtube.com/watch?v=rXD0Xiyc0kM can you summarize this video for me
Answer: In the video titled "Best Framework for Web Development - The Bitter Truth," the author discusses various technologies and frameworks used in web development. The key points covered in the video are:
- Importance of choosing the simplest technology for your project based on its complexity.
- Thumb rule: Consider if your website is a product or if you need more control over its features.
- Caution when using NPM packages due to potential issues like bad code or deprecation.
- Recommendations on when to use PHP, Python, React, BAAS, WordPress, Angular/Vue.js, ASP.net, Django Rest Framework, and Ruby on Rails.
- Emphasis on user experience, backend control, and data intensity when selecting technologies.
- Suggestions on using React with Next.js for better interactivity and fast page loads.
- Advantages of using backend as a service (BAAS) like AppWrite, Strapi, and Sanity.io for simplified backend management.
- Recommendations on using WordPress for blogging and content-focused websites.
- Insights on Vue.js, Angular, ASP.NET, Django REST framework, and Ruby on Rails for specific project requirements.
Overall, the video provides a comprehensive overview of different technologies and frameworks, guiding viewers on selecting the most suitable options based on their project needs and goals.
- It can use your video as context and fetch data from other external sources:
Question: https://www.youtube.com/watch?v=rXD0Xiyc0kM who is the author and what is his background?
Answer: The author of the video "Best Framework for Web Development - The Bitter Truth" is CodeWithHarry. CodeWithHarry is a popular YouTube channel run by Haris Khan, who is known for providing tutorials and courses on programming languages such as Python, C, C++, Java, JavaScript, and more. Haris Khan has a strong background in software development and education, with a focus on simplifying complex programming concepts for beginners and experienced developers alike. He covers a wide range of topics related to web development, software engineering, and programming languages, aiming to help his audience improve their skills and understanding in the field of technology. Haris Khan's content is well-received for its clarity, practical examples, and comprehensive coverage of various programming topics, making him a trusted source for many aspiring developers and tech enthusiasts.
:::warning
This API is currently in active development and is subject to change at any time. It is suggested to have a fallback plan in place in case of any issues or errors with the API.
:::
Generate from Text-Only Input
WorqHat AiCon V4 API allows you to generate text using only text input. You can choose from a variety of models, each with its own strengths:
aicon-v4-alpha-160824: A better version of the aicon-v4-nano-160824 model, trained on live data to provide up-to-date insights and knowledge. It's ideal for tasks that require real-time information, such as news analysis, market research, or staying informed on current events.
Choose Your Response Style:
You have the option to either stream the response (stream_data = true) or wait for the entire result to be generated (stream_data = false).
Example (streaming):

```javascript
var myHeaders = new Headers();
myHeaders.append("Content-Type", "application/json");
myHeaders.append("Authorization", "Bearer sk-02e44d********");

var raw = JSON.stringify({
  "question": "Your input prompt",
  "model": "aicon-v4-alpha-160824",
  "randomness": 0.5,
  "stream_data": true,
  "training_data": "Add your training data or system messages",
  "response_type": "text"
});

var requestOptions = {
  method: "POST",
  headers: myHeaders,
  body: raw,
  redirect: "follow"
};

fetch("https://api.worqhat.com/api/ai/content/v4", requestOptions)
  .then(response => response.text())
  .then(result => console.log(result))
  .catch(error => console.log("error", error));
```
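Note that `response.text()` resolves only after the full body has arrived. When stream_data is true, the server sends data-only server-sent events, so you would typically read the body incrementally and extract the payload from each `data:` line. A minimal parsing sketch (the helper name and the exact event payloads are illustrative, not part of the API):

```javascript
// Extract the payloads from a chunk of server-sent events.
// Events arrive as lines of the form "data: <payload>", and the
// stream is terminated by a "data: [DONE]" sentinel.
function parseSSEChunk(chunk) {
  var payloads = [];
  chunk.split("\n").forEach(function (line) {
    if (line.startsWith("data: ")) {
      var payload = line.slice("data: ".length);
      if (payload !== "[DONE]") payloads.push(payload);
    }
  });
  return payloads;
}

// Example: two events followed by the terminating sentinel.
parseSSEChunk("data: Hello\n\ndata: world\n\ndata: [DONE]\n");
// → ["Hello", "world"]
```

In a real client you would feed chunks from `response.body.getReader()` into a parser like this instead of waiting on `response.text()`.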
Example (non-streaming):

```javascript
var myHeaders = new Headers();
myHeaders.append("Content-Type", "application/json");
myHeaders.append("Authorization", "Bearer sk-02e44d********");

var raw = JSON.stringify({
  "question": "Your input prompt",
  "model": "aicon-v4-alpha-160824",
  "randomness": 0.5,
  "stream_data": false,
  "training_data": "Add your training data or system messages",
  "response_type": "text"
});

var requestOptions = {
  method: "POST",
  headers: myHeaders,
  body: raw,
  redirect: "follow"
};

fetch("https://api.worqhat.com/api/ai/content/v4", requestOptions)
  .then(response => response.text())
  .then(result => console.log(result))
  .catch(error => console.log("error", error));
```
Key Points:
- Replace YOUR_API_KEY with your actual WorqHat AiCon V4 API key.
- Adjust the model parameter to select the desired model.
- Modify the prompt to provide your text input.
Try out other Capabilities of the WorqHat API
- Build Multiturn Conversations or Chat Applications by having the model maintain history by default: Text Only Conversational
- Generate text or code outputs from multimodal inputs (including text, images, PDFs, video, and audio): Multimodal Input (Non-Streaming)
- Fetch real-world information and data using the Alpha Models: Alpha Models (Text Only)
Request
question: The message or question you want to pass to the model.
model: ID of the model to use. See the model endpoint compatibility table for details on which models work and are currently supported.
randomness: The model prediction randomness to use, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. This is a system combination of the values used internally and is balanced between nucleus sampling and temperature sampling.
stream_data: If set, partial message deltas will be sent to reduce the waiting time. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.
training_data: With a fixed 750K context window and no limitations on the input, you can pass any sort of training data or system messages that you want the model to follow when answering your questions.
Example:
You will be provided with customer service queries. Classify each query into a primary category and a secondary category. Provide your output in json format with the keys: primary and secondary.
Primary categories: Billing, Technical Support, Account Management, or General Inquiry.
Billing secondary categories:
- Unsubscribe or upgrade
- Add a payment method
- Explanation for charge
- Dispute a charge
Technical Support secondary categories:
- Troubleshooting
- Device compatibility
- Software updates
Account Management secondary categories:
- Password reset
- Update personal information
- Close account
- Account security
General Inquiry secondary categories:
- Product information
- Pricing
- Feedback
- Speak to a human
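With a system message like the one above, a JSON-mode reply might look like the following (the specific categories shown are illustrative, not actual API output); a quick structural check on the client side can catch malformed replies:

```javascript
// Illustrative reply for the classification prompt above; the actual
// categories depend on the query the model was given.
var reply = JSON.parse('{"primary": "Billing", "secondary": "Dispute a charge"}');

// Validate that the reply uses the keys and primary categories the
// training_data prompt asked for.
var primaryCategories = ["Billing", "Technical Support", "Account Management", "General Inquiry"];
function isValidClassification(obj) {
  return typeof obj.primary === "string" &&
    typeof obj.secondary === "string" &&
    primaryCategories.indexOf(obj.primary) !== -1;
}

isValidClassification(reply); // → true
```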
response_type: An object specifying the format that the model must output. Compatible with all AiCon V4 models.
Setting { "response_type": "json" } enables the model to send back structured outputs, ensuring the model matches your supplied JSON schema in the message.
Important: when using JSON mode, you must also instruct the model to produce JSON yourself via the training_data or a user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="DONE", which indicates the generation exceeded the maximum tokens or the conversation exceeded the max context length, and you will be billed for the whitespace.
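Putting both requirements together, a JSON-mode request body sets response_type to json and also spells out the expected shape in training_data (the prompt wording here is just an example):

```javascript
// Build a JSON-mode request body. Note that training_data explicitly
// tells the model to answer in JSON, as required when response_type is "json".
var raw = JSON.stringify({
  "question": "Classify this query: I was charged twice this month.",
  "model": "aicon-v4-alpha-160824",
  "randomness": 0.2,
  "stream_data": false,
  "training_data": "Classify the query. Respond only in JSON with the keys: primary and secondary.",
  "response_type": "json"
});

JSON.parse(raw).response_type; // → "json"
```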
conversation_id: Every interaction with the Language Model is associated with a Conversation ID. To let the model maintain a history of your conversations by default, you can pass the conversation ID. You can define a conversation ID of your own, or you can simply use the conversation ID that is returned in the response to the first conversation. It's preferred that you let the model generate the conversation ID on its own and then reuse it to maintain a conversation.
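For example, a multiturn exchange can take the conversation_id from the first response and pass it back on every follow-up request. A sketch (the helper function is hypothetical; only the conversation_id field itself comes from the API):

```javascript
// Build a follow-up request body that continues an earlier conversation
// by reusing the conversation_id returned in the first response.
// followUpBody is a hypothetical helper, not part of the WorqHat SDK.
function followUpBody(firstResponse, question) {
  return JSON.stringify({
    "question": question,
    "model": "aicon-v4-alpha-160824",
    "randomness": 0.5,
    "stream_data": false,
    "response_type": "text",
    "conversation_id": firstResponse.conversation_id
  });
}

// The first response includes a server-generated conversation_id.
var firstResponse = { "content": "Hi there!", "conversation_id": "conv_1724236791746" };
var body = followUpBody(firstResponse, "And what about the second session?");
JSON.parse(body).conversation_id; // → "conv_1724236791746"
```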
This is used in case you want to set a conversation history of your own. In this scenario, you don't need to pass the Conversation ID; you just need to set preserve_history to true and pass the conversation as an array of objects, where each object's key is the user's question or prompt and its value is the model's answer. This lets you keep track of past conversations manually.
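Under that scheme, a manually maintained history might be sketched like this. Note that the field name conversation_history is an assumption (the docs describe only the array-of-objects shape, not the key), and the question/answer pairs are illustrative:

```javascript
// Hypothetical field name "conversation_history": an array of objects
// where each key is the user's question and each value is the model's
// answer, sent alongside preserve_history set to true.
var raw = JSON.stringify({
  "question": "Which one did you recommend for blogging?",
  "model": "aicon-v4-nano-160824",
  "randomness": 0.5,
  "stream_data": false,
  "response_type": "text",
  "preserve_history": true,
  "conversation_history": [
    { "What is the best framework for a content site?": "For content-focused sites, WordPress is a solid choice." }
  ]
});

JSON.parse(raw).preserve_history; // → true
```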
```json
{
  "question": "hiii there",
  "model": "aicon-v4-nano-160824",
  "randomness": 0.5,
  "stream_data": false,
  "training_data": "You are alex and you are one of the best Tour Guides. answer everything while starting with your name",
  "response_type": "text"
}
```
Request samples
Responses
The response output of the processing.
The amount of time in milliseconds it took to complete the request.
A unique identifier for the server process. This helps us track support requests and complaints.
Usage statistics for the request. This is what is used for the billing.
A unique identifier for the message. You can use this conversation ID later when building multiturn chat applications; it is what keeps track of your past conversation. You can use it to keep continuing your requests one after the other without the hassle of maintaining the conversation history on your own.
The model used for the process.
```json
{
  "content": "Hi there! My name is Alex, and I'm happy to help you with anything you need for your tour. What can I do for you today? \n",
  "processingTime": 3105.397454,
  "processingId": "8aa97481-20f9-48f4-a12d-d02eb6c1d62a",
  "processingCount": 84,
  "conversation_id": "conv_1724236791746",
  "model": "aicon-v4-alpha-160824"
}
```