AiCon V2 Large Model (Beta)
Content-generation AI focused solely on business content.
AiCon V2 Large is a large content-generation model designed to produce large amounts of content based on the parameters you provide. You can supply a pre-trained model so that content is generated on top of it, which makes it most useful when you want to run a model against your own pre-trained dataset, or when the content you need is more creative or complex.
Read more at https://docs.worqhat.com/ai-models/text-generation-ai/aicon-v2-large-beta
Parameter | Type | Description | Default |
---|---|---|---|
question | string | The question or prompt for which content is to be generated. | `undefined` |
datasetId | string | The ID of the dataset to use when generating content. | `undefined` |
conversation_history | Array | An array of objects, each representing a previous interaction in the conversation; useful for context-aware content generation. | `undefined` (no history considered) |
preserve_history | boolean | Whether the conversation history should be preserved for future interactions. | `false` |
instructions | string | Training data used to guide the generated content. | `undefined` |
randomness | number | Controls the level of randomness in the generated content, from 0 (no randomness) to 1 (maximum randomness). | `0.2` |
stream | boolean | Whether the response should be streamed. | `false` |
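A minimal sketch of calling the model from Python. The endpoint URL, the `Bearer` authorization scheme, and the response shape are assumptions for illustration, not verified against the live API; consult the docs link above for the authoritative request format. The `build_payload` helper applies the defaults from the table above and omits optional parameters that were not set.

```python
import json
import urllib.request

# Assumed endpoint; check the AiCon V2 Large docs for the real URL.
API_URL = "https://api.worqhat.com/api/ai/content/v2-large"

def build_payload(question, dataset_id=None, conversation_history=None,
                  preserve_history=False, instructions=None,
                  randomness=0.2, stream=False):
    """Assemble the request body using the defaults from the parameter table."""
    payload = {
        "question": question,
        "preserve_history": preserve_history,
        "randomness": randomness,
        "stream": stream,
    }
    # Optional parameters default to undefined, so leave them out entirely
    # unless the caller supplies a value.
    if dataset_id is not None:
        payload["datasetId"] = dataset_id
    if conversation_history is not None:
        payload["conversation_history"] = conversation_history
    if instructions is not None:
        payload["instructions"] = instructions
    return payload

def generate(question, api_key, **kwargs):
    """POST the payload and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(question, **kwargs)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For context-aware generation, pass prior turns via `conversation_history` (an array of objects; the exact keys of each object are defined by the API, so the shape shown in any example here should be treated as illustrative) and set `preserve_history=True` if the history should carry forward.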