Straico's Generative AI Models

Your Comprehensive Guide

Welcome to the Straico AI Model Resource Center — your definitive encyclopedia for navigating our diverse array of generative AI models. Whether you’re spearheading startup ventures, orchestrating powerful marketing narratives, or simply indulging your curiosity in AI, this center equips you with the knowledge to choose the ideal AI tools tailored to your aspirations.

Compare AI Models at a Glance

Our AI Comparison Table presents a straightforward view of key attributes for each model, emphasizing cost, capabilities, and more, to swiftly guide your selection.

| Model Name | Editor's Choice | Max Words (approx.) | Coins 🪙 per 100 Words | Type | Parameters (billions) | Capabilities |
| --- | --- | --- | --- | --- | --- | --- |
| OpenAI: GPT-4 Turbo 128K | 👑 | 96,000 | 8 | Proprietary | 1,000 | 📃 Text |
| Anthropic: Claude v2.1 | 👑 | 150,000 | 8 | Proprietary | 137 | 📃 Text |
| Gryphe: MythoMax L2 | 👑 | 6,000 | 1 | Open Source | 13 | 📃 Text |
| Mistral: Mixtral 8x7B Instruct (beta) | 👑 | 24,000 | 2 | Open Source | 56 | 📃 Text |
| Perplexity: Sonar 8x7B Online | 👑 | 3,000 | 1 | Proprietary | 70 | 📃 Text |
| Anthropic: Claude 3 Opus | - | 150,000 | 24 | Proprietary | 2,000 | 📃 Text |
| Anthropic: Claude 3 Sonnet | - | 150,000 | 5 | Proprietary | 70 | 📃 Text |
| Anthropic: Claude 3 Haiku | - | 150,000 | 1 | | | |
| OpenAI: GPT-3.5 Turbo 16K | - | 12,000 | 0 | Proprietary | 13 | 📃 Text |
| OpenAI: GPT-4 8K | - | 6,000 | 20 | Proprietary | 1,000 | 📃 Text |
| OpenAI: GPT-4 Vision | - | 96,000 | 8 | Proprietary | 1,000 | 📃 Text |
| Anthropic: Claude Instant v1 | - | 75,000 | 2 | Proprietary | 93 | |
| Google: PaLM 2 Bison | - | 24,500 | 1 | Open Source | 340 | 📃 Text |
| Google: Gemini Pro (preview) | - | 98,280 | 1 | Proprietary | 540 | 📃 Text |
| Meta: Llama 3 8B Instruct | - | 6,000 | 0.5 | Proprietary | 8 | 📃 Text |
| Meta: Llama 3 70B Instruct nitro | - | 6,000 | 1 | Proprietary | 70 | 📃 Text |
| Mistral 7B Instruct v0.1 (beta) | - | 3,000 | 1 | Open Source | 7 | 📃 Text |
| Dolphin 2.6 Mixtral 8x7B | - | 24,000 | 1 | Open Source | 56 | 📃 Text |
| Goliath 120B | - | 4,608 | 5 | Open Source | 120 | 📃 Text |

* Models marked with a 👑 are our Editor’s Choices, selected for their proven effectiveness in practical use on Straico.

Explore Editor’s Choice LLMs

Browse our Editor’s Choice tabs, a collection born from our team’s extensive efforts to evaluate AI models through empirical use rather than technical specifications. Here, discover the practical merits and constraints each selected model offers. We also provide links to shared chats and prompt templates, allowing you to experience and test their effectiveness firsthand on the Straico platform.

OpenAI: GPT-4 Turbo 128k

OpenAI GPT-4 Turbo 128K is a proprietary large language model (LLM) developed by OpenAI. This model is an evolution of GPT-4 and GPT-3.5 Turbo 16K, two of the most widely recognized LLMs in the world.


Strengths: consistent with complex questions, organized and structured answers, formatting, content generation, complex reasoning.


Limitations: not up-to-date information, multiple guardrails, can exhibit biases, hallucinations.

Chat examples:

Short but deep questions

Testing the model with long contexts

Prompt templates examples:

Design advisor for social networks posts

Preview/article combo for social networks

Anthropic: Claude v2.1

Anthropic Claude v2.1 is a proprietary large language model (LLM) developed by Anthropic, specialized in handling complex multi-step instructions over large amounts of content.


Strengths: suitable for very large contexts and files, elaborate analysis drawing on many sources of information, very good at complex reasoning.


Limitations: not up-to-date information, multiple guardrails, can exhibit biases, hallucinations, declines “unsafe” conversations.

Chat examples:

Summarizing a large tale

Gryphe: MythoMax L2 13B 8K (beta)

MythoMax L2 13B 8K is an open-source large language model (LLM) created by Gryphe that specializes in storytelling and advanced roleplaying. It is built on the foundation of the Llama 2 architecture and is part of the Mytho family of Llama-based models, which also includes MythoLogic and MythoMix. The MythoMax L2 13B variant is an optimized version of MythoMix, incorporating a more comprehensive tensor merger strategy that increases coherency and performance.


Strengths: roleplaying, storytelling, uncensored, low-priced.


Limitations: not suitable for extremely large contexts; answers are not always sufficiently detailed.

Chat examples:

Simulating seller-customer interaction

Invent a story

Prompt templates examples:

Sell to me!

The Therapist

Mistral: Mixtral 8x7B

Mistral: Mixtral 8x7B is an open-source large language model (LLM) developed by Mistral AI.

According to the creators, Mixtral 8x7B outperforms other well-known LLMs such as Llama 2 70B and GPT-3.5 in several benchmarks, making it one of the most powerful open-source models available.

Mixtral 8x7B has been praised for its cost-effectiveness and creative text formats for storytelling and roleplaying.


Strengths: storytelling, roleplaying, suitable for long contexts.



Chat examples:

Famous character roleplaying
Simulating a job interview

Prompt templates examples:

Interview a celebrity
Be interviewed for a job!

Perplexity: Sonar 8x7B Online

Perplexity: Sonar 8x7B Online is a proprietary large language model (LLM) developed by Perplexity AI, which provides real-time access to the internet and up-to-date information.


Strengths: up-to-date information, detailed answers to short prompts.


Limitations: not always suitable for very large contexts.

Chat examples:

Up-to-date calls
Consultancy online

Prompt templates examples:

Business Consultant to Start Your Company
Chasing Calls from Accelerators and VC

Understanding Costs and Interactions

What does 'Max Words' refer to in an AI interaction?

‘Max Words’ refers to the maximum number of words that can be processed by an AI model in a single interaction. This limit includes all chat input, chat output, and the content from any attachments used during the interaction.

How is the cost calculated for text interactions in terms of coins?

The total cost for an interaction in coins is determined by the combined word count of the chat input, chat output, and any included attachment content. The ‘Coins 🪙 per 100 words’ rate specific to the selected AI model will apply.
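The arithmetic above can be sketched in a few lines of Python. This is a minimal estimate only: it assumes cost scales strictly linearly with the combined word count, and Straico's actual rounding rules may differ.

```python
# Rough estimate of the coin cost of a text interaction.
# Assumption: cost = (total words / 100) * model rate, with no
# minimum charge or rounding (Straico's exact rules may differ).

def estimate_coin_cost(input_words: int, output_words: int,
                       coins_per_100_words: float) -> float:
    """Estimate coins spent on one interaction (input + output)."""
    total_words = input_words + output_words
    return total_words / 100 * coins_per_100_words

# Example: a 600-word prompt plus a 400-word reply on GPT-4 Turbo 128K
# (8 coins per 100 words) comes to roughly 80 coins.
cost = estimate_coin_cost(600, 400, 8)
```

Attachment content, when present, would be added to `input_words` before the rate is applied.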

What is considered when calculating the total word count for an interaction with attachments?

When attachments are used, the total word count includes all text from the attachments—whether files or URLs —along with the whole conversation in the chat thus far, plus the new message and AI’s anticipated response.
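The accumulation described above can be illustrated as follows. Note this is a sketch under stated assumptions: words are counted by naive whitespace splitting, whereas Straico's actual counter may tokenize differently.

```python
# Sketch of how the billable word count accumulates, per the FAQ:
# attachment text + full chat history + new message + the model's reply.
# Assumption: naive whitespace word counting (Straico's counter may differ).

def count_words(text: str) -> int:
    return len(text.split())

def total_interaction_words(attachments, history, new_message, reply):
    words = sum(count_words(a) for a in attachments)   # files / URL content
    words += sum(count_words(m) for m in history)      # conversation so far
    words += count_words(new_message)                  # your new message
    words += count_words(reply)                        # the AI's response
    return words

total = total_interaction_words(
    attachments=["alpha beta gamma"],                  # 3 words
    history=["hello there", "hi, how can I help"],     # 2 + 5 words
    new_message="summarize the attachment",            # 3 words
    reply="it lists three greek letters",              # 5 words
)
# total == 18
```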

Which types of attachments can be included, and how do they factor into the word count?

Various attachment types such as .docx, .txt, .pptx, .xlsx, .pdf files, YouTube video URLs, and web pages can be included. Each contributes to the overall word count that the AI model processes, which in turn affects coin cost.

Are there cost-effective strategies for engaging with AI models on Straico?

Yes, there are several strategies for cost-effective interactions. Starting new chat sessions can reduce word count by clearing previous context. For those seeking value, Straico provides models like Gemini Pro in the proprietary realm and Mixtral 8x7B in the open-source domain, both at an economical rate of 1 coin per 100 words. Furthermore, Straico offers GPT-3.5 Turbo, which remains free of charge, allowing unlimited interactions without impacting your coin balance.

What are the estimated costs for images based on size?

Pricing for image generation on Straico varies depending on the size of the image and the AI model chosen. When generating images with DALL·E 3 via our chat assistant, the coin cost is approximately 210 for high-resolution images of 2048×2048 pixels, and around 20 coins for smaller, 512×512 pixel images.

For those opting to use Stable Diffusion in our dedicated image generation section, we offer straightforward pricing—regardless of the image size, each creation is just 20 coins.
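The price comparison between the two options can be sketched like this. Only the two DALL·E 3 sizes quoted in this guide are covered; intermediate sizes are not priced here, so the sketch falls back to Stable Diffusion's flat rate for anything else.

```python
# Comparing the two image-generation options described above.
# Prices from this guide: DALL-E 3 is ~210 coins at 2048x2048 and
# ~20 coins at 512x512; Stable Diffusion is a flat 20 coins per image.

DALLE3_PRICES = {(2048, 2048): 210, (512, 512): 20}
STABLE_DIFFUSION_FLAT = 20

def cheapest_option(width: int, height: int):
    """Return (model, coins) for the cheaper documented option."""
    sd_cost = STABLE_DIFFUSION_FLAT
    dalle_cost = DALLE3_PRICES.get((width, height))
    if dalle_cost is None or sd_cost <= dalle_cost:
        return ("Stable Diffusion", sd_cost)
    return ("DALL-E 3", dalle_cost)
```

For example, at 2048×2048 the flat 20-coin Stable Diffusion rate undercuts DALL·E 3's roughly 210 coins by a wide margin.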

With both models delivering impeccable quality, you can choose the best option that matches your requirements and budget.

Can you explain what 'image input capability' and 'real-time capabilities' signify in relation to the AI models on Straico?

‘Image input capability’ refers to the ability of an AI model to accept and interpret visual information. For instance, with our GPT-4 Vision model, you can upload an image directly into the chat assistant and ask queries about its content, just as readily as if you were discussing text. This means the model can analyze the image and engage in a detailed dialogue about what it depicts.

‘Real-time capabilities’ denote the model’s prowess in leveraging the latest information available up until the moment of interaction. This is exemplified by our Perplexity: Sonar 8x7B Online model, which can pull in the most recent data to enrich its responses, ensuring you’re receiving the most up-to-date content possible. This feature is invaluable for tasks requiring current knowledge and insights.