What advanced AI models does Perplexity Pro unlock?

Perplexity Pro gives you access to the latest models from OpenAI, Anthropic, and Mistral. Here’s a breakdown of each model (the list changes often). All of our models share a context window of around 32k tokens.

Default: Our default model is optimised for speed and web browsing, with dedicated fine-tuning so it performs best on quick searches.

GPT-4 Turbo: OpenAI’s famous model that powers ChatGPT, renowned for its reasoning and natural language processing capabilities and for displaying human-level performance on various professional and academic benchmarks.

Claude 3: We offer both the Sonnet and Opus models. Opus is the more capable of the two and is considered one of the most advanced LLMs available.

Experimental: Our in-house model, optimised for conciseness and less restrictive responses to any query.

Mistral Large: According to Mistral, “It reaches top-tier reasoning capabilities. It can be used for complex multilingual reasoning tasks, including text understanding, transformation, and code generation.”

The best way to decide which model is best suited to you is to test them: run the same queries across different models and compare the results. In the end, the right choice depends on your specific needs and goals.

  • What’s a token?

A token is the smallest unit into which text data can be broken down for an AI model to process. Tokens serve as the bridge between raw human language and a format that AI models can understand and generate. You can use this tokeniser to understand how many characters equal a token.
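As a rough illustration, here is a minimal Python sketch using OpenAI’s open-source tiktoken library to count tokens in a string. This is only an approximation: the encoding shown applies to GPT-4-era OpenAI models, and other models (Claude, Mistral, our in-house models) use their own tokenisers.

```python
# pip install tiktoken
import tiktoken

# cl100k_base is the encoding used by GPT-4-era OpenAI models;
# other providers' models split text differently.
encoding = tiktoken.get_encoding("cl100k_base")

text = "Perplexity Pro lets you choose between several AI models."
tokens = encoding.encode(text)

print(f"{len(text)} characters -> {len(tokens)} tokens")
# A common rule of thumb: one token is roughly 4 characters of English text.
```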

  • Are there specific guidelines when using any third party models like Claude or GPT?

No, you can use the models the same way you would use them in their native chatbot environments.