Troubleshooting

Error: Token limit exceeded. Please shorten your input, or try switching to a different model.

  • 31 May 2024
  • 0 replies
  • 121 views


This error means the text you submitted to the language model contains more tokens than the model can handle in a single request.

What is a Token?

A token in the context of LLMs is essentially a piece of text that the model recognizes as a discrete unit. This can be a word, part of a word (like a syllable or a root word), a punctuation mark, or even a space. Tokens are the building blocks that LLMs use to understand and generate text.
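As a rough illustration, a simple regular expression can split text into word-like pieces and punctuation marks. Real LLM tokenizers use learned subword vocabularies (such as byte-pair encoding), so actual token boundaries and counts will differ; this sketch only shows the idea of breaking text into discrete units:

```python
import re

def rough_tokens(text):
    """Split text into words and punctuation marks.

    This is only a rough illustration: real LLM tokenizers use learned
    subword vocabularies, so their tokens often don't align with words.
    """
    return re.findall(r"\w+|[^\w\s]", text)

pieces = rough_tokens("Tokens are the building blocks of LLMs!")
# → ['Tokens', 'are', 'the', 'building', 'blocks', 'of', 'LLMs', '!']
```

Note that the sentence above yields eight tokens even though it contains only seven words, because the exclamation mark counts as its own token.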
 

What are Token Limits?

A token limit, or "context window", is the maximum number of tokens an LLM can process in a single request. These limits are determined by the model's architecture and by computational constraints that keep requests efficient without overloading the model's capacity.

Exceeding the token limit results in the LLM being unable to process the input as it goes beyond the model's designed capacity for analysis and response generation. The error message is the system's way of informing you that your request cannot be processed due to this limitation.
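One way to catch this before sending a request is a pre-flight estimate. The sketch below uses a common rule of thumb of roughly four characters per token for English text; this ratio is an assumption, and exact counts depend on the model's tokenizer:

```python
def check_within_limit(text, token_limit, chars_per_token=4):
    """Estimate the token count of `text` and check it against a limit.

    The ~4 characters-per-token ratio is a rough heuristic for English
    text, not an exact count; real tokenizers may differ substantially.
    Returns (fits, estimated_tokens).
    """
    estimated = len(text) // chars_per_token
    return estimated <= token_limit, estimated

ok, est = check_within_limit("word " * 1000, token_limit=500)
# → (False, 1250): 5,000 characters is roughly 1,250 tokens
```

If the estimate is close to the limit, it is safer to shorten the input anyway, since the heuristic can undercount.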
 

What to Do About It

To resolve this issue, you have a few options:

  1. Shorten Your Input: Revise your input text to reduce the number of tokens it contains. This often involves summarizing content or splitting a large document into sections that can be processed individually.

  2. Switch to a Different Model: Some models offer higher token limits or a larger context window than others. If your task requires large inputs, consider editing the action configuration and switching to a model designed to accommodate bigger input sizes.

    • Select the action in the workflow to configure it

    • Select Advanced Settings

    • Click the Model dropdown and select a different model. See below for more information on token limits for specific models.
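Splitting a large document into sections (option 1 above) can be sketched as a simple chunker that packs paragraphs into pieces that each fit a token budget, again using the assumed ~4 characters-per-token ratio:

```python
def split_into_chunks(text, max_tokens, chars_per_token=4):
    """Split a long document into chunks that each fit a token budget.

    Splits on paragraph boundaries so each chunk stays coherent; the
    chars-per-token ratio is a rough assumption, and a single paragraph
    longer than the budget is kept whole rather than cut mid-sentence.
    """
    budget = max_tokens * chars_per_token  # budget in characters
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        candidate = (current + "\n\n" + paragraph) if current else paragraph
        if len(candidate) > budget and current:
            chunks.append(current)  # close off the current chunk
            current = paragraph
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be submitted as its own request, and the responses combined afterwards.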

Model Token Limits:

Here are the token limits for some commonly used models on the Copy.ai platform:

  • Anthropic: Claude 3 (Opus/Sonnet/Haiku): 200,000 Tokens

  • Anthropic: Claude 2.0/2.1: 200,000 Tokens

  • OpenAI GPT-4o (Omni): 128,000 Tokens

  • OpenAI GPT-4-Turbo: 128,000 Tokens
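Putting the limits above into code, a small helper can filter for models whose context window fits an estimated token count. The dictionary values come from the list above, but the model names and the selection rule are illustrative, not part of any Copy.ai API:

```python
# Context-window sizes in tokens, taken from the list above.
MODEL_TOKEN_LIMITS = {
    "Claude 3 (Opus/Sonnet/Haiku)": 200_000,
    "Claude 2.0/2.1": 200_000,
    "GPT-4o": 128_000,
    "GPT-4-Turbo": 128_000,
}

def models_that_fit(estimated_tokens):
    """Return the models whose context window can hold the input."""
    return [name for name, limit in MODEL_TOKEN_LIMITS.items()
            if estimated_tokens <= limit]

models_that_fit(150_000)
# → only the 200k-context Claude models qualify for a 150k-token input
```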

 

