What are tokens and context windows?

Learn about the terms “tokens” and “context windows”

Written by Ryan Groff
Updated over 7 months ago

A. Overview


Large language model (LLM) artificial intelligence can understand concepts. This is possible because LLMs use multidimensional neural networks to map the semantic relationships between different terms and ideas.

For example, let’s say one dimension of an artificial neural network represents the question “How animal is this?” In this simple example, the terms “dog” and “sheep” would score higher along that dimension than “violin.”

This is similar to how the human brain organizes and retrieves information.

Whereas traditional computing required humans to translate concepts into computer code, today’s AI can understand plain-language requests without this conversion. As a result, the processing power of today’s AI is measured in a different unit, the token. A token is a group of characters treated as a single unit of meaning.

For example, a short word of several characters could be represented by 1 token, but a comma in the middle of a phrase might also be represented by 1 whole token. Consider this sentence:

“This is the root cause of antidisestablishmentarianism; a religio-social concern.”

AI would likely understand this sentence as 19 separate tokens.

Generally, one token corresponds to about 4 characters of English text, or about ¾ of a word. So 100 tokens equal about 75 words.
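The rule of thumb above can be sketched in a few lines of Python. This is a rough illustration only, using the 4-characters-per-token estimate from this article; a model’s actual tokenizer will count somewhat differently.

```python
# Rough token arithmetic, assuming the heuristic above:
# 1 token ≈ 4 characters ≈ 3/4 of an English word.

def estimate_tokens(text: str) -> int:
    """Estimate the token count of a piece of English text."""
    return max(1, round(len(text) / 4))

def estimate_words(tokens: int) -> int:
    """Estimate how many words a given token budget covers."""
    return round(tokens * 0.75)

print(estimate_words(100))  # about 75 words
```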

The number of tokens that AI considers at any given time is called the context window.

When the context window limit is reached, meaning the AI has considered the maximum number of tokens it can hold at one time, the earliest tokens are forgotten as new tokens arrive.

B. Understanding CoCounsel’s limitations


Tokens and context windows can help you understand CoCounsel’s limitations during chat conversations (that is, when you are not using a skill) and during skill refinements.

Various factors determine when a context window has been exceeded, so it is not possible to say, for example, how many chats or skills will always fit within this window. Generally, you can expect CoCounsel to work as follows.

1. Chats

For chats with CoCounsel, the context window is 16,000 tokens, or about 12,000 words and about 64,000 characters, including both your requests and CoCounsel’s responses.

  • This is why CoCounsel may fail to process instructions where the request (what you enter) and response (what CoCounsel generates) require more tokens than the context window allows.

  • This is also why CoCounsel may not be able to understand a request that references information that is too far back in the chat conversation—meaning, information that is outside of its context window.
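The limit described above can be sketched as a simple budget check. The helper below is hypothetical, not part of CoCounsel, and it reuses the rough 4-characters-per-token estimate from earlier in this article.

```python
# Hypothetical sketch: does a chat exchange fit in a 16,000-token
# window? Uses the rough 4-characters-per-token estimate; the
# real tokenizer will count differently.
CHAT_WINDOW_TOKENS = 16_000

def fits_in_window(request: str, expected_response_chars: int) -> bool:
    """Roughly check whether request + response fit the chat window."""
    estimated_tokens = (len(request) + expected_response_chars) / 4
    return estimated_tokens <= CHAT_WINDOW_TOKENS

print(fits_in_window("Summarize this clause.", 2_000))  # True
```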

2. Skill refinements

When you review CoCounsel’s full response for Legal Research Memo (CoCounsel All-Access only) or Draft Correspondence, and then select “Refine results,” CoCounsel will only refine the instant response. This means CoCounsel cannot consider requests or responses from chat conversations or from other skills. But you may use the chat to request that CoCounsel refine its results from a prior skill.
