
Gemini 1.5, long context window?

On the 15th of February 2024, Google made an announcement regarding their next-generation Gemini model, called Gemini 1.5. This model boasts several improvements, including speed and efficiency. One of the key innovations of Gemini 1.5 is its long context window. The context window measures the number of tokens the model can process at a given time. Tokens are the smallest building blocks a model works with: small pieces of a word, image, or video.
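To make the token idea concrete, here is a toy illustration. Real models like Gemini use learned subword vocabularies; the fixed four-character splitter below is purely a stand-in to show that a token is often a piece of a word rather than a whole word.

```python
def toy_tokenize(text):
    """Naive stand-in for a subword tokenizer: split each word into
    4-character chunks. Real tokenizers learn their vocabulary from data."""
    tokens = []
    for word in text.split():
        while len(word) > 4:
            tokens.append(word[:4])
            word = word[4:]
        tokens.append(word)
    return tokens

pieces = toy_tokenize("Understanding tokenization")
print(pieces)       # ['Unde', 'rsta', 'ndin', 'g', 'toke', 'niza', 'tion']
print(len(pieces))  # 7 tokens for just 2 words
```

Notice that two words become seven tokens; this is why token counts grow much faster than word counts, and why a context window is measured in tokens rather than words.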

To help understand the significance of this milestone, Google reached out to their DeepMind project team for an explanation of long context windows. These are crucial because they help models remember information during a session. Similar to how humans might forget someone's name in the middle of a conversation, models can struggle to remember information, too. You may have experienced a chatbot forgetting information after a few turns. This is where long context windows come in handy.
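The "forgetting" described above can be sketched as a window that only holds the most recent tokens: anything pushed out is simply no longer visible to the model. The window size and word-level tokens below are made up for illustration; production windows are vastly larger.

```python
# Arbitrary tiny window, chosen so the effect is visible in a few tokens.
CONTEXT_WINDOW = 4

def visible_tokens(history, window=CONTEXT_WINDOW):
    """Return only the tokens the model can still 'see': the last `window` ones."""
    return history[-window:]

conversation = ["My", "name", "is", "Ada", ".", "What", "is", "my", "name", "?"]
print(visible_tokens(conversation))  # ['is', 'my', 'name', '?']
# The turn that introduced the name ("My name is Ada") has fallen out of
# the window, so the model can no longer answer the question -- a longer
# context window is exactly what prevents this.
```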

Previously, Gemini could process up to 32,000 tokens at once. However, the first 1.5 model, 1.5 Pro, available for early testing, can process up to 1 million tokens, giving it the longest context window of any large-scale foundation model to date. The team even tested up to 10 million tokens in their research. The longer the context window, the more text, images, audio, code, or video the model can take in and process.
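To get a feel for the jump from 32,000 to 1 million tokens, here is some back-of-the-envelope arithmetic. The words-per-token and words-per-page ratios are rough rules of thumb for English text, not figures from the announcement.

```python
# Rough rules of thumb for English text, used only for scale.
WORDS_PER_TOKEN = 0.75  # on average, a token is about 3/4 of a word
WORDS_PER_PAGE = 500    # a dense, single-spaced page

def approx_pages(tokens):
    """Convert a token budget into an approximate page count."""
    return tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE

for window in (32_000, 1_000_000, 10_000_000):
    print(f"{window:>10,} tokens ~ {approx_pages(window):>7,.0f} pages")
# 32,000 tokens comes out to roughly 48 pages, while 1 million tokens
# is roughly 1,500 pages -- consistent with the "thousands of pages"
# scale described below.
```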

Nikolay Savinov, DeepMind Research Scientist and a research lead on the long context project, revealed that the original plan was to achieve 128,000 tokens of context. However, the team decided to set an ambitious goal of 1 million tokens, which they have now surpassed ten times over in their research.

The team had to create a series of deep-learning innovations to achieve this breakthrough. Each breakthrough led to another, and when they all combined, the team was surprised by the possibilities. The amount of raw data 1.5 Pro can handle opens up new ways to interact with the model. For example, instead of summarizing a document that is dozens of pages long, it can now summarize documents that are thousands of pages long. Furthermore, 1.5 Pro can analyze tens of thousands of lines of code simultaneously, a significant improvement over the previous model's ability to analyze thousands of lines.

Want to read more? Check out the original article available at The Keyword!



Deitasoft © 2024. All Rights Reserved.