Neuralfinity

Introducing... InfinityContext

2 min read

Authors: Jannik Malte Meissner & Yuliia Butovchenko

Wikipedia articles, scientific papers, long news stories: summarising these with AI tools was no simple task. Often enough, models would run into their context limit (the number of tokens a model can take into account as part of a prompt).

While a few technical implementation details of our own large language model, which we use to power Magic-Summary, already made it a leader at handling longer texts, we are proud to announce a new leap forward:

In August 2022 we achieved nothing short of a research breakthrough: we found a way to feed prompts of unlimited length to a modified version of our first language model. After extensive testing and further development, it is now time to finally present InfinityContext.

Starting today, InfinityContext is available for all customers who sign up for a Pro or Enterprise Plan via our developer portal.

What this means in practice is that Magic-Summary can now accept texts of any length as input and deliver reliable, high-quality summaries.

Beyond that, we are thrilled to share that we are developing an exciting new product, and one of its fundamental features revolves around the concept of unlimited context. Context length plays a crucial role in shaping the user experience and, even more so, the value LLMs can provide. Overcoming the limited context window allows us to enrich text predictions with far more relevant data, leading to a more seamless and satisfying user journey. This insight sets the stage for the team to delve deeper into the product's inner workings and explore innovative ways to leverage context length effectively.

That was a glimpse of the future, but for now, we are pleased to present our users with a Magic-Summary that breaks free from the limitations of context length. With our API, you can bid farewell to the frustration of encountering errors due to excessively long files, a common inconvenience in the past.
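
As a rough illustration, a call to the Magic-Summary API for a long document might look like the sketch below. The endpoint URL, authorization header, and request and response field names are assumptions made for this example, not the documented interface; please refer to the developer portal for the actual API reference.

```python
# Minimal sketch of summarising an arbitrarily long document via the API.
# NOTE: the endpoint URL, auth header, and field names below are assumptions
# for illustration only -- consult the developer portal for the real reference.
import requests

API_KEY = "YOUR_API_KEY"  # issued after signing up for a Pro or Enterprise plan
ENDPOINT = "https://api.neuralfinity.example/v1/magic-summary"  # hypothetical URL

with open("long_wikipedia_article.txt", encoding="utf-8") as f:
    document = f.read()  # no need to chunk or truncate the text beforehand

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"text": document},  # hypothetical request field
    timeout=120,
)
response.raise_for_status()
print(response.json().get("summary"))  # hypothetical response field
```

Because InfinityContext removes the context limit, the input text can be sent in a single request rather than being split into chunks and stitched back together afterwards.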

We are committed to delivering a seamless user experience, and your feedback will contribute to our continuous improvement. Don't hesitate to explore the boundless potential of our API and share your thoughts with us today.