On Tuesday, Indian AI startup Sarvam unveiled its next generation of large language models, betting that its compact, efficient open-source models can capture market share from the pricier offerings of its far larger American and Chinese rivals.
The launch, announced at the India AI Impact Summit in New Delhi, aligns with the Indian government's push to reduce reliance on foreign AI systems and to tailor models for local languages and use cases.
Sarvam said its new lineup includes 30-billion- and 105-billion-parameter models, a text-to-speech model, a speech-to-text model, and a vision model for document analysis. The lineup is a significant step up from the company's 2-billion-parameter Sarvam 1 model, released in October 2024.
The 30-billion- and 105-billion-parameter models use a mixture-of-experts architecture, which activates only a fraction of their total parameters at a time, significantly cutting compute costs, the company said. The 30B model supports a 32,000-token context window, aimed at real-time interactive applications, while the larger model offers a 128,000-token window to handle more complex, multi-step reasoning tasks.
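For readers unfamiliar with the architecture, a mixture-of-experts layer replaces one large dense feed-forward block with several smaller "expert" networks and a router that sends each token to only a few of them, so only part of the parameter count is exercised per token. The sketch below is a minimal, generic illustration of that idea, not Sarvam's implementation; the expert count, expert size, and top-2 routing are illustrative assumptions, since the company has not disclosed those details.

```python
# Minimal, generic mixture-of-experts layer for illustration only.
# n_experts, d_hidden, and top_k below are arbitrary assumptions,
# not figures disclosed by Sarvam.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)  # scores each token against each expert
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model)
            )
            for _ in range(n_experts)
        )

    def forward(self, x):                                   # x: (tokens, d_model)
        scores = self.router(x)                             # (tokens, n_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)   # keep only the top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e                 # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Only top_k of n_experts run per token, so per-token compute scales with
# top_k * d_hidden rather than n_experts * d_hidden, even though the total
# parameter count includes every expert.
layer = MoELayer()
y = layer(torch.randn(16, 512))   # 16 tokens; only 2 of 8 experts run for each
```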
Sarvam said its new AI models were trained from scratch rather than fine-tuned on top of existing open-source models. The 30B model was pre-trained on roughly 16 trillion tokens of text, while the 105B model was trained on trillions of tokens spanning many Indian languages, the company said.
The models are designed to support real-time applications, the startup said, such as voice assistants and conversational platforms in Indian languages.

The startup said the models were trained on compute provided through India's state-backed IndiaAI Mission, with infrastructure support from data center provider Yotta and technical support from Nvidia.
Sarvam executives said the company plans to take a measured approach to scaling its models, prioritizing practical applications over raw scale.
“We want to be deliberate in our approach to scaling,” Pratyush Kumar, Sarvam’s co-founder, said during the launch. “We don’t want to do mindless scaling. We want to understand the critical capabilities at larger scale, and then build for them.”
Sarvam said it plans to open-source the 30B and 105B models, but did not say whether the training data or full training code would also be made public.
The company also outlined plans for specialized AI offerings, including coding models and enterprise tools under an offering called Sarvam for Work, as well as a conversational AI agent platform called Samvaad.
Founded in 2023, Sarvam has raised more than $50 million, with Lightspeed Venture Partners, Khosla Ventures, and Peak XV Partners (formerly Sequoia Capital India) among its backers.
