With AI and open source innovation reshaping enterprise infrastructure, NxtGen is redefining control and sovereignty in technology. In an exclusive conversation, A S Rajgopal shares with EFY’s Rahul Chopra and Akanksha Sondhi Gaur how the M platform, Indian talent, and GPU-powered AI use cases are driving a future that’s smarter, sovereign, and India-focused.

Q. What is NxtGen, and what does its name signify?
A. The name NxtGen was never about next-generation technology alone. When we started around 2012, there was a lot of discussion about how companies rarely survive beyond a couple of decades. Our thinking was different; we wanted to build a company that would be aspirational for the next generation. Ideally, not just us, but our children and their peers should want to work here. That thought led us to define core values that went beyond technology. One such value is what we call ‘deep affection’: the same care and commitment you bring to your own children. The name NxtGen reflects that aspiration: building something enduring, values-driven, and meaningful for future generations.
Q. How do you explain your solutions to a non-technical CEO?
A. We position ourselves as a sovereign infrastructure services provider. From day one, we were clear that India is a large enough market to justify building deep, indigenous technology capabilities. Sovereignty is not a recent buzzword for us; we have been thinking along these lines since inception. We embed sovereignty across cloud infrastructure and now across AI as well.
We do not commercially depend on global hyperscalers, but we actively collaborate with the global open source community. We bring open technologies onto infrastructure we own and control, ensuring full authority over data, operations, and technology. Our value proposition is consistent: deliver superior performance at a lower cost, through deep engineering rather than pricing gimmicks.
Q. How do you compete with hyperscalers like AWS, Azure, or Google Cloud?
A. We compete head-on. In fact, we have migrated more than 400 enterprises from AWS to NxtGen through a program we internally call ‘Rescue from AWS.’ Hyperscalers are excellent for startups due to free credits and deep integrations. However, once businesses scale, costs often become prohibitive. Our sweet spot is medium to large enterprises. We encourage startups to build on AWS, Azure, or Google Cloud but run their scaled operations with us. We provide complete infrastructure, migration support, security, and managed services.
Q. Are you simply a data centre company, or is your role broader?
A. While we operate five data centres today, we are not a co-location provider. Customers hand over their applications to us, and we deploy, secure, scale, and manage them. For example, for the Election Commission, TCS develops the application, and we take it from User Acceptance Testing (UAT) to production, ensuring scalability and security. That full-stack operational responsibility defines us.
Q. What is the M platform, and why did you build it?
A. In the AI space today, massive global investment, combined with limited access to advanced technology and top talent, especially in India, creates real constraints. We were clear that we could not compete on scale, so our focus was on how to compete meaningfully. We identified two priorities: first, helping customers actually adopt AI by building real, usable use cases; and second, deciding what we would leverage to build those capabilities.
Our answer was to combine three core strengths: GPU infrastructure we own, open source and open-weight models comparable to today’s frontier models, and Indian engineering talent. The M platform is built on this foundation, making it inherently sovereign. It brings together Indian-owned GPUs, open source AI, and local talent to deliver practical AI solutions. M functions as a common platform that hosts multiple AI use cases under a single interface.
Q. What kinds of applications run on M today?
A. M is a multi-agent platform. For example, we are working with InterGlobe Technology Quotient (ITQ) using Travelport data, which covers bookings across 400 airlines and over 650,000 hotels, to reimagine travel experiences with AI. We have also onboarded Indian open source models, including language and domain-specific models.
Q. Is M trying to compete with ChatGPT, DeepSeek, or other frontier models?
A. Competing head-on with ChatGPT isn’t the goal; you can’t beat free with free. M is built for utility, not generic conversation, with a sharp focus on India-specific needs. Unlike global, general-purpose models, M offers deep contextual understanding of Indian law, finance, language, and workflows. At its core, M is agent-driven: a single interface understands user intent and routes each query to the most relevant domain-expert model, enabling task execution rather than just dialogue.
Key examples include an Indian legal model trained on all Indian laws and court judgments from 1969 to September this year, aimed at helping citizens understand real-world legal rights, with endorsement being sought from the Ministry of Law. M also features a financial assistant capable of answering queries at a chartered-accountant level, with plans to support actions like income-tax filing. Overall, M stays practical, India-centric, and purpose-built rather than competing as another chatbot.
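To make the routing idea concrete, here is a minimal Python sketch of the intent-routing pattern described above. Everything in it, the expert functions, the keyword classifier, and the refusal message, is an illustrative assumption rather than NxtGen’s actual implementation; a production router would classify intent with a model, not keywords.

```python
# Hypothetical sketch of intent-based routing to domain-expert models.
# All names and the keyword classifier are illustrative assumptions.
from typing import Callable, Dict

def legal_expert(query: str) -> str:
    return f"[legal model] {query}"       # stand-in for a fine-tuned legal model

def finance_expert(query: str) -> str:
    return f"[finance model] {query}"     # stand-in for a CA-level finance model

EXPERTS: Dict[str, Callable[[str], str]] = {
    "legal": legal_expert,
    "finance": finance_expert,
}

def classify_intent(query: str) -> str:
    """Toy keyword classifier; a real router would use a model here."""
    q = query.lower()
    if any(w in q for w in ("law", "court", "judgment", "notice")):
        return "legal"
    if any(w in q for w in ("tax", "gst", "filing", "invoice")):
        return "finance"
    return "unknown"

def route(query: str) -> str:
    expert = EXPERTS.get(classify_intent(query))
    if expert is None:
        # Out-of-domain queries are refused rather than answered speculatively.
        return "This query is outside the domains currently supported."
    return expert(query)

print(route("How do I reply to a court notice?"))
```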
Q. What does the name ‘M’ signify?
A. Alphabet letters cannot be copyrighted, which makes them globally usable. Philosophically, ‘M’ represents the first sound a human makes at the beginning of interaction with the world. That symbolism resonated with us as the starting point of human intelligence and communication.
Q. How central is open source to the M platform?
A. Open source powers everything in M, from frontend to backend. None of the core components are proprietary. This allows us to continuously adopt the best models for reasoning, coding, or specific tasks. Instead of being locked into one model’s limitations, we curate the best tools available. Where required, we fine-tune and domain-adapt models for Indian contexts. This approach dramatically reduces cost and accelerates innovation.
Q. Can developers contribute to M? How does the community engage?
A. Yes. Contributions happen via GitHub and through communities embedded in our portal. The strongest engagement today is around our coding agent, which integrates directly into IDEs like Visual Studio. Developers contribute feedback, improvements, and testing insights. We also collaborate globally through forums such as the AI Alliance, exchanging knowledge and best practices across borders.
Q. There is growing concern that open source AI can be misused. What is your view?
A. AI is like electricity; it should belong to everyone. While open source can be misused, the real constraint isn’t code but compute, which is scarce and tightly controlled. Running AI at scale requires significant infrastructure, making unrestricted misuse difficult. AI is already being weaponised, but not because of open source alone. The bigger risk is global polarisation, where a few countries, mainly the US and China, dominate AI, while most of the world is left behind.
With only around 20–22 countries having meaningful GPU capacity, the rest risk becoming digital colonies of powerful governments or corporations. Open source, combined with sovereign AI infrastructure, is one of the few ways to democratise capability and prevent this imbalance.
Q. Has the government supported your AI initiatives?
A. We didn’t seek government support for M because building it isn’t overly complex. Our engagement with the government is more focused on specific problem statements, such as AI-driven cybersecurity. Traditional GenAI models struggle here due to limited context windows, while cybersecurity analysis involves millions of logs at a time.
We have applied for India AI Mission grants to address this challenge, though approvals are still pending.
More broadly, our role today is as a GPU provider, supporting multiple initiatives being built on that infrastructure. M itself doesn’t depend on government backing, but for consumer adoption the real challenge is delivering a seamless, world-class experience, something as intuitive and frictionless as WhatsApp, before taking it to scale.
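The context-window problem described above is commonly worked around with hierarchical, map-reduce-style analysis: split the logs into chunks the model can hold, analyse each chunk independently, then correlate the partial findings in a final pass. A minimal sketch, where llm() is a hypothetical stand-in for a call to any available model:

```python
# Map-reduce sketch for analysing logs far larger than a model's context window.
# llm() is a hypothetical stand-in; the chunk size is illustrative.

def llm(prompt: str) -> str:
    # Replace with a real model call; this stub just echoes the prompt size.
    return f"(model output for a {len(prompt)}-character prompt)"

def chunked(lines: list, size: int = 2000):
    for i in range(0, len(lines), size):
        yield lines[i:i + size]

def analyse_logs(log_lines: list) -> str:
    # Map: summarise suspicious activity in each chunk independently.
    partials = [
        llm("Summarise suspicious activity in these logs:\n" + "\n".join(chunk))
        for chunk in chunked(log_lines)
    ]
    # Reduce: correlate the per-chunk findings in a single final pass.
    return llm("Correlate these findings into one incident report:\n"
               + "\n".join(partials))

print(analyse_logs([f"log line {i}" for i in range(10_000)]))
```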
Q. How do you ensure AI safety and prevent misuse of M?
A. We use open source models but keep them fully quarantined: no internet access, no memory, no data accumulation, and no data sent out. Models only process input and return output, ensuring user safety. To reduce hallucinations, we use strict domain specialisation: each query is handled by an expert model trained only on a specific Indian knowledge set. If a question is outside its domain, it won’t answer. M is a mixture of focused expert models, prioritising accuracy, security, and trust, unlike AGI-driven systems that aim to accumulate broad knowledge.
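Complementing the routing sketch earlier, here is a minimal illustration of the stateless ‘quarantine’ contract described above. The wrapper and its toy stand-ins are hypothetical; the point is what the serving path does not do: keep history, log user text, or make outbound calls.

```python
# Hypothetical illustration of a stateless, quarantined serving path:
# the model sees only the current input and returns only the output.
from typing import Callable

def quarantined_answer(model: Callable[[str], str],
                       in_domain: Callable[[str], bool],
                       query: str) -> str:
    if not in_domain(query):
        return "Outside this expert's domain; no answer attempted."
    # One-shot call: no conversation history, no logging, no network egress.
    return model(query)

# Toy stand-ins for demonstration only:
toy_model = lambda q: f"answer to: {q}"
is_legal = lambda q: any(w in q.lower() for w in ("law", "court", "act"))
print(quarantined_answer(toy_model, is_legal, "What does the act say about X?"))
```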
Q. What is the business model behind M?
A. Our AI business is in hypergrowth and is set to surpass our 13-year-old cloud business within a year. AI is now embedded everywhere, and even riding this wave is creating strong value. Our revenue model varies by use case: in travel, we act like a travel agent and earn commissions; professional tools like coding assistants are charged hourly; general consumer services are kept free. For legal AI, basic advice will remain free, but specialised tasks like notice or document review may be charged. On the enterprise side, corporate legal AI is already priced at ₹100 per user.
Q. What features of M are currently available to users?
A. The mobile app offers the most complete experience today and is recommended for anyone who wants to try it. Some guardrails are intentionally in place, especially for sensitive domains like healthcare, to avoid unsafe medical advice. Features will continue to evolve as safety, reliability, and user experience mature. Users can explore the mobile version now, and more capabilities will be rolled out over time.
Q. How significant is your GPU investment?
A. We recently invested ₹36 billion in GPU infrastructure, which is expected to be fully subscribed even before deployment. GPUs are in massive demand. While use-case revenues will take time to match infrastructure revenues, we are already executing 153 AI projects across industries, including automotive, manufacturing, and government.
Q. How did you go about building your AI capabilities?
A. Our cloud business continues to fund the company and pays everyone’s salaries, but about two years ago we made a deliberate long-term bet on AI. We assembled a team of around 120 young engineers, invested nearly ₹500 million in GPUs, and gave them a full year to experiment internally without customer pressure. Their first projects applied AI to our own systems: an internal enterprise chat platform, where a bot could understand and respond using three years of conversations, and a secure document repository, where users could query and extract intelligence from their files. That year was about learning and capability-building, with strong support from partners such as NVIDIA, AMD, and Red Hat. Once the team was ready, we took these capabilities to customers. Over the last year alone, this has translated into more than 150 AI projects delivered.
Q. How are you approaching sustainability and energy for data centres?
A. We are net zero even today, based on an earlier commitment, largely through renewable energy. However, this is not structurally perfect, as night-time operations still rely on grid and banked power. With 4,000 GPUs being deployed and room for another 4,000, we will soon need a new facility. GPU density has fundamentally changed data centre design; what earlier needed 30 racks now fits into one, dramatically increasing power and cooling requirements.
For long-term sustainability, we are exploring alternative energy models, including discussions with players such as Rolls-Royce and Westinghouse on Small Modular Reactors (SMRs), with deployment targeted for around 2030. While SMRs are not yet cost-competitive, nuclear remains the only clean, consistent energy source at scale. In the near term, India’s solar capacity combined with underutilised thermal power can support 9–10 GW of data centre demand; beyond 10–20 GW, nuclear power becomes unavoidable.
Q. Where is the data centre upgrade happening, and how are you addressing the talent gap for liquid cooling?
A. The GPU deployment is currently limited to a single location in Bengaluru because we are using liquid cooling, which is very different from traditional air-cooled data centres. The entire design and maintenance philosophy changes when water is taken directly to the chip. Power densities have risen sharply, and what once needed 30 racks can now fit into one, making liquid cooling inevitable. India currently lacks skilled talent for this technology, as this is the first such deployment at scale.
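As a rough worked figure for why density forces this shift, using assumed numbers purely for illustration (neither rack figure comes from the interview): if a legacy air-cooled rack draws about 4 kW, consolidating 30 racks’ worth of compute into a single rack implies

```latex
30 \times 4\,\text{kW} \approx 120\,\text{kW per rack}
```

which is far beyond the 15–20 kW or so that conventional air cooling comfortably removes, hence the need to take liquid directly to the chip.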
Q. How are you getting the talent to execute such a programme?
A. There are no academic programmes or institutes offering training, so we are building capability from scratch: learning abroad, working closely with OEMs, and training teams in-house. The first trained batch should be ready by mid-year, after which expansion to other locations will be possible. Going forward, liquid cooling will not be limited to GPUs. As CPUs become more powerful and run AI and general-purpose workloads, they too will require liquid cooling, making talent development critical across data centre infrastructure.
Q. How do you see AI chip competition evolving beyond NVIDIA? Any early bets on challengers?
A. There is clearly a race underway among the large global players, and it’s something to watch unfold. They have the capital and talent to experiment at scale. Beyond them, several startups, such as those building inference-specific accelerators, are trying to compete, but they face a fundamental challenge: volume. While their hardware can technically match GPUs for inference, low production volumes make it hard to achieve competitive pricing, which is critical in the electronics industry.
My view is that many of these players may ultimately be overtaken by CPUs. CPU vendors are steadily embedding deep-learning capabilities, similar in concept to those of TPUs today. Most enterprise AI workloads run on relatively modest models, often one to two billion parameters, which already perform well on CPUs. Over the next two to three years, general-purpose inference is likely to shift largely to CPUs, while large-scale training will continue to rely on GPUs and a small number of specialised accelerators.
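As a concrete illustration of small-model inference on CPUs, here is a minimal sketch using the Hugging Face transformers pipeline API. The model name is only one example of an open-weight model in the one-to-two-billion-parameter range; any similar model would make the same point.

```python
# CPU-only inference with a small open-weight model via Hugging Face transformers.
# The model name is an example only; swap in any ~1-2B parameter open model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # ~1.1B parameters
    device=-1,                                   # -1 = run on CPU, no GPU needed
)

result = generator("List three checks before filing an income-tax return:",
                   max_new_tokens=128)
print(result[0]["generated_text"])
```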
Q. Why is sovereignty so important to you, especially in AI?
A. It’s both personal and strategic. Early in my career, I realised that world-class opportunities could be created in India; I didn’t need to leave the country to succeed. That shaped my belief that India can and should build its own technology capabilities. Later, seeing how much personal data global platforms collect, tracking behaviour, location, and usage, highlighted serious privacy and control concerns. As AI becomes more powerful, this concentration of data and capability becomes even more critical. For me, sovereignty means building strong, capable technology in India that protects data, serves local needs, and gives us control over critical systems. It has never limited us; it has enabled everything we’ve built.
Q. What are your views on India building its own foundational AI models?
A. India’s AI mission is a well-thought-out and timely step, though it’s still too early to judge outcomes. India should absolutely build its own AI models, but not by reinventing the wheel. The real gap isn’t just GPUs or research scale; it’s the missing Indian context. Global models trained largely in English fail to capture India’s languages, cultural relationships, and societal nuances. Building indigenous models, even if early versions are imperfect, is essential to embed this context.
The first few iterations will be weak due to talent and funding constraints, and startups alone can’t build foundation models. But with patience, successive iterations can reach open source competitiveness. These models are especially critical for sovereign use cases such as defence, government, healthcare, and finance, where data control and jurisdiction matter. Expectations must be realistic, but the strategic value is undeniable.
Q. Is India’s funding sufficient to build a global AI ecosystem?
A. Not really. Compared to global peers, India lacks true risk capital for exploratory and long-horizon AI research. While the government has taken a positive step by committing over a billion dollars in GPU subsidies, private funding in India is largely growth- and profitability-driven. Investors are hesitant to back experimental or “esoteric” AI work, and there is still concern about an AI bubble. As a result, India is unlikely to compete head-on in the large foundational LLM race, which requires massive, sustained risk capital. However, that is not where our strength lies. Applying AI to specific, real-world problems, often using smaller, task-focused models, is a much more achievable and impactful opportunity. India can lead in applied AI across education, governance, citizen services, and industry use cases. We may not win the LLM arms race, but we can build meaningful, scalable AI solutions that the world can learn from.
Q. Can India compete globally under open source sovereign AI pressure?
A. We cannot compete by copying Western approaches or spending at the same scale. Platforms like DeepSeek show that innovative architectures, such as mixture-of-experts models, can achieve high performance at much lower cost than the brute-force approach behind models like ChatGPT. India’s path lies in thinking differently: developing new mathematical models or approaches that solve the same problems without replicating the West’s resource-intensive methods. Limited resources force us to innovate, leveraging brains over muscle. The example of countries like China shows that ingenuity can overcome scale limitations, and that’s the model India should follow.
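For readers unfamiliar with the mixture-of-experts idea cited above, this toy numpy sketch shows the cost argument: a gating network scores every expert, but only the top-k actually run per input, so compute grows with k rather than with the total expert count. All shapes and weights here are illustrative and unrelated to DeepSeek’s real architecture.

```python
# Toy mixture-of-experts forward pass: score all experts, run only the top-k.
import numpy as np

rng = np.random.default_rng(0)
n_experts, d, k = 8, 16, 2                     # illustrative sizes

gate_w = rng.normal(size=(d, n_experts))       # gating network weights
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    scores = x @ gate_w                        # cheap: score every expert
    top = np.argsort(scores)[-k:]              # pick the k best-scoring experts
    w = np.exp(scores[top])
    w /= w.sum()                               # softmax over the selected scores
    # The expensive part runs only k times, not n_experts times:
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

y = moe_forward(rng.normal(size=d))
print(y.shape)  # (16,): same output size at roughly k/n_experts of dense cost
```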
Q. Any message to academicians and students preparing for new AI-driven opportunities?
A. The real gap is not students or courses; it is trainers. Almost every college today talks about AI and new technologies, but the challenge is whether faculty have real, up-to-date industry knowledge. AI is evolving daily, and even the industry is still learning, which makes standardised teaching difficult in the short term.
The most urgent need is to train the trainers: give faculty exposure, hands-on experience, and continuous upskilling so they can pass relevant knowledge to students. Without that, courses remain theoretical and disconnected from industry needs. Preparing educators is the key to preparing students for the AI future.
Q. Will AI eliminate coding jobs, and where does human value remain?
A. AI coding assistants are already changing how software is built. They can generate applications and content quickly, improving our productivity by over 40% while also standardising code and improving documentation. This will reduce hiring needs and impact repetitive, “bot-like” coding roles. However, AI will not build complex applications on its own; human judgment, architecture, and problem-solving remain essential. The biggest risk is for developers who resist adopting these tools. Those who adapt and use AI as a co-pilot will stay relevant; those who don’t will be the first to be replaced.
Q. How should professionals identify if they are falling behind?
A. The clearest sign is resistance to change. But this should be recognised through exposure, not judgment, and it is important to avoid premature or age-based bias. The issue isn’t age; it’s whether you stay current. Seeing what’s happening globally makes the gap obvious. When people realise AI is being used for things like drug discovery and advanced design, not just chatbots, it becomes a wake-up call. Continuous learning and exposure to real-world innovation are the only ways to avoid falling behind.
Q. How do you personally balance leadership, learning, and life?
A. Weekdays are for my hobbies; weekends are for my obligations. Saturdays and Sundays are structured around family, parents, and everyday responsibilities. During the week, work is something I genuinely enjoy. Our organisation is mature, so my leadership team knows their responsibilities. I do formal reviews only two days a quarter; the rest of the time, they approach me if needed. That gives me flexibility to focus on learning, customers, and staying technically relevant. If I am advising customers, I need to truly stay up to date.

