Graduate researcher and Next Pathway develop new standards in privacy-first infrastructure automation
Project at a Glance: What once took IT teams hours now happens in minutes. Rushikesh Reddy Guttapati embedded artificial intelligence into Next Pathway’s DevOps pipeline, automating everything from virtual machine (VM) setup to security threat detection — all while keeping sensitive enterprise data locked down on-premises. The result: tasks that previously took one to two hours now finish in two to three minutes, and security alerts arrive in under 60 seconds.
When Hours Become Minutes: The Infrastructure Bottleneck
Picture an IT professional’s day: creating virtual machines by hand, scanning dashboards for performance issues, and manually combing through login records to spot security threats. These aren’t occasional tasks; they’re daily rituals that eat up valuable time and leave room for human error. As companies grow, these manual processes create bottlenecks that slow everything down, from launching new applications to responding to security incidents.
Next Pathway knows this challenge well. The Toronto-based company helps enterprises escape outdated legacy systems and move to modern cloud platforms. Its SHIFT™ product platform already automates the complex work of translating Structured Query Language (SQL) code and Extract, Transform, Load (ETL) pipelines for the cloud. But internally, the company faced the same problem its clients do: how to make infrastructure management faster and smarter without sacrificing the one thing enterprises can’t compromise on — data security.
Off-the-shelf AI for IT Operations (AIOps) tools handle basic automation, but they often miss the nuances: intelligent scheduling that adapts to real-world patterns, instant detection of subtle security anomalies, and, most critically, keeping sensitive data within company walls. The question driving this project was deceptively simple: could AI running entirely on Next Pathway’s own servers match the capabilities of powerful cloud-based AI services without exposing private company data?
Academic Expertise Meets Industry Innovation
“From high school, I was deeply interested in computer science, especially AI,” said Guttapati, a graduate of Amrita Vishwa Vidyapeetham. “In my first year of undergrad, I decided to pursue a master’s program that combines research and hands-on technical experience. The Master of Science in Applied Computing (MScAC) program at the University of Toronto stood out as a perfect fit.”
Next Pathway formed a partnership with the MScAC program. Guttapati, a graduate student in the computer science concentration, worked under Inder Dhillon, Next Pathway’s head of technology operations, and academic supervisor Professor Shurui Zhou.
“At Next Pathway, we believe that innovation begins with curiosity and a commitment to continuous learning,” said Dhillon.
Located in downtown Toronto’s Scotia Plaza, Next Pathway is led by U of T alumni and fosters a culture that encourages self-starters and innovative thinking.
“Our partnership with U of T’s MScAC program reflects this same spirit,” said Clara Angotti, president and chief operating officer of Next Pathway. “These academic collaborations bring together bright, ambitious talent with seasoned engineers, creating an environment where excellent mentorship can generate breakthroughs. It’s a powerful way to invest in the next generation of innovators while strengthening our culture of growth and discovery.”
Building a Privacy-First AI Solution
Guttapati’s research targeted three core DevOps areas. He compared locally deployed language models (LLaMA-2-7B and GPT-J) with third-party services like OpenAI’s GPT-4 and Anthropic’s Claude, evaluating trade-offs between accuracy, privacy and cost.
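The sketch below illustrates the shape of such a comparison: a small harness that scores any two model back-ends on the same labelled prompts and tallies per-call cost. The prompts, the scoring rule and the cost figures are illustrative assumptions, not values from the project, and the two back-ends are stubs standing in for a local LLaMA-2/GPT-J call and a hosted API call.

```python
"""Minimal sketch of an accuracy/cost comparison harness for two model back-ends.

The prompts, expected answers and cost figures are illustrative assumptions;
the real evaluation used Next Pathway's internal DevOps tasks.
"""
from typing import Callable, List, Tuple

def evaluate(model: Callable[[str], str],
             cases: List[Tuple[str, str]],
             cost_per_call: float) -> Tuple[float, float]:
    """Return (accuracy, total cost) of a model over labelled prompt/answer pairs."""
    correct = sum(1 for prompt, expected in cases
                  if expected.lower() in model(prompt).lower())
    return correct / len(cases), cost_per_call * len(cases)

# Placeholder back-ends: swap in a local LLaMA-2/GPT-J call and a hosted API call.
def local_model(prompt: str) -> str:
    return "stub local answer"

def hosted_model(prompt: str) -> str:
    return "stub hosted answer"

cases = [("Which window is off-peak for backups?", "01:00"),
         ("Flag a login at 03:12 from a new region.", "anomaly")]

print("local :", evaluate(local_model, cases, cost_per_call=0.0))
print("hosted:", evaluate(hosted_model, cases, cost_per_call=0.01))
```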
For VM provisioning and backup scheduling, he built an automated system that creates virtual machines based on user roles and integrates AI with Nakivo’s backup platform to optimize job scheduling. The system analyzes user requirements, provisions appropriate resources and schedules backups during off-peak hours without human intervention.
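A minimal sketch of that flow is shown below: a role determines the VM size, and the backup job is queued for an off-peak window. The role-to-spec table, the off-peak start time and the two helper functions are assumptions standing in for the actual provisioning logic and the Nakivo integration.

```python
"""Minimal sketch of role-based VM sizing plus off-peak backup scheduling.

The role-to-spec table, the off-peak window and the provision/backup helpers
are illustrative stand-ins for the real provisioning and Nakivo integrations.
"""
from dataclasses import dataclass
from datetime import time

@dataclass
class VMSpec:
    vcpus: int
    memory_gb: int
    disk_gb: int

# Assumed mapping from user role to an appropriately sized VM.
ROLE_SPECS = {
    "developer": VMSpec(4, 16, 100),
    "analyst":   VMSpec(2, 8, 50),
}

OFF_PEAK_START = time(1, 0)  # assumed off-peak backup window start (01:00)

def provision_vm(user: str, spec: VMSpec) -> None:
    print(f"provisioning VM for {user}: {spec}")        # placeholder for the real call

def schedule_backup(user: str, start: time) -> None:
    print(f"scheduling backup for {user} at {start}")   # placeholder for the Nakivo job

def onboard(user: str, role: str) -> None:
    """Provision a VM sized for the role and queue its backup for off-peak hours."""
    spec = ROLE_SPECS.get(role, ROLE_SPECS["analyst"])
    provision_vm(user, spec)
    schedule_backup(user, OFF_PEAK_START)

onboard("jdoe", "developer")
```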
Dynamic resource monitoring represented the second innovation. By enhancing Prometheus monitoring with AI, the system now scales VM resources automatically based on predicted demand patterns, identifying trends and proactively adjusting capacity before bottlenecks occur.
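The idea can be reduced to a simple loop: pull recent utilization samples from Prometheus, extrapolate the trend, and act before the predicted value crosses the alert threshold. In the sketch below, the Prometheus URL, the query parameters and the scaling action are assumptions, and the linear extrapolation is a stand-in for the learned demand model the production system uses.

```python
"""Minimal sketch of trend-based scaling against Prometheus.

The Prometheus URL, the query parameters and the scaling action are assumptions;
the production system layers a learned demand model on top of this idea.
"""
import requests

PROM_URL = "http://prometheus.internal:9090/api/v1/query_range"  # assumed endpoint
THRESHOLD = 0.80  # alert/scale when predicted utilization crosses 80 per cent

def recent_cpu_samples(query: str, start: int, end: int, step: str = "60s"):
    """Fetch a range of utilization samples from the Prometheus HTTP API."""
    resp = requests.get(PROM_URL, params={"query": query, "start": start,
                                          "end": end, "step": step})
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return [float(v) for _, v in result[0]["values"]] if result else []

def predicted_peak(samples, horizon: int = 10) -> float:
    """Naive linear extrapolation of recent samples, `horizon` steps ahead."""
    if len(samples) < 2:
        return samples[-1] if samples else 0.0
    slope = (samples[-1] - samples[0]) / (len(samples) - 1)
    return samples[-1] + slope * horizon

def maybe_scale(samples) -> None:
    if predicted_peak(samples) >= THRESHOLD:
        print("predicted to exceed 80% capacity: scaling up")  # placeholder action
```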
The third component addressed security through log analysis for anomaly detection. AI models continuously scan authentication logs, flagging unusual patterns such as late-night access attempts or logins from unexpected geographic regions, generating alerts in under a minute.
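At the rule level, those two signals look like the sketch below: a login is flagged if it happens in the late-night window or comes from a region not previously seen for that user. The field names, the late-night hours and the per-user location history are assumptions; the deployed system runs AI models over the logs rather than fixed rules, but the flagged patterns are of this kind.

```python
"""Minimal sketch of rule-level checks over authentication log entries.

Field names, the 'late-night' window and the per-user region history are
assumptions; the deployed system uses AI models rather than fixed rules.
"""
from datetime import datetime

KNOWN_REGIONS = {"jdoe": {"CA"}, "asmith": {"CA", "US"}}  # assumed history

def is_anomalous(entry: dict) -> bool:
    """Flag late-night access or a login from a region not seen for this user."""
    ts = datetime.fromisoformat(entry["timestamp"])
    late_night = ts.hour < 5 or ts.hour >= 23
    new_region = entry["region"] not in KNOWN_REGIONS.get(entry["user"], set())
    return late_night or new_region

entry = {"user": "jdoe", "timestamp": "2024-03-02T03:12:00", "region": "DE"}
if is_anomalous(entry):
    print(f"ALERT: suspicious login for {entry['user']}")  # raised within the minute
```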
A critical constraint shaped the research approach: enterprise data privacy. Unlike consumer applications, enterprise DevOps systems handle sensitive information about infrastructure, user behaviour and security vulnerabilities. Any AI solution must protect this data while delivering automation benefits.
Transforming Daily Operations: The Results
VM provisioning time dropped from one to two hours to two to three minutes — a 95 per cent reduction that eliminated a major bottleneck. What previously required extensive manual configuration now happens automatically, freeing IT teams to focus on strategic initiatives.
Resource monitoring transformed from a time-consuming daily chore into a proactive, automated system. An employee who previously spent an hour each day manually checking for capacity bottlenecks no longer performs these checks. Instead, the AI system sends alerts when resources reach 80 per cent capacity, enabling pre-emptive scaling before performance degrades.
Security anomaly detection achieved near-instantaneous response times, flagging suspicious login patterns and delivering alerts in under a minute. This rapid detection significantly reduces the window of vulnerability compared to manual log review.
The comparative analysis revealed nuanced trade-offs. Third-party models achieved about 92 per cent accuracy while embedded models reached about 84 per cent. However, embedded models incur no per-token fees and keep all data on-premises, while third-party services require external data transfer.
“These results are nothing short of game-changing for our business,” said Dhillon. “Speed to market is an important part of Next Pathway’s value to our customers and this research project delivers on this promise.”
For privacy-sensitive operations, Next Pathway relies on embedded models to ensure complete data sovereignty. For backup scheduling, where higher accuracy matters and data sensitivity is lower, the system leverages third-party large language models (LLMs). This hybrid approach maximizes both security and performance.
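The routing decision behind that hybrid approach can be sketched in a few lines. The task labels and the two model stubs below are illustrative assumptions; the article only states that sensitive workloads stay on-premises while lower-sensitivity jobs such as backup scheduling may use third-party LLMs.

```python
"""Minimal sketch of routing a task to an embedded or third-party model.

The sensitivity labels and the two call_* stubs are illustrative assumptions.
"""

SENSITIVE_TASKS = {"log_analysis", "provisioning"}   # assumed classification
LOW_SENSITIVITY_TASKS = {"backup_scheduling"}

def call_embedded_model(prompt: str) -> str:
    return "on-prem model response"        # placeholder for the local LLM

def call_third_party_model(prompt: str) -> str:
    return "hosted model response"         # placeholder for the external API

def route(task: str, prompt: str) -> str:
    """Keep sensitive prompts on-premises; send low-sensitivity ones externally."""
    if task in SENSITIVE_TASKS or task not in LOW_SENSITIVITY_TASKS:
        return call_embedded_model(prompt)  # default to on-prem when in doubt
    return call_third_party_model(prompt)

print(route("backup_scheduling", "Pick the best backup window for VM-42."))
```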
A Culture of Innovation and Mentorship
Next Pathway, founded in 2006 and headquartered in Toronto, has earned a reputation for innovation in cloud migration technology. The company focuses on solving technically challenging problems while maintaining high-quality standards.
“At Next Pathway, we chose to partner with the University of Toronto because we believe in nurturing the next generation of technologists,” said Angotti. “We also place a high value on higher education, and the MScAC program is best in class. This allows Next Pathway to continue to be the leader in providing an accelerated and automated path to the cloud.”
“During my internship at Next Pathway, I faced several challenges that helped me grow both technically and personally,” said Guttapati. “Understanding the problem and getting a real feel for the company’s systems took time. There were also moments when I got stuck with technical blockers, but my industry mentor, Inder Dhillon, guided me through each of them with patience and great insights.”
The hybrid work model — three days in the Toronto office and two days remote — facilitated both collaborative whiteboarding sessions and focused development time. This flexibility, combined with regular presentations to executive leadership, gave Guttapati visibility across the organization.
What This Means for Enterprise AI
Next Pathway has enhanced its SHIFT™ product platform with AI models, demonstrating continued commitment to AI-driven innovation. Guttapati’s research on privacy-first AI deployment provides a foundation for future enhancements across Next Pathway’s product suite.
The project’s methodology — comparing locally deployed and third-party AI models across multiple operational contexts — offers a template for other enterprises navigating similar automation challenges. As organizations increasingly adopt AI, data sovereignty becomes paramount.
The emphasis on privacy-first design aligns with growing regulatory requirements and customer expectations around data protection. As cloud migration accelerates across industries, this research demonstrates that enterprises don’t need to sacrifice privacy for performance.
“As Next Pathway serves the largest companies across the globe, our infrastructure needs to be superior,” said Dhillon. “We will continue to build on this successful work and expand our thinking into new areas. The opportunities are limitless.”
By the Numbers
- 95 per cent reduction in VM provisioning time (from one to two hours to two to three minutes)
- Alerts generated in under one minute for security anomalies
- 80 per cent capacity threshold for automated scaling alerts
- 92 per cent accuracy for third-party AI models; 84 per cent for embedded models
- Three core DevOps areas enhanced: provisioning, monitoring, and security
Contact: For media enquiries, please contact MScAC Partnerships at partners@mscac.utoronto.ca. For more information about Next Pathway, visit www.nextpathway.com.