Graduate researcher and TrendAI develop intelligent system that goes from attack scenario to validated detection logic in hours instead of days
Project at a Glance: Security teams at TrendAI, a business unit of Trend Micro and a global AI security leader, used to spend entire workdays writing detection rules needed to flag cloud-based cyberattacks. Each rule required expert knowledge, precise coding and meticulous validation. HongCheng Wei built FRM-Agent, an AI assistant that now completes this process in under two hours while maintaining expert-level quality. In tests with five security professionals, the system matched or exceeded human-authored detections 79 per cent of the time, enabling TrendAI to respond to new threats within hours instead of days.
When Every Hour Counts: The Challenge of Keeping Up with New Threats
As companies move their important systems to the cloud, security teams depend on detailed activity logs to detect attacks in real time. Each time hackers develop a new tactic, security teams must draft new detection rules — instructions that help systems identify suspicious activity among millions of recorded events.
TrendAI’s Security Analytics Engine uses a three-stage structure:
- Filters flag early warning signs in logs
- Rules link those signs to identify attacker actions
- Models combine Rules to detect complete, multi-step attacks
This layered approach is powerful enough to catch complex threats. However, creating a full set of these detection instructions typically took security experts one to two full workdays — a serious problem when new threats appear every day.
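The three-layer composition above can be sketched as simple nested data structures. This is a hypothetical illustration only; the class names, fields and query syntax below are assumptions, not TrendAI's actual SAE schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three-layer detection hierarchy.
# Names, fields and query strings are illustrative, not the real SAE schema.

@dataclass
class Filter:
    """Flags a single early-warning signal in the logs."""
    name: str
    query: str  # a log-matching expression

@dataclass
class Rule:
    """Links Filters to identify one discrete attacker action."""
    name: str
    filters: list

@dataclass
class Model:
    """Combines Rules to detect a complete, multi-step attack."""
    name: str
    rules: list = field(default_factory=list)

# A toy "credential theft then exfiltration" detection:
f1 = Filter("unusual_login", "event == 'ConsoleLogin' and geo.anomaly")
f2 = Filter("bulk_download", "event == 'GetObject' and count > 1000")
r1 = Rule("credential_theft", [f1])
r2 = Rule("data_exfiltration", [f2])
m = Model("account_takeover", [r1, r2])
print(len(m.rules))  # number of attacker actions the Model sequences
```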
AI language models seemed like they could speed things up, but they had critical limitations. Even with careful guidance, these AI tools would generate detection instructions with small formatting errors that caused failures, or they’d misuse technical features they didn’t fully understand. Because AI models have a knowledge cutoff date, they would also make up information about brand-new cloud features, resulting in incomplete or broken detections. TrendAI needed a solution that combined AI speed with the reliability required for real-world security operations.
Academic Rigour Meets Security Innovation
TrendAI’s parent company, Trend Micro, founded in 1988, is a multinational cybersecurity leader specializing in enterprise security solutions and threat intelligence across distributed computing environments. The company maintains extensive global research infrastructure monitoring cybersecurity threats across enterprise, governmental and consumer sectors, with research emphasis on machine learning applications in threat detection and cross-platform security architectures.
HongCheng Wei brought a strong background in computer systems and data management to the challenge. Coursework in the University of Toronto’s Master of Science in Applied Computing (MScAC) program — including software engineering for ML-enabled systems and ubiquitous computing with LLMs — gave him hands-on experience with model-tool integration, structured outputs and human-in-the-loop design: skills directly applicable to building reliable AI agents.
“I was drawn to the opportunity to apply advanced large language models to concrete, high-stakes security challenges,” Wei said. “At TrendAI, I wasn’t just building a theoretical model; I was creating a system that directly empowers security engineers to stay ahead of rapidly evolving cloud threats.”
The partnership began after Trend Micro hired MScAC alumnus Grigory Dorodnov, whose expertise and professionalism prompted the company to pursue deeper collaboration. Seeking AI talent, the team proposed several AI-focused projects to the program. The 2024 cohort marked the first year of the formal partnership, launching the AI Cyber Threat Research internship program.
Wei worked under TrendAI supervisors Deep Patel and Smile Thanapattheerakul, with academic supervision from Professor Shurui Zhou. The eight-month internship integrated students directly into production pipelines through weekly demos and monthly in-person collaboration sessions.
“The MScAC program brings us researchers who are not only academically rigorous but ready to tackle complex industrial problems from day one,” Patel said. “HongCheng didn’t just observe our workflow; he owned a critical piece of our threat research strategy.”
Building Intelligence with Guardrails
FRM-Agent’s architecture addresses automation through a two-layer multi-agent system combining LLM reasoning with structured validation. The Filter Generation Agent focuses on individual atomic signals (SAE Filters), while the Rule-Model Generation Agent decomposes complex attack scenarios into attacker actions, orchestrates filter generation for each action, and assembles them into SAE Rules and Models with branching and sequencing.
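The two-layer orchestration, in which the Rule-Model agent decomposes a scenario into attacker actions, delegates each to the Filter agent, and assembles the results, can be sketched as follows. All function names here are illustrative assumptions standing in for LLM-driven agents, not FRM-Agent's actual implementation.

```python
# Sketch of the two-layer orchestration. Each function is a stand-in for an
# LLM-driven agent; names and data shapes are illustrative assumptions.

def decompose_scenario(scenario):
    """Rule-Model agent step: break a scenario into discrete attacker actions."""
    return scenario["actions"]

def generate_filter(action):
    """Filter Generation Agent step: one atomic signal per attacker action."""
    return {"name": f"{action}_filter", "action": action}

def build_detection(scenario):
    """Decompose, delegate per-action filter generation, then assemble."""
    actions = decompose_scenario(scenario)
    filters = [generate_filter(a) for a in actions]
    rule = {"name": scenario["name"], "filters": filters}
    return {"model": scenario["name"], "rules": [rule]}

detection = build_detection(
    {"name": "s3_exfiltration",
     "actions": ["enumerate_buckets", "bulk_get_objects"]}
)
print(len(detection["rules"][0]["filters"]))
```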
The workflow begins conversationally: users describe an attack scenario, and the system generates a visual tree-structured FRM plan showing how Filters, Rules and Models will compose. Security experts can regenerate portions, edit nodes, or approve the structure before specialized sub-agents generate detection code.
Key innovations made this workflow production-ready. The SAE Filter Compiler transforms freeform LLM outputs into validated Filters using structured tool-calling, enforcing schema compliance and naming conventions without additional training. Real-time retrieval via Model Context Protocol servers and custom AWS documentation tools addresses knowledge cutoffs — fetching current API information rather than relying on outdated training data.
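The compiler's core role, turning freeform model output into a schema-valid artifact or a precise error that can drive a retry, can be sketched as a validation pass. This is a minimal stdlib-only illustration; the required fields and snake_case naming rule are assumptions, not the real SAE Filter schema.

```python
import json
import re

# Minimal sketch of schema enforcement on freeform LLM output.
# Required fields and the snake_case rule are illustrative assumptions.
REQUIRED_FIELDS = {"name", "description", "query"}
NAME_PATTERN = re.compile(r"^[a-z][a-z0-9_]*$")

def compile_filter(llm_output: str) -> dict:
    """Parse freeform LLM output; reject anything schema-invalid."""
    try:
        candidate = json.loads(llm_output)
    except json.JSONDecodeError as exc:
        raise ValueError(f"not valid JSON: {exc}") from exc
    missing = REQUIRED_FIELDS - candidate.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    if not NAME_PATTERN.match(candidate["name"]):
        raise ValueError(f"name violates convention: {candidate['name']!r}")
    return candidate  # validated, schema-compliant Filter

# A well-formed candidate passes; a malformed one raises with an error
# message precise enough to feed back to the LLM for a retry.
good = ('{"name": "suspicious_api_call", '
        '"description": "Flags CreateUser calls from new principals", '
        '"query": "eventName:CreateUser"}')
print(compile_filter(good)["name"])
```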
The technical stack leverages multiple state-of-the-art components:
- TrendAI’s AI endpoint: a company-managed service exposing leading LLMs from OpenAI, Anthropic, Google and others through a unified, policy-governed interface with support for web search, tool calling and structured outputs
- LangChain and LangGraph for orchestration
- Model Context Protocol servers and custom AWS documentation tools for real-time retrieval
- Custom Botocore tools for precise service metadata
- Redis-backed inter-agent communication for scalability and reliability
- Elasticsearch with a redacted CloudTrail dataset for experiments and example log retrieval
- A React-based FRM Agent UI for chat interface, graph prototyping and human editing
Throughout generation, the system applies sophisticated prompt engineering: deep-research-style clarification, ReAct planning, self-ask refinement, reflective critique, context isolation, and stepwise retry-and-validate loops.
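One of these patterns, the stepwise retry-and-validate loop, can be sketched generically. The structure below is an assumption about how such a loop is typically wired; `generate` and `validate` stand in for the real LLM call and the compiler.

```python
# Generic sketch of a retry-and-validate loop: generate a candidate, validate
# it, and on failure feed the error back as context for the next attempt.
# `generate` and `validate` are stand-ins for the LLM call and the compiler.

def retry_and_validate(generate, validate, max_attempts=3):
    feedback = None
    for attempt in range(1, max_attempts + 1):
        candidate = generate(feedback)   # model call, with error context if any
        ok, error = validate(candidate)  # e.g. the schema compiler
        if ok:
            return candidate, attempt
        feedback = error                 # the error message guides the retry
    raise RuntimeError(f"no valid output after {max_attempts} attempts: {feedback}")

# Toy example: the "model" only emits a valid name once it sees feedback.
def fake_generate(feedback):
    return "valid_filter" if feedback else "Invalid Name!"

def fake_validate(candidate):
    ok = candidate.islower() and " " not in candidate
    return ok, None if ok else "names must be lowercase with no spaces"

result, attempts = retry_and_validate(fake_generate, fake_validate)
print(result, attempts)  # succeeds on the second attempt
```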
Transforming Daily Operations: The Results
The quantitative results demonstrate substantial operational improvement. In evaluation against 286 expert-authored Filters, FRM-Agent reproduced or exceeded expert quality in 79 per cent of cases within just two attempts. Average generation time per Filter was approximately three minutes, a pace no human expert could match while maintaining accuracy.
The end-to-end impact proved even more dramatic. In beta testing with five TrendAI security experts, complete authoring time — from initial threat investigation through to submitted pull request with validated FRM detection logic — dropped from an average of eight hours to roughly 100 minutes. This 80 per cent reduction translates directly to faster threat coverage for TrendAI’s customers, enabling the company to respond to emerging cloud attacks within hours rather than days.
Beyond raw speed, the system improved consistency and reliability. Syntax correctness increased markedly due to the integrated compiler and automated validation pipeline. Security engineers reported higher confidence in generated detections, smoother iteration cycles when refinements were needed, and better utilization of SAE backend features that previously required deep system knowledge to use correctly.
The visual FRM graph prototyping workflow emerged as a core innovation rather than a supplementary feature. In this AI-human collaboration loop, users co-design the FRM graph structure and validate the detection strategy at the planning stage, regenerating specific nodes or subgraphs and making targeted edits before committing to Filter generation, rather than running a complete generation cycle and discovering issues later. By ensuring intent alignment and comprehensive coverage before deep generation, this approach made the system both faster and safer, balancing automation speed with expert oversight.
“The FRM Agent fundamentally changes the economics of detection engineering,” said Patel, a Senior Threat Researcher at TrendAI. “By reducing the time-to-publish from days to hours, we aren’t just working faster; we’re freeing our experts to focus on high-level threat analysis rather than syntax and boilerplate code. It acts as a force multiplier, allowing our team to scale our defense capabilities alongside the exploding complexity of cloud environments without burning out our engineers.”
The evaluation methodology was comprehensive, combining multiple dimensions: generation time, token cost, syntax validity through automated checks, log-matching parity comparing generated Filters against expert ground truth on real CloudTrail data, and a checklist-based LLM judge comparison that assessed alignment with expert-authored detections. Compared with baseline attempts using freeform LLM authoring without the structured toolchain, FRM-Agent showed fewer hallucinations, stronger adherence to schema requirements, and higher reproducibility across multiple generation attempts.
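The headline metric, parity with expert-authored Filters within two attempts, reduces to a simple computation over per-filter results. The sketch below uses made-up illustrative data, not TrendAI's evaluation harness or its actual records.

```python
# Sketch of the "matched or exceeded expert quality within k attempts" metric.
# The per-filter attempt records below are made-up illustrative data.

def parity_within_k(results, k=2):
    """Fraction of filters with at least one passing attempt among the first k.

    `results` maps filter name -> list of per-attempt booleans
    (True = generated Filter matched or exceeded the expert baseline).
    """
    passed = sum(1 for attempts in results.values() if any(attempts[:k]))
    return passed / len(results)

toy_results = {
    "filter_a": [True],          # passed on the first attempt
    "filter_b": [False, True],   # passed on the retry
    "filter_c": [False, False],  # never matched expert quality
    "filter_d": [True, True],
}
print(f"{parity_within_k(toy_results):.0%}")  # 3 of 4 pass within two attempts
```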
A Culture Built on Innovation and Mentorship
TrendAI has cultivated a distinctive approach to talent development that extends beyond conventional internships. The company’s Research Incubator, launched in 2022, was designed to address cybersecurity’s significant talent gap — particularly the challenge that even junior positions typically require knowledge spanning diverse technology domains.
“We believe the incubator’s approach will sharpen our interns’ research and analytical mindsets, which is the most important tool to master in tackling the ever-changing cybersecurity threat landscape,” explained Vincent Lee, Senior Manager of Vulnerability Research at TrendAI’s Zero Day Initiative.
The incubator is modelled on post-secondary coursework, easing students from classroom learning into working with real-world security vulnerabilities through realistic lab exercises and supplementary materials.
Collaboration and innovation stand as core values throughout TrendAI. The company emphasizes breaking down silos, treating everyone with respect and empathy, and pushing the limits of what’s possible to create new value. This culture extends to how the company integrates academic research into its workflow: not as peripheral experiments, but as projects that feed directly into production systems.
The partnership with the University of Toronto’s MScAC program marked a strategic expansion. After submitting AI-focused project proposals, TrendAI officially launched its AI Cyber Threat Research internship program, bringing together students’ rigorous academic training with the company’s practical product challenges.
For Wei, the experience proved transformative both technically and professionally. Building tool-using multi-agent systems that balance autonomy with guardrails required solving practical problems that extended beyond academic exercises. Structured outputs, schema validation, and targeted retrieval transformed plausible-sounding ideas into detections ready for production deployment.
Wei worked across research and product teams, translating high-level requirements into measurable experiments and turning results into practical UI and API improvements.
What This Means for Security Engineering
FRM-Agent establishes a template for deploying AI in mission-critical domains where errors carry serious consequences. The system demonstrates that production-grade AI assistance becomes achievable when guardrails, structured outputs, and human-in-the-loop collaboration are designed into the architecture from the start — not added as afterthoughts.
For TrendAI, the immediate value is operational: security engineers can now respond to emerging threats with dramatically reduced latency while maintaining quality standards. The modular architecture enables incremental scaling, allowing the team to extend coverage to Azure and Google Cloud platforms, incorporate new detection schemas, or apply similar approaches to other security domains, such as intrusion prevention system (IPS) rules, without redesigning the entire pipeline.
The broader cybersecurity industry faces similar challenges. As cloud attack surfaces grow more complex and new APIs emerge faster than human teams can track, automated detection authoring transitions from convenience to essential infrastructure. FRM-Agent’s success validates that structured-output approaches — transforming freeform LLM outputs into schema-compliant, production-ready artifacts — can bridge the gap between AI’s speed and the reliability requirements of security systems.
This work contributes to emerging best practices for trustworthy AI in compliance-sensitive contexts. The compiler-based guardrails, retrieval augmentation to overcome knowledge cutoffs, and visual collaboration interface offer patterns applicable beyond security: to infrastructure-as-code generation, policy authoring, or any domain requiring both automation and precision.
TrendAI’s commitment to academic partnerships continues. The success of this collaboration validates the model and opens opportunities for future MScAC students to explore advanced agent architectures, improved evaluation frameworks for AI-generated security detections, and tighter integration with TrendAI’s broader security platform.
As cloud adoption accelerates and threat actors develop increasingly sophisticated techniques, the race between attackers and defenders intensifies. FRM-Agent represents a meaningful step toward ensuring defenders can maintain pace: not by working harder, but by working smarter through carefully designed AI assistance that amplifies rather than replaces human expertise.
By the Numbers
- 286 expert-authored filters used for evaluation baseline
- 79 per cent of generated filters matched or exceeded expert quality within two attempts
- ~3 minutes average generation time per filter
- 80 per cent reduction in end-to-end authoring time (8 hours to ~100 minutes)
- 5 TrendAI security experts participated in beta trials
Contact: For media inquiries, please contact MScAC Partnerships at partners@mscac.utoronto.ca. For more information about TrendAI, visit www.trendmicro.com.