MScAC Talks
Stay tuned for the next season of MScAC Talks!
The seminar series that brings the academic community and industry together to talk about impactful applied research. The University of Toronto’s MScAC program is located in one of the fastest-growing tech hubs in the world.
MScAC Talks is a yearly speaker series that bridges the academic and professional worlds, highlighting impactful applied research from September to April. The series invites industry and academic leaders to share work that inspires both the university and broader professional community, sparking discussions that deepen appreciation for research and its real-world impact.
By bringing together students, alumni and industry professionals, MScAC Talks fosters meaningful connections and knowledge exchange at the intersection of research and application.
Upcoming Talks
Nathan Killoran
- Tuesday, January 6, 2026
- 11:00 a.m.
- Hybrid
Full-stack photonic quantum computing at Xanadu
Quantum computers are a rapidly developing technology that shows great promise for tackling important scientific problems. Nathan Killoran will share how Xanadu is engineering its platform end-to-end, and what it means to scale a quantum company in Canada at a moment when global competition is accelerating. He’ll explore how Xanadu’s photonic hardware and rich software ecosystem have become essential infrastructure for quantum developers worldwide.
Bio
Nathan leads the Software division at Xanadu, a Toronto company building a full-stack quantum computing ecosystem. His team has built PennyLane, one of the industry’s leading quantum programming SDKs, as well as Catalyst, a hybrid compiler for quantum-classical programs, and is now focused on building a utility-scale software stack for fault-tolerant quantum computing. Nathan received a PhD in Physics from the University of Waterloo, a master’s degree in mathematics from the University of Toronto, and a B.Sc. in Theoretical Physics from the University of Guelph.
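For a sense of what programming with PennyLane looks like, here is a minimal circuit. This is an illustrative sketch based on PennyLane’s public API, not an example from the talk.

```python
# Minimal PennyLane sketch (illustrative; not material from the talk).
# Requires: pip install pennylane
import pennylane as qml

# A simulated two-qubit device; hardware backends plug in the same way.
dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def bell_state():
    qml.Hadamard(wires=0)   # put qubit 0 into superposition
    qml.CNOT(wires=[0, 1])  # entangle qubits 0 and 1
    return qml.probs(wires=[0, 1])

print(bell_state())  # ~[0.5, 0, 0, 0.5]: Bell-state measurement probabilities
```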
Up Next
- Tony Savor (Index Exchange) – Jan. 13, 2026, 11:00 a.m.
- Farhan Thawar, Andrew McNamara (Shopify) – Feb. 3, 2026, 11:00 a.m.
- Alán Aspuru-Guzik (University of Toronto) – Mar. 31, 2026, 11:00 a.m.
Past Talks
Griffin Lacey
- Tuesday, November 25, 2025
- 11:00 a.m.
- Online
NVIDIA Platform for Accelerated Computing
Graphics Processing Units (GPUs) have become foundational to today’s generative AI era. This talk will discuss the role of hardware accelerators like GPUs in modern AI/ML workloads, the importance of the broader software stack and end-to-end AI platforms, and how AI/ML infrastructure scales from embedded systems up to the supercomputers used to train LLMs with trillions of parameters.
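As a small taste of what that software stack does for developers, the sketch below (an illustrative example, not material from the talk) shows how a framework such as PyTorch dispatches the same computation to a GPU when one is available and falls back to the CPU otherwise.

```python
# Illustrative sketch: offloading a matrix multiply to a GPU with PyTorch.
# (Our example, not from the talk; assumes PyTorch is installed.)
import torch

# Fall back to the CPU when no CUDA-capable GPU is present.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b  # runs as a single accelerated kernel when a GPU is available
print(c.device)
```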
Griffin Lacey leads the Solutions Architect team for NVIDIA in Canada. His team supports customers leveraging the NVIDIA platform to solve large-scale problems across AI and HPC. Prior to NVIDIA, Griffin was a machine learning researcher at Google and worked as an engineer in the embedded computing space at Connect Tech. He holds Bachelor and Master of Engineering degrees from the University of Guelph, where his research focused on optimizing deep learning on different accelerated computing platforms.
Ishtiaque Ahmed
- Thursday, November 20, 2025
- 2:00 p.m.
- Online
AI Ethics Bottom-Up: Formation of Ground Truth and Global AI Politics
Artificial intelligence systems rely on massive datasets to learn what is “true,” but the process of creating that truth is far from neutral. This talk reveals the hidden infrastructure of data annotation that underpins every supervised model: millions of human annotators, often working under precarious contracts across the Global South, who interpret ambiguous data into the labels that machines learn from. Each annotation encodes social, cultural, and linguistic assumptions that ultimately shape a model’s behaviour, bias, and performance. By examining how annotators’ decisions and working conditions determine what counts as accurate or ethical data, I argue that AI ethics must move beyond abstract principles toward an understanding of the material and geopolitical systems that produce ground truth. From labelling hate speech to detecting misinformation, these processes expose how technical pipelines reproduce global hierarchies of labour, language, and legitimacy. Reframing annotation as a core computational process allows us to build AI that is not only robust and reliable but also socially accountable and globally aware.
Syed Ishtiaque Ahmed is an Associate Professor of Computer Science at the University of Toronto and the founding director of the ‘Third Space’ research group. His research interests lie at the intersection of Human-Computer Interaction (HCI) and Artificial Intelligence (AI). Ahmed received his PhD and Master’s from Cornell University in the USA, and his Bachelor’s and Master’s from BUET in Bangladesh. Over the last 15 years, he has studied and developed successful computing technologies with various marginalized communities in Bangladesh, India, Canada, the USA, Pakistan, Iraq, Turkey, and Ecuador.
He has published over 100 peer-reviewed research articles and received multiple best paper awards in top computer science venues, including CHI, CSCW, ICTD, and FAccT. Ahmed has received numerous honours and accolades, including the International Fulbright Science and Technology Fellowship, the Intel Science and Technology Fellowship, the Fulbright Centennial Fellowship, the Schwartz Reisman Fellowship, the Massey Fellowship, the Connaught Scholarship, the Microsoft AI & Society Fellowship, the Google Inclusion Research Award, and the Facebook Faculty Research Award. His research has also received generous funding support from all three Canadian tri-council research agencies (NSERC, CIHR, SSHRC), the USA’s NSF and NIH, and the Bangladesh government’s ICT Ministry. Ahmed was named a “Future Leader” by the Computing Research Association in 2024.
Yonatan Kahn
- Tuesday, November 11, 2025
- 11:00 a.m.
- Hybrid
Machine Learning from the Perspective of Physics
Neural networks are the backbone of artificial intelligence and machine learning systems. Despite the immense success of neural networks on a variety of real-world problems, the theory of deep (multi-layer) neural networks is still in its infancy. There are many tantalizing analogies between neural networks and situations we encounter in all branches of physics: the interactions of many entities that give rise to simple collective behaviour are strongly reminiscent of statistical mechanics and condensed matter physics, and the data structures encountered in physics may provide tractable models for how neural networks learn from complex real-world data. This talk will explore the perspective that physics may bring towards understanding neural network architectures and algorithms.
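To give a flavour of the statistical-mechanics analogy, here is a minimal sketch of a standard result (our own illustration, not code from the talk): over many random draws of the weights, the output of a wide one-hidden-layer network is approximately Gaussian, a central-limit-theorem effect that underlies much physics-style analysis of neural networks.

```python
# Illustrative sketch (not from the talk): the output of a wide, randomly
# initialized one-hidden-layer network is approximately Gaussian over
# random draws of the weights, a central-limit-theorem effect.
import numpy as np

rng = np.random.default_rng(0)
width, n_draws = 2_000, 2_000
x = np.ones(10)  # a fixed 10-dimensional input

outputs = np.empty(n_draws)
for i in range(n_draws):
    W1 = rng.normal(0, 1 / np.sqrt(x.size), size=(width, x.size))  # input layer
    w2 = rng.normal(0, 1 / np.sqrt(width), size=width)             # readout layer
    outputs[i] = w2 @ np.tanh(W1 @ x)  # scalar network output

# Mean near 0; a histogram of `outputs` is close to a Gaussian bell curve.
print(outputs.mean(), outputs.std())
```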
Yonatan Kahn is an assistant professor in the Department of Physics. He is a theoretical physicist whose research focuses on dark matter and its detection strategies, as well as the theory of machine learning from a high-energy physics perspective. Prior to joining the Faculty of Arts & Science, Kahn was an assistant professor at the University of Illinois Urbana-Champaign and, before that, held postdoctoral positions at the Kavli Institute for Cosmological Physics (KICP) at the University of Chicago and at Princeton University.
Professor Kahn received his PhD in 2015 from the Massachusetts Institute of Technology. He holds degrees in music, physics, and mathematics from Northwestern University (BA, BMus 2009) and completed Part III of the Mathematical Tripos with Distinction at the University of Cambridge in 2010, supported by a Churchill Scholarship. In 2016, he received the American Physical Society’s J.J. and Noriko Sakurai Dissertation Award in Theoretical Particle Physics, and in 2022, he was selected as a Kavli Frontiers of Science Fellow by the National Academy of Sciences in the US.
Brandon Rufino
- Tuesday, October 21, 2025
- 11:00 a.m.
- Hybrid
AI at Scale: Transforming Drug Discovery and Development at Sanofi
In an industry where the journey from discovery to approval often takes 10–15 years and costs exceed $2.6 billion per successful medicine, Sanofi’s Digital R&D organization is reimagining what’s possible with AI. Our AI Centre of Excellence (AI CoE), located here in Toronto, is creating competitive advantages across the entire value chain — shortening timelines, lowering costs, and increasing the probability of success in ways that fundamentally shift the economics of pharmaceutical R&D.
In research, foundation models and graph knowledge bases are helping us uncover targets, optimize their profiles, and even resolve highly heterogeneous diseases into their correct subtypes, opening new avenues for therapeutic innovation. In development, we are building multimodal AI models that integrate imaging and digital biomarkers, reshaping how we measure patient outcomes. We are also building and fine-tuning clinical language models trained on health records and clinical trial data to improve our ability to predict safety and efficacy.
This end-to-end AI ecosystem is not only delivering operational efficiency, but also enabling deeper biological understanding and more personalized approaches to treatment, bringing us closer to the miracles of science for patients worldwide. In this talk, we will look at practical examples of how Sanofi’s AI CoE is achieving this, and how early-career talent from the University of Toronto is playing a crucial role in helping us chase the miracles of science.
Brandon Rufino leads Sanofi’s AI efforts in clinical development as Director of AI for Clinical Trial Design & Optimization. Based in Toronto, he is part of Sanofi’s global Digital R&D AI Centre of Excellence, where he has built multidisciplinary teams and scalable platforms that integrate real-world data, knowledge graphs, and advanced machine learning approaches to transform the way medicines are developed. Brandon has spearheaded initiatives ranging from AI-driven patient-finding and indication prioritization to trial optimization and burden reduction. His work bridges academic collaborations, cutting-edge research, and industrial application — helping Sanofi harness the power of AI across discovery and development to accelerate breakthrough medicines for patients worldwide.
Igor Gilitschenski
- Thursday, October 16, 2025
- 4:00 p.m.
- Online
Do Androids Dream of Electric Sheep? A Generative Paradigm for Dataset Design
Traditional approaches to autonomy and AI robotics typically focus either on large-scale data collection or on improving simulation. Although most practitioners rely on both approaches, they are still largely applied in separate workflows and viewed as conceptually unrelated. In this talk, I will argue that this is a false dichotomy. Recent advances in generative models have enabled the unification of these seemingly disparate methodologies. Using real-world data to build data-generation systems has led to numerous advances with significant impact in robotics and autonomy, going beyond pure distillation approaches. Unifying creation and curation enables sophisticated automatic labeling pipelines and data-driven simulators. I will present some of our work following this paradigm and outline several basic research challenges and limitations associated with building systems that learn with generated data.
Igor Gilitschenski is an Assistant Professor of Computer Science at the University of Toronto, where he leads the Toronto Intelligent Systems Lab. Previously, he was a (visiting) Research Scientist at the Toyota Research Institute. Dr. Gilitschenski was also a Research Scientist at MIT’s Computer Science and Artificial Intelligence Laboratory and the Distributed Robotics Lab (DRL), where he was the technical lead of DRL’s autonomous driving research team. He joined MIT from the Autonomous Systems Lab at ETH Zurich, where he worked on robotic perception, particularly localization and mapping. He obtained his doctorate in Computer Science from the Karlsruhe Institute of Technology and a Diploma in Mathematics from the University of Stuttgart. His research interests involve developing novel robotic perception and decision-making methods for challenging dynamic environments. His work has received multiple awards, including best paper awards from the American Control Conference, the International Conference on Information Fusion, and Robotics and Automation Letters.
Meredith Franklin
- Thursday, September 25, 2025
- 4:00 p.m.
- Hybrid
Translating Applied Computing Research into Environmental and Public Health Impact
AI and machine learning are playing an increasingly critical role in advancing research at the intersection of environmental science and public health. With the growing availability of high-resolution, high-dimensional environmental data, from satellite remote sensing and atmospheric models to low-cost sensor networks and wearable health monitors, there is a unique opportunity to contribute to the public good by developing tools that transform complex data and computing into actionable insights.
This talk will highlight real-world applications of AI to environmental exposure assessment and health impact modeling, with a focus on challenges such as climate change, air pollution, wildfire smoke, and emissions from oil and gas operations. Using examples from ongoing interdisciplinary research, we will explore how spatiotemporal machine learning models, deep learning for image analysis, and clustering tools are being used to detect pollution hotspots, assess health risks, and guide both regulatory and public health interventions.
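As a toy illustration of the hotspot-detection idea, density-based clustering can separate a dense cluster of sensor readings from scattered background points. The example below is our own sketch on synthetic coordinates, not code from the research described.

```python
# Toy sketch: density-based clustering flags spatial "hotspots"
# (our illustration on synthetic data; requires scikit-learn).
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)
# Synthetic sensor coordinates: scattered background plus one dense hotspot.
background = rng.uniform(0, 10, size=(200, 2))
hotspot = rng.normal(loc=[5, 5], scale=0.2, size=(40, 2))
points = np.vstack([background, hotspot])

# Points with >= 10 neighbours within radius 0.5 form a cluster; label -1 = noise.
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(points)
print("hotspot clusters found:", labels.max() + 1)
```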
Equally important is the translation of these research findings beyond the academic setting, to communities that are historically overburdened by environmental hazards; to agencies responsible for environmental health protection; and to industry sectors where transparent, data-driven accountability is essential.
Meredith Franklin is an Associate Professor in the Department of Statistical Sciences and holds a joint appointment with the School of the Environment. Before coming to the University of Toronto, she was a faculty member at the University of Southern California in Los Angeles (2010–2021).
Trained in mathematics, statistics, and environmental health, her interdisciplinary research focuses on quantifying environmental exposures through spatiotemporal statistical and machine learning approaches to assess their impacts on health outcomes. She has been a leader in developing methods to use remote sensing data for exposure assessment for a variety of environmental factors including air pollution, wildfires, flaring from oil and gas, artificial light at night, and greenspace. She has also conducted several highly cited population-based epidemiological studies of the association between air pollution and health. She received a B.Sc. in Mathematics from McGill University and a Ph.D. from the Harvard T.H. Chan School of Public Health.