Designing AI for the Real World: My Research Vision

How can we design AI models that are not only accurate, but also robust, explainable, and respectful of data privacy? That is the central question driving my research agenda — rooted in the fundamentals of machine learning, tensor algebra, and programming languages, yet deeply connected to real-world challenges in healthcare, neuroscience, and population health.

From Foundations to Impact

My work is fundamentally methodological. I develop new models and data representations that can handle the complexity and heterogeneity of real-world data. While the applications I work on may shift over time — from neurology to public health — my core focus remains stable: building mathematically grounded AI systems that are transparent, modular, and capable of learning from distributed, sensitive datasets.

Federated Learning & Tensor Representations

One of my key research lines focuses on federated learning and tensor modeling. I developed the Block-Term Tensor Regression (BTTR) model and its federated extension FBTTR, designed to analyze sensitive, decentralized data such as health records. These models come with convergence guarantees, keep raw data at its source, and retain an interpretable factor structure — and they have been successfully applied to problems such as cardiac risk prediction and multiple sclerosis progression.
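The block-term idea behind BTTR can be illustrated in a few lines: the regression weight tensor is constrained to a sum of low multilinear-rank block terms, and a prediction is the inner product of a sample with that structured weight. The sketch below is a deliberately simplified toy (random factors, invented dimensions), not the published model or its fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative): N samples, each an I x J matrix; the
# weight tensor is a sum of R block terms with small multilinear ranks.
N, I, J = 100, 8, 6
R, p, q = 2, 3, 2

# Block-term structure: W = sum_r A_r G_r B_r^T
# with A_r: I x p, G_r: p x q, B_r: J x q
A = rng.standard_normal((R, I, p))
G = rng.standard_normal((R, p, q))
B = rng.standard_normal((R, J, q))
W = np.einsum("rip,rpq,rjq->ij", A, G, B)

# Prediction is the inner product of each sample with the structured weight
X = rng.standard_normal((N, I, J))
y_hat = np.einsum("nij,ij->n", X, W)
```

Because each block term has rank at most min(p, q), the effective weight matrix is low-rank by construction, which is where both the parameter savings and the interpretability come from.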

In my role as scientific coordinator of the “Real-World Evidence” use case within the Flanders AI Research Program, I lead a multidisciplinary consortium involving researchers from KU Leuven, UHasselt, UGent, and UAntwerpen. Together, we develop methods for scaling up real-world datasets (e.g., EHRs, ECGs, MRI) and extracting personalized insights into disease progression — without compromising data privacy.

Programming Languages & Formal Methods

Another long-standing line of inquiry in my work is programming language theory, particularly algebraic type and effect systems. My goal here is to explore how formal type systems can support correctness, transparency, and optimization in AI workflows. I have collaborated with researchers such as Tom Schrijvers and Matija Pretnar on algebraic subtyping for typed effect systems, and I currently supervise master's theses on compiler design and bytecode optimization.
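To give a flavor of what a type-and-effect system tracks, the toy below infers the set of effects an expression may perform in a minimal expression language. It is a deliberately simplified illustration of the core idea (effect inference plus subeffecting as set inclusion), not the algebraic subtyping systems referenced above.

```python
from dataclasses import dataclass

# Toy expression language; each constructor notes the effects it introduces.
@dataclass
class Lit:
    value: int

@dataclass
class Read:        # reads input -> effect {'read'}
    pass

@dataclass
class Print:       # prints its argument -> effect {'print'}
    expr: object

@dataclass
class Seq:         # sequencing unions the effects of both parts
    first: object
    second: object

def effects(e) -> set:
    """Infer the set of effects an expression may perform."""
    if isinstance(e, Lit):
        return set()
    if isinstance(e, Read):
        return {"read"}
    if isinstance(e, Print):
        return {"print"} | effects(e.expr)
    if isinstance(e, Seq):
        return effects(e.first) | effects(e.second)
    raise TypeError(f"unknown expression: {e!r}")

prog = Seq(Print(Lit(1)), Read())
prog_effects = effects(prog)
```

Subeffecting then amounts to set inclusion: a computation inferred to have effects E is usable anywhere a superset of E is permitted, which is the intuition behind subtyping on effect rows.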

This line of work is more than theoretical: it helps lay the groundwork for AI systems that are not only effective but also verifiable and trustworthy.

Population Health & Policy-Relevant AI

A newer but rapidly growing part of my agenda focuses on real-world evidence (RWE) and population health management (PHM). Here, I study how to model administrative health data using causal inference and federated AI, aiming to bridge the gap between technical model development and actionable policy insights.

Our current work explores three critical PHM challenges:

  • Estimating disease prevalence using medication-based pseudo-pathologies;
  • Modeling intervention outcomes and health trajectories across populations;
  • Investigating inequality through personalized risk analysis that integrates socioeconomic data.
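The first of these challenges can be made concrete with a toy simulation: a medication-based pseudo-pathology flags a person as a case when their prescription-fill pattern crosses a threshold, and the population prevalence is estimated from those flags. Every number below (the 6% true prevalence, the fill probabilities, the "at least 2 of 4 quarters" rule) is invented for illustration; real rules come from validated case-finding algorithms.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical pseudo-pathology rule: flag a person as a (pseudo-)diabetes
# case if they filled antidiabetic prescriptions in >= 2 of 4 quarters.
n_people, n_quarters = 10_000, 4
true_case = rng.random(n_people) < 0.06               # simulated 6% true prevalence
p_fill = np.where(true_case, 0.90, 0.01)              # per-quarter fill probability
fills = rng.random((n_people, n_quarters)) < p_fill[:, None]

pseudo_case = fills.sum(axis=1) >= 2                  # the medication-based proxy
estimated_prevalence = pseudo_case.mean()
```

Even in this toy setting, the gap between `estimated_prevalence` and the simulated true rate shows why sensitivity and specificity of the proxy rule matter before any downstream policy use.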

From a technical perspective, we’re extending BTTR to handle vertically partitioned and aggregated data, and building a sandbox environment for controlled federated learning experiments. These efforts aim to produce domain-agnostic recommendations for responsible AI in health systems.
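A controlled federated experiment of the kind such a sandbox targets can be simulated in a few lines. The loop below is a generic FedAvg-style round for a linear model: each simulated client computes an update on its private data, and only the updates cross the client boundary. This is an illustration of the experimental setup under invented data, not the toolkit's actual API.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth coefficients for the simulation
w_true = np.array([1.0, -2.0, 0.5, 3.0])

def make_client(n):
    """One simulated client holding a private regression dataset."""
    X = rng.standard_normal((n, 4))
    y = X @ w_true + 0.01 * rng.standard_normal(n)
    return X, y

clients = [make_client(n) for n in (50, 80, 120)]
w, lr = np.zeros(4), 0.1

for _ in range(200):
    local_models, sizes = [], []
    for X, y in clients:
        grad = X.T @ (X @ w - y) / len(y)   # computed locally; raw data never leaves
        local_models.append(w - lr * grad)
        sizes.append(len(y))
    # Server aggregates by dataset-size-weighted averaging of client models
    w = np.average(local_models, axis=0, weights=sizes)
```

With one local gradient step per round, this is mathematically equivalent to gradient descent on the pooled, size-weighted loss, which is why the aggregated model recovers the ground-truth coefficients without any client sharing its data.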

Together with colleagues from KU Leuven and UHasselt, I am co-promoter on a new FWO Senior Research Project focused on federated learning in healthcare, and we are preparing a larger FWO Strategic Basic Research (SBO) proposal centered on federated learning for population health (FL4PHM).

Positioned for Collaboration

After my PhD, I chose to join the Biomedical Data Sciences group at UHasselt — not solely for its biomedical applications, but for its unique combination of methodological depth and societal relevance. This environment allows me to test abstract models (tensor factorizations, type systems, causal models) against diverse datasets and in highly interdisciplinary settings.

As part of this group, I help develop an open-source Federated Learning Toolkit that enables GDPR-compliant AI for healthcare. My contribution sits at the intersection of algorithmic innovation and ethical, privacy-aware data science.

Complementary Expertise & Next Steps

My expertise complements key domains within UHasselt’s Data Science Institute (DSI), aligning with core themes such as federated learning, causal AI, and database technology. I bring experience in project design, scientific coordination, and societal valorization — from foundational work to system-level applications.

Currently, I’m entering an intensive publication phase. Several papers are in preparation, focused on federated tensor models and effect-based compiler design. If successful, these will strengthen my academic profile and lay the groundwork for a future ERC grant.

From Brain Interfaces to Broader Models

My academic journey has always balanced fundamental modeling with real-world application. From early research in HCI and networking (which earned third place at the ACM CHI Student Design Competition), through an award-winning master's thesis on type systems, to a PhD on brain decoding using tensor models — each step reflects a consistent drive to develop generalizable, mathematically sound AI techniques.

Even outside academia, this ambition has shaped my trajectory: I applied to become an astronaut during ESA’s 2021 call — an experience that reaffirmed my desire to pursue interdisciplinary, impact-driven research here on Earth.

As a postdoctoral researcher at UHasselt, I continue to build bridges between data science, health systems, and fundamental theory. I believe this kind of work — technically deep, societally grounded, and rigorously interdisciplinary — is what modern AI needs.