Artificial Intelligence in Physical Activity, Rehabilitation, and Wellbeing

This week’s AWRC Lunch and Learn was delivered by Dr. Mostapha Kalami Heris of the School of Engineering and Built Environment, and explored how artificial intelligence (AI) can support research and practical applications in physical activity, rehabilitation, and wellbeing. The session provided an applied overview of modern AI capabilities and discussed how these methods can address common challenges in health research. The aim was not only to introduce AI concepts but also to encourage researchers to think critically about when and how these technologies can be meaningfully integrated into wellbeing and rehabilitation research.
The talk began by highlighting several global trends that make AI increasingly important in healthcare and wellbeing sciences. One major factor is the rapid aging of populations worldwide, which increases demand for healthcare services and long-term rehabilitation. At the same time, healthcare systems are experiencing workforce shortages, with global estimates suggesting significant gaps in the availability of healthcare professionals. AI technologies offer a potential way to support clinicians and researchers by automating certain analytical tasks and enabling new forms of digital monitoring.
Another key driver for the use of AI is the rapid growth of health-related data. Wearable devices, smartphones, medical imaging systems, and digital health records continuously generate large amounts of information about human behavior and physiological activity. Each movement captured by a wearable sensor or each interaction with a digital health platform contributes new data points. AI methods provide tools to analyze this complex and high-volume data, allowing researchers to detect patterns that would otherwise remain hidden.
Dr. Kalami Heris described a typical pipeline for integrating AI into wellbeing research. The process begins with a clearly defined research problem and the collection of relevant data. Data may come from wearable devices, clinical measurements, or observational studies. Once data is collected, it often requires significant preprocessing to improve quality and structure before analysis. AI models can then be developed to identify patterns, generate predictions, or support decision-making. Finally, the results may be used to inform interventions, decision-support tools, or policy considerations in healthcare systems.
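The pipeline described above can be sketched end to end on synthetic data. Everything in this sketch is illustrative: the feature names, the two-class structure, and the nearest-centroid model are hypothetical stand-ins chosen for brevity, not the methods presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Problem + data: synthetic "wearable" features for two activity classes.
# Feature columns are hypothetical: mean acceleration, variance, step rate.
n = 200
X_active = rng.normal(1.0, 0.5, size=(n, 3))
X_rest = rng.normal(0.0, 0.5, size=(n, 3))
X = np.vstack([X_active, X_rest])
y = np.r_[np.ones(n), np.zeros(n)]

# Shuffle and split into train / test halves
idx = rng.permutation(len(X))
X, y = X[idx], y[idx]
split = len(X) // 2
X_tr, y_tr, X_te, y_te = X[:split], y[:split], X[split:], y[split:]

# 2. Preprocessing: standardise features using training statistics only
mu, sd = X_tr.mean(axis=0), X_tr.std(axis=0)
X_tr_s, X_te_s = (X_tr - mu) / sd, (X_te - mu) / sd

# 3. Model: a minimal nearest-centroid classifier learns the class pattern
c0 = X_tr_s[y_tr == 0].mean(axis=0)
c1 = X_tr_s[y_tr == 1].mean(axis=0)
pred = (np.linalg.norm(X_te_s - c1, axis=1)
        < np.linalg.norm(X_te_s - c0, axis=1)).astype(float)

# 4. Results that might feed an intervention or decision-support tool
accuracy = (pred == y_te).mean()
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the sketch is the shape of the workflow, not the model: preprocessing statistics are computed on the training split only, so the same transformation can be reapplied to new data at inference time.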
AI systems are particularly effective at identifying patterns in complex datasets and processing large volumes of information. They can support tasks such as activity recognition from wearable sensors, prediction of clinical outcomes, or detection of behavioral patterns over time. AI can also help automate repetitive workflows and support clinicians through decision-support systems. These capabilities allow researchers to move beyond manual analysis and extract richer insights from longitudinal health data.
Despite its capabilities, AI has important limitations that must be acknowledged. Current AI systems cannot replace human judgment in clinical settings, and they lack the reasoning, empathy, and contextual understanding required for many healthcare decisions. AI models may also produce incorrect outputs or hallucinations, particularly when applied outside the conditions for which they were trained. For these reasons, AI should be viewed as a support tool rather than a replacement for professional expertise.
The talk also addressed several common reasons why AI projects fail. One major challenge is population bias, where models trained on one population may not generalize well to other groups. Differences in age, geography, lifestyle, and health conditions can significantly affect model performance. Another issue is data drift, where social or behavioral patterns change over time, making previously trained models less reliable. These factors highlight the need for ongoing validation and monitoring of AI systems.
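Data drift can be demonstrated with a deliberately simple synthetic example: a threshold classifier is fitted once, then reapplied after the underlying behaviour has shifted. All feature values and the size of the shift are hypothetical; the example exists only to show why a model validated at one point in time needs ongoing monitoring.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n, shift=0.0):
    """Synthetic one-feature 'behavioural' data; `shift` mimics drift."""
    x0 = rng.normal(0.0 + shift, 1.0, n)   # class 0
    x1 = rng.normal(2.0 + shift, 1.0, n)   # class 1
    return np.r_[x0, x1], np.r_[np.zeros(n), np.ones(n)]

# Fit a fixed decision threshold on the original data
X_old, y_old = make_data(500)
threshold = (X_old[y_old == 0].mean() + X_old[y_old == 1].mean()) / 2

def accuracy(X, y):
    # The threshold is frozen: the model is never retrained
    return ((X > threshold) == y).mean()

# The same population later, after behaviour has shifted
X_new, y_new = make_data(500, shift=1.5)

acc_old, acc_new = accuracy(X_old, y_old), accuracy(X_new, y_new)
print(f"accuracy at training time: {acc_old:.2f}")
print(f"accuracy after drift:      {acc_new:.2f}")
```

Because the class boundary moves while the threshold stays fixed, measured accuracy degrades even though nothing about the model itself has changed; periodic revalidation is what catches this.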
Dr. Kalami Heris illustrated the importance of careful model validation through the concept of the “accuracy trap.” In some cases, AI systems achieve high statistical accuracy while learning irrelevant patterns in the data. For example, an image classification system might rely on contextual features, such as background details, rather than medically relevant signals. Similar issues have occurred in other domains where models detect environmental cues instead of the intended object or phenomenon. These examples demonstrate why explainable AI and careful evaluation are essential.
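The accuracy trap can be reproduced in miniature. In the synthetic sketch below, a weak "clinical" feature sits alongside a crisp "background" cue that tracks the label only in the curated training data; a naive model that simply picks the most separable feature latches onto the cue, looks excellent in training, and collapses at deployment. All feature names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_split(n, cue_tracks_label):
    """Column 0: true but noisy clinical signal.
    Column 1: contextual 'background' cue, crisp but spurious —
    it follows the label only in the training data."""
    y = rng.integers(0, 2, n)
    signal = y + rng.normal(0, 1.5, n)                     # weak real signal
    cue_src = y if cue_tracks_label else rng.integers(0, 2, n)
    cue = cue_src + rng.normal(0, 0.1, n)                  # crisp spurious cue
    return np.c_[signal, cue], y

X_tr, y_tr = make_split(2000, cue_tracks_label=True)
X_dep, y_dep = make_split(2000, cue_tracks_label=False)

# Naive model: choose the single most separable feature, threshold at midpoint.
# It predictably prefers the cleaner (spurious) cue over the noisy true signal.
sep = np.abs(X_tr[y_tr == 1].mean(0) - X_tr[y_tr == 0].mean(0)) / X_tr.std(0)
best = int(sep.argmax())
thr = (X_tr[y_tr == 1].mean(0)[best] + X_tr[y_tr == 0].mean(0)[best]) / 2

def acc(X, y):
    return ((X[:, best] > thr) == y).mean()

print(f"chosen feature: {best} (1 = background cue)")
print(f"training accuracy:   {acc(X_tr, y_tr):.2f}")   # near perfect
print(f"deployment accuracy: {acc(X_dep, y_dep):.2f}") # near chance
```

Inspecting which feature the model actually relies on, as explainability tools do for real models, is what exposes the trap before deployment.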
Several promising opportunities for AI in wellbeing research were highlighted. These include continuous health monitoring using wearable technologies, decision-support systems that assist clinicians and researchers, and personalized interventions tailored to individual patients. These applications enable more responsive and adaptive healthcare systems, where interventions can be adjusted based on real-time data and individual needs.
One emerging research direction discussed in the talk is the concept of personal foundation models. These models combine large-scale machine learning architectures with individual-level data streams collected from devices such as smartphones, wearable sensors, and environmental monitors. By integrating personal data with large pretrained models, researchers may be able to create highly personalized health monitoring systems that detect behavioral changes, predict risks, and provide individualized recommendations while maintaining strong privacy and governance frameworks.
The presentation also reviewed several major paradigms used in artificial intelligence. Supervised learning is commonly used for prediction tasks where labeled datasets are available, such as predicting health outcomes or classifying activities from sensor data. Unsupervised learning focuses on discovering hidden structures or behavioral clusters in large datasets. Reinforcement learning enables systems to learn adaptive strategies through interaction with an environment and feedback signals. In addition, generative AI and foundation models provide new capabilities such as reasoning, content generation, and flexible data analysis.
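The contrast between these paradigms can be made concrete with a tiny unsupervised example: a minimal k-means run that discovers behavioural clusters without ever seeing labels. The feature names and values below are hypothetical illustrations, not data from the talk.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily-behaviour features: [daily steps (thousands), sleep hours]
group_a = rng.normal([3.0, 6.0], 0.4, size=(100, 2))   # low-activity profile
group_b = rng.normal([10.0, 7.5], 0.4, size=(100, 2))  # high-activity profile
X = np.vstack([group_a, group_b])

# Minimal k-means: no labels are used, only the structure of the data.
# Initialise from two well-separated data points for determinism.
k = 2
centroids = X[[0, 100]]
for _ in range(20):
    dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    assign = dist.argmin(axis=1)
    centroids = np.array([X[assign == j].mean(axis=0) for j in range(k)])

print("discovered cluster centres (steps, sleep):")
print(np.round(centroids[centroids[:, 0].argsort()], 1))
```

The algorithm recovers the two behavioural profiles purely from the geometry of the data, which is exactly the kind of hidden structure unsupervised learning is used to surface; supervised learning would instead need each day labelled in advance.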
The talk concluded with practical guidance for researchers interested in applying AI methods. Successful integration of AI requires high-quality data, clear research questions, and careful consideration of ethical and governance issues. Researchers should evaluate the risks associated with automated decisions and ensure that human oversight remains part of the system. By matching research problems with appropriate AI methods and maintaining transparency and validation, AI can become a powerful tool for advancing wellbeing and rehabilitation research.