Mathis Group

We work at the intersection of computational neuroscience and machine learning

We are interested in understanding behavior in computational terms and in reverse-engineering the algorithms of the brain.

Research Directions

We develop machine learning tools for behavioral and neural data analysis, and conversely try to learn from the brain to solve challenging machine learning problems.

Machine Learning for Behavior Analysis

We strive to develop tools for the analysis of animal behavior. Behavior is a complex reflection of an animal's goals, state and character. Thus, accurately measuring behavior is crucial for advancing basic neuroscience, as well as the study of various neural and psychiatric disorders. However, measuring behavior (from video) is also a challenging computer vision and machine learning problem.

Learn more
DeepLabCut
DLC2action
AmadeusGPT

Brain-inspired Motor Skill Learning

Watching any expert athlete makes it apparent that brains have mastered the elegant control of our bodies. This is an astonishing feat, especially considering the inherent challenges of slow hardware and the sensory and motor latencies that impede control. Understanding how the brain achieves skilled behavior is one of the core questions of neuroscience, which we tackle through modeling with reinforcement learning and control theory.

Learn more
DMAP
Lattice
MyoChallenge 2022

Task-driven Models of Proprioception

We develop normative theories and models of sensorimotor transformations and learning. Work in the past decade has demonstrated that networks trained on object-recognition tasks provide excellent models of the visual system. Yet, for sensorimotor circuits this fruitful approach remains less explored, perhaps due to the lack of large-scale datasets akin to ImageNet.

Learn more
proprioceptive illusion

Featured Publications

Recent advances from our lab

View All
2025

Arnold: a generalist muscle transformer policy

AS Chiappa, B An, M Simos, C Li, A Mathis

sensorimotor control
robotics
reinforcement learning

arXiv preprint

2025

LLaVAction: evaluating and training multi-modal large language models for action recognition

S Ye*, H Qi*, A Mathis**, MW Mathis**

action recognition
multimodal learning
LLMs

arXiv preprint

2025

Reinforcement learning-based motion imitation for physiologically plausible musculoskeletal motor control

M Simos, AS Chiappa, A Mathis

reinforcement learning
motor control
biomechanics

arXiv preprint

2025

DLC2Action: A Deep Learning-based Toolbox for Automated Behavior Segmentation

E Kozlova, A Bonnetto, A Mathis

behavior analysis
deep learning
neuroscience

bioRxiv

2025

EPFL-Smart-Kitchen-30: Densely annotated cooking dataset with 3D kinematics to challenge video and language models

A Bonnetto*, H Qi*, F Leong, M Tashkovska, M Rad, S Shokur, F Hummel, S Micera, M Pollefeys, A Mathis

datasets
3D kinematics
multimodal learning

NeurIPS (in press)

2024

MammAlps: A multi-view video behavior monitoring dataset of wild mammals in the Swiss Alps

V Gabeff, H Qi, B Flaherty, G Sumbül, A Mathis*, D Tuia*

wildlife conservation
datasets
computer vision

CVPR (highlight)

Open Science & Community Impact

We are passionate about open-source code and making our tools broadly accessible to the scientific community.

40+ Publications
20+ Open-sourced tools
NeurIPS Winners

Join Our Team

We are actively looking for undergraduate, master's, and PhD students with interests in behavioral analysis and modeling sensorimotor learning. We also regularly recruit postdoctoral fellows.

Supported By

We are grateful to our funders, who keep the magic alive.

Simons Collaboration on Ecological Neuroscience
Swiss National Science Foundation
Boehringer Ingelheim Fonds
EPFL Center for Imaging
Microsoft
Chan Zuckerberg Initiative
Kavli Foundation