Age-Dependent EEG Patterns for Predicting Treatment Response in ADHD
In this project, we use EEG patterns to predict treatment responses for individuals with ADHD across different age groups. Project reports are available on the BHS website.
This project was conducted as part of Brainhack School 2025. It aimed to classify major depressive disorder (MDD) using temporal-domain EEG features (i.e., band power), applying both machine learning (SVM) and deep learning (EEGNet) models.
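As a rough illustration of the machine-learning half of this pipeline, the sketch below trains an SVM on per-subject band-power features with scikit-learn. The feature matrix, band/channel counts, and labels are simulated placeholders, not the project's actual data or preprocessing.

```python
# Hedged sketch: SVM classification from EEG band-power features (placeholder data).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_bands, n_channels = 60, 4, 19
X = rng.normal(size=(n_subjects, n_bands * n_channels))  # band power per channel (placeholder)
y = rng.integers(0, 2, size=n_subjects)                  # 1 = MDD, 0 = control (placeholder)

# Standardize features, then fit an RBF-kernel SVM with 5-fold cross-validation
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```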
This project explores how deviant auditory tones in a cross-modal oddball paradigm elicit a stronger P300 component, using EEG data from the MNE sample dataset. The analysis focuses on ERP comparison and difference waves, setting the stage for future investigations into the emotional modulation of the P300.
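A minimal MNE-Python sketch of the difference-wave step is shown below. It loads the MNE sample dataset, but the event codes treated as "standard" and "deviant", the epoch window, and the plotted latency are illustrative choices and not necessarily those used in the project.

```python
# Minimal sketch: deviant-minus-standard difference wave with MNE-Python.
import mne
from mne.datasets import sample

data_path = sample.data_path()
raw = mne.io.read_raw_fif(
    data_path / "MEG" / "sample" / "sample_audvis_raw.fif", preload=True
)
events = mne.find_events(raw, stim_channel="STI 014")

# Two auditory conditions are relabeled "standard"/"deviant" for illustration only
event_id = {"standard": 1, "deviant": 2}
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8,
                    baseline=(None, 0), picks="eeg", preload=True)

evoked_std = epochs["standard"].average()
evoked_dev = epochs["deviant"].average()

# Difference wave (deviant minus standard), where a P300-like positivity would appear
difference = mne.combine_evoked([evoked_dev, evoked_std], weights=[1, -1])
difference.plot_joint(times=[0.3])
```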
The human sense of smell plays a crucial role in emotional experience. Previous research has shown that EEG can distinguish between pleasant and unpleasant odors at an individual level (Kroupi et al., 2014), but whether these preferences are consistent across individuals remains an open question. OPPD dataset: www.epfl.ch/labs/mmspg/downloads/page-119131-en-html
This project investigates whether there are age-dependent EEG patterns in individuals with ADHD and whether these patterns can predict neurofeedback treatment response. Using the ADHD sample from the TDBrain database (n=204), we developed a random forest model to characterize age-related EEG biomarkers and assess treatment prediction across different age groups. Our model achieved an AUC of 0.865, identifying key EEG signatures, including theta-beta ratios and frontal low-frequency patterns, that vary with age and treatment response.
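The sketch below shows the general shape of such a random-forest analysis in scikit-learn; the feature matrix, responder labels, and cross-validation scheme are simulated placeholders rather than the TDBrain data or the project's exact pipeline.

```python
# Hedged sketch: random-forest prediction of treatment response from EEG features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_subjects = 204
# Placeholder features, e.g. theta-beta ratios and frontal low-frequency power per channel
X = rng.normal(size=(n_subjects, 30))
y = rng.integers(0, 2, size=n_subjects)   # 1 = responder, 0 = non-responder (placeholder)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
auc = cross_val_score(rf, X, y, cv=5, scoring="roc_auc")
print(f"Mean cross-validated AUC: {auc.mean():.3f}")

# Feature importances indicate which EEG markers drive the prediction
rf.fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:5]
print("Top feature indices:", top)
```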
In our study of three participants, removing the alpha band altered the temporal response functions (TRFs): some features were suppressed while others were enhanced. This simplification highlighted local signals, making the underlying brain activity easier to interpret. However, it is unclear whether the enhanced signals reflect true brain activity or noise, and further analysis is needed for validation.
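A hedged sketch of the band-removal idea is given below: it band-stop filters the alpha range (8 to 12 Hz) out of simulated EEG and fits a temporal response function with MNE-Python's ReceptiveField. The simulated stimulus and EEG signals, lag window, and ridge regularization are assumptions for illustration only.

```python
# Illustrative sketch: remove the alpha band, then estimate a TRF on simulated data.
import numpy as np
import mne
from mne.decoding import ReceptiveField

rng = np.random.default_rng(3)
sfreq = 128.0
n_times = int(sfreq * 60)                    # one minute of simulated data
stimulus = rng.normal(size=(n_times, 1))     # e.g., a speech envelope (placeholder)
eeg = rng.normal(size=(n_times, 1))          # one simulated EEG channel

# Band-stop filter the alpha band (l_freq > h_freq gives a stop band in MNE)
eeg_noalpha = mne.filter.filter_data(eeg.T, sfreq, l_freq=12.0, h_freq=8.0).T

# Ridge-regularized TRF over lags from -100 ms to 400 ms
trf = ReceptiveField(tmin=-0.1, tmax=0.4, sfreq=sfreq, estimator=1.0, scoring="corrcoef")
trf.fit(stimulus, eeg_noalpha)
print("TRF coefficients (outputs x features x lags):", trf.coef_.shape)
```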
Emotion perception is contextualized; however, how emotional context modulates word processing remains unclear. We regression-fitted raw EEG data to test for emotional valence effects. The results revealed a widespread effect of context valence, as well as a plausibility N400 waveform, closely replicating past ERP findings. Because we plan to conduct a follow-up experiment building on these findings, this project also includes the code for constructing the experimental stimuli.
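As an illustration of regression fitting on epoched EEG, the sketch below uses mne.stats.linear_regression with a simulated epochs object and a placeholder per-trial valence predictor; the channel layout, trial counts, and regressors do not reflect the actual study design.

```python
# Hedged sketch: single-subject regression of epoched EEG on a valence predictor.
import numpy as np
import mne
from mne.stats import linear_regression

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 100, 32, 200
sfreq = 200.0
info = mne.create_info([f"EEG{i:02d}" for i in range(n_channels)], sfreq, "eeg")
data = rng.normal(scale=1e-6, size=(n_trials, n_channels, n_times))  # simulated epochs
epochs = mne.EpochsArray(data, info, tmin=-0.2)

# Design matrix: intercept plus a per-trial context-valence predictor (placeholder values)
valence = rng.uniform(-1, 1, size=n_trials)
design = np.column_stack([np.ones(n_trials), valence])
results = linear_regression(epochs, design, names=["intercept", "valence"])

# Beta estimates for the valence regressor form an Evoked-like object (channels x time)
results["valence"].beta.plot()
```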
In this project, we aim to use machine learning on EEG data from participants’ language learning tasks on Duolingo. Specifically, we ask if EEG features can predict whether the participant has gotten a task right or wrong when they receive feedback. Using a k-nearest neighbours classifier, we achieve 98% accuracy in determining correct or incorrect answers based on EEG voltages from 8 electrodes.
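A minimal scikit-learn sketch of the k-nearest-neighbours step is shown below; the trial counts, channel and sample dimensions, and choice of k are illustrative assumptions, with random data standing in for the recorded EEG voltages.

```python
# Hedged sketch: k-NN classification of correct vs. incorrect trials from EEG voltages.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 400, 8, 128
X = rng.normal(size=(n_trials, n_channels * n_times))  # flattened voltages per trial
y = rng.integers(0, 2, size=n_trials)                  # 1 = correct, 0 = incorrect (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print(f"Test accuracy: {accuracy_score(y_test, knn.predict(X_test)):.2f}")
```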
Can we automatically detect changes in emotions given a user’s biosignals? In this project, we used multimodal biosignal data to predict the target emotion of audiovisual stimuli.
ADHD subtypes are a controversial aspect of the ADHD literature. Most subtype classifications are based on behavioral and cognitive data but lack biomarkers. Using a multimodal dataset comprising EEG data as well as self-reported symptoms and behavioral data, we tried to predict the DSM subtype of each of our 96 participants. Since ADHD has been noted to present differently across sexes, we also tried to predict sex. Resting-state EEG data and behavioral data proved to be poor predictors of the DSM subtypes. However, self-reported symptoms were a rich predictor of ADHD subtype. Additionally, predicting sex from EEG data yielded the highest decoding accuracies.
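The sketch below illustrates one way to compare feature sets (resting EEG, self-reported symptoms, behavioral measures) for subtype prediction; the logistic-regression classifier, feature dimensions, and simulated data are assumptions and do not reproduce the project's models or results.

```python
# Hedged sketch: comparing feature sets for multiclass ADHD-subtype prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 96
feature_sets = {
    "resting-state EEG": rng.normal(size=(n, 64)),
    "self-reported symptoms": rng.normal(size=(n, 18)),
    "behavioral measures": rng.normal(size=(n, 10)),
}
subtype = rng.integers(0, 3, size=n)   # placeholder DSM subtype labels (3 classes)

for name, X in feature_sets.items():
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    acc = cross_val_score(clf, X, subtype, cv=5).mean()
    print(f"{name}: mean cross-validated accuracy = {acc:.2f}")
```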