Bridging AI Innovation and Human Insight

The Cooperative Human-AI Teaming (CHAT) Lab, directed by Dr. Abdullah Khan, conducts multidisciplinary research at the intersection of artificial intelligence (AI), machine learning (ML), and human-centered computing. Our mission is to advance the science and engineering of human-AI collaboration by developing computational frameworks that enable intelligent systems to support, complement, and enhance human decision-making and performance across diverse domains.

Our work integrates natural language processing (NLP), wearable sensing, and advanced learning architectures to build systems that adapt intelligently to human needs and contexts.

By analyzing multimodal human behavioral data and uncovering latent cognitive and interactional patterns, we aim to design explainable, trustworthy, and adaptive AI systems that foster seamless cooperation between humans and machines.

Get involved! Students, collaborators, and community partners are encouraged to reach out to Dr. Abdullah Khan to discuss openings and projects.

Dr. Abdullah Khan
470-578-4286
mkhan74@kennesaw.edu 

Projects

We have the following research themes and projects:

  • Human-AI Teaming

    This project explores bidirectional human-AI collaboration, where humans and AI systems learn from each other to achieve shared goals with minimal human intervention. We design co-adaptive learning frameworks that enable mutual understanding, efficient task execution, and continuous performance improvement.

    Our models combine NLP, reinforcement learning (RL), human-in-the-loop modeling, and interactive learning paradigms to enable reciprocal knowledge exchange. By reducing cognitive workload and enhancing mutual situational awareness, this work seeks to create AI teammates that evolve alongside human partners, supporting dynamic, transparent, and high-trust collaboration.

  • Collaborative Explainable AI

    We advance the emerging paradigm of collaborative explainable AI, where explanation is treated as an interactive, human-AI co-construction process rather than a one-way output. Our work develops language-driven, conversational explanation mechanisms that allow humans and AI systems to jointly refine interpretive models of reasoning.

    Drawing from cognitive psychology, linguistics, and human-computer interaction, we aim to enhance interpretability, accountability, and shared situational understanding. This research supports trustworthy AI deployment in high-stakes domains where interpretability and collaboration are essential.

  • Knowledge-Guided Machine Learning

    This project explores knowledge-guided machine learning to bridge the gap between symbolic reasoning and deep learning. We integrate domain knowledge, semantic ontologies, and knowledge graphs into data-driven models to improve generalization, robustness, and explainability under limited or noisy data.

    Our approaches constrain and guide model learning using linguistic and semantic structures, allowing AI systems to reason more effectively, transfer insights across tasks, and incorporate human expertise directly into the learning process; a minimal illustration of this idea appears after the project list.

  • Behavioral Health Analysis from 911 Narratives

    This project applies advanced NLP and deep learning methods to analyze free-text narratives from emergency (911) police reports. By detecting linguistic markers of distress, crisis escalation, and mental health risk factors, we aim to support public health surveillance, crisis intervention, and first-responder and co-responder training.

    Our work develops ethically responsible NLP pipelines that identify behavioral and emotional indicators in police narratives, power interactive dashboards for co-responders, and support non-criminalizing responses to mental-health crises by suggesting follow-up pathways to care. This research bridges NLP and behavioral health, informing data-driven strategies for mental-health crisis response and policy innovation; an illustrative modeling sketch follows the project list.

  • Trustworthy, Privacy-Preserving, and Secure AI

    Our research focuses on developing trustworthy, privacy-preserving, and secure AI systems that uphold integrity, transparency, and ethical responsibility. We design algorithms for federated and differentially private learning, strengthen robustness against adversarial and prompt-based attacks, and advance explainable and accountable AI frameworks for large language models (LLMs) and multimodal systems.

    We aim to ensure that AI systems in sensitive domains remain secure, reliable, and aligned with human values, enabling safe and transparent human-AI collaboration; a differential-privacy sketch illustrating this theme appears after the project list.
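
To make the knowledge-guided learning theme concrete, the fragment below sketches one simple way domain knowledge can constrain a data-driven model: a label-implication rule (for example, an ontology stating that one behavioral label entails another) is added as a penalty on top of a standard multi-label loss. The rule format, weighting, and PyTorch formulation are illustrative assumptions, not the lab's published models.

    # Illustrative sketch: folding a simple ontology rule ("label a implies
    # label b") into a multi-label training loss. Rules and weighting are
    # assumptions for exposition, not a released formulation.
    import torch
    import torch.nn.functional as F

    def knowledge_guided_loss(logits, targets, implication_rules, rule_weight=0.1):
        """logits, targets: tensors of shape (batch, num_labels).
        implication_rules: (a, b) label-index pairs meaning 'label a entails label b'."""
        data_loss = F.binary_cross_entropy_with_logits(logits, targets)
        probs = torch.sigmoid(logits)
        # A rule is violated when p(a) is high but p(b) is low.
        rule_loss = sum(F.relu(probs[:, a] - probs[:, b]).mean()
                        for a, b in implication_rules)
        return data_loss + rule_weight * rule_loss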
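
For the 911-narrative theme, the following is a minimal sketch of a transformer-based multi-label tagger in the spirit of the transformer-driven classification framework listed under Publications. The base checkpoint, label names, and decision threshold are placeholders; in practice the model would first be fine-tuned on annotated narratives.

    # Minimal sketch of multi-label tagging over a free-text narrative with a
    # Hugging Face transformer. Checkpoint and label names are placeholders;
    # the model must be fine-tuned on annotated narratives before use.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    LABELS = ["distress", "crisis_escalation", "substance_use"]  # illustrative only

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased",
        num_labels=len(LABELS),
        problem_type="multi_label_classification",  # sigmoid head: one independent score per label
    )

    def tag_narrative(text, threshold=0.5):
        inputs = tokenizer(text, truncation=True, return_tensors="pt")
        with torch.no_grad():
            probs = torch.sigmoid(model(**inputs).logits)[0]
        return [label for label, p in zip(LABELS, probs) if p >= threshold]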
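
For the privacy-preserving learning theme, the fragment below sketches the core of a DP-SGD-style update: each example's gradient is clipped and Gaussian noise is added before the parameter step. The clip norm, noise multiplier, and learning rate are illustrative, and a real deployment would use a vetted differential-privacy library with explicit privacy accounting.

    # Sketch of a DP-SGD-style update: bound each example's influence by
    # clipping its gradient, then add calibrated Gaussian noise before the
    # step. Parameters are illustrative; production use needs a vetted
    # library and explicit privacy accounting.
    import torch

    def private_step(param, per_example_grads, clip_norm=1.0, noise_mult=1.0, lr=0.01):
        clipped = []
        for g in per_example_grads:                  # one gradient tensor per example
            scale = min(1.0, clip_norm / (g.norm().item() + 1e-12))
            clipped.append(g * scale)                # limit any single example's influence
        mean_grad = torch.stack(clipped).mean(dim=0)
        noise = torch.randn_like(mean_grad) * (noise_mult * clip_norm / len(clipped))
        param.data -= lr * (mean_grad + noise)       # noisy, clipped gradient step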

Meet Our Team

Publications

    1. Trust-Aware Human-AI Teaming Framework for Fake News Detection Using LLMs
      • Abdul Muntakim, Sai Sanjay Potluri, Md Abdullah Al Hafiz Khan, Yong Pei
        IEEE ICMLA 2025. Accepted.
    2. Large language model enabled synthetic dataset generation for human-AI teaming in mental health assessment
      • Sai Sanjay Potluri, Md Abdullah Al Hafiz Khan and Yong Pei
        AIMS ACI Journal 2025. Published.
    3. Large language model enabled mental health app recommendations using structured datasets
      • Kris Prasad, Md Abdullah Al Hafiz Khan, Yong Pei.
        AIMS ACI Journal 2025. Published.
    4. Human AI Collaboration Framework for Detecting Mental Illness Causes from Social Media
      • Abm. Adnan Azmee, Francis Nweke, Md Abdullah Al Hafiz Khan, Yong Pei.
        IEEE CHASE 2025. Published.
    5. BrainDil: Enhanced and Efficient Brain Tumor Classification in MRI Images using Dilated Convolution.
      • Ryan Deem, Md Abdullah Al Hafiz Khan, Garrett Goodman, Michail Alexiou.
        The Sixteenth IEEE International Conference on Information, Intelligence, Systems and Applications (IISA 2025). Published. Best paper award.
    6. A transformer-driven framework for multi-label behavioral health classification in police narratives
      • Francis Nweke, Abm Adnan Azmee, Md Abdullah Al Hafiz Khan, Yong Pei, Dominic Thomas, Monica Nandan
        AIMS ACI Journal, 2024, Volume 4, Issue 2: 234-252. doi: 10.3934/aci.2024014. Published.
    7. Explainable Multi-Label Classification Framework for Behavioral Health Based on Domain Concepts
      • Francis Nweke, Abm. Adnan Azmee, Md Abdullah Al Hafiz Khan, Yong Pei, Dominic Thomas, and Monica Nandan.
        IEEE BigData 2024. Published.
    8. Domain Knowledge-Driven Multi-Label Behavioral Health Identification from Police Report
      • Abm Adnan Azmee, Francis Nweke, Md Abdullah Al Hafiz Khan, Yong Pei, Dominic Thomas, and Monica Nandan.
        IEEE BigData 2024. Published.
    9. Combined Correlational Network for Identifying Behavioral Health Cases from First Responder Report
      • Mason Pederson, Abm Adnan Azmee, Md Abdullah Al Hafiz Khan, Yong Pei, Dominic Thomas, Monica Nandan, and Francis Nweke.
        IEEE BigData 2024. Published. Runner-up best paper award.
    10. Demo: CaseFinder: Automated Visual and Quantitative Analysis Tool for Police Narrative
      • Martin Brown, Dominic Thomas, Md Abdullah Al Hafiz Khan, Abm Adnan Azmee, Monica Nandan, Yong Pei.
        IEEE/ACM CHASE 2024. Published.
    11. Poster: Extracting and Annotating Mental Health Forum Corpus: A Comprehensive Validation Pipeline
      • Rohith Sundar Jonnalagadda, Abm. Adnan Azmee, Dinesh Attota, Md Abdullah Al Hafiz Khan, Yong Pei.
        IEEE/ACM CHASE 2024. Published.
    12. Large Language Models Performance Comparison of Emotion and Sentiment Classification
      • Will Stigall, Md Abdullah Al Hafiz Khan, Dinesh Attota, Francis Nweke, Yong Pei
        ACMSE 2024. Published.
    13. Automated Alphabet Detection from Brain Waves
      • Christopher Dargan, Francis Nweke, Md Abdullah Al Hafiz Khan, Abm Adnan Azmee, Yong Pei
        ACMSE 2024. Published.
    14. Adaptive Attention Aware Fusion for Human-in-Loop Behavioral Health Detection
      • Martin Brown, Abm. Adnan Azmee, Md Abdullah Al Hafiz Khan, Dominic Thomas, Yong Pei, and Monica Nandan.
        ACM CHASE 2024. Published.
    15. CBSA: A Deep Transfer Learning Framework for Assessing Post-Stroke Exercises
      • Manohar Murikipudi, Abm. Adnan Azmee, Md Abdullah Al Hafiz Khan, and Yong Pei.
        ACM CHASE 2024. Published.
    16. Semantic Learning and Attention Dynamics for Behavioral Classification in Police Narratives
      • Dinesh Attota, Abm. Adnan Azmee, Md Abdullah Al Hafiz Khan, Dominic Thomas, Yong Pei, and Monica Nandan.
        ACM CHASE 2024. Published.
    17. Exploring the Impact of CM-II Meditation on Stress Levels in College Students through HRV Analysis
      • Sreekanth Gopi, Nasrin Dehbozorgi, and Md Abdullah Al Hafiz Khan.
        ASEE Southeast Section Meeting 2024.

Sponsors: