The Cooperative Human-AI Teaming (CHAT) Lab, directed by Dr. Abdullah Khan, conducts multidisciplinary research at the intersection of artificial intelligence (AI), machine learning (ML), and human-centered computing. Our mission is to advance the science and engineering of human-AI collaboration by developing computational frameworks that enable intelligent systems to support, complement, and enhance human decision-making and performance across diverse domains.
Our work integrates natural language processing (NLP), wearable sensing, and advanced learning architectures to build systems that adapt intelligently to human needs and contexts.
We aim to design explainable, trustworthy, and adaptive AI systems that foster seamless cooperation between humans and machines by analyzing multimodal human behavioral data and uncovering latent cognitive and interactional patterns.
Students, collaborators, and community partners: get involved! Reach out to Dr. Abdullah Khan to discuss openings and projects.
Dr. Abdullah Khan
470-578-4286
mkhan74@kennesaw.edu
We have the following research themes and projects:
Human-AI Collaboration for Efficient and Effective Teaming:
This project explores bidirectional human-AI collaboration, where humans and AI systems learn from each other to achieve shared goals with minimal human intervention. We design co-adaptive learning frameworks that enable mutual understanding, efficient task execution, and continuous performance improvement.
Our models leverage natural language processing (NLP), reinforcement learning (RL), human-in-the-loop modeling, and interactive learning paradigms for reciprocal knowledge exchange. By reducing cognitive workload and enhancing mutual situational awareness, this work seeks to create AI teammates that evolve alongside human partners, supporting dynamic, transparent, and high-trust collaboration.
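As a minimal illustration of the human-in-the-loop flavor of this theme (a sketch, not a lab implementation), the toy Q-learning loop below blends a simulated human feedback signal into the agent's reward; the chain environment, the feedback model, and all parameters are hypothetical.

```python
import numpy as np

# Toy setup: a 5-state chain task where occasional human feedback
# is blended into the sparse environment reward (reward shaping).
N_STATES, N_ACTIONS = 5, 2
q_table = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def env_step(state, action):
    """Toy dynamics: action 1 moves right, action 0 moves left."""
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

def human_feedback(state, action):
    """Stand-in for a real human signal: a noisy preference for moving right."""
    return 1.0 if action == 1 and np.random.rand() < 0.8 else 0.0

for episode in range(200):
    state = 0
    for _ in range(20):
        # Epsilon-greedy action selection.
        if np.random.rand() < epsilon:
            action = np.random.randint(N_ACTIONS)
        else:
            action = int(np.argmax(q_table[state]))
        next_state, env_reward = env_step(state, action)
        # Blend sparse environment reward with human shaping feedback.
        shaped_reward = env_reward + 0.5 * human_feedback(state, action)
        # Standard Q-learning update on the shaped reward.
        td_target = shaped_reward + gamma * q_table[next_state].max()
        q_table[state, action] += alpha * (td_target - q_table[state, action])
        state = next_state
```

In a real deployment, the simulated `human_feedback` function would be replaced by live human input, and the shaping weight tuned to balance agent autonomy against human guidance.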
Collaborative Explainable Artificial Intelligence (XAI):
We advance the emerging paradigm of collaborative explainable AI, where explanation is treated as an interactive, human-AI co-construction process rather than a one-way output. Our work develops language-driven, conversational explanation mechanisms that allow humans and AI systems to jointly refine interpretive models of reasoning.
Drawing from cognitive psychology, linguistics, and human-computer interaction, we aim to enhance interpretability, accountability, and shared situational understanding. This research supports trustworthy AI deployment in high-stakes domains where interpretability and collaboration are essential.
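As a toy sketch of explanation as a two-way exchange (assumed feature names and weights, not a lab system), the example below lets a user follow up on a linear model's feature-contribution explanation by narrowing it to the features they ask about.

```python
import numpy as np

# Hypothetical exchange: the system explains a linear classifier's score
# via per-feature contributions; the "user turn" narrows the explanation.
feature_names = ["urgency", "negation", "first_person", "past_tense"]
weights = np.array([1.2, -0.8, 0.4, 0.1])   # assumed trained weights
x = np.array([0.9, 0.2, 0.7, 0.3])          # one input's feature values

def explain(top_k=None, focus=None):
    contributions = weights * x
    order = np.argsort(-np.abs(contributions))
    if focus is not None:                    # user asks about specific features
        order = [i for i in order if feature_names[i] in focus]
    for i in (order[:top_k] if top_k else order):
        print(f"{feature_names[i]}: {contributions[i]:+.2f}")

explain(top_k=2)                             # system's initial explanation
explain(focus={"negation"})                  # user's follow-up question
```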
Knowledge-Guided Learning:
This project explores knowledge-guided machine learning to bridge the gap between symbolic reasoning and deep learning. We integrate domain knowledge, semantic ontologies, and knowledge graphs into data-driven models to improve generalization, robustness, and explainability under limited or noisy data.
Our approaches constrain and guide model learning using linguistic and semantic structures, allowing AI systems to reason more effectively, transfer insights across tasks, and incorporate human expertise directly into learning processes.
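One common way to realize this idea is to add a soft knowledge constraint to a standard task loss. The PyTorch sketch below does exactly that; the rule, feature indices, and weighting are hypothetical choices for illustration, not the lab's implementation.

```python
import torch
import torch.nn as nn

# Assumed domain rule: when the temperature feature is high, the "fever"
# class should score at least as high as "hypothermia". Violations of the
# rule are penalized alongside the usual cross-entropy objective.
model = nn.Linear(8, 3)                      # toy classifier: 8 features, 3 classes
ce = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 8)                       # synthetic batch
y = torch.randint(0, 3, (32,))
TEMP_FEATURE, FEVER, HYPO = 0, 1, 2          # hypothetical indices

logits = model(x)
task_loss = ce(logits, y)
# Knowledge penalty: margin violation counted only where the premise holds.
premise = x[:, TEMP_FEATURE] > 1.0
violation = torch.relu(logits[:, HYPO] - logits[:, FEVER])
knowledge_loss = (premise.float() * violation).mean()

loss = task_loss + 0.5 * knowledge_loss      # weighted combination
opt.zero_grad()
loss.backward()
opt.step()
```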
Behavioral Health Analysis from 911 Narratives:
This project applies advanced NLP and deep learning methods to analyze free-text narratives from emergency (911) police reports. By detecting linguistic markers of distress, crisis escalation, and mental-health risk factors, we aim to support public health surveillance, crisis intervention, and first-responder and co-responder training.
Our work develops ethically responsible NLP pipelines that identify behavioral and emotional indicators in police narratives, power interactive dashboards for co-responders, and support the decriminalization of mental-health crises by suggesting follow-up pathways to care and lowering barriers to access. This research bridges NLP and behavioral health, informing data-driven strategies for mental-health crisis response and policy innovation.
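Purely for illustration, the minimal scikit-learn pipeline below shows the baseline shape such a classifier might take: TF-IDF features feeding a linear model over synthetic narrative snippets. Real work would use de-identified data under an approved protocol, richer models, and careful ethical review.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Synthetic narratives and labels, purely for illustration.
narratives = [
    "subject expressed suicidal ideation and refused transport",
    "routine traffic stop, no injuries reported",
    "caller described a family member in acute emotional distress",
    "noise complaint resolved at the scene",
]
labels = [1, 0, 1, 0]  # 1 = behavioral-health indicator present

# Bag-of-words baseline: TF-IDF features feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(narratives, labels)

print(clf.predict(["officer noted signs of crisis escalation"]))
```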
Privacy-Preserving, Trustworthy, and Secure AI:
Our research focuses on developing trustworthy, privacy-preserving, and secure AI systems that uphold integrity, transparency, and ethical responsibility. We design algorithms for federated and differentially private learning, strengthen robustness against adversarial and prompt-based attacks, and advance explainable and accountable AI frameworks for large language models (LLMs) and multimodal systems.
We aim to ensure that AI systems in sensitive domains remain secure, reliable, and aligned with human values, enabling safe and transparent human-AI collaboration.
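A highly simplified sketch of two ingredients named above, federated averaging plus differentially-private-style clipping and noising of client updates, appears below; the data, model, and noise scale are all illustrative assumptions.

```python
import numpy as np

# Minimal federated-averaging sketch under strong simplifications: each
# client takes one local gradient step on a linear model; only the updated
# models (never the raw data) are sent to the server for averaging.
rng = np.random.default_rng(0)
global_w = np.zeros(4)

def local_update(w, X, y, lr=0.1, clip=1.0, noise_scale=0.1):
    grad = 2 * X.T @ (X @ w - y) / len(y)    # least-squares gradient
    # DP-style treatment: clip the gradient norm, then add Gaussian noise.
    grad = grad / max(1.0, np.linalg.norm(grad) / clip)
    grad += rng.normal(0.0, noise_scale * clip, size=grad.shape)
    return w - lr * grad

for _round in range(10):
    client_models = []
    for _ in range(5):                       # five simulated clients
        X = rng.normal(size=(20, 4))         # private local data
        y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=20)
        client_models.append(local_update(global_w, X, y))
    global_w = np.mean(client_models, axis=0)  # server aggregates

print(global_w)
```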
Principal Investigator
Md Abdullah Al Hafiz Khan
Dr. Hafiz Khan is an Assistant Professor in the Computer Science department at Kennesaw State University (KSU). He directs the Ubiquitous Data Mining (UDM) Lab at KSU. Before joining KSU, he was a scientist in the Artificial Intelligence Lab at Philips Research North America, Cambridge, MA.
He obtained his Ph.D. in Information Systems from the University of Maryland, Baltimore County (UMBC) and received a B.Sc. in Computer Science and Engineering from Bangladesh University of Engineering & Technology (BUET). His research focuses on human-AI teaming, natural language processing, and their applications in the healthcare domain.
Ph.D. Students
Graduate Students
Undergraduate Students
Past Students
Adnan Azmee presents at ACM/IEEE CHASE 2024!
Congratulations to our first Ph.D. graduate, Dr. Martin!
Five of our papers were published in ACM/IEEE CHASE 2024!