I am a researcher focused on AI-enhanced collaboration. My work explores how multimodal agents can support spatial reasoning and decision-making in PIRCC (Partial Information, Restricted Communication, Cooperative) scenarios.
I build interactive agents with voice, image, and text interfaces using tools like Gradio, LangChain, Ollama, and multimodal LLMs (e.g., LLaVA, Gemma3, Qwen2.5-VL). My goal is to support creative tasks and assistive applications, with an emphasis on usability, local deployment, and explainability (XAI).
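As a minimal sketch of that stack (illustrative only, not one of my full projects): a Gradio interface that forwards an image and a text prompt to a locally served LLaVA model through Ollama. It assumes `ollama serve` is running and the `llava` model has been pulled.

```python
import gradio as gr
import ollama


def describe(image_path: str, prompt: str) -> str:
    # Send the user's prompt plus the uploaded image to the local
    # multimodal model and return its text reply.
    response = ollama.chat(
        model="llava",
        messages=[{"role": "user", "content": prompt, "images": [image_path]}],
    )
    return response["message"]["content"]


demo = gr.Interface(
    fn=describe,
    inputs=[gr.Image(type="filepath"), gr.Textbox(label="Prompt")],
    outputs="text",
    title="Local multimodal agent demo",
)

if __name__ == "__main__":
    demo.launch()  # serves the UI locally; no cloud API required
```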
- 🤖 AI-Enhanced Collaboration
- 🧠 Multimodal Agents for Spatial Reasoning & Decision-Making
- 🗣️ Voice, Image, and Text Interfaces
- 🧩 PIRCC Scenarios (Partial Information, Restricted Communication, Cooperative)
- 🛠️ Local Deployment & Usability of AI Systems
- 💡 Explainable AI (XAI)
You can find all of my projects at [ainulyaqinmhd.github.io](https://ainulyaqinmhd.github.io).
- Email: ainulyaqinmhd@gmail.com