Graduate Student and Researcher working at the intersection of explainable AI, ethics, and deep learning.
am2552@rit.edu • github • linkedin • scholar
I'm a graduate student and researcher at RIT, focusing on making AI systems more transparent and ethical. My work spans deepfake detection, bias mitigation in language models, and neural machine translation.
Currently, I'm exploring the intersection of explainable AI and adversarial robustness, developing techniques to make AI systems both interpretable and resilient.
Advancing research in explainability through mechanistic interpretability techniques, uncovering how models process and represent information. Developing frameworks to enhance the transparency and trustworthiness of AI systems across diverse applications.
Investigating biases in LLMs by analyzing their training data and architectures. Designing robust debiasing methodologies to ensure fair and ethical AI, with a focus on mitigating adverse impacts in real-world deployments.
Writing optimized CUDA kernels to accelerate computations in deep learning workflows, and exploring how custom kernels can improve the efficiency and scalability of large language models for high-performance AI engineering.
Building multi-modal deepfake detection systems that combine neuro-symbolic and explainable AI. Using layer-wise relevance propagation to make detections more transparent while maintaining accuracy, addressing emerging challenges in multimedia authenticity.
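To illustrate the relevance-propagation idea, here is a minimal NumPy sketch of the epsilon rule on a toy fully connected ReLU network. The `lrp_epsilon` helper, the weights, and the shapes are all illustrative assumptions, not part of any actual detection system.

```python
# Minimal sketch of layer-wise relevance propagation (epsilon rule)
# on a toy fully connected ReLU network. All weights and inputs are
# illustrative, not taken from any real deepfake detector.
import numpy as np

def lrp_epsilon(weights, activations, relevance, eps=1e-6):
    """Propagate relevance from a layer's outputs back to its inputs.

    weights:     (in_dim, out_dim) weight matrix of the layer
    activations: (in_dim,) input activations to the layer
    relevance:   (out_dim,) relevance assigned to the layer's outputs
    """
    z = activations @ weights          # pre-activations per output unit
    z = z + eps * np.sign(z)           # epsilon stabilizer for the division
    s = relevance / z                  # relevance per unit of pre-activation
    return activations * (weights @ s) # redistribute to the inputs

# Toy two-layer network: x -> W1 -> ReLU -> W2 -> score
rng = np.random.default_rng(0)
x  = rng.random(4)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((3, 1))

h = np.maximum(x @ W1, 0.0)    # hidden activations (ReLU)
score = h @ W2                 # scalar "fake" score

R2 = score                     # start: all relevance sits at the output
R1 = lrp_epsilon(W2, h, R2)    # propagate back to the hidden layer
R0 = lrp_epsilon(W1, x, R1)    # propagate back to the input features

# R0 now attributes the score across input features; sums are
# (approximately) conserved layer to layer by the epsilon rule.
```

Units killed by the ReLU receive zero relevance automatically (their activation is zero), which is why the same helper can be reused across layers in this sketch.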
Leading research with Dr. Matthew Wright on explainable AI for deepfake detection. Developing novel approaches to make detection systems transparent and interpretable through multi-modal analysis and saliency mapping.
Collaborating with Dr. Bartosz Krawczyk to study adversarial vulnerabilities in large language models. Developing frameworks for bias detection and mitigation in AI-generated content.