About Me

I am Pranav Jeevan P, a Research Scientist at sync, where I develop advanced AI-driven video editing tools. My work focuses on designing and implementing generative architectures—spanning diffusion models, GANs, and transformer-based networks—to enable precise, controllable modification of human appearance, motion, and expression in video sequences.
I earned my Ph.D. in Artificial Intelligence from the Department of Electrical Engineering at the Indian Institute of Technology Bombay, where I developed resource-efficient neural architectures for computer vision tasks such as classification, segmentation, inpainting, and super-resolution. During my doctoral studies, I was associated with MeDAL (Medical Imaging, Deep Learning, and Artificial Intelligence Lab) under the supervision of Prof. Amit Sethi.
Prior to my Ph.D., I completed a Master’s in Robotics at the Department of Mechanical Engineering, Indian Institute of Technology Kanpur, where I was part of the Centre for Mechatronics. Under the guidance of Prof. Ashish Dutta, I designed and prototyped a lower-extremity exoskeleton for rehabilitation applications.
I began my professional career as a Post-Graduate Engineering Trainee at the Engineering Research Centre of Tata Motors Limited, where I conducted vehicle performance and thermal analysis for braking systems. Subsequently, I returned to academia at the Department of Physics, IIT Madras, focusing on theoretical physics, quantum computing, and quantum information under Prof. Vaibhav Madhok.
I also completed a six-month internship (July 2023–January 2024) with the AI Camera Team of the Visual Intelligence Division at Samsung R&D Institute India, Bangalore (SRI-B), where I developed and optimized deep learning models for image classification, object detection, and generative tasks. These models have been integrated into Samsung’s flagship Galaxy S24 series.
I regularly serve as a reviewer for premier conferences in computer vision and machine learning, including CVPR, ICCV, ECCV, ICLR, AAAI, and WACV.

Recent Updates

  1. Our paper “Which Backbone to Use: A Resource-efficient Domain Specific Comparison for Computer Vision” has been accepted in Transactions on Machine Learning Research (TMLR).
  2. Our paper “Evaluation Metric for Quality Control and Generative Models in Histopathology Images” has been accepted in ISBI 2025.
  3. Our paper “WaveMixSR-V2: Enhancing Super-resolution with Higher Efficiency” has been accepted for the AAAI 2025 Student Abstract and Poster Program (oral presentation).
  4. Our paper “FLeNS: Federated Learning with Enhanced Nesterov-Newton Sketch” has been accepted for the Special Session on Federated Learning at IEEE BigData 2024.
  5. Our paper “Adversarial Transport Terms for Unsupervised Domain Adaptation” has been accepted in ICPR 2024.
  6. My work during internship at Samsung Research was published as “PawFACS: Leveraging Semi-Supervised Learning for Pet Facial Action Recognition” at BMVC 2024. A patent has also been filed.
  7. Our paper “A Comparative Study of Deep Neural Network Architectures in Magnification Invariant Breast Cancer Histopathology Image Analysis” has been accepted in CCIS.
  8. Our paper “Magnification Invariant Medical Image Analysis: A Comparison of Convolutional Networks, Vision Transformers, and Token Mixers” has been accepted in Bioimaging 2024 and won the Best Student Paper Award.
  9. Our paper “WaveMixSR: Resource-efficient Neural Network for Image Super-resolution” has been accepted in WACV 2024.
  10. Our paper “Heterogeneous Graphs Model Spatial Relationships Between Biological Entities for Breast Cancer Diagnosis” has been accepted in the 5th MICCAI Workshop on GRaphs in biomedicAl Image anaLysis (GRAIL) 2023.
  11. Our Tiny Paper “Resource-efficient Image Inpainting” has been accepted in the ICLR 2023 Tiny Papers track.
  12. Our paper “Resource-efficient Hybrid X-Formers for Vision” has been accepted in WACV 2022.
  13. Our paper “‘So You Think You’re Funny?’: Rating the Humour Quotient in Standup Comedy” has been accepted in EMNLP 2021.