Yağız Nalçakan, PhD
Postdoctoral Research Fellow at Yonsei University.
 
  I am a Postdoctoral Fellow at the Seamless Trans-X Lab (STL), Yonsei University, working on multispectral imaging and robust visual perception for autonomous vehicles. My research lies at the intersection of computer vision, sensor fusion, and intelligent mobility systems.
I received my Ph.D. in Computer Science from the Izmir Institute of Technology, where my dissertation focused on vehicle maneuver detection for advanced driver assistance systems (ADAS) under the supervision of Prof. Dr. Yalin Bastanlar. During my doctoral studies, I also spent a term as a visiting researcher at Seoul National University’s Vehicle Dynamics and Control Laboratory (VDCL) and the Future Mobility Technology Center (FMTC), supported by a research scholarship from TUBITAK (The Scientific and Technological Research Council of Turkey).
My research interests revolve around computer vision and deep learning, with a specialization in multispectral camera systems, perception in adverse weather scenarios, representation learning, and vision-language modeling.
My current research focuses on:
- Multispectral perception: designing datasets and architectures that fuse visible and infrared modalities for detection, segmentation, and depth estimation.
- Image translation and representation learning: leveraging Vision Foundation Models for RGB-to-Multispectral domain adaptation and spectral translation.
- Vision-language and explainable perception: exploring how multimodal models can interpret and describe hard-to-interpret scenes for safe decision-making.
news
| Oct 25, 2025 | Our work “IVIFormer: Illumination-Aware Infrared-Visible Image Fusion via Adaptive Domain-Switching Cross Attention” has been published in the ICCV 2025 Workshop Proceedings. | 
|---|---|
| Oct 19, 2025 | I organized and served as the Chair of the 2nd Workshop on Multispectral Imaging for Robotics and Automation (MIRA) at the International Conference on Computer Vision (ICCV) 2025. | 
| Apr 10, 2025 | We publicly released the RASMD dataset, a large-scale RGB–SWIR benchmark for multispectral perception in adverse weather, available on arXiv and Hugging Face. | 
| Dec 30, 2024 | Our paper “Short-Wave Infrared (SWIR) Imaging for Robust Material Classification: Overcoming Limitations of Visible Spectrum Data” was published in Applied Sciences. | 
| Dec 8, 2024 | I organized and served as the Chair of the Workshop on Multispectral Imaging for Robotics and Automation (MIRA) at the Asian Conference on Computer Vision (ACCV) 2024. | 
| Sep 25, 2024 | Our new preprint, “Pix2Next: Leveraging Vision Foundation Models for RGB to NIR Image Translation”, is now available on arXiv. | 
| Dec 11, 2023 | At the start of 2024, I will begin a new role as a Postdoctoral Researcher at Yonsei University’s Seamless Transportation Lab. Under the guidance of Prof. Shiho Kim, I’ll be working on new challenges in intelligent vehicles and smart mobility. | 
selected publications
service
Journal Reviewer:
- Elsevier Information Fusion (INFFUS)
- Elsevier Computer Vision and Image Understanding (CVIU)
- Elsevier Expert Systems with Applications (ESWA)
- IEEE Transactions on Intelligent Vehicles (T-IV)
- IEEE Transactions on Intelligent Transportation Systems (T-ITS)
Conference Reviewer:
- IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- IEEE/CVF International Conference on Computer Vision (ICCV)
- AAAI Conference on Artificial Intelligence (AAAI)
- IEEE International Conference on Intelligent Transportation Systems (ITSC)
- IEEE Intelligent Vehicles Symposium (IV)
Program Committee:
- Chair, ICCV 2025 Workshop on Multispectral Imaging for Robotics and Automation (MIRA)
- Chair, ACCV 2024 Workshop on Multispectral Imaging for Robotics and Automation (MIRA)