Yağız Nalçakan, PhD

Postdoctoral Research Fellow at Yonsei University.

I am a Postdoctoral Fellow at the Seamless Trans-X Lab (STL), Yonsei University, working on multispectral imaging and robust visual perception for autonomous vehicles. My research lies at the intersection of computer vision, sensor fusion, and intelligent mobility systems.

I received my Ph.D. in Computer Science from the Izmir Institute of Technology, where my dissertation focused on vehicle maneuver detection for advanced driver assistance systems (ADAS) under the supervision of Prof. Dr. Yalin Bastanlar. During my doctoral studies, I also spent a term as a visiting researcher at Seoul National University’s Vehicle Dynamics and Control Laboratory (VDCL) and the Future Mobility Technology Center (FMTC), supported by a research scholarship from TUBITAK (The Scientific and Technological Research Council of Turkey).

My research interests revolve around computer vision and deep learning, with a specialization in multispectral camera systems, perception in adverse weather scenarios, representation learning, and vision-language modeling.

My current research focuses on:

  • Multispectral perception: designing datasets and architectures that fuse visible and infrared modalities for detection, segmentation, and depth estimation.
  • Image translation and representation learning: leveraging Vision Foundation Models for RGB-to-Multispectral domain adaptation and spectral translation.
  • Vision-language and explainable perception: exploring how multimodal models can interpret and describe difficult-to-interpret scenes for safe decision-making.

news

Oct 25, 2025 Our work “IVIFormer: Illumination-Aware Infrared-Visible Image Fusion via Adaptive Domain-Switching Cross Attention” has been published in the ICCV 2025 Workshop Proceedings.
Oct 19, 2025 I organized and served as the Chair of the 2nd Workshop on Multispectral Imaging for Robotics and Automation (MIRA) at the International Conference on Computer Vision (ICCV) 2025.
Apr 10, 2025 We publicly released the RASMD dataset, a large-scale RGB–SWIR benchmark for multispectral perception in adverse weather, available on arXiv and Hugging Face.
Dec 30, 2024 Our paper “Short-Wave Infrared (SWIR) Imaging for Robust Material Classification: Overcoming Limitations of Visible Spectrum Data” is published in Applied Sciences.
Dec 8, 2024 I organized and served as the Chair of the Workshop on Multispectral Imaging for Robotics and Automation (MIRA) at the Asian Conference on Computer Vision (ACCV) 2024.
Sep 25, 2024 Our new preprint, “Pix2Next: Leveraging Vision Foundation Models for RGB to NIR Image Translation”, is now available on arXiv.
Dec 11, 2023 At the start of 2024, I will begin a new role as a Postdoctoral Researcher at Yonsei University’s Seamless Transportation Lab. Under the guidance of Prof. Shiho Kim, I’ll be working on new challenges in intelligent vehicles and smart mobility.

selected publications

  1. IVIFormer: Illumination-Aware Infrared-Visible Image Fusion via Adaptive Domain-Switching Cross Attention
    Park, Incheol, Jin, Youngwan, Nalcakan, Yagiz, Ju, Hyeongjin, Yeo, Sanghyeop, and Kim, Shiho
    In Proceedings of the IEEE/CVF International Conference on Computer Vision 2025
  2. RASMD: RGB And SWIR Multispectral Driving Dataset for Robust Perception in Adverse Conditions
    Jin, Youngwan, Kovac, Michal, Nalcakan, Yagiz, Ju, Hyeongjin, Song, Hanbin, Yeo, Sanghyeop, and Kim, Shiho
    arXiv preprint arXiv:2504.07603 2025
  3. Pix2Next: Leveraging Vision Foundation Models for RGB to NIR Image Translation
    Jin, Youngwan, Park, Incheol, Song, Hanbin, Ju, Hyeongjin, Nalcakan, Yagiz, and Kim, Shiho
    arXiv preprint arXiv:2409.16706 2024
  4. Lane Change Detection with an Ensemble of Image-based and Video-based Deep Learning Models
    Nalcakan, Yagiz, and Bastanlar, Yalin
    In 2023 Innovations in Intelligent Systems and Applications Conference (ASYU) 2023
  5. Cut-in maneuver detection with self-supervised contrastive video representation learning
    Nalcakan, Yagiz, and Bastanlar, Yalin
    Signal, Image and Video Processing 2023
  6. Monocular Vision-Based Prediction of Cut-In Manoeuvres with LSTM Networks
    Nalcakan, Yagiz, and Bastanlar, Yalin
    In Science, Engineering Management and Information Technology 2023
  7. Overview of Machine Learning Approaches for Wireless Communication
    Ensari, Tolga, Nalçakan, Yağız, Günay, Melike, and Yıldız, Eyyüp
    In Science, Engineering Management and Information Technology 2019
  8. Automatic HTML code generation from mock-up images using machine learning techniques
    Aşıroğlu, Batuhan, Mete, Büşra Rümeysa, Yıldız, Eyyüp, Nalçakan, Yağız, Sezen, Alper, Dağtekin, Mustafa, and Ensari, Tolga
    In 2019 Scientific Meeting on Electrical-Electronics & Biomedical Engineering and Computer Science (EBBT) 2019
  9. Decision of Neural Networks Hyperparameters with a Population-based Algorithm
    Nalçakan, Yağız, and Ensari, Tolga
    In The Fourth International Conference on Machine Learning, Optimization, and Data Science 2018
  10. Digital Data Forgetting: A Machine Learning Approach
    Günay, Melike, Yıldız, Eyyüp, Nalçakan, Yağız, Aşıroğlu, B., Zencirli, A., Mete, B., and Ensari, Tolga
    In 2018 2nd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT) 2018

service

Journal Reviewer:

  • Elsevier Information Fusion (INFFUS)
  • Elsevier Computer Vision and Image Understanding (CVIU)
  • Elsevier Expert Systems with Applications (ESWA)
  • IEEE Transactions on Intelligent Vehicles (T-IV)
  • IEEE Transactions on Intelligent Transportation Systems (T-ITS)

Conference Reviewer:

  • IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • IEEE/CVF International Conference on Computer Vision (ICCV)
  • AAAI Conference on Artificial Intelligence (AAAI)
  • IEEE International Conference on Intelligent Transportation Systems (ITSC)
  • IEEE Intelligent Vehicles Symposium (IV)

Program Committee: