Youngeun Kim

Hello, I'm Youngeun Kim.

I am an Applied Scientist at Amazon, working on efficient multimodal LLM serving. Previously, I was at Meta Reality Labs, working on time-series neural networks for neuromotor-interface AR/VR applications. I received my Ph.D. in Electrical and Computer Engineering from Yale University (2024), M.S. from KAIST (2020), and B.S. from Sogang University (2018).

News

2026
  • February: Two papers accepted to CVPR 2026.
2025
  • December: One paper accepted to NeurIPS 2025.
  • July: I joined Amazon AWS AI Labs as an Applied Scientist.
  • June: One paper accepted to ICCV 2025.
  • March: One paper accepted to CVPR 2025.
2024
  • July: Three papers accepted to ECCV 2024.
  • July: One paper accepted to MICRO 2024.
  • May: I joined Meta Reality Labs as a Machine Learning Research Scientist.
  • May: I successfully defended my thesis, "Algorithmic Approaches for Empowering Spike-based Machine Intelligence."
2023
  • December: One paper accepted to NeurIPS 2023.
  • May: I joined Amazon (AWS AI) as a summer intern.
  • May: One paper accepted to Transactions on Machine Learning Research.
  • March: One paper accepted to DAC 2023.
  • February: One paper accepted to AAAI 2023.

Selected Publications

For a comprehensive list, please see my Google Scholar profile.

2026
  • ZOO-Prune: Training-Free Token Pruning via Zeroth-Order Gradient Estimation in Vision-Language Models
    Y Kim*, Y Zhang*, H Liu, A Jung, S Lee, S Hong · CVPR 2026 (accepted)
  • VisRef: Visual Refocusing while Thinking Improves Test-Time Scaling in Multi-Modal Large Reasoning Models
    SS Ghosal*, Y Kim*, Z Li, R Chaudhry, L Xu, H Zhang, J Zablocki, Y Xing, Q Zhang · CVPR 2026 (accepted)
2025
  • Backpropagation-Free Test-Time Adaptation via Probabilistic Gaussian Alignment
    Y Zhang, Y Kim, YG Choi, H Kim, H Liu, S Hong · NeurIPS 2025
  • Task Vector Quantization for Memory-Efficient Model Merging
    Y Kim*, S Lee*, A Jung*, B Ryu, S Hong · ICCV 2025
  • Spiking Transformer with Spatial-Temporal Attention
    D Lee, Y Li, Y Kim, S Xiao, P Panda · CVPR 2025
2024
  • GenQ: Quantization in Low Data Regimes with Generative Synthetic Data
    Y Li, Y Kim, D Lee, S Kundu, P Panda · ECCV 2024
  • Open-World Dynamic Prompt and Continual Visual Representation Learning
    Y Kim*, J Fang*, Q Zhang, Z Cai, Y Shen, R Duggal, DS Raychaudhuri, Z Tu, et al. · ECCV 2024
  • One-stage Prompt-based Continual Learning
    Y Kim, Y Li, P Panda · ECCV 2024
  • LoAS: Fully Temporal-Parallel Dataflow for Dual-Sparse Spiking Neural Networks
    R Yin, Y Kim, D Wu, P Panda · MICRO 2024
  • Do We Really Need a Large Number of Visual Prompts?
    Y Kim, Y Li, A Moitra, R Yin, P Panda · Neural Networks 2024
  • When In-Memory Computing Meets Spiking Neural Networks: A Perspective on Device-Circuit-System-and-Algorithm Co-Design
    A Moitra, A Bhattacharjee, Y Li, Y Kim, P Panda · Applied Physics Reviews 2024
2023
  • SEENN: Towards Temporal Spiking Early-Exit Neural Networks
    Y Li, T Geller, Y Kim, P Panda · NeurIPS 2023
  • Exploring Temporal Information Dynamics in Spiking Neural Networks
    Y Kim, Y Li, H Park, Y Venkatesha, A Hambitzer, P Panda · AAAI 2023
2022
  • Exploring Lottery Ticket Hypothesis in Spiking Neural Networks
    Y Kim, Y Li, H Park, Y Venkatesha, R Yin, P Panda · ECCV 2022 (Oral)
  • Neural Architecture Search for Spiking Neural Networks
    Y Kim, Y Li, H Park, Y Venkatesha, P Panda · ECCV 2022
  • Neuromorphic Data Augmentation for Training Spiking Neural Networks
    Y Li, Y Kim, H Park, T Geller, P Panda · ECCV 2022
  • PrivateSNN: Privacy-Preserving Spiking Neural Networks
    Y Kim, Y Venkatesha, P Panda · AAAI 2022
  • Rate Coding or Direct Coding: Which One Is Better for Accurate, Robust, and Energy-Efficient Spiking Neural Networks?
    Y Kim, H Park, A Moitra, A Bhattacharjee, Y Venkatesha, P Panda · ICASSP 2022
2021
  • Domain Adaptation without Source Data
    Y Kim, D Cho, K Han, P Panda, S Hong · IEEE Transactions on Artificial Intelligence 2021
  • Revisiting Batch Normalization for Training Low-Latency Deep Spiking Neural Networks from Scratch
    Y Kim, P Panda · Frontiers in Neuroscience 2021
2020
  • Hi-CMD: Hierarchical Cross-Modality Disentanglement for Visible-Infrared Person Re-Identification
    S Choi, S Lee, Y Kim, T Kim, C Kim · CVPR 2020

Contact

For collaborations, talks, or project inquiries, please reach out via the links below.
