Jaeho Lee

I am a postdoc at the Algorithmic Intelligence Laboratory at KAIST, working with Jinwoo Shin (mandatory military service, ending February 2022).

Before joining the lab, I completed my Ph.D. under the guidance of Maxim Raginsky, in the middle of the lovely cornfields of Urbana-Champaign. Even before that, I was an undergraduate student at KAIST, double-majoring in electrical engineering and management science.

My research focuses on analyzing the impact of operational constraints on the generalization/approximation capabilities of learning algorithms; the constraints may concern robustness, fairness, risk-sensitivity, or sparsity. As research tools, I like to use machinery from statistical learning theory (all hail Vapnik!), high-dimensional statistics, and computational libraries for machine learning.

For any inquiries (or my CV), please contact me via email: jaeho-lee [at] kaist [dot] ac [dot] kr. I may respond even faster on Twitter: @jaeho_lee_

(I don’t update Google Scholar very often, but here’s a link.)



papers

Provable memorization via deep neural networks using sub-linear parameters
Sejun Park, JL, Chulhee Yun, and Jinwoo Shin
Preprint, 2020.
(Sejun gave a 20-min talk at DeepMath 2020: here’s a video)

Layer-adaptive sparsity for the magnitude-based pruning
JL, Sejun Park, Sangwoo Mo, Sungsoo Ahn, and Jinwoo Shin
ICLR 2021.

Minimal width for universal approximation
Sejun Park, Chulhee Yun, JL, and Jinwoo Shin
ICLR 2021.
(Sejun gave a talk at DeepMath 2020: here’s a video)

MASKER: Masked Keyword Regularization for Reliable Text Classification
Seung Jun Moon* and Sangwoo Mo* (*equal contribution), Kimin Lee, JL, and Jinwoo Shin
AAAI 2021.

Learning bounds for risk-sensitive learning
JL, Sejun Park, and Jinwoo Shin
NeurIPS 2020.
[code | slides | poster]

Learning from failure: Training debiased classifier from biased classifier
Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, JL, and Jinwoo Shin
NeurIPS 2020.

Lookahead: A far-sighted alternative of magnitude-based pruning
Sejun Park* and JL* (*equal contribution), Sangwoo Mo, and Jinwoo Shin
ICLR 2020.

Learning finite-dimensional coding schemes with nonlinear reconstruction maps
JL and Maxim Raginsky
SIMODS 2019.

Minimax statistical learning with Wasserstein distances
JL and Maxim Raginsky
NeurIPS 2018.
[spotlight talk]

On MMSE estimation from quantized observations in the nonasymptotic regime
JL, Maxim Raginsky, and Pierre Moulin
ISIT 2015.



teaching



invited talks



honors & awards



refereeing



miscellaneous



(last updated: January 13, 2021.)