I usually publish at ML conferences. Occasionally I also write articles outside my area of expertise.

Spread spurious attribute: Improving worst-group accuracy with spurious attribute estimation
Junhyun Nam, Jaehyung Kim, JL, and Jinwoo Shin
ICLR 2022

Meta-learning sparse implicit neural representations
{JL, Jihoon Tack}eq, Namhoon Lee, and Jinwoo Shin
NeurIPS 2021 (also at SNN 2021)

GreedyPrune: Layer-wise optimization algorithms for magnitude-based pruning
Vinoth Nandakumar and JL
SNN 2021

Co2L: Contrastive continual learning
Hyuntak Cha, JL, and Jinwoo Shin
ICCV 2021

Provable memorization via deep neural networks using sub-linear parameters
Sejun Park, JL, Chulhee Yun, and Jinwoo Shin
COLT 2021 (also at DeepMath 2020)

Layer-adaptive sparsity for the magnitude-based pruning
JL, Sejun Park, Sangwoo Mo, Sungsoo Ahn, and Jinwoo Shin
ICLR 2021

Minimum width for universal approximation
Sejun Park, Chulhee Yun, JL, and Jinwoo Shin
ICLR 2021 spotlight (also at DeepMath 2020)

MASKER: Masked keyword regularization for reliable text classification
{Seung Jun Moon, Sangwoo Mo}eq, Kimin Lee, JL, and Jinwoo Shin
AAAI 2021

Learning bounds for risk-sensitive learning
JL, Sejun Park, and Jinwoo Shin
NeurIPS 2020

Learning from failure: Training debiased classifier from biased classifier
Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, JL, and Jinwoo Shin
NeurIPS 2020

Lookahead: A far-sighted alternative of magnitude-based pruning
{Sejun Park, JL}eq, Sangwoo Mo, and Jinwoo Shin
ICLR 2020

Learning finite-dimensional coding schemes with nonlinear reconstruction maps
JL and Maxim Raginsky

Minimax statistical learning with Wasserstein distances
JL and Maxim Raginsky
NeurIPS 2018 spotlight

On MMSE estimation from quantized observations in the nonasymptotic regime
JL, Maxim Raginsky, and Pierre Moulin
ISIT 2015

Jaeho Lee

ML researcher who also teaches. (firstname).(lastname) (at)