Jaeho Lee
Postdoctoral researcher @ ALINLAB, a research group led by Prof. Jinwoo Shin at KAIST.
Contact: jaeholee [at] kaist [dot] ac [dot] kr
Research interests
I like to ponder how operational constraints (e.g., robustness, fairness, risk-sensitivity, sparsity) alter the generalization properties of learning algorithms. My research tends to lie in
span{ Statistical learning theory, Theory of deep learning, Neural network pruning, High-dimensional statistics }.
Education
 Ph.D. in ECE@UIUC, May 2019 (advisor: Maxim Raginsky).
 Thesis title: Robustness and generalization guarantees for statistical learning of generative models.
 M.S. in ECE@UIUC, December 2015.
 B.S. in EE+MS@KAIST, summa cum laude, February 2013 (advisor: Yung Yi).
Publications
 Learning bounds for risk-sensitive learning
J.L., Sejun Park, and Jinwoo Shin
under review, 2020.
 Minimal width for universal approximation
Sejun Park, Chulhee Yun, J.L., and Jinwoo Shin
under review, 2020.
 Learning from failure: Training debiased classifier from biased classifier
Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, J.L., and Jinwoo Shin
under review, 2020.
 A deeper look into the layerwise sparsity of magnitude-based pruning
J.L., Sejun Park, Sangwoo Mo, Sungsoo Ahn, and Jinwoo Shin
under revision, 2020.
 Lookahead: A far-sighted alternative of magnitude-based pruning
Sejun Park*, J.L.*, Sangwoo Mo, and Jinwoo Shin
ICLR 2020 (* equal contribution).
 Learning finite-dimensional coding schemes with nonlinear reconstruction maps
J.L. and Maxim Raginsky
SIAM Journal on Mathematics of Data Science (SIMODS) 2019.
 Minimax statistical learning with Wasserstein distances
J.L. and Maxim Raginsky
NeurIPS 2018.
 On MMSE estimation from quantized observations in the nonasymptotic regime
J.L., Maxim Raginsky, and Pierre Moulin
ISIT 2015.
Teaching
Invited Talks
 “Lookahead: A far-sighted alternative of magnitude-based pruning,” ICLR social: ML researchers in/interested in Korea, May 2020.
 “Minimax Statistical Learning with Wasserstein distances,” IFORS, June 2020 (postponed to 2021 due to COVID-19).
 “Statistical learning perspectives on neural nets (and pruning them),” POSTECH CS Seminar, December 2019.
 “Minimax Statistical Learning with Wasserstein distances,” INFORMS annual meeting, October 2019.
 “Minimax Learning: with implications on domain adaptation and adversarial attack,” Naver Tech Talk, January 2019.
Awards & Honors
 Travel awards: ISIT 2015, NeurIPS 2018, University of Illinois Graduate College Fall 2018.
 ASAN fellow 2013.
 National Science and Engineering Undergraduate Scholarship, Korea, 2009–2012.
Refereeing
 Conferences: {ACML, NeurIPS, ICML, AISTATS}_{2019}, {AAAI, ICML, AISTATS, IJCAI, NeurIPS}_{2020}, {ICLR}_{2021}.
 Journals: {Machine Learning}.
Miscellaneous
 I worked as an ASAN fellow at The Heritage Foundation from May to July 2013.
 I was a founder-librarian at Urbana nanolibrary (now defunct), where I lent 300+ books to 40+ members, from December 2013 to August 2018.
(last updated: June 16, 2020.)