Here are some boring details about me.


Ph.D. (2019) ECE@UIUC (Advisor: Maxim Raginsky)
M.S. (2015) ECE@UIUC (Advisor: Maxim Raginsky)
B.S. (2013) {EE,MS}1@KAIST (Advisor: Yung Yi) summa cum laude


Mar 2019–Mar 2022 Postdoctoral researcher, ALIN@KAIST (Host: Jinwoo Shin)
May 2013–Jul 2013 ASAN fellow, Center for Data Analysis@The Heritage Foundation (Supervisors: James Sherk and Salim Furth)


FA17 TA, ECE563@UIUC Information theory.
SP17 TA, ECE498@UIUC Introduction to stochastic systems.
FA15 TA, ECE598@UIUC Statistical learning theory.


  • “A search for smaller neural networks… hiding inside a bigger net,” Keynote at KAIST AI workshop 21/22, Jan 2022.
  • “Learning guarantees under distributional shifts: Wasserstein perturbation and Conditional Value-at-Risk,” IFORS, August 2021.
  • “Approximation power of neural networks: old and new,” UNIST AI, May 2021.
  • “Learning bounds for risk-sensitive learning,” NeurIPS social: ML in Korea, Dec 2020.
  • “A deeper look at the layerwise sparsity of magnitude-based pruning,” Sparsity reading group @ Google, Nov 2020.
  • “Lookahead: A far-sighted alternative of magnitude-based pruning,” ICLR social: ML researchers in/interested in Korea, May 2020.
  • “Statistical learning perspectives on neural nets (and pruning them),” Postech AI, Dec 2019.
  • “Minimax statistical learning with Wasserstein distances,” INFORMS annual meeting, Oct 2019.
  • “Minimax learning: with implications on domain adaptation and adversarial attack,” Naver, Jan 2019.


  • Conferences: {ACML, NeurIPS, ICML, AISTATS}2019, {AAAI, ICML, AISTATS, IJCAI, NeurIPS}2020, {ICLR, AAAI, AISTATS, ICML, NeurIPS}2021.
  • Journals: {Machine Learning, IEEE ToN}.


  • From December 2013 to August 2018, I was a founder-librarian at the Urbana nanolibrary (now defunct), where I lent 300+ books to 40+ members. The books are now at the Champaign public library.

  1. Double-majored in electrical engineering and management science. ↩︎

Jaeho Lee

ML researcher who also teaches. (firstname).(lastname) (at) postech.ac.kr