I am looking for self-motivated graduate students and postdocs who are interested in making ML more efficient.


Simply put, I am trying to figure out how to survive the age of large-scale ML models: neural nets are growing rapidly in size, making training, inference, and other costs unaffordable. This trend opens up many research questions that are left for us to answer, including:

  • Compression: How can we make an already-trained model smaller, so that everybody (even with tiny hardware) can use it?
  • TinyML: How can we effectively train a neural net on a tiny device? If we can communicate with a large-scale model, how much can it help?
  • Efficient Models: Can we design a new model that is both expressive enough to learn the target function and light enough to be trained on a small budget?

I hope to use every tool at hand (algorithms, theory, system-level tricks) to address this challenge.

NOTE: If your interest lies outside efficient ML but you would still like to work with me, feel free to contact me and we can figure out what other options there are.


Please send me an email right away: (firstname).(lastname) (at)
To make things faster, please attach a CV, transcript, portfolio, or anything similar.

This is not a part of the formal application process (i.e., you should still apply to POSTECH EE formally), but I can walk you through the problems I am working on and this year's recruiting policy.

While waiting for my reply, you can take a look at these notes: For Graduate Students | For Postdocs

Jaeho Lee

ML researcher who also teaches. (firstname).(lastname) (at)