Elad Eban

אלעד אבן

I am a research scientist at Google AI Perception in Mountain View, California. I am always learning, and also trying to help machines learn faster and better. My main research interests vary over time and include deep-network structure learning, probabilistic modeling, robust multi-label classification, and weakly supervised learning.

I received my Ph.D. from the School of Computer Science and Engineering at The Hebrew University of Jerusalem. My advisors were Prof. Amir Globerson and Prof. Shai Shalev-Shwartz.

In the past I was more interested in Quantum Computation, a field in which I completed my M.Sc. under the supervision of Prof. Dorit Aharonov, graduating magna cum laude. Before that, I completed my B.Sc. degree in Computer Science and Computational Biology summa cum laude.

Please see my LinkedIn profile for more information, or my Google Scholar page for the full list of publications.

Projects

Neural Network Structure Learning

The design of deep neural networks has traditionally been a tedious, manual process of trial and error, driven largely by intuition and lacking rigorous methodology. On the other hand, recent powerful automated architecture search algorithms tend to be prohibitively expensive to run, and their results depend on a manually crafted search space and proxy tasks which are, again, based on intuition.

To address this, we invented MorphNet, a simple architecture search algorithm which learns the number of neurons in each layer jointly with the model weights, in a single training run. The approach has minimal overhead, which allows it to be applied to billion-item datasets, and is flexible enough to handle (almost) any type of deep architecture. The method can be set to target a specific constrained resource, such as FLOPs or model size, when designing the architecture.

The MorphNet algorithm was used to speed up and improve architectures in various modalities on academic benchmarks, and is used by multiple projects within Google.
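The mechanics, at a high level: a resource-weighted sparsifying regularizer is placed on per-channel scale factors (e.g., batch-norm gammas), channels whose scale is driven to zero are pruned, and the surviving layers can then be widened uniformly to spend the reclaimed budget. The snippet below is only a rough sketch of that idea in plain NumPy; the function name, the per-channel cost inputs, and the thresholding scheme are illustrative assumptions, not the released MorphNet library.

import numpy as np

def flop_weighted_sparsity_penalty(gammas, flops_per_channel, strength=1e-3):
    # gammas: list of per-layer arrays of batch-norm scale factors (one per channel).
    # flops_per_channel: matching list of scalar FLOP costs for keeping one
    # channel of that layer alive (depends on kernel and activation sizes).
    # The cost-weighted L1 penalty pushes expensive channels toward zero.
    penalty = 0.0
    for g, cost in zip(gammas, flops_per_channel):
        penalty += cost * np.abs(g).sum()
    return strength * penalty

# Training would minimize task_loss + flop_weighted_sparsity_penalty(...);
# afterwards, channels with near-zero gamma are removed, and the remaining
# widths can be multiplied by a uniform factor to re-expand the network.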

MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks. Ariel Gordon, Elad Eban, Ofir Nachum, Bo Chen, Hao Wu, Tien-Ju Yang, Edward Choi. CVPR 2018.

Faster Neural Nets with Hardware-Aware Architecture Learning. Elad Eban, Yair Movshovitz-Attias, Andrew Poon, Max Moroz. GTC 2019 [slides].

Fine-grained stochastic architecture search. Shraman Ray Chaudhuri, Elad Eban, Hanhan Li, Max Moroz, Yair Movshovitz-Attias. [arXiv]

Used in other Google papers:

Sky Optimization: Semantically Aware Image Processing of Skies in Low-Light Photography. Orly Liba, Longqi Cai, Yun-Ta Tsai, Elad Eban, Yair Movshovitz-Attias, Yael Pritch, Huizhong Chen, Jonathan T. Barron. CVPRW 2020.

Computationally efficient neural image compression. Nick Johnston, Elad Eban, Ariel Gordon, Johannes Ballé. [arXiv 2019]

Code [NEW March 2019]

Get What You Train For: Learning With Objectives That Match Your Evaluation Metrics

Almost all machine learning classification models are optimized for classification accuracy via a cross-entropy surrogate. However, most often the real objective one cares about is different: precision at a fixed recall, recall at a fixed precision, precision-recall AUC, ROC AUC, or a similar metric. These metrics are non-decomposable (they depend on how the model classifies the dataset as a whole and do not decouple across data points) and non-differentiable, so they cannot be used directly as training objectives.

To bridge the gap between the training objective and the evaluation metric, we derived simple, differentiable surrogates for these metrics that rely on convex relaxations and a bit of optimization.
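As a rough illustration (an assumption-laden sketch, not the released code), the precision-at-recall case can be handled by upper-bounding false positives and lower-bounding true positives with hinge losses, and by enforcing the recall constraint through a Lagrange multiplier that is updated by dual ascent alongside the model weights. The function and variable names below are hypothetical.

import numpy as np

def precision_at_recall_surrogate(scores, labels, target_recall, lam):
    # Surrogate for "maximize precision subject to recall >= target_recall".
    # labels are 0/1, scores are real-valued model outputs, and lam >= 0 is
    # the Lagrange multiplier for the recall constraint.
    hinge = lambda z: np.maximum(0.0, 1.0 + z)
    pos, neg = labels == 1, labels == 0
    n_pos = pos.sum()
    fp_upper = hinge(scores[neg]).sum()            # upper bound on false positives
    tp_lower = n_pos - hinge(-scores[pos]).sum()   # lower bound on true positives
    # Lagrangian of: minimize FP subject to TP >= target_recall * n_pos.
    loss = fp_upper + lam * (target_recall * n_pos - tp_lower)
    constraint_violation = target_recall * n_pos - tp_lower
    return loss, constraint_violation

# Each step: take a gradient step on `loss` with respect to the model, then
# update the multiplier: lam = max(0.0, lam + dual_lr * constraint_violation).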

Models trained with our loss functions are used by dozens of teams across Google, improving image classification, spam and phishing detection, and other products.

Scalable Learning of Non-Decomposable Objectives. Elad Eban, Mariano Schain, Alan Mackey, Ariel Gordon, Ryan Rifkin, Gal Elidan. AISTATS 2017. [poster, slides, more]

In a follow-up work we used quantile estimation to provide simple and effective loss functions for precision at k and related metrics.

Constrained Classification and Ranking via Quantiles. Alan Mackey, Xiyang Luo, Elad Eban. arXiv 2018.
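To illustrate the quantile idea (again only a sketch under my own naming, not the paper's code): the threshold that selects the top k examples is the k-th largest score, i.e., an empirical quantile of the score distribution, and hinge losses can then be applied relative to that threshold.

import numpy as np

def precision_at_k_surrogate(scores, labels, k):
    # theta is the k-th largest score, the empirical quantile that separates
    # the top-k predictions from the rest; it is treated as a constant here.
    theta = np.partition(scores, -k)[-k]
    hinge = lambda z: np.maximum(0.0, 1.0 + z)
    pos, neg = labels == 1, labels == 0
    # Penalize negatives that land inside the top k and positives that fall outside it.
    return hinge(scores[neg] - theta).sum() + hinge(theta - scores[pos]).sum()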


Memory Consolidation

I was fortunate to work again with Gideon Rothschild, contributing to the analysis of the connections between ripple events and memory consolidation (in actual brains, not deep networks).

A cortical–hippocampal–cortical loop of information processing during memory consolidation. Gideon Rothschild, Elad Eban, Loren M Frank. Nature Neuroscience 2017.

Prehistory

Interactive Proofs for Quantum Computations

I was fortunate to attend a talk by Guy Rothblum, which inspired Dorit, Michael, and me to ask whether universal quantum computation (BQP) has a zero-knowledge proof for a classical verifier.

Interactive Proofs For Quantum Computations. Dorit Aharonov, Michael Ben-Or, Elad Eban. ICS 2008.

Fun fact: This work shared the $25 Aaronson Prize (for a total of $12).

Quantum Braids and Polynomials

During my M.Sc. I was exposed to the complex and wonderful world of knot invariants, graph polynomials, and quantum computation. Working with Dorit and Itai, I managed to (re)discover some non-unitary representations of the braid group, which became a small part of the 86-page manuscript.

Polynomial quantum algorithms for additive approximations of the Potts model and other points of the Tutte plane. Dorit Aharonov, Itai Arad, Elad Eban, Zeph Landau. 2007.

Fun fact: Over the years I learned first how to prepare braids for bread, then hair, and finally (2020) I'm going for wood.