Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Portfolio

Publications

Superconcentrators of Density 25.3

Published in Ars Combinatoria 2018

Superconcentrators are graphs that offer good connectivity without having too many edges; the fewer edges, the better. We found a construction that makes them a little smaller than the previously best known ones.
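
For context, here is the standard definition, which the blurb leaves implicit (the notation below is mine):

```latex
% G has n inputs I and n outputs O. G is an n-superconcentrator iff:
\forall k \le n,\ \forall S \subseteq I,\ \forall T \subseteq O \text{ with } |S| = |T| = k:
\quad \exists\, k \text{ vertex-disjoint paths from } S \text{ to } T.
% The density of a family (G_n) is \limsup_n |E(G_n)| / n, so
% "density 25.3" means roughly 25.3n edges.
```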

The Complexity of General-Valued CSPs

Published in FOCS 2015, SIAM Journal on Computing 2016

Somewhat heavy theory: a strong result that completes the complexity classification of a large class of discrete optimization problems.
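
For readers outside the area, a general-valued CSP can be stated as follows (a generic textbook formulation, added here for context):

```latex
% Minimize a sum of cost functions over assignments from a finite domain D;
% cost value \infty encodes hard constraints.
\min_{x \in D^{V}} \ \sum_{i=1}^{m} \gamma_i\big(x_{v_{i,1}}, \dots, x_{v_{i,k_i}}\big),
\qquad \gamma_i : D^{k_i} \to \mathbb{Q} \cup \{\infty\}.
% The classification determines, for every fixed set of allowed cost
% functions, whether this minimization is tractable or NP-hard.
```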

Total variation on a tree

Published in SIAM Journal on Imaging Sciences 2016

A classical approach to image restoration reduces to solving combinatorial problems on tree structures. We solved these problems very efficiently.
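
The combinatorial problem in question has roughly the following shape (a generic form for illustration; the notation is mine, not the paper's):

```latex
% Total-variation regularized denoising on a tree T = (V, E),
% with observations y and regularization strength \lambda > 0:
\min_{x \in \mathbb{R}^{V}} \ \sum_{v \in V} (x_v - y_v)^2
\ +\ \lambda \sum_{(u,v) \in E} |x_u - x_v|.
```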

Even Delta-Matroids and the Complexity of Planar Boolean CSPs

Published in SODA 2017, journal version in ACM Transactions on Algorithms, 2018

Matching on general graphs can be generalized in terms of delta-matroids. These have connections to a planar version of constraint satisfaction problems. We developed a nontrivial generalization of the blossom algorithm that solves a large class of such problems.
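
For context, the standard definition (added here; not spelled out in the blurb): a delta-matroid is a set system obeying a symmetric exchange axiom, and it is even when all feasible sets have the same parity.

```latex
% A delta-matroid is a pair (V, \mathcal{F}) with
% \emptyset \neq \mathcal{F} \subseteq 2^V such that:
\forall A, B \in \mathcal{F},\ \forall x \in A \triangle B:\
\exists y \in A \triangle B \ \text{with}\ A \triangle \{x, y\} \in \mathcal{F}.
% (y = x is allowed.) It is even if |A| \equiv |B| \pmod{2} for all
% A, B \in \mathcal{F}. Example: the sets of vertices covered by some
% matching of a graph form an even delta-matroid.
```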

Efficient Optimization for Rank-based Loss Functions

Published in CVPR 2018 (oral); honorable mention for the best paper award

Direct optimization of rank-based metrics can be done via the structured SVM framework, but then there is a nasty algorithmic subproblem to deal with. We dealt with it in optimal runtime.
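
The subproblem in question is the loss-augmented inference step of the structured SVM; schematically (standard formulation, added for context):

```latex
% Structured hinge loss for input x with ground-truth output y*,
% score function F and task loss \Delta:
\max_{y \in \mathcal{Y}} \ \Big[ \Delta(y, y^{*}) + F(x, y; w) - F(x, y^{*}; w) \Big].
% For rank-based metrics such as average precision, \mathcal{Y} ranges over
% rankings and this maximization is itself a combinatorial problem.
```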

L4: Practical loss-based stepsize adaptation for deep learning

Published in NIPS 2018

Is Adam really the best we can do? A simple enough update rule can dramatically outperform Adam on some datasets. The optimizer turned out not to be very robust, but it had its moments, such as driving the training loss on MNIST to 0.0 in 20 epochs.
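
A minimal sketch of the core idea, in my own simplification rather than the paper's implementation: the update direction is the raw gradient, the attainable minimum is estimated optimistically from the running minimum of the loss, and the constants are my choices.

```python
import numpy as np

def loss_based_gd(loss, grad, theta, steps=100, alpha=0.15, gamma=0.9):
    """Toy loss-based stepsize adaptation in the spirit of L4 (sketch only)."""
    min_seen = loss(theta)
    for _ in range(steps):
        l, g = loss(theta), grad(theta)
        min_seen = min(min_seen, l)
        # Optimistic target below the best loss seen so far; gamma < 1
        # keeps the gap positive for nonnegative losses.
        l_min = gamma * min_seen
        # Choose eta so the linearized step closes a fraction alpha of the
        # gap to the target:  l - eta * ||g||^2  =  l - alpha * (l - l_min).
        eta = alpha * (l - l_min) / (g @ g + 1e-12)
        theta = theta - eta * g
    return theta

# Toy usage: a quadratic bowl; iterates head toward the origin.
theta = loss_based_gd(lambda t: 0.5 * (t @ t), lambda t: t, np.ones(10))
```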

Differentiation of Blackbox Combinatorial Solvers

Published in ICLR 2020 (spotlight)

Embed blackbox combinatorial solvers into neural networks without any sacrifices! How to turn C++ code for solving, e.g., the travelling salesman problem into a differentiable NN building block. Mostly theoretical work, with an update rule resembling classical SVM-based methods but revamped to make good theoretical sense for deep learning. The start of #blackboxbackprop.
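
The published update rule is simple enough to sketch in a few lines; below is a minimal PyTorch rendition (my condensation of the paper's scheme; `solver` is assumed to map a cost vector w to a minimizer y of ⟨w, y⟩ and to return a tensor):

```python
import torch

class BlackboxSolverLayer(torch.autograd.Function):
    """Blackbox differentiation of a combinatorial solver (lam > 0)."""

    @staticmethod
    def forward(ctx, w, solver, lam):
        y = solver(w)                      # one ordinary call to the solver
        ctx.save_for_backward(w, y)
        ctx.solver, ctx.lam = solver, lam
        return y

    @staticmethod
    def backward(ctx, grad_y):
        w, y = ctx.saved_tensors
        # Perturb the costs with the incoming gradient and solve again.
        y_perturbed = ctx.solver(w + ctx.lam * grad_y)
        # Gradient of a piecewise-linear interpolation of the solver map.
        return -(y_perturbed - y) / ctx.lam, None, None

# Usage: y = BlackboxSolverLayer.apply(costs, my_tsp_solver, 20.0)
```

The backward pass costs exactly one extra solver call, which is what makes wrapping heavyweight C++ solvers practical.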

Optimizing Rank-based Metrics with Blackbox Differentiation

Published in CVPR 2020 (oral); nominated for the best paper award

#blackboxbackprop allows differentiating through the ranking function out of the box. This, along with a few other tricks, allows for efficient optimization of rank-based metrics. With minimal implementation overhead, we obtain competitive results on metric learning benchmarks and on object detection.
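
Applied to ranking, the same two-call trick yields a drop-in differentiable ranker. A hedged sketch (ranking is treated as a minimizer of ⟨scores, π⟩ over permutations π, which matches the sign convention above; details may differ from the released code):

```python
import torch

def ranks(scores):
    """0-based ranks, rank 0 = highest score (piecewise constant in scores)."""
    return torch.argsort(torch.argsort(scores, descending=True)).float()

class BlackboxRanker(torch.autograd.Function):
    @staticmethod
    def forward(ctx, scores, lam):
        r = ranks(scores)
        ctx.save_for_backward(scores, r)
        ctx.lam = lam
        return r

    @staticmethod
    def backward(ctx, grad_r):
        scores, r = ctx.saved_tensors
        # Re-rank the perturbed scores; the difference gives the gradient.
        r_perturbed = ranks(scores + ctx.lam * grad_r)
        return -(r_perturbed - r) / ctx.lam, None
```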

Talks

Teaching