Explaining Blended Matching Pursuit: A Multi-Purpose AI Algorithm

By Cyrille Combettes

This is an informal summary of our recent paper Blended Matching Pursuit with my advisor Sebastian Pokutta. It will be presented at the Conference on Neural Information Processing Systems (NeurIPS) in Vancouver, British Columbia, Dec. 8-14, 2019. In this post, we motivate and explain the main ideas behind the design of our algorithm.


Most applications in machine learning have the minimization of a loss function at their core, whether in, e.g., market prediction (helping banks make better decisions), recommender systems (a streaming app finding new music for you), or object detection (detecting breast cancer from mammograms). To this end, we address the problem of minimizing a convex function f using linear combinations of points from a fixed set D, which is specific to the given problem. We refer to these points as atoms¹, and they represent the parametrization of our problem. Hence, we are interested in finding an ε-approximate minimizer of f that can be expressed with as few atoms as possible. When such a small representation is available, it tells us which atoms are most important to our problem. This property, known as sparsity, is essential in many machine learning applications for its numerous benefits, including higher interpretability and better performance. We summarize our general setup in Problem 1.

Problem 1. Given a convex function f, a set of atoms D, and an accuracy ε > 0, find an ε-approximate minimizer of f over the linear span of D that is a combination of as few atoms as possible.

¹ In typical machine learning applications, the atoms are often called the features of the problem.

Matching Pursuit Algorithms

The most popular method for minimizing a convex function is the gradient descent algorithm, whose origins can be traced back to Cauchy [1847]. At each iteration, it descends in the steepest descent direction $-\nabla f(x_t)$ and performs the update $x_{t+1} = x_t - \gamma_t \nabla f(x_t)$, where $\gamma_t > 0$ is a step size. However, this procedure potentially yields very poor sparsity, as $x_{t+1}$ may be a combination of many atoms whenever $\nabla f(x_t)$ is, even if $x_t$ is sparse. Hence, in order to ensure good sparsity in the solution, the formulation of the problem is often made more complex by using sparsity-inducing constraints (e.g., the lasso), which can require the tedious tuning of hyper-parameters.
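As a concrete baseline, the update above can be sketched in a few lines of NumPy. This is an illustrative toy, not code from the paper; the quadratic objective, step size, and iteration count are arbitrary choices:

```python
import numpy as np

def gradient_descent(grad_f, x0, step_size=0.1, num_iters=100):
    """Plain gradient descent: x_{t+1} = x_t - gamma_t * grad f(x_t).

    Note that a dense gradient makes every iterate dense, no matter
    how sparse x0 is -- the sparsity issue discussed above.
    """
    x = x0.copy()
    for _ in range(num_iters):
        x = x - step_size * grad_f(x)
    return x

# Minimize f(x) = ||x - b||^2 / 2, whose gradient is x - b.
b = np.array([1.0, -2.0, 3.0])
x_star = gradient_descent(lambda x: x - b, np.zeros(3))  # converges to b
```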

In the signal processing community, a popular method that avoids such a reformulation by inherently producing a sparse solution is the Matching Pursuit algorithm [Mallat and Zhang, 1993]. It starts from an arbitrary extremely sparse point $x_0$ (e.g., a single atom) and sequentially adds one atom $v_t$ to $x_t$ to form $x_{t+1}$, until the iterate becomes a sufficiently good approximate solution. Thus, after T iterations at most T atoms have been added, which is very significant in high-dimensional spaces. In recent work, Locatello et al. [2017] proposed the Generalized Matching Pursuit algorithm (GMP) and the Orthogonal Matching Pursuit algorithm (OMP) to solve Problem 1. These algorithms are presented in Algorithm 1, where Line 3 shows that $v_t$ is selected so as to maximize the alignment with the negative gradient $-\nabla f(x_t)$, thereby mimicking gradient descent while preserving sparsity, since only one atom is added per iteration. The set $S_t$ is the set of atoms collected in $x_t$, and its cardinality is the sparsity of $x_t$.

[Algorithm 1: the Generalized Matching Pursuit (GMP) and Orthogonal Matching Pursuit (OMP) algorithms]
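The atom selection just described can be sketched as follows. This is an illustrative sketch with assumed conventions (atoms stored as rows of a matrix, and a symmetric dictionary containing both $v$ and $-v$), not the authors' code:

```python
import numpy as np

def select_atom(grad, atoms):
    """Pick the atom most aligned with the negative gradient.

    `atoms` holds one atom per row; with a symmetric dictionary
    (v in D implies -v in D), this also maximizes |<grad, v>|.
    """
    scores = atoms @ (-grad)      # <v, -grad f(x_t)> for each atom v
    return atoms[np.argmax(scores)]

# Over the signed canonical vectors, the selected atom flags the
# largest-magnitude coordinate of the gradient.
D = np.vstack([np.eye(3), -np.eye(3)])
v = select_atom(np.array([0.5, -2.0, 1.0]), D)  # returns e_2 = [0, 1, 0]
```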

Below we provide an illustration of the GMP procedure.

[Figure: illustration of the GMP procedure]

GMP performs the update $x_{t+1} = x_t + \gamma_t v_t$, where $\gamma_t$ is obtained by line search (Line 5 in Algorithm 1). However, this does not prevent GMP from selecting atoms that have already been added in previous iterations or that are redundant. This is resolved in OMP, which reoptimizes f at each iteration over the linear subspace spanned by all the selected atoms (Line 6 in Algorithm 1), but this is very expensive. Thus, GMP and OMP have opposite behaviors: GMP converges much faster, while OMP produces iterates with much higher sparsity. That is, GMP finds a point $x_m$ with sparsity m satisfying $f(x_m) - \min_{\mathrm{lin}(D)} f \leq \varepsilon$ faster (in time) than OMP, but the point that OMP ends up finding has a much smaller m. Therefore, it is not possible to enjoy both speed and sparsity simultaneously, and, in practice, we must trade one for the other.
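The contrast between the two updates can be made concrete on a toy quadratic with canonical atoms. This is an assumed setup for illustration, not the paper's experiments; on this separable objective the two steps happen to produce the same first iterate, but OMP pays the cost of re-solving a least-squares problem over the selected atoms at every iteration:

```python
import numpy as np

# Toy objective f(x) = ||x - b||^2 / 2 over the signed canonical vectors.
b = np.array([3.0, -1.0, 0.5, 0.0])
grad_f = lambda x: x - b

def gmp_step(x):
    """One GMP step: add the best-aligned atom with an exact line search."""
    g = grad_f(x)
    i = np.argmax(np.abs(g))                   # best-aligned atom is +/- e_i
    v = -np.sign(g[i]) * np.eye(len(x))[i]
    gamma = -g @ v                             # exact line search for this quadratic
    return x + gamma * v

def omp_step(x, selected):
    """One OMP step: add an atom, then reoptimize f over lin(S)."""
    g = grad_f(x)
    selected = selected | {int(np.argmax(np.abs(g)))}
    A = np.eye(len(x))[:, sorted(selected)]    # selected atoms as columns
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return A @ coeffs, selected

x_gmp = gmp_step(np.zeros(4))                  # one cheap step
x_omp, S = omp_step(np.zeros(4), set())        # same point, via a full re-solve
```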

Blended Matching Pursuit

In our paper, we unify the best of both algorithms and design the Blended Matching Pursuit algorithm (BMP). Noting that an OMP iteration is actually a sequence of projected gradient (PG) steps, i.e., gradient descent steps with projections back onto $\mathrm{lin}(S_t)$, the span of the selected atoms, our key observation is that this sequence is overkill: a sweet spot exists where it can be truncated and replaced with one GMP step. After some number of PG steps, additional PG steps will not decrease the function value significantly, and we can afford to add one atom and take a GMP step. Essentially, by proceeding this way one can achieve the same sparsity as OMP while converging much faster. This idea is at the core of the design of our BMP algorithm.

In order to combine the PG and GMP steps in a smooth and efficient manner, we define dual gap estimates $\Phi_t$. Since we want to favor PG steps, which do not add a new atom and hence preserve the sparsity of the iterates, the dual gap estimates monitor the blending of steps by establishing at each iteration a threshold on the progress required to take a PG step: BMP takes a PG step if its progress guarantee satisfies this requirement; otherwise, BMP takes a GMP step. We refer the interested reader to our paper for the technical details.

We establish convergence results for BMP and validate in experiments that BMP converges faster than GMP while producing iterates with sparsity equivalent to that of OMP. This is of fundamental interest for practitioners. In particular, we establish linear convergence rates² for a wide range of non-strongly convex functions via a sharpness property around the set of minimizers of f. Interestingly, most well-behaved convex functions satisfy this property [Bolte et al., 2007], which subsumes strong convexity. We further compare BMP to state-of-the-art algorithms, including Conditional Gradient with Enhancement and Truncation (CoGEnT) [Rao et al., 2015], Blended Conditional Gradients (BCG) [Braun et al., 2019], and Accelerated Matching Pursuit (accMP) [Locatello et al., 2018]. The conclusions are the same: BMP converges the fastest while its sparsity is close to optimal, the optimal sparsity being that of OMP.

Computational Experiments

To conclude this post, we present two computational experiments. The first compares BMP vs. GMP, OMP, BCG, and CoGEnT on the minimization of an arbitrarily chosen convex function f over the set of canonical vectors $D = \{e_1, \ldots, e_n\}$, where $e_i$ denotes the i-th standard basis vector. The plot on the left shows the convergence speed in number of iterations, the plot in the middle shows the convergence speed in time, and the plot on the right shows the convergence speed in number of atoms, i.e., the sparsity of the algorithm. We can see in the middle plot that GMP (blue) converges faster than OMP (green), while the plot on the right shows that OMP achieves higher sparsity, as described earlier. BMP (orange) converges the fastest in time and achieves sparsity equivalent to that of OMP.

[Figure: convergence of BMP, GMP, OMP, BCG, and CoGEnT in number of iterations (left), time (middle), and number of atoms (right)]

² Strong convexity is a standard requirement for obtaining linear convergence rates; however, it considerably restricts the set of candidate functions. For BMP, these rates can be obtained without the strong convexity requirement.

The second experiment compares BMP vs. accMP on an experiment from Locatello et al. [2018], consisting in minimizing a convex function f over a random dictionary D of 200 atoms. Our code framework is consistent with their implementation.

[Figure: convergence of BMP and accMP]


J. Bolte, A. Daniilidis, and A. Lewis. The Łojasiewicz inequality for nonsmooth subanalytic functions with applications to subgradient dynamical systems. SIAM Journal on Optimization, 17(4):1205‒1223, 2007.

G. Braun, S. Pokutta, D. Tu, and S. Wright. Blended conditional gradients: the unconditioning of conditional gradients. In Proceedings of the 36th International Conference on Machine Learning, pages 735‒743, 2019.

A. Cauchy. Méthode générale pour la résolution des systèmes d’équations simultanées.  In Comptes rendus hebdomadaires des séances de l’Académie des sciences, pages 536‒538, 1847.

C. W. Combettes and S. Pokutta. Blended matching pursuit. In Advances in Neural Information Processing Systems 32, 2019. To appear.

F. Locatello, R. Khanna, M. Tschannen, and M. Jaggi. A unified optimization view on generalized matching pursuit and Frank-Wolfe. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, pages 860–868, 2017.

F. Locatello, A. Raj, S. P. Karimireddy, G. Rätsch, B. Schölkopf, S. U. Stich, and M. Jaggi. On matching pursuit and coordinate descent. In Proceedings of the 35th International Conference on Machine Learning, pages 3198‒3207, 2018.

S. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397‒3415, 1993.

N. Rao, S. Shah, and S. Wright. Forward-backward greedy algorithms for atomic norm regularization. IEEE Transactions on Signal Processing, 63(21):5798‒5811, 2015.

