SMALL 2011 London Workshop
"We would like to thank all the participants, panelists, speakers and posters authors for making the SMALL Workshop on Sparse Dictionary Learning an enjoyable scientific event of high quality. " -The SMALL organization team
UPDATES:
-The slides and videos of the presentations are available on the dedicated webpage.
-The posters of the workshop are available in the Posters section.
-You will find photos of the event in the Pictures section.
The SMALL project members are organizing a two-day workshop at Queen Mary University of London in January 2011, during which challenging ongoing questions about dictionary learning will be explored.
- Introduction
- Pictures
- Important Dates
- Programme
- List of delegates
- Invited speakers
- Posters
- Catering information
- Arrival in London
- Venue Information
- Getting around in London
- Hotels
- Contact Information
Sparse signal models have been successfully applied by the signal processing community to a variety of applications. They rely on the selection of a sparsifying dictionary that matches the signal characteristics and thus yields a sparse representation. While the dictionary is often selected from a pool of pre-defined dictionaries, e.g. wavelets and the DCT, this does not always exploit the properties of the signal. An alternative is to learn the sparsifying dictionary from the data. This flexible approach to sparse modelling has the potential to yield efficient methods that scale well with dimension and incorporate knowledge about the nature and structure of the data.
In this workshop, we aim to present existing work in dictionary learning methods, discuss how they address the need to develop fast and structured algorithms, and consider future directions for this exciting and challenging field.
The SMALL Workshop on Sparse Dictionary Learning is co-sponsored by the EPSRC network INSPIRE as well as the European Commission under FET-Open grant number 225913.
The archive containing the photos from the workshop can be downloaded here:
Registration | Closed
Workshop | 6 & 7 January 2011
Details of the presentations can be found in the Invited Speakers section.
Day 1: Thursday 6 January 2011
09:30 | Tea & Coffee - Registration
10:00 | Mark Plumbley – Welcome and Opening Remarks
10:15 | Francis Bach (INRIA-ENS) – Structured Sparsity-inducing Norms through Submodular Functions
11:00 | Tea & Coffee
11:30 | Mike Davies (University of Edinburgh) – Rank Aware Algorithms for Joint Sparse Recovery
12:15 | Holger Rauhut (University of Bonn) – Compressive Sensing and Structured Random Matrices
13:00 | Lunch & Posters
14:30 | Pierre Vandergheynst (EPFL) – Transductive Learning with Spectral Graph Wavelets
15:15 | Tea & Coffee
15:45 | Karin Schnass (RICAM) – Dictionary Identification - Sparse Matrix-Factorisation via L1-Minimisation
16:30 | Round up & Discussion
17:00 | Retire to local pub, followed by dinner
19:30 | Dinner at Bengal Village (more information in the Catering Information section)
Day 2: Friday 7 January 2011
09:30 | Tea & Coffee
10:00 | John Wright (Microsoft Research Asia) – Dictionary Learning via L1 Minimization: Well-posedness and Correct Recovery
10:45 | Michael Elad (Technion) – Exploiting Statistical Dependencies in Sparse Representations
11:30 | Tea & Coffee
12:00 | Kjersti Engan (University of Stavanger) – Dictionary Learning by RLS-DLA with Applications
12:45 | Lunch & Posters
14:15 | Future Directions – Discussion Panel (Chair: Mark Plumbley)
15:15 | Round up & Discussion
16:00 | Finish
Continue discussion in local pub (optional)
The list of people who attended the workshop is available here:
All the talks at the workshop were filmed and are accessible here:
Videos of the presentations
The slides of the presentations are also accessible here: Slides of the presentations
Francis Bach (INRIA-ENS)
Structured Sparsity-inducing Norms through Submodular Functions
Sparse methods for supervised learning aim at finding good linear predictors from as few variables as possible, i.e., with small cardinality of their supports. This combinatorial selection problem is often turned into a convex optimization problem by replacing the cardinality function by its convex envelope (tightest convex lower bound), in this case the L1-norm. In this paper, we investigate more general set-functions than the cardinality, that may incorporate prior knowledge or structural constraints which are common in many applications: namely, we show that for nondecreasing submodular set-functions, the corresponding convex envelope can be obtained from its Lovasz extension, a common tool in submodular analysis. This defines a family of polyhedral norms, for which we provide generic algorithmic tools (subgradients and proximal operators) and theoretical results (conditions for support recovery or high-dimensional inference). By selecting specific submodular functions, we can give a new interpretation to known norms, such as those based on rank-statistics or grouped norms with potentially overlapping groups; we also define new norms, in particular ones that can be used as non-factorial priors for supervised learning.
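For readers who want to experiment with the construction above, here is a minimal sketch (ours, not the speaker's code) of the norm induced by a nondecreasing submodular set-function F through its Lovasz extension; the function names are illustrative.

    import numpy as np

    def submodular_norm(w, F):
        # Lovasz extension of F evaluated at |w|: visit coordinates in
        # decreasing order of |w| and charge each one F's marginal gain.
        order = np.argsort(-np.abs(w))
        norm, prev = 0.0, 0.0
        for i in range(len(w)):
            cur = F(frozenset(order[:i + 1]))  # F of the (i+1) largest coordinates
            norm += np.abs(w[order[i]]) * (cur - prev)
            prev = cur
        return norm

    w = np.array([0.5, -2.0, 1.0])
    print(submodular_norm(w, len))                       # F = cardinality gives the L1 norm: 3.5
    print(submodular_norm(w, lambda A: min(len(A), 1)))  # F = min(|A|, 1) gives the sup norm: 2.0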
John Wright (Microsoft Research Asia)
Dictionary Learning via L1 Minimization: Well-posedness and Correct Recovery
We discuss the problem of decomposing a given data matrix as a product of some unknown basis and a matrix of sparse coefficients. This problem finds applications in source separation, and is also attracting significant interest in the sparse representation community. We discuss circumstances under which this problem is well-posed, as well as situations under which it can be solved by efficient algorithms. In particular, we show that under natural assumptions the problem is indeed globally well-posed, even with fairly small sets of observations (size proportional to the size of the dictionary to be recovered, times a logarithmic oversampling factor). We further show that under similar circumstances, L1 norm minimization locally recovers the desired solution. This extends results of Gribonval and Schnass to the case of nonsquare dictionaries. We discuss circumstances under which this recovery might actually be expected to be globally optimal, and give several example applications in hyperspectral imaging.
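In symbols, a common way to write the recovery problem discussed here (our hedged transcription, not a formula from the talk) is

    \min_{D, X} \|X\|_1 \quad \text{subject to} \quad DX = Y, \qquad \|d_k\|_2 = 1 \ \text{for every atom } d_k,

and the results above describe when the generating pair (D, X) is, at least locally, the solution.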
Holger Rauhut (University of Bonn)
Compressive Sensing and Structured Random Matrices
Compressive sensing is a recent paradigm in signal processing and sampling theory which predicts that sparse signals can be recovered from a small number of linear and non-adaptive measurements using convex optimization or greedy algorithms. Quite remarkably, all good constructions of the so-called measurement matrix known so far are based on randomness. While Gaussian random matrices provide optimal recovery guarantees, such unstructured matrices are of limited use in applications. Indeed, structure often allows fast matrix-vector multiplication, which is crucial in order to speed up recovery algorithms and to deal with large scale problems. The talk discusses models of structured random matrices that are useful in certain applications, and presents corresponding recovery guarantees. An important type of structured random matrix arises in connection with sampling sparse expansions in terms of bounded orthogonal systems (such as the Fourier system). The second type of structured random matrices to be discussed are partial random circulant matrices, that is, matrices arising from convolution. In particular, we present recent results with J. Romberg and J. Tropp on the restricted isometry property of such matrices.
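As an illustration of the second matrix type, here is a minimal sketch (our construction, with arbitrary parameter choices) of a partial random circulant measurement operator; the FFT implementation is exactly the fast matrix-vector multiply referred to above.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 256, 64
    eps = rng.choice([-1.0, 1.0], size=n)        # random generating (sign) sequence
    keep = rng.choice(n, size=m, replace=False)  # which convolution outputs to retain

    def measure(x):
        # Circular convolution of x with eps, then subsampling; costs
        # O(n log n) via the FFT rather than O(n m) for a dense matrix.
        conv = np.fft.ifft(np.fft.fft(eps) * np.fft.fft(x)).real
        return conv[keep]

    x = np.zeros(n)
    x[[3, 70, 200]] = [1.0, -2.0, 0.5]  # a 3-sparse test signal
    y = measure(x)                      # m = 64 structured random measurements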
Kjersti Engan (University of Stavanger)
Dictionary Learning by RLS-DLA with Applications
The Recursive Least Squares Dictionary Learning Algorithm (RLS-DLA) is an online/adaptive approach, i.e. it updates the dictionary for every new training vector. By including a forgetting factor in the algorithm, we get a search-then-converge scheme that is much less dependent on the initial dictionary than some other dictionary learning algorithms. This talk will present the algorithm and discuss some of its properties.
Experiments on texture classification using the Frame Texture Classification Method (FTCM), with dictionaries learned by RLS-DLA, will be presented. We will also show experiments from other applications, such as compression and denoising of images.
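A condensed sketch of the recursive update at the heart of RLS-DLA, as we read it from the published algorithm (the sparse coding step below is a crude stand-in, and lam is the forgetting factor mentioned above):

    import numpy as np

    def sparse_code(D, x, s):
        # Illustrative s-sparse coder only: pick the s atoms most correlated
        # with x, then least-squares fit on that support.
        idx = np.argsort(-np.abs(D.T @ x))[:s]
        w = np.zeros(D.shape[1])
        w[idx] = np.linalg.lstsq(D[:, idx], x, rcond=None)[0]
        return w

    def rls_dla(X, K, s, lam=0.99, seed=0):
        rng = np.random.default_rng(seed)
        d, N = X.shape
        D = rng.standard_normal((d, K))
        D /= np.linalg.norm(D, axis=0)      # random initial dictionary
        C = np.eye(K)                       # recursively maintained 'inverse' matrix
        for t in range(N):                  # one new training vector per step
            x = X[:, t]
            w = sparse_code(D, x, s)
            r = x - D @ w                   # representation error
            u = C @ w
            alpha = 1.0 / (lam + w @ u)
            D = D + alpha * np.outer(r, u)  # rank-one dictionary update
            C = (C - alpha * np.outer(u, u)) / lam  # forgetting factor lam <= 1
        return D / np.linalg.norm(D, axis=0)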
Karin Schnass (RICAM)
Dictionary Identification - Sparse Matrix-Factorisation via L1-Minimisation
The talk will be about the problem of learning a dictionary providing sparse representations for a given signal class, via l1-minimisation. The problem can also be seen as factorising a d x N matrix Y = (y_1, ..., y_N), y_n in R^d, of training signals into a d x K dictionary matrix D and a sparse K x N coefficient matrix X = (x_1, ..., x_N), x_n in R^K. The exact question studied here is when a dictionary-coefficient pair (D, X) can be recovered as a local minimum of a (nonconvex) l1-criterion with input Y = DX. First, for general dictionaries and coefficient matrices, algebraic conditions ensuring local identifiability are derived; these are then specialised to the case when the dictionary is a basis. Finally, assuming a random Bernoulli-Gaussian sparse model on the coefficient matrix, it is shown that sufficiently incoherent bases are locally identifiable with high probability. The perhaps surprising result is that the typically sufficient number of training samples N grows, up to a logarithmic factor, only linearly with the signal dimension, i.e. N ~ C K log K, in contrast to previous approaches requiring combinatorially many samples.
Mike Davies (University of Edinburgh)
Rank Aware Algorithms for Joint Sparse Recovery
This talk will revisit the sparse multiple measurement vector (MMV) problem, where the aim is to recover a set of jointly sparse multichannel vectors from incomplete measurements. This problem has received increasing interest as an extension of single channel sparse recovery, which lies at the heart of the emerging field of compressed sensing. However, MMV approximation also has links with work on direction of arrival estimation in the field of array signal processing. Inspired by these links, we consider a new family of MMV algorithms based on the well-known MUSIC method in array processing, highlighting the role of the rank of the unknown signal matrix in determining the difficulty of the recovery problem. We will show that the recovery conditions for our new algorithms take into account the observed rank of the signal matrix. This is in contrast to previously proposed techniques, such as Simultaneous Orthogonal Matching Pursuit (SOMP) and mixed norm minimization, which we show to be effectively blind to the rank information. Numerical simulations demonstrate that our new rank aware techniques are significantly better than existing methods in dealing with multiple measurements.
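To illustrate the role of rank, here is a sketch of the MUSIC principle in the favourable full-rank case (a toy illustration, not the talk's algorithm): when rank(X) = k, the column span of Y = AX already identifies the support.

    import numpy as np

    def music_support(A, Y, k):
        # Signal subspace: the span of Y, of dimension k when rank(X) = k.
        U = np.linalg.svd(Y, full_matrices=False)[0][:, :k]
        # Relative distance of each column of A from that subspace.
        resid = np.linalg.norm(A - U @ (U.T @ A), axis=0) / np.linalg.norm(A, axis=0)
        return np.sort(np.argsort(resid)[:k])  # the k columns closest to the subspace

    rng = np.random.default_rng(1)
    n, N, k, L = 20, 40, 3, 5
    A = rng.standard_normal((n, N))
    X = np.zeros((N, L))
    X[[2, 11, 30]] = rng.standard_normal((k, L))  # jointly 3-sparse, rank 3
    print(music_support(A, A @ X, k))             # recovers the support [2 11 30]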
Michael Elad (Technion)
Exploiting Statistical Dependencies in Sparse Representations
In commonly used sparse representation modeling, the atoms are assumed to be independent of each other when forming the signal. In this talk we shall introduce a statistical model called the Boltzmann Machine (BM) that enables such dependencies to be taken into account. Adopting a Bayesian point of view, we first treat the pursuit problem: given a signal, and assuming that the model parameters and the dictionary are known, find its sparse representation. We derive the exact MAP estimation, and show that, just as in the independent case, this leads to an exponential search problem. We derive two algorithms for its evaluation: a greedy approximation approach for the general case, and an exact estimation for the case of a unitary dictionary and a banded interaction matrix. We also consider the estimation of the model parameters, learning these parameters directly from training data. We show that, given the signals' representations, this problem can be posed as a convex optimization task using the Maximum Pseudo-Likelihood (MPL).
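For concreteness, the Boltzmann Machine prior on the support can be written as follows (our hedged transcription; the notation is not taken from the talk): with a support indicator s in {-1, 1}^K, where s_k = 1 marks atom k as active,

    \Pr(s) \propto \exp\Big( b^{\top} s + \tfrac{1}{2}\, s^{\top} W s \Big),

so W = 0 recovers the usual independent-atom model, and a banded interaction matrix W is the structure for which the exact MAP pursuit above becomes tractable.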
Pierre Vandergheynst (EPFL)
Transductive Learning with Spectral Graph Wavelets
In this talk I will quickly review the construction and properties of frames of spectral graph wavelets for functions defined on the vertices of a weighted undirected graph. I then leverage existing state-of-the-art algorithms in sparse recovery to formulate various versions of transductive learning on graphs. Comparing analysis and synthesis based algorithms reveals that the structure of the frame plays a very important role and that sparsity is nicely connected to a notion of smoothness on graphs. I take these lessons one step further to propose better optimization criteria for this problem and finish with a list of interesting open questions for the SMALL consortium.
The posters are available here:
Posters of the workshop
Poster boards will be available to display your posters in Rehearsal Room 2, just opposite the Arts Lecture Theatre where the talks will be held. This is also the venue for tea/coffee breaks and buffet lunch. Each poster will be on display throughout the two days.
Poster requirements:
- Each poster has to fit on a poster board that is 120 cm wide and 180 cm tall. However, posters should not reach down to the floor as this makes them hard to read. Posters should therefore be no larger than A0 portrait or A1 landscape.
- IMPORTANT: Posters wider than the stated dimensions will not fit on the poster boards and cannot be displayed. A0 landscape is TOO WIDE.
- Authors are responsible for putting up and taking down their own posters. Authors are encouraged to put up their posters at the beginning of the first day, and must remove them by 17:00 on the second day.
Poster abstracts:
Andrew Thompson and Coralia Cartis (University of Edinburgh)
A new recovery analysis of Iterative Hard Thresholding
Compressed Sensing (CS) seeks to recover sparse or compressible signals from undersampled linear measurements - it asserts that the number of measurements should be proportional to the information content of the signal, rather than its dimension. One particular CS problem that has received much attention recently is that of recovering a k-sparse signal x ∈ R^N from n linear measurements, where k ≤ n ≤ N. Since the introduction of CS in 2004, many algorithms have been developed to solve this problem. Because of the paradoxical nature of CS - exact reconstruction from undersampled measurements - it is crucial for the acceptance of an algorithm that rigorous worst-case analysis verifies the degree of undersampling the algorithm permits. This aim can be accomplished by means of the phase transition framework, in which we let (k, n, N) → ∞ while preserving the proportions δ = n/N and ρ = k/n.
We provide a new worst-case analysis for one of these recovery algorithms, Iterative Hard Thresholding (IHT). For the specific case of Gaussian measurement matrices, comparison with existing results by means of the phase transition framework shows a substantial quantitative improvement. We derive two conditions for general measurement matrices: firstly, by analysing the fixed points of IHT, we obtain a condition guaranteeing at most one fixed point (namely the original signal); secondly, we give an improved condition guaranteeing convergence to some fixed point. If both conditions are satisfied, it follows that recovery of the original signal is guaranteed. A statistical analysis of the fixed point condition for Gaussian measurement matrices allows us to derive a quantitative phase transition.
We also extend the analysis to variants of IHT with variable step size, including Normalized Iterative Hard Thresholding (NIHT). A similar analysis in this case yields a further significant improvement on the phase transition for Gaussian measurement matrices.
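For reference, plain IHT takes only a few lines; the following is a generic textbook-style sketch with a conservatively chosen fixed step (not the authors' code), and NIHT differs in adapting the step size at every iteration.

    import numpy as np

    def iht(A, y, k, iters=300):
        mu = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative fixed step size
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            g = x + mu * (A.T @ (y - A @ x))   # gradient step on ||y - Ax||^2 / 2
            keep = np.argsort(-np.abs(g))[:k]  # hard threshold: keep the k largest
            x = np.zeros_like(x)
            x[keep] = g[keep]
        return x

    rng = np.random.default_rng(0)
    n, N, k = 80, 200, 5
    A = rng.standard_normal((n, N)) / np.sqrt(n)   # Gaussian measurement matrix
    x0 = np.zeros(N)
    x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
    print(np.linalg.norm(iht(A, A @ x0, k) - x0))  # small when recovery succeeds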
Felix Krahmer and Rachel Ward (University of Bonn)
New and improved Johnson-Lindenstrauss embeddings via the Restricted Isometry Property
Consider an m x N matrix satisfying the (k, δ)-RIP and an arbitrary set E of O(e^k) points in R^N. We show that, with high probability, such a matrix with randomized column signs maps E into R^m without distorting the distance between any two points by more than a factor of 1 ± 4δ. Consequently, matrices satisfying the Restricted Isometry Property of optimal order provide optimal Johnson-Lindenstrauss embeddings up to a logarithmic factor in N. Moreover, our results yield the best known bounds on the necessary embedding dimension m for a wide class of structured random matrices. In particular, for partial Fourier and partial Hadamard matrices, our method optimizes the dependence of m on the distortion δ: we improve the recent bound m = O(δ^-4 log(p) log^4(N)) of Ailon and Liberty (2010) to m = O(δ^-2 log(p) log^4(N)), which is optimal up to the logarithmic factors in N. Our results also yield new Johnson-Lindenstrauss bounds for several deterministic matrices with randomized column signs.
As a consequence, most measurement matrices suitable for sparse signals with respect to an orthogonal basis will also work for redundant and coherent dictionaries.
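A small numerical illustration of the mechanism (ours; the Gaussian matrix below is merely a stand-in for any matrix with the RIP): randomizing the column signs of an RIP matrix yields a Johnson-Lindenstrauss-type embedding.

    import numpy as np

    rng = np.random.default_rng(0)
    m, N = 128, 1024
    Phi = rng.standard_normal((m, N)) / np.sqrt(m)  # stand-in for an RIP matrix
    signs = rng.choice([-1.0, 1.0], size=N)         # randomized column signs

    def embed(x):
        return Phi @ (signs * x)  # x -> Phi D_eps x

    u, v = rng.standard_normal(N), rng.standard_normal(N)
    print(np.linalg.norm(u - v))                # original distance
    print(np.linalg.norm(embed(u) - embed(v)))  # approximately preserved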
Mohammad Golbabaee, Simon Arberet, Pierre Vandergheynst (Ecole Polytechnique Fédérale de Lausanne)
Distributed Compressed Sensing of Hyperspectral images via Blind Source Separation
This paper describes a novel framework for compressive sampling (CS) of multichannel signals that are highly dependent across the channels. In this work, we assume that a small number of sources generate the multichannel observations, based on a linear mixture model. Moreover, the sources are assumed to have sparse/compressible representations in some orthonormal basis. The main contribution of this paper lies in 1) rephrasing the compressed sampling of multichannel data as a compressive blind source separation problem, and 2) proposing an optimization problem and a recovery algorithm to estimate both the sources and the mixing matrix (and thus the whole data) from the compressed measurements. A number of experiments on the acquisition of hyperspectral images show that our proposed algorithm achieves a reconstruction error 10 dB to 15 dB lower than other state-of-the-art CS methods.
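In symbols (our hedged paraphrase of the model above): each observed channel x_j is a linear mixture of r sources, each source is sparse in an orthonormal basis Psi, and each channel is sampled compressively,

    x_j = \sum_{i=1}^{r} a_{ji}\, s_i, \qquad s_i = \Psi c_i \ \ (c_i \ \text{sparse}), \qquad y_j = \Phi_j\, x_j,

and the algorithm estimates both the mixing coefficients a_{ji} and the sources s_i from the compressed measurements y_j alone.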
Automatic transcription using structured sparse non-negative dictionary learning
Much recent research on automatic music transcription has used NMF methods. Baseline NMF has been extended through the use of different constraints and information measures. Furthermore, in the Bayesian NMF framework, priors have been introduced to enhance both the structure of individual atoms and their time supports. For a similar purpose, we propose a non-negative sparse dictionary learning method. Building on NN-K-SVD, the learning is constrained to produce harmonic atoms which persist in the time domain, by incorporating ideas from structured sparse algorithms. We describe this algorithm, and present the results and findings from automatic music transcription experiments performed on a set of MIDI-aligned piano pieces from the MAPS database.
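As background for the abstract above, baseline NMF factorises a nonnegative magnitude spectrogram V into spectral atoms W and activations H; below is a minimal sketch with the standard Lee-Seung multiplicative updates (Euclidean cost). The harmonic and structured-sparsity constraints of the proposed method are not shown.

    import numpy as np

    def nmf(V, K, iters=200, seed=0):
        # V: nonnegative F x T magnitude spectrogram; K: number of atoms (e.g. pitches)
        rng = np.random.default_rng(seed)
        F, T = V.shape
        W = rng.random((F, K)) + 1e-3   # spectral atoms
        H = rng.random((K, T)) + 1e-3   # time activations ('piano roll')
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + 1e-12)  # multiplicative update: activations
            W *= (V @ H.T) / (W @ H @ H.T + 1e-12)  # multiplicative update: atoms
        return W, H

    V = np.random.default_rng(1).random((64, 100))  # stand-in spectrogram
    W, H = nmf(V, K=8)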
Gilles Chardon, Laurent Daudet (Institut Langevin ESPCI), Nancy Bertin, Rémi Gribonval (INRIA Rennes), Antoine Peillot and François Ollivier (IJLRA UPMC)
Compressive nearfield acoustical holography
Nearfield acoustical holography (NAH) is a popular measurement method aiming at imaging acoustic sources from measurements of the radiated pressure via wave inversion. As an inverse problem, it requires a regularization method to yield physical results. In the case of measuring the vibrations of a plate, sparse priors on its modal shapes can be derived from the physical model of the plate's motion. Using these priors in the regularization gives better reconstructions and allows a reduction of the number of measurements without degrading the results. We propose two recovery algorithms: a straightforward application of compressive sensing algorithms to our problem, and an algorithm that uses the structure of the sparsity of the plate's modal shapes.
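One generic way to write the sparsity-regularised inversion referred to above (our notation, not the authors'): with measured pressures p, a propagation operator G, and a dictionary Psi in which the modal shapes are sparse, solve

    \min_{c} \|c\|_1 \quad \text{subject to} \quad \| p - G \Psi c \|_2 \le \epsilon,

and reconstruct the modal shape as Psi c.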
Lunches, coffee and dinner are provided for all workshop participants.
Arrival in London
Looking for travel information in London? Visit the Getting around in London section.
From: London Heathrow Airport (LHR)
Faster route: Heathrow Express to Paddington, then by Bakerloo Underground Line southbound to Oxford Circus. Change onto the Central Line to Mile End.
Journey time: about one hour (5 mins longer from Terminal 4).
Cheaper route: Piccadilly Underground Line to Central London, change at Holborn onto the Central Line to Mile End.
Journey time: about 90 mins.
From: London Gatwick Airport (LGW)
Train (First Capital Connect, towards Bedford Rail Station) to Farringdon Underground Station, then take the Hammersmith & City Line to Stepney Green.
Journey time: about 70 mins.
Train (Southern or Gatwick Express) to Victoria, then by District line to Stepney Green. Journey time: about 80 mins.
From: London Stansted Airport (STN)
Train to Liverpool Street, then by Central Line to Mile End.
Time: about 70 mins.
From: London City Airport (LCY)
Public transport: Take the Docklands Light Railway (DLR) to Canning Town, then the Jubilee Line to West Ham, and then the District Line to Mile End.
Journey time: about 40 mins.
Taxi: Queen Mary is about 5 miles from London City Airport. For a faster journey you may wish to consider taking a taxi ("black cab"). The fare to Queen Mary will probably be about £15-£20 (or £25-£30 to Central London).
From: London Luton Airport (LTN)
Shuttle bus to Luton Airport Parkway station, then First Capital Connect rail (Thameslink route) to Farringdon. Change to the Hammersmith & City Underground Line to Stepney Green or Whitechapel; from Whitechapel, take the District Line one stop to Stepney Green.
Journey time: about 90 mins.
From: St Pancras International Station (Eurostar)
Hammersmith & City Line eastbound to Stepney Green.
Journey time: about 40 mins.
The event will take place at the Arts Lecture Theatre, Queen Mary University of London, Mile End Road, London E1 4NS.
The venue is easily accessible by public transport. It is within a five minute walk of both Mile End Underground station (Central, District, and Hammersmith & City lines) and Stepney Green Underground station (District, and Hammersmith & City lines).
The workshop will be held in building number 29.
For travel information, see [opens in new window]:
- Mile End Campus: Travel Information
- Interactive map showing the venue (multimap.com)
- Map of the campus
Plan Your Journey
The Transport for London Journey Planner website provides detailed travel options between locations in London. Information includes how long the journey is likely to take, maps of the start, finish, and any interchanges, and even any stairs or escalators to expect along the way.
For travel to Queen Mary, Mile End Campus use:
Postcode: E1 4NS
The Tube
The simplest way of getting around in London is by Underground ("The Tube").
When travelling on London's public transport system, we recommend using an electronic Oyster card. It is cheaper and more convenient than conventional paper tickets bought with cash. Oyster cards can be purchased at most Tube stations; a refundable £3 deposit is usually required.
The Oyster card can be charged with money when you purchase it, or anytime afterwards. Each time you travel, the fare is deducted from the balance of the card. This is called pay-as-you-go. As long as you maintain a positive balance on the card, you can travel wherever you like on the Oyster system.
You can also buy a pre-pay Oyster card over the web before you arrive, which can be delivered directly to you before you travel to London. For more information on this see: Visit London: Oyster Cards (look for "International Visitors to London" at the bottom of the page).
You can use the Oyster card to pay travel fares on:
London Underground ("The Tube")
Buses
Docklands Light Railway ("DLR")
Oyster can also be used on tram services and many overland rail services within London (but some are excluded: check before you travel).
Suggested hotels for staying before or after the workshop:
- Ibis Hotel, London Stratford (about 30 mins from the venue)
- St Giles Hotel, Central London (about 35 mins from the venue)
- Hotel Ibis London City
- Days Hotel, Shoreditch
Maria Jafari
Centre for Digital Music
Queen Mary University of London
Mile End Road, London E1 4NS, UK
Tel: +44 (0)20 7882 7986
Fax: +44 (0)20 7882 7997