
SMALL 2011 London Workshop


"We would like to thank all the participants, panelists, speakers and poster authors for making the SMALL Workshop on Sparse Dictionary Learning an enjoyable scientific event of high quality." -The SMALL organization team



-The slides and videos of the presentations are available on the dedicated webpage.

-The posters of the workshop are available in the Posters section.

-You will find shots of the event in the Pictures section.

The SMALL project members are organizing a two-day workshop at Queen Mary University of London in January 2011, during which challenging on-going questions about dictionary learning will be explored.



Sparse signal models have been successfully applied by the signal processing community to a variety of applications. They rely on the selection of a sparsifying dictionary that matches the signal characteristics and thus yields a sparse representation. While the dictionary is often selected from a pool of pre-defined dictionaries, e.g. wavelets and the DCT, this does not always exploit the properties of the signal. An alternative is to learn the sparsifying dictionary from the data. This flexible approach to sparse modelling has the potential to yield efficient methods that scale well with dimension and incorporate knowledge about the nature and structure of the data.
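As a concrete illustration of the learning alternative described above, the sketch below alternates a crude greedy sparse-coding step with a least-squares dictionary update in the style of the Method of Optimal Directions (MOD). It is a toy example under our own assumptions (the problem sizes and the simplistic coder are illustrative choices), not code from the SMALL project:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_code(D, Y, k):
    """Crude k-sparse coding: pick the k atoms with the largest correlations
    per signal, then least-squares fit (an illustrative stand-in for OMP)."""
    X = np.zeros((D.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        idx = np.argsort(-np.abs(D.T @ Y[:, j]))[:k]
        X[idx, j] = np.linalg.lstsq(D[:, idx], Y[:, j], rcond=None)[0]
    return X

def learn_dictionary(Y, n_atoms, k, n_iter=20):
    """Alternate sparse coding and a least-squares dictionary update (MOD)."""
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        X = sparse_code(D, Y, k)
        D = Y @ np.linalg.pinv(X)                           # minimise ||Y - DX||_F over D
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)   # renormalise the atoms
    return D, sparse_code(D, Y, k)

# toy data: signals that are exactly 3-sparse in a ground-truth dictionary
D_true = rng.standard_normal((20, 40))
D_true /= np.linalg.norm(D_true, axis=0)
X_true = np.zeros((40, 200))
for j in range(200):
    X_true[rng.choice(40, 3, replace=False), j] = rng.standard_normal(3)
Y = D_true @ X_true

D, X = learn_dictionary(Y, n_atoms=40, k=3)
err = np.linalg.norm(Y - D @ X) / np.linalg.norm(Y)   # relative fit error
```

Note that a learned dictionary is only defined up to permutation and sign of its atoms, which is why quality is judged on the fit Y ≈ DX rather than on D itself.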

In this workshop, we aim to present existing work on dictionary learning methods, discuss how they address the need for fast and structured algorithms, and consider future directions for this exciting and challenging field.


The SMALL Workshop on Sparse Dictionary Learning is co-sponsored by the EPSRC network INSPIRE and by the European Commission under FET-Open grant number 225913.



The archive containing the photos from the workshop can be downloaded here:

Photos from Workshop

Important Dates


 19 November 2010

 3rd December 2010


Workshop: 6–7 January 2011



Details on the presentations can be found in the Invited Speakers section.


Day 1: Thursday 6 January 2011


Tea & Coffee - Registration


Mark Plumbley – Welcome and Opening Remarks


Francis Bach (INRIA-ENS) 

Structured Sparsity-inducing Norms through Submodular Functions


Tea & Coffee


Mike Davies (University of Edinburgh) 

Rank Aware Algorithms for Joint Sparse Recovery


Holger Rauhut (University of Bonn)
Compressive Sensing and Structured Random Matrices


Lunch & Posters


Pierre Vandergheynst (EPFL) 

Transductive Learning with Spectral Graph Wavelets


Tea & Coffee


Karin Schnass (RICAM) 

Dictionary Identification - Sparse Matrix-Factorisation via L1-Minimisation


Round up & Discussion


Retire to local pub, followed by dinner


Dinner at Bengal Village, more information in the Catering Information section

Day 2: Friday 7 January 2011


Tea & Coffee


John Wright (Microsoft Research Asia) 

Dictionary Learning via L1 Minimization: Well-posedness and Correct Recovery


Michael Elad (Technion)
Exploiting Statistical Dependencies in Sparse Representations


Tea & Coffee


Kjersti Engan (University of Stavanger) 

Dictionary Learning by RLS-DLA with Applications


Lunch & Posters


Future Directions - Discussion Panel:

Chair: Mark Plumbley
Panelists: Mike Davies, Michael Elad, Kjersti Engan and Rémi Gribonval


Round up & Discussion



Continue discussion in local pub (optional)



List of delegates

The list of people who attended the workshop is available here:

List of delegates


Invited Speakers

All the talks of the workshop were filmed and are accessible here:
Videos of the presentations

The slides of the presentations are also accessible here: Slides of the presentations


Francis Bach (INRIA-ENS)
Structured Sparsity-inducing Norms through Submodular Functions
Sparse methods for supervised learning aim at finding good linear predictors from as few variables as possible, i.e., with small cardinality of their supports. This combinatorial selection problem is often turned into a convex optimization problem by replacing the cardinality function by its convex envelope (tightest convex lower bound), in this case the L1-norm. In this paper, we investigate more general set-functions than the cardinality, that may incorporate prior knowledge or structural constraints which are common in many applications: namely, we show that for nondecreasing submodular set-functions, the corresponding convex envelope can be obtained from its Lovasz extension, a common tool in submodular analysis. This defines a family of polyhedral norms, for which we provide generic algorithmic tools (subgradients and proximal operators) and theoretical results (conditions for support recovery or high-dimensional inference). By selecting specific submodular functions, we can give a new interpretation to known norms, such as those based on rank-statistics or grouped norms with potentially overlapping groups; we also define new norms, in particular ones that can be used as non-factorial priors for supervised learning.

John Wright (Microsoft Research Asia)
Dictionary Learning via L1 Minimization: Well-posedness and Correct Recovery
We discuss the problem of decomposing a given data matrix as a product of some unknown basis and a matrix of sparse coefficients. This problem finds applications in source separation, and is also attracting significant interest in the sparse representation community. We discuss circumstances under which this problem is well-posed, as well as situations under which it can be solved by efficient algorithms. In particular, we show that under natural assumptions, the problem is indeed globally well-posed even with fairly small sets of observations (size proportional to the size of the dictionary to be recovered times a logarithmic oversampling factor). We further show that under similar circumstances, L1 norm minimization locally recovers the desired solution. This extends results of Gribonval and Schnass to the case of nonsquare dictionaries. We discuss circumstances under which this recovery might actually be expected to be globally optimal, and discuss several example applications in hyperspectral imaging.

Holger Rauhut (University of Bonn)
Compressive Sensing and Structured Random Matrices
Compressive sensing is a recent paradigm in signal processing and sampling theory that predicts that sparse signals can be recovered from a small number of linear and non-adaptive measurements using convex optimization or greedy algorithms. Quite remarkably, all good constructions of the so-called measurement matrix known so far are based on randomness. While Gaussian random matrices provide optimal recovery guarantees, such unstructured matrices are of limited use in applications. Indeed, structure often allows fast matrix-vector multiplication. This is crucial in order to speed up recovery algorithms and to deal with large-scale problems. The talk discusses models of structured random matrices that are useful in certain applications, and presents corresponding recovery guarantees. An important type of structured random matrix arises in connection with sampling sparse expansions in terms of bounded orthogonal systems (such as the Fourier system). The second type of structured random matrices to be discussed are partial random circulant matrices, that is, matrices arising from convolution. In particular, we present recent results with J. Romberg and J. Tropp on the restricted isometry property of such matrices.

Kjersti Engan (University of Stavanger)
Dictionary Learning by RLS-DLA with Applications
The Recursive Least Squares Dictionary Learning Algorithm (RLS-DLA) is an online/adaptive approach, i.e. it updates the dictionary for every new training vector. By including a forgetting factor in the algorithm, we obtain a search-then-converge scheme that is much less dependent on the initial dictionary than some other dictionary learning algorithms. This talk will present the algorithm and discuss some of its properties.
Experiments on texture classification using Frame Texture Classification Method (FTCM) with dictionaries learned by RLS-DLA will be presented. We will also show experiments from other applications like compression and denoising of images.
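The flavour of such an online scheme can be sketched as follows. This is a simplified illustration in the spirit of RLS-DLA, not the exact algorithm: it keeps exponentially weighted correlation statistics A and B with a forgetting factor `lam` and re-solves D = B A⁻¹ after each training vector, and the coding step is a crude stand-in for a proper sparse solver such as OMP:

```python
import numpy as np

rng = np.random.default_rng(1)

def code_one(D, y, k):
    """Crude k-sparse coding of a single vector (illustrative stand-in for OMP)."""
    idx = np.argsort(-np.abs(D.T @ y))[:k]
    w = np.zeros(D.shape[1])
    w[idx] = np.linalg.lstsq(D[:, idx], y, rcond=None)[0]
    return w

def online_dla(Y, n_atoms, k, lam=0.99):
    """Online dictionary learning with forgetting factor `lam`: maintain
    weighted correlation statistics and re-solve D = B A^{-1} per sample."""
    d = Y.shape[0]
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    A = 1e-3 * np.eye(n_atoms)     # regularised coefficient autocorrelation
    B = D @ A                      # matching cross-correlation, so D = B A^{-1}
    for y in Y.T:
        w = code_one(D, y, k)
        A = lam * A + np.outer(w, w)   # the forgetting factor discounts old data
        B = lam * B + np.outer(y, w)
        D = np.linalg.solve(A, B.T).T  # D = B A^{-1}  (A is symmetric)
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D

# toy training data
Y = rng.standard_normal((16, 300))
D = online_dla(Y, n_atoms=32, k=4)
```

The actual RLS-DLA avoids the explicit solve by updating A⁻¹ recursively with the matrix inversion lemma; the explicit solve above simply keeps the sketch short.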

Karin Schnass (RICAM)
Dictionary Identification - Sparse Matrix-Factorisation via L1-Minimisation
The talk will be about the problem of learning a dictionary providing sparse representations for a given signal class via l1-minimisation. The problem can also be seen as factorising a d×N matrix Y = (y_1, ..., y_N), y_n in R^d, of training signals into a d×K dictionary matrix D and a K×N coefficient matrix X = (x_1, ..., x_N), x_n in R^K, which is sparse. The exact question studied here is when a dictionary-coefficient pair (D, X) can be recovered as a local minimum of a (nonconvex) l1-criterion with input Y = DX. First, for general dictionaries and coefficient matrices, algebraic conditions ensuring local identifiability are derived, which are then specialised to the case where the dictionary is a basis. Finally, assuming a random Bernoulli-Gaussian sparse model on the coefficient matrix, it is shown that sufficiently incoherent bases are locally identifiable with high probability. The perhaps surprising result is that the typically sufficient number of training samples N grows, up to a logarithmic factor, only linearly with the signal dimension, i.e. N ~ C K log K, in contrast to previous approaches requiring combinatorially many samples.

Mike Davies (University of Edinburgh)
Rank Aware Algorithms for Joint Sparse Recovery
This talk will revisit the sparse multiple measurement vector (MMV) problem, where the aim is to recover a set of jointly sparse multichannel vectors from incomplete measurements. This problem has received increasing interest as an extension of single channel sparse recovery, which lies at the heart of the emerging field of compressed sensing. However, MMV approximation also has links with work on direction of arrival estimation in the field of array signal processing. Inspired by these links, we consider a new family of MMV algorithms based on the well-known MUSIC method in array processing, highlighting the role of the rank of the unknown signal matrix in determining the difficulty of the recovery problem. We will show that the recovery conditions for our new algorithms take into account the observed rank of the signal matrix. This is in contrast to previously proposed techniques such as Simultaneous Orthogonal Matching Pursuit (SOMP) and mixed norm minimization techniques, which we show to be effectively blind to the rank information. Numerical simulations demonstrate that our new rank aware techniques are significantly better than existing methods in dealing with multiple measurements.

Michael Elad (Technion)
Exploiting Statistical Dependencies in Sparse Representations
In the commonly used sparse representation modeling, the atoms are assumed to be independent of each other when forming the signal. In this talk we shall introduce a statistical model called the Boltzmann Machine (BM) that enables such dependencies to be taken into account. Adopting a Bayesian point of view, we first treat the pursuit problem - given a signal, and assuming that the model parameters and the dictionary are known, find its sparse representation. We derive the exact MAP estimation, and show that just like in the independent case, this leads to an exponential search problem. We derive two algorithms for its evaluation: a greedy approximation approach for the general case, and an exact estimation that corresponds to a unitary dictionary and a banded interaction matrix. We also consider the estimation of the model parameters, learning these parameters directly from training data. We show that given the signals' representations, this problem can be posed as a convex optimization task by using the Maximum Pseudo-Likelihood (MPL).

Pierre Vandergheynst (EPFL)
Transductive Learning with Spectral Graph Wavelets
In this talk I will quickly review the construction and properties of frames of spectral graph wavelets for functions defined on the vertices of a weighted undirected graph. I then leverage the existing state of the art algorithms in sparse recovery to formulate various versions of transductive learning on graphs. Comparing analysis and synthesis based algorithms reveals that the structure of the frame plays a very important role and that sparsity is nicely connected to a notion of smoothness on graphs. I take these lessons one step further to propose better optimization criteria for this problem and finish with a list of interesting open questions for the SMALL consortium.


The posters are available here:
Posters of the workshop

Poster boards will be available to display your posters in Rehearsal Room 2, just opposite the Arts Lecture Theatre where the talks will be held. This is also the venue for tea/coffee breaks and buffet lunch. Each poster will be on display throughout the two days.

Posters requirements:

  • Each poster has to fit on a poster board that is 120 cm wide and 180 cm tall. However, posters should not reach down to the floor as this makes them hard to read. Posters should therefore be no larger than A0 portrait or A1 landscape.
  • IMPORTANT: Posters wider than the stated dimensions will not fit on the poster boards and cannot be displayed. A0 landscape is TOO WIDE.
  • Authors are responsible for putting up and taking down their own posters. Authors are encouraged to put up their posters at the beginning of the first day. You must then remove your poster by 17:00 on the second day.


Posters abstracts:

Andrew Thompson and Coralia Cartis (University of Edinburgh)
A new recovery analysis of Iterative Hard Thresholding

Compressed Sensing (CS) seeks to recover sparse or compressible signals from undersampled linear measurements - it asserts that the number of measurements should be proportional to the information content of the signal, rather than its dimension. One particular CS problem that has received much attention recently is that of recovering a k-sparse signal x ∈ R^N from n linear measurements, where k ≤ n ≤ N. Since the introduction of CS in 2004, many algorithms have been developed to solve this problem. Because of the paradoxical nature of CS - exact reconstruction from undersampled measurements - it is crucial for the acceptance of an algorithm that rigorous worst-case analysis verifies the degree of undersampling the algorithm permits. This aim can be accomplished by means of the phase transition framework, in which we let (k, n, N) → ∞ while preserving the proportions δ = n/N and ρ = k/n.
We provide a new worst-case analysis for one of these recovery algorithms, Iterative Hard Thresholding (IHT). For the specific case of Gaussian measurement matrices, comparison with existing results by means of the phase transition framework shows a substantial quantitative improvement. We derive two conditions for general measurement matrices: firstly, by analysing the fixed points of IHT, we obtain a condition guaranteeing at most one fixed point (namely the original signal); secondly, we give an improved condition guaranteeing convergence to some fixed point. If both conditions are satisfied, it follows that recovery of the original signal is guaranteed. A statistical analysis of the fixed point condition for Gaussian measurement matrices allows us to derive a quantitative phase transition.
We also extend the consideration to variants of IHT with variable step size, including Normalized Iterative Hard Thresholding (NIHT). A similar analysis in this case yields a further significant improvement on the phase transition for Gaussian measurement matrices.
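For reference, a minimal sketch of the plain IHT iteration analysed above, with a conservative constant step size (the toy problem sizes are our own illustrative choices, not taken from the poster):

```python
import numpy as np

rng = np.random.default_rng(2)

def iht(A, b, k, n_iter=500):
    """Plain Iterative Hard Thresholding: a gradient step on ||b - Ax||^2
    followed by keeping the k largest-magnitude entries."""
    step = 0.99 / np.linalg.norm(A, 2) ** 2   # conservative step < 1/||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * (A.T @ (b - A @ x))    # gradient descent step
        small = np.argsort(np.abs(x))[:-k]    # hard threshold: zero all but top k
        x[small] = 0.0
    return x

# toy problem: Gaussian measurements of a k-sparse signal
N, n, k = 256, 128, 8
A = rng.standard_normal((n, N)) / np.sqrt(n)
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true
x_hat = iht(A, b, k)
residual = np.linalg.norm(b - A @ x_hat)
```

With a step size below 1/||A||², the objective ||b - Ax||² is non-increasing across iterations (a classical result of Blumensath and Davies), so starting from zero the final residual never exceeds ||b||, even when exact recovery is not guaranteed.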

Felix Krahmer and Rachel Ward (University of Bonn)
New and improved Johnson-Lindenstrauss embeddings via the Restricted Isometry Property

Various works in the area of compressed sensing have shown how signals that have only a few non-zero coefficients with respect to an orthogonal basis can be recovered from a small number of linear measurements via l1-minimization. Recent work by Candès et al. (2010) generalizes these results to redundant and coherent dictionaries. A crucial tool in their analysis is the Johnson-Lindenstrauss (JL) Lemma. It states that any set of p points in high-dimensional Euclidean space can be embedded into O(δ^-2 log(p)) dimensions without distorting the distance between any two points by more than a factor between 1-δ and 1+δ. Such JL embeddings are used to find appropriate linear maps to be used for the measurements. Thus, for practical purposes, fast multiplication properties of JL embeddings are of great importance. In this poster, we present a new connection between the JL Lemma and the Restricted Isometry Property (RIP), a well-known concept in the theory of sparse recovery often used for showing the success of l1-minimization.
Consider an m x N matrix satisfying the (k, δ_k)-RIP and an arbitrary set E of O(e^k) points in R^N. We show that with high probability, such a matrix with randomized column signs maps E into R^m without distorting the distance between any two points by more than a factor of 1 ± 4δ_k. Consequently, matrices satisfying the Restricted Isometry Property of optimal order provide optimal Johnson-Lindenstrauss embeddings up to a logarithmic factor in N. Moreover, our results yield the best known bounds on the necessary embedding dimension m for a wide class of structured random matrices. In particular, for partial Fourier and partial Hadamard matrices, our method optimizes the dependence of m on the distortion δ: we improve the recent bound m = O(δ^-4 log(p) log^4(N)) of Ailon and Liberty (2010) to m = O(δ^-2 log(p) log^4(N)), which is optimal up to the logarithmic factors in N. Our results also yield new Johnson-Lindenstrauss bounds for several deterministic matrices with randomized column signs.
As a consequence, most measurement matrices suitable for sparse signals with respect to an orthogonal basis will also work for redundant and coherent dictionaries.
This is joint work with Rachel Ward.
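The construction can be illustrated numerically. The sketch below builds a partial Hadamard matrix with randomized column signs and empirically measures the worst pairwise distance distortion over a small point set; the sizes are illustrative choices, and the experiment only demonstrates the embedding, not the theorem:

```python
import numpy as np

rng = np.random.default_rng(3)

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.ones((1, 1))
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Partial Hadamard matrix with randomised column signs:
# m random rows of the N x N Hadamard matrix, times a random diagonal sign matrix.
N, m = 256, 128
rows = rng.choice(N, size=m, replace=False)
signs = rng.choice([-1.0, 1.0], size=N)
Phi = hadamard(N)[rows] * signs / np.sqrt(m)   # signs broadcast across columns

# embed a small point set and measure the worst pairwise distance distortion
p = 20
E = rng.standard_normal((p, N))
F = E @ Phi.T                                  # embedded points in R^m

def pairwise(X):
    """Matrix of Euclidean distances between all rows of X."""
    return np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

iu = np.triu_indices(p, 1)
max_distortion = np.max(np.abs(pairwise(F)[iu] / pairwise(E)[iu] - 1.0))
```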



Mohammad Golbabaee, Simon Arberet, Pierre Vandergheynst (Ecole Polytechnique Fédérale de Lausanne)
Distributed Compressed Sensing of Hyperspectral images via Blind Source Separation

This paper describes a novel framework for compressive sampling (CS) of multichannel signals that are highly dependent across the channels. In this work, we assume that a small number of sources generate the multichannel observations, based on a linear mixture model. Moreover, the sources are assumed to have sparse/compressible representations in some orthonormal basis. The main contribution of this paper lies in 1) rephrasing the compressed sampling of multichannel data as a compressive blind source separation problem, and 2) proposing an optimization problem and a recovery algorithm to estimate both the sources and the mixing matrix (and thus the whole data) from the compressed measurements. A number of experiments on the acquisition of hyperspectral images show that our proposed algorithm achieves a reconstruction error between 10 dB and 15 dB lower than other state-of-the-art CS methods.

Lila Hadji (CEA-Saclay), Jean-Luc Starck (CEA-Saclay), J. Fadili (Université de Caen) and G. Peyre (Université Paris-Dauphine)
Astronomical Image Denoising and Dictionary Learning
Astronomical data, whether secured from telescopes, satellites, or other imaging devices,  suffer constantly from the ubiquitous presence of noise. Thus, in order to reduce the noise component and improve the scientific quality of the data to analyze afterwards, a denoising step becomes mandatory before tackling any further astronomical data processing task. Image denoising in astronomy has witnessed a long and rich history throughout the years, and goes back to the beginning of space programs.
During the last decade, a particular denoising scheme, based on sparse modeling and redundant representations, has drawn the interest of a great community of researchers. At first, harmonics, wavelets, and similar bases and redundant dictionaries were considered as candidates for projection domains in which to seek the sparsest decompositions. Then, a new generation of algorithms, based on data-driven dictionaries, started to evolve rapidly, gain more attention, and replace the beforehand selection of dictionaries. While choosing an off-the-shelf dictionary leans on a guess of the most appropriate projective domain, the dictionary learning framework proposes to construct the dictionary from the data themselves, which yields a more flexible setup for sparse modeling.
We will present a selection of the most known algorithms for sparse coding and dictionary learning, and we will focus our interest on the performance of astronomical image denoising via dictionary learning and sparse modeling.

Amir Adler (Technion), Valentin Emiya (INRIA, Rennes), Maria G. Jafari (QMUL), Michael Elad (Technion), Rémi Gribonval (INRIA, Rennes) and Mark D. Plumbley (QMUL)
Audio inpainting: problem statement, relation with sparse representations and some experiments
We propose a framework called audio inpainting for the general problem of estimating missing samples in audio. It extends the problem of the interpolation and extrapolation of signals to the cases where possibly-large blocks of consecutive samples must be estimated from the remaining, known samples. We relate this framework to a number of applications including declicking, declipping and audio packet loss in voice over IP.
By considering audio inpainting as an inverse problem, we show that sparse representations are an appropriate scheme to develop new approaches for the audio inpainting problem. We will present some experiments, with a particular focus on restoration of clipped speech or music signals.

Eva Dyer and Richard Baraniuk (Rice University)
Learning modular dictionaries from natural images
We consider the problem of learning modular dictionaries from natural images, where we define a modular dictionary as a collection of a finite number of sub-dictionaries that form a union of low-dimensional subspaces embedded in a high-dimensional ambient space. To learn dictionaries of this form we: (1) learn a globally optimal overcomplete dictionary that produces sparse representations of the training data, (2) present a number of approaches for learning graphs which encapsulate the connectivity structure between different atoms in the dictionary, and (3) partition the atoms into sub-dictionaries (modules) by applying normalized graph cuts to each of the resulting connectivity graphs. Following the introduction of this graph-based dictionary learning framework, we present results demonstrating the qualitative differences between the modular dictionaries obtained for each connectivity graph. Furthermore, we demonstrate the advantages of each learning strategy by studying the dimension of the resulting subspaces and the approximation error obtained when we find local approximations to natural images by simply projecting the signal onto each of the low-dimensional subspaces spanned by the sub-dictionaries in our learned modular dictionaries.

Daniele Barchiesi and Mark Plumbley (QMUL)
Dictionary Learning of Convolved Signals
Assuming that a set of source signals is sparsely representable in a given dictionary, we show how their sparse recovery fails whenever we can only measure a convolved observation of them. Starting from this motivation, we develop a block coordinate descent method which aims to learn a convolved dictionary and provide a sparse representation of the observed signals with small residual norm. We compare the proposed approach to the K-SVD dictionary learning algorithm and show through numerical experiments on synthetic signals that, under certain conditions on the problem data, our technique converges in a fixed number of iterations to a sparse representation with smaller residual norm. Moreover, we show that our dictionary learning algorithm is often able to identify the impulse response that generated the observed variables, which creates a link between two apparently distant problems such as dictionary learning and blind deconvolution.

Ivan Damnjanovic and Mark D. Plumbley (QMUL)
Sparse Representations and Dictionary Learning Evaluation Framework – SMALLbox
SMALLbox is a new foundational framework for processing signals, using adaptive sparse structured representations. The main aim of SMALLbox is to become a test ground for exploration of new provably good methods to obtain inherently data-driven sparse models, able to cope with large-scale and complicated data. The toolbox provides an easy way to evaluate these methods against state-of-the-art alternatives in a variety of standard signal processing problems. This is achieved through a unifying interface that enables a seamless connection between the three types of modules: problems, dictionary learning algorithms and sparse solvers. In addition, it provides interoperability between existing state-of-the-art toolboxes. As an open source MATLAB toolbox, it can also be seen as a tool for reproducible research in the sparse representations research community.

Michael E. Davies (University of Edinburgh), Michael Elad, Raja Giryes (Technion), Rémi Gribonval, and Sangnam Nam (INRIA, Rennes)
Cosparse Analysis Modeling
In the past decade there has been a great interest in a synthesis-based model for signals, based on sparse and redundant representations. Such a model assumes that the signal of interest can be composed as a linear combination of a few columns from a given matrix (the dictionary).
An alternative analysis-based model can be envisioned, where an analysis operator multiplies the signal, leading to a cosparse outcome. In this work, we study this analysis model, in the context of a generic missing data problem (e.g., compressed sensing, inpainting, source separation, etc.). We investigate the uniqueness problem for this setting and consider some algorithms to solve such problems.
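The analysis model can be made concrete with a toy example: under a first-order finite-difference operator, a piecewise-constant signal has only a few nonzero analysis coefficients, and its cosparsity counts the zeros. The operator and signal below are illustrative choices of ours, not taken from the poster:

```python
import numpy as np

# Analysis operator: first-order finite differences, row i computes x[i+1] - x[i]
n = 64
Omega = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)

# a piecewise-constant signal with three pieces: cosparse under Omega
x = np.concatenate([np.full(20, 1.0), np.full(24, -2.0), np.full(20, 3.0)])

analysis = Omega @ x
support = np.flatnonzero(analysis)        # the two jump locations
cosparsity = (n - 1) - support.size       # number of zero analysis coefficients
```

In contrast to the synthesis model, which counts the few nonzero coefficients used to build the signal, the analysis model characterises the signal by the many analysis coefficients that vanish.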

Andrew Nesbit and Mark D. Plumbley (QMUL)
Learning representations of marine mammal vocalisations
The field of sparse representations is a relatively unexplored paradigm for signal-level analysis of marine mammal vocalisations. In this work, we contrast and compare several dictionary learning algorithms to extract salient features, remove noise, and segment and classify such recordings.

Aris Gretsistas and Mark D. Plumbley (QMUL)
Approximate Conjugate Gradient Polytope Faces Pursuit for large scale sparse recovery problems
Conjugate Gradient Polytope Faces Pursuit is a greedy algorithm suitable for large scale sparse recovery problems: by using directional updates of the solution vector at each iteration, it overcomes the memory limitations of the original Polytope Faces Pursuit algorithm, which is based on the Cholesky factorization method. More specifically, the algorithm adds one basis vector at a time and adopts a path-following approach based on the geometry of the polar polytope associated with the dual Linear Program. Each time a new atom is added the solution vector is updated using a full conjugate gradient update. In this paper, we propose an algorithm that uses a suboptimal way of updating the solution vector. With an "approximate" conjugate gradient update step, the algorithm does not need to compute the full conjugate gradient step, and the computational complexity is therefore reduced. The resulting Approximate Conjugate Gradient Polytope Faces Pursuit algorithm achieves faster convergence at the cost of approximate solutions.

Ken O'Hanlon and Mark D. Plumbley (QMUL)
Automatic transcription using structured sparse non-negative dictionary learning

Much recent research on automatic music transcription has been done using NMF methods. Baseline NMF has been extended through the use of different constraints and information measures. Furthermore, in the Bayesian NMF framework, priors have been introduced to enhance both the structure of individual atoms and their time supports. For a similar purpose, we propose a non-negative sparse dictionary learning method. Building on NN-K-SVD, the learning is constrained to produce harmonic atoms which persist in the time domain by incorporating ideas from structured sparse algorithms. We describe this algorithm, and present the results and findings from automatic music transcription experiments performed on a set of MIDI-aligned piano pieces from the MAPS database.
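As background for the NMF baseline mentioned above, here is a minimal sketch of plain NMF with Lee-Seung multiplicative updates on a toy nonnegative matrix; the harmonic and temporal constraints of the proposed method are not included, and the matrix sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def nmf(V, r, n_iter=200, eps=1e-9):
    """Baseline NMF with Lee-Seung multiplicative updates for ||V - WH||_F^2.
    In transcription, W's columns play the role of note spectra and the rows
    of H their activations over time."""
    n, m = V.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update keeps H entrywise >= 0
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # likewise for W
    return W, H

# toy "spectrogram": an exactly rank-3 nonnegative matrix
V = rng.random((40, 3)) @ rng.random((3, 100))
W, H = nmf(V, r=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)   # relative fit error
```

The multiplicative form of the updates is what preserves non-negativity without an explicit projection step, and the Frobenius objective is non-increasing under these updates.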

Gilles Chardon, Laurent Daudet (Institut Langevin ESPCI), Nancy Bertin, Rémi Gribonval (INRIA Rennes),  Antoine Peillot and François Ollivier (IJLRA UPMC)

Compressive nearfield acoustical holography

Nearfield acoustical holography (NAH) is a popular measurement method aiming at imaging acoustic sources from measurements of the radiated pressure
via wave inversion. As an inverse problem, it needs the application of a regularization method to yield physical results.
In the case of the measurement of the vibrations of a plate, sparse priors on its modal shapes can be derived from the physical model of the movement of the plate.
Using these priors in the regularization gives better reconstructions and allows a reduction of the number of measurements without degradation of the results.
We propose two recovery algorithms: a straightforward application of compressive sensing algorithms to our problem, and an algorithm which uses the structure of the sparsity of the modal shapes of the plate.

Catering Information

Lunches, coffee breaks and dinner are offered to all workshop participants.

The banquet on 6 January 2011 will be held at the Bengal Village Restaurant in Brick
Lane, famous for its many curry houses.
From Queen Mary University of London, take the number 25 bus to Aldgate East, then walk down Brick Lane to the restaurant. This journey takes about 30 minutes. Alternatively, take the District Line from Stepney Green to Aldgate East.

Arrival in London

Looking for travelling information in London? Visit the Getting around London section

From: London Heathrow Airport (LHR)
Faster route: Heathrow Express to Paddington, then by Bakerloo Underground Line southbound to Oxford Circus. Change onto the Central Line to Mile End.
 Journey time: about one hour (5 mins longer from Terminal 4).

Cheaper route: Piccadilly Underground Line to Central London, change at Holborn onto the Central Line to Mile End. 
Journey time: about 90 mins.

From: London Gatwick Airport (LGW)
Train (First Capital Connect, towards Bedford Rail Station) to Farringdon Underground Station, then take the Hammersmith & City Line to Stepney Green.
Journey time: about 70 mins.

Train (Southern or Gatwick Express) to Victoria, then by District line to Stepney Green.
Journey time: about 80 mins.

From: London Stansted Airport (STN)
Train to Liverpool Street, then by Central Line to Mile End.
Time: about 70 mins.

From: London City Airport (LCY)
Public transport: Take the Docklands Light Railway (DLR) to Canning Town. Catch a Jubilee Underground Line to West Ham, then the District Line to Mile End.
Journey time: about 40 mins.

Taxi: Queen Mary is about 5 miles away from London City Airport. For a faster journey you may wish to consider getting a Taxi ("black cab"). The fare to Queen Mary will probably be about £15-£20 (or £25-£30 to Central London).

From: London Luton Airport (LTN)
Shuttle bus to Luton Airport Parkway station, then First Capital Connect Rail (Thameslink route) to Farringdon. Change to Hammersmith & City Underground Line to Stepney Green or Whitechapel. If to Whitechapel, change to District Line for one stop to Stepney Green.
Journey time: about 90 mins.

From: St Pancras International Station (Eurostar)
Hammersmith & City Line eastbound to Stepney Green.
 Journey time: about 40 mins.

Venue Information

The Event will take place at the Arts Lecture Theatre, Queen Mary University of London, Mile End Road, London E1 4NS.



The venue is easily accessible by public transport. It is within a five minute walk of both Mile End Underground station (Central, District, and Hammersmith & City lines) and Stepney Green Underground station (District, and Hammersmith & City lines).

The workshop will be held in building number 29.



Getting Around in London

Plan Your Journey
The Transport for London Journey Planner website provides detailed travel options between locations in London. Information includes how long the journey is likely to take, maps of the start, finish, and any interchanges, and even any stairs or escalators to expect along the way.
For travel to Queen Mary, Mile End Campus use:
Postcode: E1 4NS

The Tube
The simplest way of getting around London is by Underground ("the Tube").

When travelling on London's public transport system, we recommend using an electronic Oyster card. This is cheaper and more convenient than conventional paper tickets bought with cash. Oyster cards can be purchased at most Tube stations. A refundable £3 deposit is usually required.

The Oyster card can be charged with money when you purchase it, or anytime afterwards. Each time you travel, the fare is deducted from the balance of the card. This is called pay-as-you-go. As long as you maintain a positive balance on the card, you can travel wherever you like on the Oyster system.

You can also buy a pre-pay Oyster card over the web before you arrive, which can be delivered directly to you before you travel to London. For more information on this see: Visit London: Oyster Cards (look for "International Visitors to London" at the bottom of the page).
You can use the Oyster card to pay travel fares on:

London Underground ("The Tube")
Docklands Light Railway ("DLR")
Oyster can also be used on tram services and many overland rail services within London (but some are excluded: check before you travel).



Suggested hotels for staying before or after the workshop:


Contact Information

Maria Jafari
Centre for Digital Music
Queen Mary University of London
Mile End Road, London E1 4NS, UK
Tel: +44 (0)20 7882 7986
Fax: +44 (0)20 7882 7997


Rémi Gribonval, coordinator
Equipe-Projet METISS
INRIA Rennes - Bretagne Atlantique
Campus de Beaulieu
F-35042 Rennes cedex, France.

Phone: (+33/0) 299 842 506
Fax: (+33/0) 299 847 171
E-MAIL: contact