Compressed Sensing is really a simple method for finding the sparsest solution to certain underdetermined systems of linear equations.
- What sort of underdetermined systems are allowed?
- How can you find this sparsest solution?
These questions are answered, in one fashion or another and with varying degrees of difficulty, in the resources below (course notes, videos). Once these first issues have been addressed, a second set of questions usually follows:
- Instead of the sparsest solution, can we find the most compressible solution?
- etc ...
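Before diving into the resources, here is a minimal numerical sketch of the core idea above: recovering a sparse vector from fewer measurements than unknowns by minimizing the L_1 norm (basis pursuit), cast as a linear program. The dimensions, sparsity level, random Gaussian matrix, and use of SciPy's `linprog` are my own illustrative choices, not taken from any of the resources listed here.

```python
# Toy compressed sensing example: L1 minimization (basis pursuit).
# Assumptions: a random Gaussian measurement matrix, a 3-sparse signal,
# and SciPy's linprog as the LP solver.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 50, 25, 3           # unknowns, measurements (m < n), sparsity

x_true = np.zeros(n)          # build a k-sparse ground-truth signal
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(size=k)

A = rng.normal(size=(m, n))   # underdetermined measurement matrix
y = A @ x_true                # m measurements of the n unknowns

# Basis pursuit: minimize ||x||_1 subject to A x = y.
# Split x = u - v with u, v >= 0, so ||x||_1 = sum(u + v);
# this turns the problem into a standard linear program.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))
```

For dimensions like these, the L_1 solution typically coincides with the sparsest solution, whereas plain least-squares (`np.linalg.lstsq`) would return a dense minimum-energy solution instead.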
Eventually, you might be interested in subscribing to the Nuit Blanche feed. There are also a Google+ Community, a CompressiveSensing subreddit, a LinkedIn Compressive Sensing group, and a Matrix Factorization group that you can join and where you can post questions. On this page, you will find some expository material aimed at various audiences; pick the one you feel most comfortable with:
- Tutorial and review papers at the Rice Repository site.
- On Quora, I answered the question "What is compressed sensing (compressive sampling) in layman's terms?". My answer is here and uses the well-known 12-ball weighing problem.
- Similarly, another way to see how compressive sensing works is to look at how it is implemented in hardware: How does the Rice one-pixel camera work? (Here is a non-exhaustive list of many other hardware/sensors.)
- Here is a presentation designed for very early undergraduates; it is also a work in progress (constructive criticism is welcome):
- Finally, here is a presentation that might provide some insight to engineers and other learned professionals who may not be entirely familiar with what compressive sensing is. In particular, it tries to avoid relying solely on the L_1 norm and other deeper (and sometimes too narrow) mathematical statements, favoring hardware/sensor issues instead:
- A while back, I created a small video of a clown and a woman talking about compressed sensing; let me know if it helps you better understand the subject:
- Three videos by Mark Davenport presenting compressive sensing. They are short and to the point.
More in-depth explanations and teaching materials are provided below:
In terms of books, for less than $3.00 there is one in the Kindle store. It can be read on the Kindle, iPad/iPod Touch, and other tablets through the Kindle app:
Courses and Lecture Notes:
- Introduction to Compressive Sensing by Richard Baraniuk, Mark Davenport, Marco Duarte, and Chinmay Hegde at Connexions.org.
- An Introduction to Compressed Sensing by Mark Davenport, Marco Duarte, Yonina Eldar, Gitta Kutyniok
- Compressive Sensing - An Introduction by Massimo Fornasier and Holger Rauhut.
- Notes on Compressed Sensing by Simon Foucart
The following lectures were given at IAS in Princeton and provide a good introduction to CS and related issues:
- Anna Gilbert's lecture 1 Background on sparse approximation
- Anna Gilbert's lecture 2 Hardness results for sparse approximation problems
- Anna Gilbert's lecture 3 Dictionary geometry, greedy algorithms, and convex relaxation
- Rebecca Willett's lecture 1 Methods for sparse analysis of high-dimensional data, I
- Rebecca Willett's lecture 2 Sparsity: Correcting Error in Data
- Rebecca Willett's lecture 3 Sparsity: Compressed Sensing
- Rebecca Willett's lecture 4 Sparsity: Generalized Sparsity Measures and Applications
- Rachel Ward's lecture 1 Methods for sparse analysis of high-dimensional data, II
- Sofya Raskhodnikova's lecture 1 Sublinear-Time Algorithms
- Sofya Raskhodnikova's lecture 2 Sublinear-Time Algorithms
- Sofya Raskhodnikova's lecture 3 Sublinear-Time Algorithms
- Sofya Raskhodnikova's lecture 4 Sublinear-Time Algorithms
- Michael Friedlander has some basic examples in the SPGL1 toolbox and also features a long suite of examples with different measurement matrices in the SPARCO toolbox.
- Gabriel Peyre has several walk-through examples (tours).
Webpages of courses/classes given at different universities (undergraduate/graduate classes) can be found here and are listed below:
- Emmanuel Candes' STAT 330 course: An Introduction to Compressed Sensing (Spring 2010)
- EE546/STAT593C - Sparse Representations: Theory, Algorithms, and Applications by Maryam Fazel and Marina Meila (Spring 2010)
- Convex Geometry in High-Dimensional Data Analysis, CS838 Topics In Optimization by Ben Recht (Spring 2010)
- Thomas Strohmer's course CS 280: Sparse Representations and Compressive Sensing (Spring 2010)
- Piotr Indyk : Sketching, Streaming and Sub-linear Space Algorithms (Fall 2007)
- Piotr Indyk : Streaming Etc. (at Rice University) Spring 2009.
- Ronitt Rubinfeld Sublinear Time Algorithms.( MIT 6.896 ) Fall 2010
- CS5540: Computational Techniques for Analyzing Clinical Data taught by Ramin Zabih and Ashish Raj (Spring 2010)
- Deanna Needell's notes for Non-Asymptotic Random Matrix Theory (CS 280 at UC Davis, taught by Roman Vershynin, 2009)
- Compressed Sensing by Mike Wakin on Connexions (2008)
- 2010S JEB1433 Medical Imaging at University of Toronto by Adrian Nachman (Spring 2010)
- Compressive Sensing by Mark Davenport, Richard Baraniuk, Ronald DeVore. (2007)
- Andrew McGregor, "Crash Course in Data Streams: Part I" and "Part II" (Johns Hopkins APL '10).
- EE 578 Optimization in System Sciences by Maryam Fazel (Winter 2010)
- Lecture notes entitled Compressed Sensing and Sparse Signal Processing by Wu-Sheng Lu (Nov 2010).
- 520.648 Compressed Sensing and Sparse Recovery by Trac D. Tran and Sang "Peter" Chin (Johns Hopkins, Spring 2011)
Emmanuel Candes was invited to the Centre for Mathematical Sciences in Cambridge, UK, to give a series of lectures on compressed sensing. Here are the videos of these talks, recorded at the LMS Invited Lecturer Series 2011:
Emmanuel Candes, Lecture 1: Some history and a glossy introduction
Lecture 2: Probabilistic approach to compressed sensing
Lecture 3: Deterministic approach to compressed sensing
Lecture 4: Incoherent sampling theorem
Lecture 5: Noisy compressed sensing/sparse regression
Lecture 6: Matrix completion
Lecture 7: Robust principal components analysis and some numerical optimization
Lecture 8: Some Applications and Hardware Implementations
Anders Hansen, Generalized sampling and infinite-dimensional compressed sensing
We will discuss a generalization of the Shannon Sampling Theorem that allows for reconstruction of signals in arbitrary bases. Not only can one reconstruct in arbitrary bases, but this can also be done in a completely stable way. When extra information is available, such as sparsity or compressibility of the signal in a particular basis, one may reduce the number of samples dramatically. This is done via Compressed Sensing techniques; however, the usual finite-dimensional framework is not sufficient. To overcome this obstacle, I'll introduce the concept of Infinite-Dimensional Compressed Sensing.
Once you have graduated from any of these courses, you may want to take a peek at the Big Picture in Compressive Sensing, which features some of the most recent measurement matrices and reconstruction solvers. You can also read the blog... or subscribe to the Nuit Blanche feed.