More Unknowns than equations? Not a problem! Use Sparsity!

Duration: 1 hour 11 mins 51 secs
About this item
Description: David Donoho (Stanford University)
Monday 17 March 2008, 17:00-18:00
Created: 2008-03-18 17:10
Collection: Rothschild Seminars
Publisher: Isaac Newton Institute
Copyright: David Donoho
Language: eng (English)
Credits:
Author:  David Donoho
Producer:  Steve Greenham
Abstract: Everything you were taught about underdetermined systems of linear equations is wrong...

Okay, that's too strong. But you have been taught things in undergraduate linear algebra which, if you are an engineer or scientist, may be holding you back. The main one is that if you have more unknowns than equations, you're lost. Don't believe it. There are many interesting problems in the information sciences where researchers are currently confounding expectations by turning linear algebra upside down:

(a) A standard magnetic resonance imaging device can now produce a clinical-quality image using a factor of 8 less scan time than previously thought necessary.
(b) A Fourier imaging system can observe just the lowest frequencies of a sparse nonnegative signal and perfectly reconstruct all the unmeasured high frequencies of the signal.
(c) A communications system can transmit a very weak signal perfectly in the presence of intermittent but arbitrarily powerful jamming.
Moreover, in each case the methods are convenient and computationally tractable.

Mathematically, what's going on is a recent explosion of interest in finding the sparsest solution to certain systems of underdetermined linear equations. This problem is NP-hard in general, and so sounds intractable.

Surprisingly, in some particular cases, it has been found that one can find the sparsest solution by l¹ minimization, which is a convex optimization problem and so tractable. Many researchers are now actively working to explain and exploit this phenomenon. It's responsible for the examples given above.
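The phenomenon can be sketched in a few lines of code. Below, an underdetermined system Ax = b with a sparse true solution is solved by l¹ minimization (basis pursuit), posed as a linear program via the standard split x = u − v with u, v ≥ 0. The matrix, problem sizes, and solver choice here are illustrative assumptions, not taken from the talk:

```python
# Minimal sketch: recover a sparse solution of an underdetermined system
# Ax = b by l1 minimization, posed as a linear program (basis pursuit).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                  # 40 equations, 100 unknowns, 5 nonzeros

A = rng.standard_normal((m, n))       # a random Gaussian measurement matrix
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.standard_normal(k)
b = A @ x_true

# Write x = u - v with u, v >= 0, so min ||x||_1 becomes
# min sum(u) + sum(v)  subject to  A u - A v = b.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("recovered sparse solution:", np.allclose(x_hat, x_true, atol=1e-5))
```

With these proportions (5 nonzeros among 100 unknowns, 40 random equations), the l¹ solution typically coincides exactly with the sparsest one, even though the system has infinitely many solutions.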

In my talk, I'll discuss this curious behavior of l¹ minimization and connect it with some pure mathematics -- convex polytope theory and oriented matroid theory.
Ultimately, the pure math behind this phenomenon concerns some accessible but very surprising properties of random point clouds in very high dimensions: each point gets very neighborly!

I'll also explain the connection of this phenomenon to the Newton Institute's ongoing program "Statistical Theory and Methods for Complex, High-Dimensional Data".

Original web seminar at: http://www.newton.ac.uk/programmes/SCH/seminars/031717001.html
In association with the Newton Statistical Theory and Methods for Complex, High-Dimensional Data programme:
http://www.newton.ac.uk/programmes/SCH/
Available Formats
Format        Quality   Bitrate           Size
MPEG-4 Video  480x360   1.84 Mbits/sec    993.70 MB
WebM          480x360   588.96 kbits/sec  306.27 MB
Flash Video   480x360   567.92 kbits/sec  298.87 MB
iPod Video    480x360   505.41 kbits/sec  265.97 MB
QuickTime     384x288   849.12 kbits/sec  446.85 MB
MP3           44100 Hz  125.02 kbits/sec  65.59 MB