News
Overview of the Symposium:
Envisioning the future of signal processing
by Alison Takemura | EECS
We live in a sea of signals. “They’re natural, manmade, medical; speech, music; synthetic; physical signals, and communication,” says Meir Feder, professor of electrical engineering and Information Theory Chair at Tel Aviv University.
For example, when we snap a photo with our phones, record a funny cat video, or tell Siri to write a text, technology is taking signals from the environment — for instance, analog audio and visual information — and making it digital: into ones and zeros that machines can read. The information comes out the other end, where it appears reanimated, or processed, into something we can use and understand.
There’s a whole field devoted to studying how to listen and watch for such signals, and then transform and translate them. Most of us have just never heard of it — which might be a testament to its impact. “I think that one indicator of the power of signal processing is that it doesn’t get credit anymore,” says Anantha Chandrakasan, Vannevar Bush Professor of Electrical Engineering and Computer Science (EECS) and dean of the MIT School of Engineering. “It’s everywhere.”
On Oct. 22 and 23, a group of researchers marked the 80th birthday of one of the pioneers in the field of digital signal processing (DSP): Alan V. Oppenheim, MIT’s Ford Professor of Engineering. With Ron Schafer, professor emeritus at Georgia Tech, Oppenheim coauthored the textbook Discrete-Time Signal Processing. “Some edition of this book is — or should be — on every DSP engineer’s shelf,” says Tom Baran, research affiliate in MIT’s Research Laboratory of Electronics’ DSP Group, co-founder and CEO of Lumii, and the lead organizer of both a dinner in Oppenheim’s honor and the Future of Signal Processing Symposium.
During the day-long symposium, researchers defined the next wave of problems that this field will tackle. These included applications in security, forensics, and health. The researchers also described some unexpected areas of science that will help propel the field: quantum physics, 19th-century algebra, and a signal’s customary nemesis: noise.
A signal in the darkness
Signal processing can make us safer, Chandrakasan says. His research group is developing processing techniques to enhance the security of the devices that make up the so-called Internet of Things (IoT). “Everything that can be connected to the Internet wirelessly can be hacked,” says Chandrakasan, a member of the symposium’s organizing committee.
Data in the right hands, of course, can be a security boon. Symposium speaker Admiral John Richardson, 31st Chief of Naval Operations for the U.S. Navy, explains that shipbuilding alone can’t keep up with the Navy’s actual demand. Instead of only more ships, there’s a need to make better ships, he says. Using advanced signal processing, a fully networked fleet would be able to listen in the water and respond with greater coordination, giving the Navy a tactical advantage.
“Signal processing has a terrific and important role in making our Navy more capable,” Richardson says.
New approaches in signal processing could also support combating terrorism and locating criminals. Another symposium speaker, Min Wu, professor of electrical and computer engineering at the University of Maryland, illustrates the point with a video recording of Osama bin Laden. “Many people fighting terrorism want to know when the video was shot, where the video was shot,” she says.
To help answer that question, Wu and her team have developed signal processing techniques that exploit the tiny, ever-present fluctuations in the frequency of the electric power grid. For a recording made indoors, those fluctuations might translate to an ever-so-slight flickering of the lights; outdoors, they might appear as subtle changes in the ambient sound of power equipment connected to the grid.
The fluctuations allow researchers to localize a recording to whichever electric grid carries the same fingerprint. Right now, it’s possible to differentiate recordings made on completely separate grids: distinguishing, for instance, a recording made in the western United States from one made in India or Lebanon. Even that level of precision could help narrow down the locations of terrorist cells, Wu says. The U.S. Department of Homeland Security has also approached her to help determine where victims of child pornography were filmed.
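For readers curious about the mechanics, a minimal sketch of electric-network-frequency (ENF) matching follows. It is illustrative only, not Wu’s actual pipeline; the 60 Hz nominal frequency, window lengths, and reference logs are assumptions.

```python
# Illustrative sketch of electric-network-frequency (ENF) matching, not Wu's
# actual pipeline: estimate the faint grid-hum frequency over time in a recording,
# then compare that trace against reference frequency logs from candidate grids.
import numpy as np
from scipy.signal import spectrogram

def enf_trace(audio, fs, nominal=60.0, band=1.0, win_s=2.0):
    """Estimate the grid-hum frequency (Hz) in successive windows of a recording."""
    f, t, S = spectrogram(audio, fs=fs, nperseg=int(win_s * fs))
    near_hum = (f > nominal - band) & (f < nominal + band)   # look only near 60 Hz
    return f[near_hum][np.argmax(S[near_hum, :], axis=0)]    # peak frequency per window

def best_grid(trace, reference_logs):
    """Pick the reference grid whose frequency log correlates best with the trace."""
    scores = {grid: np.corrcoef(trace, log[:len(trace)])[0, 1]
              for grid, log in reference_logs.items()}
    return max(scores, key=scores.get), scores
```

Here `reference_logs` stands in for hypothetical per-grid frequency records, say for the western U.S. interconnection versus grids in India or Lebanon; a real forensic system would use far more robust frequency estimation and alignment.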
Richard Baraniuk, professor of electrical and computer engineering at Rice University and founder and director of OpenStax, spoke at the symposium about how signal processing can help crack open the black box of why machine learning is so effective.
Another speaker, Martin Vetterli — president of École Polytechnique Fédérale de Lausanne — dazzled the audience by talking about his group’s recent effort in high-quality digital acquisition and rendering of rare artifacts. He showed how to revive the Lippmann photography method by making the process digital in order to create astonishingly vivid images. He also presented a process of virtual relighting applied to one of the oldest well-preserved New Testament manuscripts, called Papyrus 66. Despite being projected on a screen, the papyrus looked as real as if it were right in front of the audience.
Body signals
Signal processing can help manage internal threats as well as those from the outside. Chandrakasan and his group have developed a low-power cap of electrodes to detect changes in brain-wave patterns that herald an oncoming seizure. By alerting patients eight to 10 seconds in advance, the technology allows them to move into a safer position or environment.
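The warning idea can be pictured with a simple, hypothetical detector: watch the power of the EEG signal in short windows and raise an alert when it drifts far from a per-patient baseline. The sketch below is a generic illustration, not the MIT group’s algorithm, and the frequency band and threshold are invented for the example.

```python
# A generic sketch only (not the actual MIT detector): flag windows in which EEG
# band power drifts far above a baseline learned from the first minute of data.
import numpy as np

def band_power(x, fs, lo=3.0, hi=30.0):
    """Power of an EEG window restricted to a frequency band, via the FFT."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return spectrum[(freqs >= lo) & (freqs <= hi)].sum()

def alert_windows(eeg, fs, baseline_s=60, win_s=2, factor=5.0):
    """Return indices of windows whose band power exceeds `factor` x the baseline."""
    win = int(win_s * fs)
    powers = np.array([band_power(eeg[i:i + win], fs)
                       for i in range(0, len(eeg) - win + 1, win)])
    baseline = powers[: int(baseline_s // win_s)].mean()
    return np.where(powers > factor * baseline)[0]
```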
Another application is in the treatment of cancer. Symposium speaker Ron Weiss — an MIT professor in both the departments of Biological Engineering and EECS and the director of MIT’s Synthetic Biology Center — and his group have developed proof-of-concept biological circuits. These process biochemical signals into a desirable outcome: targeting and destroying cancer cells.
Currently, it works like this: an engineered virus is injected into the bloodstream of a mouse. From there, it makes its way into a cell. Then it performs a computation: does it sense the right combination of four to six biomarkers indicating that the cell is cancerous? If the answer is “yes,” the virus flips into “destroy” mode. This kind of biological circuit is itself a signal processing system.
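The decision the circuit makes can be written as a logical AND over biomarker levels. The sketch below is a software toy of that logic; in the real system the computation happens biochemically inside the cell, and the marker names here are placeholders, not Weiss’s actual signature.

```python
# Toy sketch of the circuit's decision logic (the real computation is biochemical,
# and these marker names are placeholders): destroy only if every marker in the
# cancer signature matches, i.e. high markers are high and low markers are low.
CANCER_SIGNATURE = {"marker_A": True, "marker_B": True,    # should be high in cancer
                    "marker_C": False, "marker_D": False}  # should be low in cancer

def should_destroy(cell_markers: dict) -> bool:
    """Logical AND over the whole signature."""
    return all(cell_markers.get(name, False) == level
               for name, level in CANCER_SIGNATURE.items())

# e.g. should_destroy({"marker_A": True, "marker_B": True,
#                      "marker_C": False, "marker_D": False})  ->  True
```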
Processing updates
As new applications emerge in signal processing, novel approaches are brewing as well.
Drawing on work pioneered by Oppenheim and his then-student Yonina Eldar (now professor of electrical engineering at Technion and a member of the symposium’s organizing committee), Isaac Chuang believes quantum physics will play a role in signal processing.
“Signals from the physical world are actually quantum as they come in,” says Chuang, another symposium speaker who is professor of EECS and physics and Senior Associate Dean of Digital Learning at MIT. “The faintest light from the moon is a quantum signal.” Quantum computing — replacing the ones and zeroes of traditional computers with quantum states — could make calculations for processing signals faster.
Math from the 19th century could also provide a boost to signal processing, says Feder, of Tel Aviv University. Take the quaternion: an extension of complex numbers, but with four elements instead of two. It’s useful for representing certain signals that correspond to the location and orientation of a three-dimensional body in space, like a rotation, he says.
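As a concrete illustration of the idea Feder describes, a unit quaternion can encode a rotation of a three-dimensional body, and applying it takes only a couple of multiplications. The short sketch below is a standard textbook construction, not anything specific to his talk.

```python
# A standard textbook sketch (not specific to Feder's talk): a unit quaternion
# q = (w, x, y, z) encodes a 3-D rotation, and a point p is rotated as q * p * conj(q).
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotate(point, axis, angle):
    """Rotate a 3-D point about a unit-length axis by `angle` radians."""
    q = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * np.asarray(axis, float)))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    p = np.concatenate(([0.0], np.asarray(point, float)))
    return quat_mul(quat_mul(q, p), q_conj)[1:]   # drop the scalar part

# rotate([1, 0, 0], axis=[0, 0, 1], angle=np.pi / 2)  ->  approximately [0, 1, 0]
```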
Quaternions may not be the only mathematics that proves useful in future signal processing. They’re a special case of a broader family of structures called Clifford algebras, says Petros Boufounos, senior principal research scientist at Mitsubishi Electric Research Laboratories, adding that all Clifford algebras deserve a second look. “They provide you with amazing structure,” says Boufounos, another member of the symposium’s organizing committee.
Finally, noise and randomness, the historical foes of signals, may prove beneficial. “Intentional randomness is something we don’t completely understand,” Boufounos says. But it can improve performance, he adds. Boufounos shows a picture of a videographer on MIT’s campus, with buildings in the background. When the image is filtered to remove low-contrast pixels, the background disappears. But adding noise brings back those previously lost features.
“Randomness can be very useful if we properly harness it,” Boufounos says.
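The effect Boufounos describes can be imitated in a few lines: a faint gradient vanishes under a hard contrast threshold, yet adding random noise before thresholding and averaging many noisy trials recovers its shape. The toy sketch below illustrates that general principle (often called dithering or stochastic resonance); it is not his actual demonstration, and the numbers are arbitrary.

```python
# Toy illustration of noise helping rather than hurting (dithering / stochastic
# resonance), not Boufounos's actual demonstration: a faint ramp is erased by a
# hard threshold, but thresholding noisy copies and averaging recovers its shape.
import numpy as np

rng = np.random.default_rng(0)
faint = np.linspace(0.40, 0.60, 100)        # low-contrast ramp, all near mid-gray
threshold = 0.7

hard = (faint > threshold).astype(float)    # threshold alone: every pixel vanishes

# Add noise before thresholding: each sample now crosses the threshold with a
# probability that tracks its underlying brightness, so averaging 500 noisy
# trials recovers a rising ramp where the hard threshold saw nothing.
trials = (faint + rng.normal(0.0, 0.2, size=(500, faint.size))) > threshold
dithered = trials.mean(axis=0)

print(hard.sum())                 # 0.0 -- the faint detail is gone
print(dithered[0], dithered[-1])  # roughly 0.07 rising to about 0.3
```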
The horizon of innovations in signal processing seems endless, and demand is in no short supply, according to Oppenheim. “There will always be signals,” he’s often said. “And they will always need processing.”
The Future of Signal Processing Symposium opened with remarks from MIT Provost Martin A. Schmidt and concluded with a panel discussion, “The Venn Diagram Between Data Science, Machine Learning, and Signal Processing,” moderated by Oppenheim. Panelists included Eldar and Schafer, along with Prof. Asu Ozdaglar, recently named head of MIT’s EECS Department; Prof. Alexander Rakhlin of the University of Pennsylvania; and Prof. Victor Zue, Delta Electronics Professor of EECS at MIT.
Organizing committee members, along with lead organizer Baran, included Boufounos, Chandrakasan, and Eldar.
For more on the Future of Signal Processing Symposium, please see the event photo album and the videos of speaker presentations. To view the original version of this article and a related slide show, please visit the EECS website.