I am part of the Networks Information Communications and Engineering Systems Laboratory (NICEST lab) at UIC.
My Google Scholar Page (with citations, h-index, i10-index)
My dblp page (though all my papers are also available here)
NSF's support is gratefully acknowledged
Natasha Devroye's CV
Sorry, the next section has not been updated since 2014 -- my Publications and NSF grants reflect my current interests.
I work in the area of network information theory, with a particular focus on determining the information theoretic performance limits of cognitive networks, interference networks, two-way networks, and relay networks. I have also recently become interested in radar signal processing, motivated in particular by cognitive radar. In the future, I hope to look at whether and how information theory may be useful in other domains; open problems and ideas for new directions are welcome!
I am interested in determining the fundamental limits of how fast one can reliably communicate over networks (i.e. I seek the “capacity” of networks), an area of importance as we have come to expect rapid communications over ever more sophisticated and heavily utilized networks. Information theoretic bounds on capacity not only act as technology-independent benchmarks for measuring the performance of current systems, but may also guide industry and government on which directions to pursue. This is a challenging problem -- the capacity of even simple networks has been a long-standing open problem in information theory. Within network information theory, my research may be split along three lines:
Cognitive networks
Cognitive networks supported by NSF CCF-1017436 “Fundamental Limits of Layered Wireless Networks” and the upcoming NSF CIF Small grant “Network Capacity when Some Common Information Theoretic Assumptions Break Down”
Spectrum sensing and cognitive radio gained traction about 10 years ago; they seek to solve the perceived shortage in spectrum by 1) cleverly sharing the spectrum between devices, and 2) employing the new cognitive radio technology, in which wireless devices are able to sense and adapt to their environment.
While I was a Ph.D. student under the supervision of Vahid Tarokh at Harvard, we wrote a TransIT 2006 paper together with colleague Patrick Mitran in which we modeled the communication in a network with a primary user (with priority access to the spectrum) and a secondary or cognitive user (seeking to employ cognitive radio technology to access the same spectrum as the primary) in a new, information theoretic framework termed the cognitive interference channel. This paper now has over 850 citations and is considered by some to be the seminal paper in the information theoretic study of cognitive networks; “cognition” in the information theory community has come to mean “non-causal / a-priori message knowledge at some of the nodes in the network”. I see the main contribution as the introduction of the rigorous, information theoretic study of a network in which a primary and a secondary node co-exist, which may be done in several ways, as outlined in several of our book chapters on the subject (1, 2, 3, 4, 5). This came at a time when most work in the cognitive arena was focussed on “white-space filling” or “interference-temperature”-like schemes. Non-mathematical introductions may be found in our 2006 IEEE Comm. Magazine and 2008 IEEE Sig. Proc. Magazine articles, as well as through various introductory tutorials and talks found in my Presentations.
Since then, both while a graduate student at Harvard and now at UIC, my research has focussed on understanding cognitive networks through a combination of capacity results, novel inner and outer bounds, (generalized) degrees of freedom analysis, and scaling law and constant-gap-to-capacity results. Specifically: the first model and study of the scaling laws of co-existing primary and secondary networks (TransIT 2011, TransWC 2009), cognitive channels with oblivion constraints (best paper award at CROWNCOM 2011), new capacity and constant-gap-to-capacity results, as well as the best known inner and outer bounds, for the discrete memoryless and Gaussian cognitive interference channels (TransIT 2012, TransIT 2011), and extensions to the ergodic cognitive interference channel (Trans WC submission 2014), the K-user cognitive interference channel (JSAC 2014) and the interference channel with a cognitive relay (TransIT 2014).
Recently, I’m excited about this 2014 Trans IT submission, where we show that partial codebook knowledge in interference channels (inspired by cognitive networks in which certain nodes, e.g. primary nodes, are legacy nodes that do not possess the codebooks of the secondary / cognitive nodes) does not hamper performance “much” compared to the same network with full codebook knowledge: not at all in the generalized degrees of freedom sense, and only to within a constant gap for Gaussian networks. We show this using discrete PAM (rather than Gaussian) inputs in the Gaussian channel, via new techniques which are of theoretical interest in and of themselves. Surprisingly, even when each receiver in an interference channel has only its own codebook (and not that of the interfering signal, as is usually assumed in an interference channel), using a combination of Gaussian and discrete inputs allows one to achieve the same sum-generalized degrees of freedom and to come within an additive gap of O(1) or O(log log(SNR)) of the symmetric sum-capacity of the classical IC; see our ISIT 2014 paper, with its journal version to be submitted any day.
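For context, here is a sketch of the generalized degrees of freedom (gDoF) metric referred to above, written in its standard form for the symmetric Gaussian interference channel with INR = SNR^α (my notation, not taken from the paper):

\[
d_{\Sigma}(\alpha) \;=\; \lim_{\mathrm{SNR}\to\infty} \frac{C_{\Sigma}\big(\mathrm{SNR},\ \mathrm{INR}=\mathrm{SNR}^{\alpha}\big)}{\tfrac{1}{2}\log \mathrm{SNR}}
\]

In this normalization, the claim above says that the sum-gDoF with only partial codebook knowledge matches that with full codebook knowledge for every α, while at finite SNR the loss is at most an additive O(1) or O(log log SNR) term.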
Two-way networks
Two-way networks supported by NSF CAREER “Foundations for Two-way Communication Networks”
This recent line of work (very different from the cognitive line) focuses on obtaining capacity results for two-way communication networks, where multiple pairs of nodes wish to exchange streams of information in a two-way / interactive fashion, adapting their next transmission based on previously received signals to improve data rates. Little is understood about two-way networks (despite their relevance), and current systems treat two-way communications as two one-way communication links, which is generally sub-optimal from a capacity perspective. In our Trans IT 2014 paper, my student Zhiyu Cheng and I defined and demonstrated several classes of two-way networks for which adaptation (adapting current channel inputs based on previously received outputs) either does not increase capacity, or can only increase it by a finite number of bits per channel use. The key techniques were to derive new outer bounds allowing for two-way adaptation at the transceivers, and to show these to be exactly (or approximately) achievable using non-adaptive techniques. For some networks, this shows that the simple method of orthogonalizing the two directions of communication is not too bad from a capacity perspective. In my recent submission, we obtain the degrees of freedom (DoF) for two-way K-pair-user interference channels with and without (causal and non-causal) relays. We show that the two-way K-user interference channel without relays has K DoF (K/2 in each direction, so adaptation is not needed; the outer bounds are the contribution), that a non-causal / instantaneous relay with enough antennas can completely mitigate all interference and achieve the maximal 2K DoF (achievability is the contribution), and that a causal relay cannot increase the DoF beyond the relay-free K (the outer bound is the contribution).
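For reference, a sketch of the (sum) degrees of freedom notion used in these statements, in its usual high-SNR form (my notation, not the paper's): a network with sum-capacity C_Σ(SNR) has sum-DoF

\[
d_{\Sigma} \;=\; \lim_{\mathrm{SNR}\to\infty} \frac{C_{\Sigma}(\mathrm{SNR})}{\tfrac{1}{2}\log \mathrm{SNR}},
\]

so K total DoF for the two-way K-pair-user interference channel means the sum-capacity over both directions grows like K · (1/2) log SNR, and the 2K figure attained with an instantaneous relay corresponds to each of the 2K messages behaving, at high SNR, as if it had an interference-free link.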
I have also worked on two-way relay networks, including an early and well-cited TransIT 2011 paper on the single-relay two-way relay network, work on the multi-terminal two-way relay network (ISIT 2011, ISIT 2010), and a recent JSAC 2014 paper in which we present a novel lattice-based scheme for a two-way line network which is able to achieve to within a constant number of bits (independent of the number of relays) of capacity.
In January 2012 I organized a 5-day workshop exclusively on the topic of “Interactive Information Theory” at the Banff International Research Station (my workshop proposal was selected for sponsorship, i.e. a 5-day, all-expenses-paid workshop for 42 leaders in this field); I have also given a tutorial at the 2010 IEEE Sarnoff Symposium in Princeton on two-way networks, and was an invited speaker at the 2013 Workshop on Sequential and Adaptive Information Theory.
Relay networks
Relay networks supported by NSF CCF-1216825 “Wireless relay networks: coding above capacity and exploiting structure”
Here, my work has focussed on the use of lattice codes in relay networks. Lattice codes are an interesting alternative to the classically used i.i.d. random codes: they are linear codes, and hence the sum of two codewords is again a codeword. This may sometimes be exploited to achieve higher rates than those of i.i.d. Gaussian random codes. In my TransIT 2013 paper with my first graduated Ph.D. student, Yiwei Song, we developed a new lattice list decoding technique which we used to demonstrate that lattice codes may achieve the same performance as known i.i.d. Gaussian random coding techniques for the Gaussian relay channel, and we show several examples of how this may be combined with the linearity of lattice codes in multi-source relay networks. We also presented a lattice compress-and-forward (CF) scheme for the Gaussian relay channel which exploits a lattice Wyner–Ziv binning scheme and achieves the same rate as the Cover–El Gamal CF rate evaluated for Gaussian random codes. In our forthcoming JSAC 2014 paper, we devised a novel lattice coding scheme for the two-way line network, where each relay decodes the sum of several signals (using lattice codes) and then re-encodes it into another lattice codeword. Interestingly, this scheme allows one to achieve to within a constant gap, irrespective of the number of relays in the line network, of the capacity of two one-way line networks operating in parallel; i.e. the two directions decouple.
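As a toy numerical illustration of the linearity property mentioned above (a sketch only: it uses a scaled integer lattice as a stand-in for the nested coding/shaping lattices actually used in these papers, and omits noise and shaping):

import numpy as np

# A lattice is closed under addition, so the sum of two lattice codewords is
# again a lattice point. Here the "lattice" is simply step * Z^n, a stand-in
# for the nested coding lattices used in the actual schemes.
step = 0.5                                   # lattice spacing per dimension
n = 4                                        # toy block length

rng = np.random.default_rng(0)
x1 = step * rng.integers(-10, 10, size=n)    # codeword 1: a point of step * Z^n
x2 = step * rng.integers(-10, 10, size=n)    # codeword 2: another such point

s = x1 + x2                                  # what a relay observes (noise omitted)

# The sum is itself a lattice point, so a relay can decode the *sum* directly,
# without decoding x1 and x2 individually -- the idea behind compute-and-forward
# style relaying with lattice codes.
print(np.allclose(np.round(s / step) * step, s))   # True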
Building on Nazer and Gastpar’s compute-and-forward framework for decoding sums of messages (encoded via lattice codewords) in relay networks, we have also defined and obtained capacity for the inverse compute-and-forward (ICF) channel (ISIT 2013, TransIT 2014 submission). We have obtained the capacity region of the Gaussian ICF channel, in which we extract, over the air, individual messages from sources which hold sums of messages (essentially the opposite of what compute-and-forward does); the result is an interesting region which shows that higher-order (than 2) correlations may not be exploited to increase capacity.
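To make the “opposite of compute-and-forward” idea concrete, here is a conceptual toy, not the actual ICF coding scheme or its Gaussian channel model: if the available signals were noiseless integer linear combinations of finite-field messages with an invertible coefficient matrix, the individual messages could be recovered by inverting that matrix over the field.

import numpy as np

# Toy: recover individual messages from integer linear combinations over Z_p.
# (Assumed setup for illustration only -- the actual ICF problem extracts the
# messages over a noisy Gaussian channel, not by offline matrix inversion.)
p = 257                                   # a prime field size, chosen for the toy
A = np.array([[1, 2],
              [3, 1]]) % p                # combination coefficients (invertible mod p)
m = np.array([42, 99]) % p                # the two original messages in Z_p

u = A @ m % p                             # the available sums of messages

# Invert A over Z_p (2x2 case: adjugate times modular inverse of the determinant).
det = int(A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]) % p
det_inv = pow(det, -1, p)                 # modular inverse; needs gcd(det, p) = 1
adj = np.array([[A[1, 1], -A[0, 1]],
                [-A[1, 0], A[0, 0]]]) % p
A_inv = (det_inv * adj) % p

print(A_inv @ u % p)                      # [42 99] -- the original messages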
This line of work suggests that structured/lattice codes may be used to mimic, and sometimes outperform, random Gaussian codes in general Gaussian networks.
Radar signal processing
Radar signal processing supported by AFOSR under award FA9550-10-1-0239, as well as by a Dynetics grant on “Fully Adaptive Radar” and the upcoming NSF EARS grant “Collaborative Research: Let’s share CommRad — spectrum sharing between communications and radar systems”
While not my main research area, I have also worked on several radar signal processing problems motivated by cognitive radar, i.e. radar that somehow has additional side-information about, and/or is able to adapt in real time to, the radar environment. In particular, my journal papers in EURASIP 2013, JSTSP 2014 and IEEE Trans. on AES 2014 all deal with different ways to exploit multipath in radar systems when one has knowledge of the scene geometry. When the multipath components are resolvable, they start to resemble additional “looks” at a target or scene and may be used to improve detection, localization, and imaging performance. In collaboration with my former post-doc Dr. Pawan Setlur, now a research scientist at AFRL, we have a series of conference papers on waveform scheduling and design using two-step mutual information, which may be found in my Publications.