I am part of the Networks Information Communications and Engineering Systems Laboratory (NICEST lab) at UIC.
My Google Scholar Page (with citations, h-index, i10-index)
My dblp page (though all my papers are also available here)
NSF's support is gratefully acknowledged.
Sorry, the next section has not been updated since 2014; my Publications and NSF grants reflect my current interests.
I work in the area of network information theory, with a particular focus on determining the information-theoretic performance limits of cognitive networks, interference networks, two-way networks, and relay networks. I have also recently become interested in radar signal processing, in particular motivated by cognitive radar. In the future, I hope to look at whether and how information theory may be useful in other domains; open problems and ideas for directions are welcome!
I am interested in determining the fundamental limits of how fast one can reliably communicate over networks (i.e., I seek the “capacity” of networks), an area of importance as we have come to expect rapid communications over ever-more sophisticated and heavily utilized networks. Information-theoretic bounds on capacity not only act as technology-independent benchmarks for measuring the performance of current systems, but may also guide industry and government on which directions to pursue. This is a challenging problem: the capacity of even simple networks has been a longstanding open problem in information theory. Within network information theory, my research may be split along three lines:
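These network capacity questions generalize Shannon's classical point-to-point result; as a baseline, the capacity of a single real AWGN link is C = ½ log2(1 + SNR) bits per channel use. A minimal sketch (function name is my own):

```python
import math

def awgn_capacity_bits(snr_linear: float) -> float:
    """Shannon capacity of a real AWGN channel, in bits per channel use."""
    return 0.5 * math.log2(1.0 + snr_linear)

# Capacity grows only logarithmically with SNR: at high SNR, doubling the
# power buys roughly half a bit per channel use.
for snr_db in (0, 10, 20, 30):
    snr = 10 ** (snr_db / 10)
    print(f"{snr_db:2d} dB -> {awgn_capacity_bits(snr):.3f} bit/use")
```

Network capacity results bound rate *tuples* rather than a single rate, but this single-link formula is the benchmark most of the Gaussian-network results below are measured against.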

Cognitive networks
Cognitive networks supported by NSF CCF-1017436 “Fundamental Limits of Layered Wireless Networks” and the upcoming NSF CIF Small: Network Capacity when Some Common Information Theoretic Assumptions Break Down
Spectrum sensing and cognitive radio gained traction about 10 years ago; they seek to solve the perceived spectrum shortage by (1) cleverly sharing the spectrum between devices, and (2) employing new cognitive radio technology, in which wireless devices are able to sense and adapt to their environment.
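The sensing side of cognitive radio is often illustrated with a simple energy detector, which compares the received energy to a noise-calibrated threshold. A toy sketch, with all names and the threshold my own choices (real detectors set the threshold from a target false-alarm probability):

```python
import numpy as np

def energy_detect(samples: np.ndarray, noise_var: float,
                  threshold_factor: float = 2.0) -> bool:
    """Declare the band occupied if the average energy exceeds a
    threshold set relative to the (assumed known) noise variance."""
    test_statistic = np.mean(np.abs(samples) ** 2)
    return bool(test_statistic > threshold_factor * noise_var)

rng = np.random.default_rng(0)
n = 4096
noise = rng.normal(0, 1, n)                          # idle band: noise only
occupied = noise + 2.0 * np.cos(0.1 * np.arange(n))  # primary signal present

print(energy_detect(noise, 1.0))      # False: average energy near 1
print(energy_detect(occupied, 1.0))   # True: average energy near 3
```

A secondary device running such a detector would transmit only when the band is declared idle; the information-theoretic models discussed below abstract this away into assumptions on what the secondary node knows.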
While I was a Ph.D. student under the supervision of Vahid Tarokh at Harvard, we wrote a Trans. IT 2006 paper together with colleague Patrick Mitran in which we modeled the communication in a network with a primary user (with priority access to the spectrum) and a secondary or cognitive user (seeking to employ cognitive radio technology to access the same spectrum as the primary) in a new, information-theoretic framework termed the cognitive interference channel. This paper now has over 850 citations and is considered by some to be the seminal paper in the information-theoretic study of cognitive networks; “cognition” in the information theory community has come to mean “non-causal / a priori message knowledge at some of the nodes in the network”. I see the main contribution as the introduction of the rigorous, information-theoretic study of a network in which a primary and a secondary node coexist, which may be done in several fashions, as outlined in several of our book chapters on the subject (1, 2, 3, 4, 5). This came at a time when most work in the cognitive arena was focused on “white-space filling” or “interference-temperature”-like schemes. Non-mathematical introductions may be found in our 2006 IEEE Comm. Magazine and 2008 IEEE Sig. Proc. Magazine articles, as well as through various introductory tutorials and talks found in my Presentations.
Since then, both while a graduate student at Harvard and now at UIC, my research has focused on understanding cognitive networks through a combination of capacity results, novel inner and outer bounds, (generalized) degrees-of-freedom analysis, and scaling-law and constant-gap-to-capacity results. Specifically: the first model and study of the scaling laws of coexisting primary and secondary networks (Trans. IT 2011, Trans. WC 2009), cognitive channels with oblivion constraints (best paper award at CROWNCOM 2011), new capacity and constant-gap-to-capacity results, as well as the best known inner and outer bounds for the discrete memoryless and Gaussian cognitive interference channels (Trans. IT 2012, Trans. IT 2011), and extensions to the ergodic cognitive interference channel (Trans. WC submission 2014), the K-user cognitive interference channel (JSAC 2014), and the interference channel with a cognitive relay (Trans. IT 2014).
Recently, I am excited about this 2014 Trans. IT submission, where we show that for interference channels with partial codebook knowledge (inspired by cognitive networks in which certain nodes, e.g. primary nodes, are legacy nodes and do not possess the codebooks of the secondary / cognitive nodes), performance is not hampered “much” (not at all in the generalized degrees-of-freedom sense, and only to within a constant gap for Gaussian networks) compared to the same network with full codebook knowledge. We show this using discrete PAM (rather than Gaussian) inputs in the Gaussian channel, using new techniques which are of theoretical interest in and of themselves. Surprisingly, even when each receiver in an interference channel has only its own codebook (and not that of the interfering signal, as is usually assumed in an interference channel), using a combination of Gaussian and discrete inputs allows one to achieve the same sum generalized degrees of freedom, and to come within an additive gap of O(1) or O(log log(SNR)) of the symmetric sum-capacity of the classical IC; see our ISIT 2014 paper, with its journal version to be submitted any day.
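The contrast between discrete (PAM) and Gaussian inputs can be illustrated numerically on a single AWGN link: a Monte Carlo estimate of the mutual information of a uniform 4-PAM input saturates at log2(4) = 2 bits, while the Gaussian-input capacity keeps growing with SNR. A sketch (this is a generic single-user illustration, not the interference-channel construction of the paper; names and parameters are my own):

```python
import numpy as np

def pam_mutual_info(snr_linear: float, m: int = 4,
                    n_samples: int = 200_000, seed: int = 1) -> float:
    """Monte Carlo estimate of I(X;Y) in bits for a uniform m-PAM input
    on a real AWGN channel, Y = X + Z with Z ~ N(0,1) and E[X^2] = snr."""
    rng = np.random.default_rng(seed)
    levels = np.arange(m) - (m - 1) / 2               # e.g. [-1.5,-0.5,0.5,1.5]
    levels = levels * np.sqrt(snr_linear / np.mean(levels ** 2))
    x = rng.choice(levels, size=n_samples)
    y = x + rng.normal(0, 1, n_samples)
    # I(X;Y) = E[log2(p(y|x) / p(y))]; the Gaussian normalizations cancel.
    lik_given_x = np.exp(-0.5 * (y - x) ** 2)
    lik_avg = np.exp(-0.5 * (y[:, None] - levels[None, :]) ** 2).mean(axis=1)
    return float(np.mean(np.log2(lik_given_x / lik_avg)))

snr = 10 ** (20 / 10)                                  # 20 dB
print(f"4-PAM : {pam_mutual_info(snr):.2f} bits (saturates at 2)")
print(f"Gauss : {0.5 * np.log2(1 + snr):.2f} bits")
```

The point of mixing discrete and Gaussian inputs in the papers above is precisely that a discrete interfering input is easier for an unintended receiver to cope with than a Gaussian one, at modest cost to its own rate.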

Two-way networks
Two-way networks supported by NSF CAREER “Foundations for Two-way Communication Networks”
This recent line of work (very different from the cognitive line) focuses on obtaining capacity results for two-way communication networks, where multiple pairs of nodes wish to exchange streams of information in a two-way / interactive fashion by adapting their next transmission, based on previously received signals, to improve data rates. Little is understood about two-way networks (despite their relevance), and current systems treat two-way communications as two one-way communication links, which is generally suboptimal from a capacity perspective. In our Trans. IT 2014 paper, my student Zhiyu Cheng and I defined and demonstrated several classes of two-way networks for which adaptation (adapting current channel inputs based on previously received outputs) either does not increase capacity, or can only increase it by a finite number of bits per channel use. The key techniques were to derive new outer bounds allowing for two-way adaptation at the transceivers, and to show these to be exactly (or approximately) achievable using non-adaptive techniques. For some networks, this shows that the simple method of orthogonalizing the two directions of communication is not too bad from a capacity perspective. In my recent submission, we obtain the degrees of freedom (DoF) of two-way K-pair-user interference channels with and without (causal and non-causal) relays. We show that the two-way K-user interference channel without relays has K DoF (K/2 in each direction, so adaptation is not needed; the outer bounds are the contribution), that a non-causal / instantaneous relay with enough antennas can completely mitigate all interference to achieve the maximal 2K DoF (achievability is the contribution), and that a causal relay cannot increase the DoF beyond the relay-free K (the outer bound is the contribution).
I have also worked on two-way relay networks, including an early and well-cited Trans. IT 2011 paper on the single-relay two-way relay network, the multi-terminal two-way relay network (ISIT 2011, ISIT 2010), and a recent JSAC 2014 paper in which we present a novel lattice-based scheme for a two-way line network which achieves to within a constant number of bits of capacity, independent of the number of relays.
In January 2012 I organized a 5-day workshop exclusively on the topic of “Interactive Information Theory” at the Banff International Research Station (my workshop proposal was selected for sponsorship, i.e., a 5-day all-expenses-paid workshop for 42 leaders in this field); I have also given a tutorial at the 2010 IEEE Sarnoff Symposium in Princeton on two-way networks, and was an invited speaker at the 2013 Workshop on Sequential and Adaptive Information Theory.
Relay networks
Relay networks supported by NSF CCF-1216825 “Wireless relay networks: coding above capacity and exploiting structure”
Here, my work has focused on the use of lattice codes in relay networks. Lattice codes are an interesting alternative to the classically used i.i.d. random codes: they are linear codes, and hence the sum of two codewords is again a codeword. This may sometimes be exploited to achieve higher rates than those of i.i.d. Gaussian random codes. In my Trans. IT 2013 paper with my first Ph.D. graduate, Yiwei Song, we developed a new lattice list-decoding technique which we used to demonstrate that lattice codes may achieve the same performance as known i.i.d. Gaussian random coding techniques for the Gaussian relay channel, and showed several examples of how this may be combined with the linearity of lattice codes in multi-source relay networks. We also presented a lattice compress-and-forward (CF) scheme for the Gaussian relay channel which exploits a lattice Wyner–Ziv binning scheme and achieves the same rate as the Cover–El Gamal CF rate evaluated for Gaussian random codes. In our forthcoming JSAC 2014 paper, we devised a novel lattice coding scheme for the two-way line network, where each relay decodes the sum of several signals (using lattice codes) and then re-encodes it into another lattice codeword. Interestingly, this scheme achieves, irrespective of the number of relays in the line network, to within a constant gap of the capacity of two one-way line networks operating in parallel; i.e., the two directions decouple.
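The linearity property these schemes rely on (the sum of two lattice codewords is again a lattice point, so a relay that decodes a sum has decoded something it can re-encode) can be sketched with a toy two-dimensional lattice; the generator matrix below is an arbitrary illustrative choice of mine:

```python
import numpy as np

# A lattice is the set {G @ a : a an integer vector} for a generator matrix G.
G = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # arbitrary illustrative generator

rng = np.random.default_rng(2)
a, b = rng.integers(-5, 6, size=(2, 2))
x1, x2 = G @ a, G @ b               # two lattice codewords

# Linearity: x1 + x2 = G @ (a + b) is again a lattice point, so a relay
# observing (a noiseless version of) x1 + x2 decodes a valid codeword
# without ever decoding x1 or x2 individually.
coeffs = np.linalg.solve(G, x1 + x2)
print(np.allclose(coeffs, np.round(coeffs)))   # True: the sum lies on the lattice
```

Random i.i.d. codebooks lack this closure property, which is why decoding a sum of two i.i.d. codewords is generally as hard as decoding both; this is the structural advantage the lattice schemes above exploit.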
Building on Nazer and Gastpar's compute-and-forward framework for decoding sums of messages (encoded via lattice codewords) in relay networks, we have also defined and obtained capacity for the inverse compute-and-forward (ICF) channel (ISIT 2013, Trans. IT 2014 submission). We have obtained the capacity region of the Gaussian ICF channel, in which we extract, over the air, individual messages from sources which hold sums of messages (essentially the opposite of what compute-and-forward does); the result is an interesting region which shows that higher-order (than 2) correlations may not be exploited to increase capacity.
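At a purely illustrative level, the forward and inverse operations can be mimicked with modular arithmetic over a toy message alphabet (this finite-field analogue is my own simplification and omits all channel and lattice aspects):

```python
q = 16                  # toy message alphabet Z_q (size is my choice)

w1, w2 = 9, 13          # the two original messages

# Compute-and-forward: a relay decodes a modular sum of the messages
# directly from the superimposed lattice transmissions.
s = (w1 + w2) % q

# Inverse compute-and-forward (toy analogue): a destination holding the
# sum s plus one of the messages recovers the other individual message.
w1_recovered = (s - w2) % q
print(w1_recovered)     # 9, matching w1
```

The actual ICF results characterize when and at what rates such extraction is possible over a Gaussian channel; the snippet only conveys the direction of the information flow (sums in, individual messages out).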
This line of work suggests that structured/lattice codes may be used to mimic, and sometimes outperform, random Gaussian codes in general Gaussian networks.

Radar signal processing
Radar signal processing supported by AFOSR under award FA9550-10-1-0239, as well as by a Dynetics grant on “Fully Adaptive Radar” and the upcoming NSF EARS grant Collaborative Research: Let's share CommRad: spectrum sharing between communications and radar systems
While not my main research area, I have also worked on several radar signal processing problems motivated by cognitive radar, i.e., radar that has additional side-information about, and/or is able to adapt in real time to, the radar environment. In particular, my journal papers in EURASIP 2013, JSTSP 2014, and IEEE Trans. on AES 2014 all deal with different ways to exploit multipath in radar systems when one has knowledge of the scene geometry. When the multipath components are resolvable, they in some ways start to resemble additional “looks” at a target or scene and may be used to improve detection, localization, and imaging performance. In collaboration with my former postdoc Dr. Pawan Setlur, now a research scientist at the AFRL, we have a series of conference papers on waveform scheduling and design using two-step mutual information, which may be found in the Publications.
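The “additional looks” intuition can be sketched with a textbook detection calculation: coherently integrating N independent looks at a known signal in Gaussian noise scales the effective SNR by N, raising detection probability at a fixed false-alarm rate. A sketch under these simplifying assumptions (function names are my own; this is not the model of the papers above):

```python
import math

def qfunc(x: float) -> float:
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def qfunc_inv(p: float) -> float:
    """Inverse of Q via bisection (sufficient precision for a sketch)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if qfunc(mid) > p:     # Q is decreasing: solution lies to the right
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def pd_coherent(snr_linear: float, n_looks: int, pfa: float = 1e-4) -> float:
    """Detection probability when coherently integrating n independent
    looks at a known signal in Gaussian noise, at false-alarm rate pfa."""
    return qfunc(qfunc_inv(pfa) - math.sqrt(n_looks * snr_linear))

for n in (1, 2, 4):
    print(f"{n} look(s): Pd = {pd_coherent(1.0, n):.3f}")
```

Resolvable multipath returns are neither perfectly independent nor perfectly known, so the gains in the cited papers are more nuanced, but the monotone improvement with the number of looks is the underlying effect.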