Talk Abstracts

Mary Wootters: High dimensional probability in theoretical computer science

Abstract: Recently, techniques from probability in Banach spaces and geometric functional analysis have found applications in compressed sensing, coding theory, and other areas of theoretical computer science. In this tutorial, I will go over some useful tricks and techniques, with examples.

Slides are available here.


Brief bio: Mary Wootters is a graduate student in the Department of Mathematics at the University of Michigan. Her interests lie mostly in applied probability, with applications including compressed sensing, matrix completion, coding theory, and group testing.
Swastik Kopparty: Polynomials and Complexity Theory

Abstract: This tutorial will discuss some of the amazing and unexpected applications of polynomials in complexity theory.

Slides are available here.


Brief bio: Swastik Kopparty works primarily in Theoretical Computer Science, studying Error-Correcting Codes, Computational Complexity and Pseudorandomness. He has been a faculty member in the Mathematics and Computer Science Departments at Rutgers University since September 2011. Prior to that, he was a postdoctoral member in the School of Mathematics at the Institute for Advanced Study. He obtained his Ph.D. in Computer Science in 2010 from MIT, where his advisor was Madhu Sudan.
Jin-Yi Cai: Dichotomy for Counting Problems

Abstract: Emil Post initiated the study of whether there are problems whose Turing degrees of computability are strictly between Decidable and the Halting Problem. The famous solution came independently from Friedberg and Muchnik, who invented the finite injury priority argument. I would like to revisit Post's Problem, in the context of the complexity of counting problems. I will survey a number of recent dichotomy theorems which have the general flavor that, in a broad class of counting problems within #P, there are really only two levels of complexity: one is tractable, the other is complete. This is philosophically an anti-Friedberg-Muchnik theory, and the frameworks considered include (1) Graph Homomorphisms, (2) Counting CSP, and (3) Holant Problems. http://arxiv.org/abs/0903.4728 http://arxiv.org/abs/1008.0683 http://arxiv.org/abs/1204.6445
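
To make the first of these frameworks concrete, here is a minimal brute-force counter for graph homomorphisms (an illustration only, none of the dichotomy machinery): counting homomorphisms from G to H captures, for instance, counting proper colorings.

    from itertools import product

    def count_homomorphisms(G_edges, n_G, H_edges, n_H):
        """Count maps f: V(G) -> V(H) sending every edge of G to an edge of H."""
        H_adj = set()
        for (u, v) in H_edges:
            H_adj.add((u, v))
            H_adj.add((v, u))
        count = 0
        for f in product(range(n_H), repeat=n_G):
            if all((f[u], f[v]) in H_adj for (u, v) in G_edges):
                count += 1
        return count

    # Homomorphisms from a triangle to K_3 are exactly the 6 proper 3-colorings.
    triangle = [(0, 1), (1, 2), (0, 2)]
    print(count_homomorphisms(triangle, 3, triangle, 3))  # 6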

Slides are available here.


Brief bio: Jin-Yi Cai received his Ph.D. from Cornell University in 1986. After faculty positions at Yale, Princeton, and SUNY Buffalo, he is currently a Professor at the University of Wisconsin--Madison. His area of research is computational complexity, where he has published over 100 research papers. Currently he is primarily working on dichotomy theorems for counting problems, such as counting constraint satisfaction problems (CSP). These dichotomy theorems classify all problems within a broad class. His awards include a Sloan Fellowship and a Guggenheim Fellowship. He was elected a Fellow of the ACM in 2001.
Jelani Nelson: OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings

Abstract: Let E be a d-dimensional linear subspace of R^n. A subspace embedding for E is a linear map that preserves the l_2 norms of all vectors in E up to a factor of 1+eps. We improve a recent result of [Clarkson-Woodruff, STOC'13], with a much simpler proof: we show that the Thorup-Zhang sketch (a random sign matrix with one non-zero per column) is a subspace embedding with good probability when the number of rows is O(d^2/eps^2). Once the theorem is formulated the right way, the proof is a simple second moment computation. We then show our main result: when one has m rows and s non-zeroes per column (the sparse Johnson-Lindenstrauss matrices of [Kane-Nelson, SODA'12]), one can take m = O(d^{1.01}/eps^2), s = O(1/eps). These are the first subspace embeddings to have m = o(d^2) with s = o(d). Our bounds imply improved running times for several numerical linear algebra problems, including approximating leverage scores, least squares regression, l_p regression for p in [1, infty), and low-rank approximation.
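
As a rough numerical illustration of the first result, the numpy sketch below (parameter choices are illustrative, not the paper's constants) builds a Thorup-Zhang-style matrix, one random +/-1 entry per column, and checks its distortion on a random d-dimensional subspace via the singular values of S @ U.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, m = 1000, 10, 2000   # m roughly d^2/eps^2 rows

    # Thorup-Zhang sketch: each column has a single +/-1 in a random row.
    rows = rng.integers(0, m, size=n)
    signs = rng.choice([-1.0, 1.0], size=n)
    S = np.zeros((m, n))
    S[rows, np.arange(n)] = signs

    # A random d-dimensional subspace of R^n, via an orthonormal basis U.
    U, _ = np.linalg.qr(rng.standard_normal((n, d)))

    # S embeds the subspace iff all singular values of S @ U lie in [1-eps, 1+eps].
    sv = np.linalg.svd(S @ U, compute_uv=False)
    print("distortion range:", sv.min(), sv.max())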

Slides are available here.


Brief bio: Jelani Nelson is an assistant professor of computer science at Harvard University. He is interested in streaming algorithms, dimensionality reduction, compressed sensing, and other algorithmic approaches for processing big data.
Andrew McGregor: Homomorphic Sketches: Shrinking Big Data without Sacrificing Structure

Abstract: Fingerprinting is a widely-used technique for efficiently verifying that two large files are identical. More generally, sketching is a form of lossy compression based on random linear projections that, for example, could be used to estimate the Hamming distance between the files. Both techniques are particularly useful for monitoring data streams or minimizing communication in distributed and parallel computing. Many applications have been studied extensively including sparse signal recovery, locality sensitive hashing, and linear regression. Our goal is to extend these popular techniques and design random projections that preserve combinatorial and group-theoretic structure in large data sets. In this talk, we present recent results on analyzing spectral graph properties and fingerprinting misaligned text. Both results rely on developing homomorphic sketches that support operations on the compressed data that correspond to operations on the uncompressed data. Applications include the first sublinear space algorithms for processing dynamic graph streams. Includes joint work with Kook Jin Ahn, Alexandr Andoni, Assaf Goldberger, Sudipto Guha, and Ely Porat.
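
A minimal sketch of the underlying idea, assuming a plain random sign projection (illustrative, not the talk's constructions): linearity is what makes the compression homomorphic with respect to addition, so the sketch of a difference equals the difference of the sketches, and squared distances survive compression.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 10_000, 400

    # Sketch = random linear projection; linearity gives sk(x) - sk(y) = sk(x - y),
    # so sketches of streaming updates compose without touching the raw data.
    A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

    x = rng.integers(0, 2, size=n).astype(float)
    y = x.copy()
    flips = rng.choice(n, size=50, replace=False)
    y[flips] = 1 - y[flips]            # Hamming distance exactly 50

    sx, sy = A @ x, A @ y
    est = np.sum((sx - sy) ** 2)       # estimates ||x - y||_2^2, i.e. Hamming distance here
    print("estimated Hamming distance:", est)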

Slides are available here.


Brief bio: Andrew McGregor is an Assistant Professor at the University of Massachusetts Amherst. He received a B.A. degree and the Certificate of Advanced Study in Mathematics from the University of Cambridge and a Ph.D. from the University of Pennsylvania. He then spent a couple of years as a postdoc at UCSD and Microsoft Research. His research in theoretical computer science focuses on data stream algorithms, linear sketching, and communication complexity. He received the NSF CAREER Award in 2010.
Shubhangi Saraf: Incidence geometry and applications to theoretical computer science

Abstract: The classical Sylvester-Gallai theorem states the following: given a finite set of points in the Euclidean plane, not all on the same line, there exists a line passing through exactly two of the points. This result has a rich and interesting history in mathematics. Surprisingly, in recent years variants of the theorem have been influential in diverse areas of theoretical computer science. In this talk I will describe several extensions of the Sylvester-Gallai theorem: quantitative versions, high-dimensional versions, colorful versions and average-case versions. I will also describe some of the applications and connections in theoretical computer science to areas such as polynomial identity testing and local algorithms for error-correcting codes. Based on joint work with Albert Ai, Zeev Dvir, Neeraj Kayal and Avi Wigderson.
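
For intuition, here is a brute-force checker (illustrative only) that finds the "ordinary" lines, those passing through exactly two of the points, whose existence the Sylvester-Gallai theorem guarantees.

    from itertools import combinations

    def collinear(p, q, r):
        """True if points p, q, r (integer tuples) lie on one line (exact arithmetic)."""
        return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

    def ordinary_lines(points):
        """All pairs {p, q} whose line contains no third point of the set."""
        out = []
        for p, q in combinations(points, 2):
            if not any(collinear(p, q, r) for r in points if r != p and r != q):
                out.append((p, q))
        return out

    # A 3x3 grid is not all collinear, so an ordinary line must exist.
    grid = [(x, y) for x in range(3) for y in range(3)]
    print(len(ordinary_lines(grid)))  # 12 ordinary lines for this configuration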

Slides are available here.


Brief bio: Shubhangi Saraf is an assistant professor in the computer science and mathematics departments at Rutgers University. She received her Bachelor's degree in Mathematics from MIT in 2007 and a PhD in Computer Science from MIT in 2011. Shubhangi's research interests lie broadly in theoretical computer science. Her research has focused on complexity theory and property testing. More specifically, she is interested in the complexity of arithmetic computation, the role of randomness in computing, and sublinear-time algorithms for codes and other combinatorial structures.
Ronitt Rubinfeld: Local Decompression

Abstract: We present a simple adaptation of the Lempel-Ziv '78 (LZ78) compression scheme (IEEE Transactions on Information Theory, 1978) that supports efficient random access to the input string. Namely, given query access to the compressed string, it is possible to efficiently recover any symbol of the input string. The compression algorithm is given as input a parameter eps > 0, and with very high probability increases the length of the compressed string by at most a factor of (1+eps). The access time is O(log n + 1/eps^2) in expectation, and O(log n/eps^2) with high probability. The scheme relies on sparse transitive-closure spanners. Any (consecutive) substring of the input string can be retrieved at an additional additive cost in the running time of the length of the substring. We also formally establish the necessity of modifying LZ78 so as to allow efficient random access. Specifically, we construct a family of strings for which Omega(n/log n) queries to the LZ78-compressed string are required in order to recover a single symbol in the input string. The main benefit of the proposed scheme is that it preserves the online nature and simplicity of LZ78, and that for every input string, the length of the compressed string is only a small factor larger than that obtained by running LZ78. Joint work with Akashnil Dutta, Reut Levi and Dana Ron.
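
For reference, a minimal implementation of classic LZ78 (the unmodified scheme, not the talk's adaptation). Note that recovering even a single symbol from the pair stream requires unwinding dictionary pointers, which is exactly what motivates the modification.

    def lz78_compress(s):
        """Classic LZ78: emit (index of longest known phrase, next symbol) pairs."""
        dictionary = {"": 0}
        phrase, out = "", []
        for c in s:
            if phrase + c in dictionary:
                phrase += c
            else:
                out.append((dictionary[phrase], c))
                dictionary[phrase + c] = len(dictionary)
                phrase = ""
        if phrase:  # flush a trailing phrase that is already in the dictionary
            out.append((dictionary[phrase[:-1]], phrase[-1]))
        return out

    def lz78_decompress(pairs):
        # Recovering symbol i means chasing pointers back through the dictionary,
        # which is why unmodified LZ78 does not support fast random access.
        phrases = [""]
        for idx, c in pairs:
            phrases.append(phrases[idx] + c)
        return "".join(phrases[1:])

    s = "abababababa" * 10
    assert lz78_decompress(lz78_compress(s)) == s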

Slides are available here.


Brief bio: Ronitt Rubinfeld received her undergraduate degree at the University of Michigan and PhD at the University of California, Berkeley. She held postdoctoral positions at DIMACS and the Hebrew University of Jerusalem. After several years as a faculty member at Cornell University and a senior research scientist at NEC Research Institute, she is currently on the faculties of MIT and Tel Aviv University. She has been a recipient of the ONR Young Investigator award, NSF Career Award, Sloan Fellowship, Cornell Association for Computer Science Undergraduates Faculty of the Year award and a Cornell College of Engineering Teaching award. She was an invited speaker at the International Congress of Mathematicians in 2006. Her research focuses on sub-linear time algorithms for big datasets.
Amit Chakrabarti: Certifying Equality With Limited Interaction

Abstract: The EQUALITY problem---where Alice and Bob must decide if their respective inputs are equal---is usually one's first encounter with communication complexity. Its deterministic and randomized complexity were settled decades ago, but we find several new things to say by considering two subtle aspects. First, we study the expected communication cost (at a worst-case input) for a protocol that uses limited interaction. Second, we study the information cost of such protocols. We obtain asymptotically optimal rounds-versus-communication and rounds-versus-information tradeoffs for EQUALITY. As an application of our information cost bounds, we obtain new, and asymptotically optimal, bounded-round randomized lower bounds for OR-EQUALITY and k-DISJOINTNESS (where input sets have size <= k).
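
For background, here is the textbook one-round randomized protocol for EQUALITY via polynomial fingerprints (a standard construction, not the talk's new results); its communication behavior is the baseline that the talk's rounds-versus-communication tradeoffs refine.

    import random

    P = (1 << 61) - 1   # a Mersenne prime; a field of size poly(n) drives the error down

    def fingerprint(x, r):
        """Evaluate the polynomial with coefficient vector x at the point r, mod P."""
        h = 0
        for coeff in x:
            h = (h * r + coeff) % P
        return h

    def equality_protocol(alice_input, bob_input):
        # Alice picks r and sends (r, fingerprint): O(log n) bits in one round.
        r = random.randrange(1, P)
        return fingerprint(alice_input, r) == fingerprint(bob_input, r)

    x = [1, 2, 3, 4] * 1000
    y = list(x); y[1234] += 1
    print(equality_protocol(x, x))  # True
    print(equality_protocol(x, y))  # False, except with probability about n/P
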
Brief bio: Amit Chakrabarti is an Associate Professor in the Department of Computer Science at Dartmouth College. He received an M.A. and a Ph.D. in Computer Science from Princeton University in 2002 and a B.Tech. in Computer Science from the Indian Institute of Technology, Bombay, along with the President of India Gold Medal, in 1997. Professor Chakrabarti's research is in the broad area of theoretical computer science. Specific interests are (1) complexity theory, especially communication complexity and the application of information theory, and (2) algorithms, including space-efficient algorithms for massive data streams and approximation techniques for optimization problems. His research has been funded by several awards from the National Science Foundation (including a CAREER award) and fellowships from Dartmouth College. For more information, please visit http://www.cs.dartmouth.edu/~ac.
Piotr Indyk: Sample-Optimal Average-Case Sparse Fourier Transform

Abstract: We present the first sample-optimal sub-linear time algorithms for the sparse Discrete Fourier Transform over a two-dimensional sqrt{n} x sqrt{n} grid. Our algorithms are analyzed for average-case signals. For signals whose spectrum is exactly sparse, we present algorithms that use O(k) samples and run in O(k log k) time, where k is the expected sparsity of the signal. For signals whose spectrum is approximately sparse, we have an algorithm that uses O(k log n) samples and runs in O(k log^2 n) time, for a wide range of k. All presented algorithms match the lower bounds on sample complexity for their respective signal models. The talk will cover material from joint papers with Badih Ghazi, Haitham Hassanieh, Dina Katabi, Eric Price and Lixin Shi. The papers are available at http://groups.csail.mit.edu/netmit/sFFT/
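
One core trick behind such algorithms is bucketization by aliasing: subsampling in time folds the spectrum, hashing each frequency f into bucket f mod B, so a cheap B-point FFT replaces an n-point one. A small 1D numpy check of this identity (illustrative; the talk's algorithms work over the 2D grid):

    import numpy as np

    rng = np.random.default_rng(2)
    n, B = 1024, 64            # B buckets, fold factor L = n // B
    L = n // B

    # An exactly k-sparse spectrum: a few random tones.
    k = 4
    X = np.zeros(n, dtype=complex)
    X[rng.choice(n, size=k, replace=False)] = (rng.standard_normal(k)
                                               + 1j * rng.standard_normal(k))
    x = np.fft.ifft(X)

    # Subsampling x by L aliases the spectrum: bucket k receives sum_m X[k + m*B] / L.
    Y = np.fft.fft(x[::L])
    folded = X.reshape(L, B).sum(axis=0) / L
    print(np.allclose(Y, folded))   # True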

Slides are available here.


Brief bio: Piotr Indyk is a Professor of Electrical Engineering and Computer Science at MIT. He joined MIT in 2000, after earning his PhD from Stanford University. Earlier, he received his Magister degree from Uniwersytet Warszawski in 1995. Piotr's research interests lie in the design and analysis of efficient algorithms. Specific interests include: high-dimensional computational geometry, sketching and streaming algorithms, sparse recovery and compressive sensing. He has received the Sloan Fellowship (2003), the Packard Fellowship (2003) and the Simons Investigator Award (2013). His work on sparse Fourier sampling was named to Technology Review's "TR10" list in 2012, while his work on locality-sensitive hashing received the 2012 Kanellakis Theory and Practice Award.
David Woodruff: How Robust are Linear Sketches to Adaptive Inputs?

Abstract: Linear sketches turn an n-dimensional input into a concise lower-dimensional representation via a linear transformation. In almost any realistic setting a linear sketch faces the possibility that its inputs are correlated with previous evaluations of the sketch. Known techniques no longer guarantee the correctness of the output in the presence of such correlations. We therefore ask: are linear sketches inherently non-robust to adaptively chosen inputs? We give a strong affirmative answer to this question. Specifically, we show that no linear sketch approximates the Euclidean norm of its input to within an arbitrary multiplicative approximation factor on a polynomial number of adaptively chosen inputs. The result remains true even if the dimension of the sketch is d = n - o(n) and the sketch is given unbounded computation time. Our result is based on an algorithm with running time polynomial in d that adaptively finds a distribution over inputs on which the sketch is incorrect with constant probability. Our result implies several corollaries for related problems including l_p-norm estimation and l_2/l_2 compressed sensing in the presence of adaptively chosen inputs. Joint work with Moritz Hardt.
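
To fix ideas, here is the object under attack: a Gaussian sketch whose norm estimate is accurate with high probability for each fixed input, together with a toy failure mode when the sketch matrix is known (the talk's attack is far stronger, needing only query access to the sketch's outputs).

    import numpy as np

    rng = np.random.default_rng(3)
    n, d = 1000, 100

    # A linear sketch: the norm of x is estimated by ||A x||_2.
    A = rng.standard_normal((d, n)) / np.sqrt(d)

    x = rng.standard_normal(n)
    print(np.linalg.norm(A @ x), np.linalg.norm(x))   # close, for this fixed x

    # The guarantee is only "for each fixed x, w.h.p. over A".  Any nonzero vector
    # in the kernel of A has positive norm but sketch estimate 0, so an adversary
    # who can learn the kernel from adaptive queries breaks the sketch.
    _, _, Vt = np.linalg.svd(A)
    v = Vt[-1]                       # a unit vector in the kernel of A
    print(np.linalg.norm(A @ v), np.linalg.norm(v))   # ~0 vs 1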

Slides are available here.


Brief bio: David Woodruff joined the algorithms and complexity group at IBM Almaden in 2007 after completing his Ph.D. at MIT in theoretical computer science. His interests are in compressed sensing, communication, numerical linear algebra, sketching, and streaming.
Chris Re: LP Relaxation as a Data Processing Primitive

Abstract: Relaxing a hard combinatorial or statistical inference problem to a linear program has been an invaluable theoretical tool. But why should it remain valuable only in theory? Could we use this beautiful theoretical hammer to build systems? This talk argues maybe. A technical challenge is that linear programs are theoretically and practically difficult to solve in parallel, and parallelism is critical for efficiency on modern computers. A saving grace is that we often round the LP's solution, and so we may only need to solve the LP to low precision. This talk will give a few motivating examples, some recent preliminary theoretical progress on solving such programs in parallel, and some initial implementation results. This is joint work with Ji Liu, Srikrishna Sridhar, and Steve Wright.
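
A toy instance of the relax-and-round pattern, using scipy (illustrative; not the systems or solvers from the talk): the LP relaxation of minimum vertex cover with threshold rounding, a setting where a low-precision LP solution is good enough because rounding absorbs the error.

    import numpy as np
    from scipy.optimize import linprog

    # LP relaxation of minimum vertex cover, then threshold rounding (2-approximation).
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    n = 4

    c = np.ones(n)                       # minimize sum of x_v
    A_ub = np.zeros((len(edges), n))     # -x_u - x_v <= -1  for each edge (u, v)
    for i, (u, v) in enumerate(edges):
        A_ub[i, u] = A_ub[i, v] = -1.0
    b_ub = -np.ones(len(edges))

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
    # x_u + x_v >= 1 forces max(x_u, x_v) >= 1/2, so thresholding yields a cover.
    cover = [v for v in range(n) if res.x[v] >= 0.5]
    print(res.x, cover)
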
Brief bio: Christopher (Chris) Re is an assistant professor in the Department of Computer Science at Stanford University. The goal of his work is to enable users and developers to build applications that more deeply understand and exploit data. Chris received his PhD from the University of Washington in Seattle under the supervision of Dan Suciu. For his PhD work in probabilistic data management, Chris received the SIGMOD 2010 Jim Gray Dissertation Award. Chris's papers have received four best-paper or best-of-conference citations, including best paper at PODS 2012, best-of-conference at PODS 2010 (twice), and best-of-conference at ICDE 2009. Chris received an NSF CAREER Award in 2011 and an Alfred P. Sloan fellowship in 2013.
Eric Price: Sparse Recovery and Fourier Sampling

Abstract: The goal of "stable sparse recovery" or "compressive sensing" is to recover a K-sparse approximation x' of a vector x in R^N from M linear measurements of x. This problem has a wide variety of applications, including streaming algorithms, image acquisition, and genetic testing. A related problem is that of computing a "sparse" Fourier transform, which approximates the Fast Fourier Transform in less than N log N time. In this talk I will survey results in this area and describe the general techniques used to obtain them.
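
As one concrete instance of the recovery problem, here is a short orthogonal matching pursuit sketch (a standard baseline algorithm, not the talk's methods), recovering a K-sparse x from M << N random Gaussian measurements.

    import numpy as np

    def omp(A, y, k):
        """Orthogonal matching pursuit: greedily pick k columns of A to explain y."""
        support, residual = [], y.copy()
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x_hat = np.zeros(A.shape[1])
        x_hat[support] = coef
        return x_hat

    rng = np.random.default_rng(4)
    N, M, K = 1000, 100, 5
    A = rng.standard_normal((M, N)) / np.sqrt(M)
    x = np.zeros(N)
    x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
    x_hat = omp(A, A @ x, K)
    print(np.linalg.norm(x - x_hat))   # ~0: exact recovery from M << N measurements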

Slides are available here.


Brief bio: Eric Price is going to graduate real soon now from MIT, supervised by Piotr Indyk. He mostly works on topics related to sparse recovery and Fourier transforms.
Adam Smith: Privacy and the Analysis of High-Dimensional Data

Abstract: I'll explain why the curse of dimensionality takes on new dimensions in the context of private data analysis. After a general introduction, I will discuss recent results on differentially private sparse regression. Our results shed new light on the robustness of common techniques for solving sparse regression problems, such as L1 regularization. Based on joint work with Dan Kifer (Penn State) and Abhradeep Guha Thakurta (now at MSR and Stanford).
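
For readers new to the area, the basic primitive underneath such results is noise calibrated to a statistic's sensitivity. A minimal sketch of the Laplace mechanism (background only, not the talk's regression algorithms):

    import numpy as np

    def laplace_mechanism(value, sensitivity, epsilon, rng):
        """Release value + Laplace(sensitivity/epsilon) noise: eps-differential privacy."""
        return value + rng.laplace(scale=sensitivity / epsilon)

    rng = np.random.default_rng(5)
    data = rng.integers(0, 2, size=10_000)   # one sensitive bit per person
    true_mean = data.mean()
    # Changing one record moves the mean by at most 1/n, so sensitivity = 1/n.
    private_mean = laplace_mechanism(true_mean, sensitivity=1 / len(data),
                                     epsilon=0.5, rng=rng)
    print(true_mean, private_mean)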

Slides are available here.


Brief bio: Adam Smith is an associate professor in the Department of Computer Science and Engineering at Penn State. His research interests lie in cryptography, privacy and their connections to information theory, quantum computing and statistics. He received his Ph.D. from MIT in 2004 and was subsequently a visiting scholar at the Weizmann Institute of Science and UCLA.
Jonathan Kelner: A Simple, Combinatorial Algorithm for Solving Laplacian Systems in Nearly-Linear Time Without Recursive Preconditioning

Abstract: Fast algorithms for Laplacian linear systems have found broad applications across both the theory and practice of computer science, playing a foundational role in scientific computing, machine learning, random processes, image processing, network analysis, and computational biology, among others. More recently, they have emerged as a powerful tool in the design of graph algorithms, enabling researchers to break longstanding algorithmic barriers for a wide range of fundamental graph problems, including finding maximum flows and multicommodity flows, generating random spanning trees, sparsifying graphs, routing in distributed systems, and finding sparse cuts and balanced separators. Finding fast solvers for these systems has been an active area of research, culminating in algorithms that solve them in nearly-linear time. These solvers are quite complex, and developing them required multiple fundamental innovations in spectral and combinatorial graph theory, graph algorithms, and computational linear algebra, which were used to construct and analyze an intricate recursively preconditioned iterative solver. In this talk, I will give a very simple combinatorial algorithm that solves Laplacian systems in nearly-linear time. It uses very little of the machinery that previously appeared to be necessary for such an algorithm. It does not require recursive preconditioning, or even the Chebyshev Method or Conjugate Gradient. After constructing a "nice" spanning tree of a graph associated to the linear system, the entire algorithm consists of the repeated application of a simple (non-recursive) update rule, which it implements using a lightweight data structure. The algorithm is numerically stable and can be implemented without the increased bit-precision required by previous solvers. As such, the algorithm presented has the fastest known running time under the standard unit-cost RAM model. This is joint work with Aaron Sidford, Zeyuan Allen Zhu, and Lorenzo Orecchia.
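
To fix notation, here is what solving a Laplacian system means on a toy graph, using a dense least-squares solve as the baseline that the nearly-linear-time algorithm replaces (illustrative only; none of the spanning-tree machinery from the talk).

    import numpy as np

    # Build the graph Laplacian L = D - A of a small graph.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    n = 4
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1

    # Demands must sum to zero: inject one unit of flow at node 0, extract at node 3.
    b = np.array([1.0, 0.0, 0.0, -1.0])
    x, *_ = np.linalg.lstsq(L, b, rcond=None)   # L is singular, so least squares
    print(x - x.min())    # vertex potentials, up to an additive constant
    print(L @ x - b)      # ~0: the demands are met
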
Brief bio: Jonathan Kelner is an Associate Professor of Applied Mathematics in the MIT Department of Mathematics and a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). His research focuses on the application of techniques from pure mathematics to the solution of fundamental problems in algorithms and complexity theory. He was an undergraduate at Harvard University, and he received his Ph.D. in Computer Science from MIT in 2006. Before joining the MIT faculty, he spent a year as a member of the Institute for Advanced Study. He has received a variety of awards for his work, including an NSF CAREER Award, an Alfred P. Sloan Research Fellowship, the Best Paper Awards at SIGMETRICS 2011 and STOC 2011, the Harold E. Edgerton Faculty Achievement Award, and the MIT School of Science Teaching Prize.
Richard Lipton: New and Old Graph Products

Abstract: Graph products take two or more graphs and create a new graph. They have many applications in combinatorics and in theoretical computer science. For example, the strong graph product plays a central role in the definition of the Shannon capacity of a graph. In this talk I will discuss some of the classic graph products and then introduce several new ones. These new products have interesting properties and applications, and remain difficult to understand. I will present what I know about them and connect them to some important open problems in complexity theory.
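
For concreteness, a small implementation of the strong product mentioned above (illustrative): vertices are pairs, and two distinct pairs are adjacent iff each coordinate is equal or adjacent in its factor graph.

    from itertools import product

    def strong_product(adj1, adj2):
        """Strong product of two graphs given as dicts mapping vertex -> set of neighbors."""
        verts = list(product(adj1, adj2))
        adj = {p: set() for p in verts}
        for (u, v), (x, y) in product(verts, verts):
            if (u, v) == (x, y):
                continue
            ok1 = (u == x) or (x in adj1[u])
            ok2 = (v == y) or (y in adj2[v])
            if ok1 and ok2:
                adj[(u, v)].add((x, y))
        return adj

    # C_5 strong C_5: the graph whose independent sets underlie the Shannon
    # capacity of the 5-cycle (Lovasz: capacity = sqrt(5)).
    C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
    G = strong_product(C5, C5)
    print(len(G), sum(len(s) for s in G.values()) // 2)   # 25 vertices, 100 edges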

Slides are available here.


Brief bio: Richard Lipton is the Storey Chair of Computer Science at Georgia Institute of Technology.
Shuheng Zhou: Reconstruction from Anisotropic Random Measurements

Abstract: Random matrices are widely used in sparse recovery problems, and the relevant properties of matrices with i.i.d. entries are well understood. This talk discusses the recently introduced Restricted Eigenvalue (RE) condition, which is among the most general assumptions on the matrix that guarantee recovery. We prove a reduction principle showing that the RE condition can be guaranteed by checking the restricted isometry on a certain family of low-dimensional subspaces. This principle allows us to establish the RE condition for several broad classes of random matrices with dependent entries, including random matrices with subgaussian rows and non-trivial covariance structure, as well as matrices with independent rows and uniformly bounded entries. This is joint work with Mark Rudelson.
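
A rough numerical illustration (not the paper's proof technique): draw an anisotropic subgaussian design of the kind covered by the reduction principle, and probe the restricted-eigenvalue ratio over random sparse directions. Certifying RE exactly is computationally hard, so this is only a Monte Carlo sanity check.

    import numpy as np

    rng = np.random.default_rng(6)
    n, p, s = 200, 500, 5

    # Anisotropic design: correlated Gaussian rows with Toeplitz (AR-style) covariance,
    # one of the dependent-entry settings the reduction principle handles.
    Sigma = 0.7 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

    # Probe ||X v||^2 / (n ||v||^2) over random s-sparse directions v.
    ratios = []
    for _ in range(2000):
        v = np.zeros(p)
        idx = rng.choice(p, size=s, replace=False)
        v[idx] = rng.standard_normal(s)
        ratios.append(np.linalg.norm(X @ v) ** 2 / (n * np.linalg.norm(v) ** 2))
    print("empirical min over sampled sparse directions:", min(ratios))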

Slides are available here.


Brief bio: Shuheng Zhou is currently an Assistant Professor in the Department of Statistics, with a courtesy appointment in the Department of Electrical Engineering and Computer Science, at the University of Michigan, Ann Arbor. She received her Ph.D. in Electrical and Computer Engineering from Carnegie Mellon University, Pittsburgh, Pennsylvania. Her research interests include statistical machine learning, high dimensional statistics, and theoretical computer science, in particular, convex optimization, privacy, approximation and randomized algorithms, and network and combinatorial optimization.