
To add your research report (brief description of your current research interests and recent work):

- **sign in** via the link in the *upper-right* corner
- **username:** "ow2015"
- **password:** last name (all *lowercase*) of the person who discovered matrix multiplication in time $n^{\log_2 7}$

- click on the **edit** button in the *lower-right* corner of this page
- **copy** the sample entry (or any other) and edit it to write your report
- **paste** your edited entry in place of your name in the alphabetized list
- **save** the edited wiki

(Actually, we recommend copying the sample entry to an offline editor, exiting the wiki editor while you write your report, and copying the finished report back into the wiki. Please do not keep the wiki page in edit mode for long, since you may lock other users out of editing the text.)

Research report goes here. Here is how to link to a webpage.

I have recently been studying the "Sum of Squares" algorithm. Specifically, I've been interested in problems like Unique Games, Planted Clique, and machine learning problems such as Dictionary Learning / Sparse Coding. I've also been interested in finding out to what extent this algorithm is optimal in some domains, and whether it can be used to give a theory of "computational knowledge" via a computational analog of Bayesian probabilities.

I have recently been studying tensors of minimal border rank. Such tensors are used as starting constructions for fast matrix multiplication. For instance, the tensor used by Strassen in the so-called laser method or the (second) tensor used by Coppersmith and Winograd are tensors of minimal border rank.

Understanding the structure of such tensors might yield new fast matrix multiplication algorithms, or it might explain why it has become so hard to obtain new upper bounds for the exponent of matrix multiplication.
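For readers outside the area, the starting point of this story, Strassen's rank-7 scheme for 2x2 matrix multiplication (the source of the $n^{\log_2 7}$ bound), can be sketched as follows. This is the classical textbook decomposition, not code from any of the works mentioned above, and the function name is mine:

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices with 7 multiplications instead of 8 (Strassen, 1969).
    Applied recursively to matrix blocks, this yields an O(n^{log2 7}) algorithm."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # recombine the 7 products into the 4 entries of A*B
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

The entries a, …, h may themselves be matrix blocks, which is what makes the recursion, and hence the exponent log_2 7, work.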

Geometry of Numbers: With some students I have been looking at the number of Voronoi-relevant vectors of lattices for arbitrary norms. We showed that, unlike in the Euclidean case, already for the l_3 norm the number of Voronoi-relevant vectors cannot be bounded by a function of the lattice dimension alone. This work was motivated by the single-exponential-time algorithm of Micciancio and Voulgaris for SVP and CVP. … Basically, our results show that there is no hope of generalizing the algorithm to general norms. The paper should be ready by the time of the workshop.

Clustering: Among other things, we have been looking at a generalization of the k-means problem, the fuzzy k-means problem. We show that for this problem optimal solutions cannot be represented by radicals. In this respect the fuzzy k-means problem resembles the Euclidean k-median problem. We also developed several algorithms that compute near-optimal solutions provided that optimal solutions are not too degenerate.
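For concreteness, the standard alternating-update heuristic for fuzzy k-means (often called fuzzy c-means) is sketched below in one dimension. This is the classical textbook procedure, not one of the algorithms from the work described above; the fuzzifier m = 2 and the deterministic spread initialization are illustrative choices.

```python
def fuzzy_kmeans(points, k, m=2.0, iters=50):
    """Alternating-update heuristic for fuzzy k-means on 1-D points, k >= 2.
    m > 1 is the fuzzifier; returns the k cluster centers, sorted."""
    pts = sorted(points)
    # deterministic initialization: k evenly spread data points
    centers = [pts[i * (len(pts) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        # membership of point i in cluster j: u_ij proportional to dist^{-2/(m-1)}
        U = []
        for p in pts:
            w = [(abs(p - c) + 1e-12) ** (-2.0 / (m - 1)) for c in centers]
            s = sum(w)
            U.append([wi / s for wi in w])
        # each center becomes the mean of all points, weighted by u_ij^m
        centers = [sum(U[i][j] ** m * pts[i] for i in range(len(pts)))
                   / sum(U[i][j] ** m for i in range(len(pts)))
                   for j in range(k)]
    return sorted(centers)
```

Unlike plain k-means, every point carries a fractional membership in every cluster, which is exactly why optimal centers become algebraically complicated (the non-representability by radicals mentioned above).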

I've been working for a while on creating simpler and more secure fully homomorphic encryption schemes, trying to simplify the constructions and hardness assumptions as much as possible. Most recently Vinod and I showed how to get "leveled" schemes that rely on the hardness of polynomial approximation of worst-case lattice problems, via the learning with errors (LWE) problem.
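As a reminder of how LWE supports encryption at all, here is a toy Regev-style secret-key scheme for a single bit. The parameters (dimension 10, q = 257, noise bound 2) are purely illustrative and give no real security, and the construction below is the basic one-bit scheme, far simpler than the leveled FHE constructions discussed above:

```python
import random

def lwe_encrypt_bit(s, bit, q=257, noise=2):
    """Toy LWE encryption: ciphertext is (a, <a,s> + e + bit*floor(q/2) mod q)
    for a uniformly random vector a and small noise e."""
    a = [random.randrange(q) for _ in range(len(s))]
    e = random.randint(-noise, noise)
    b = (sum(ai * si for ai, si in zip(a, s)) + e + bit * (q // 2)) % q
    return a, b

def lwe_decrypt_bit(s, ciphertext, q=257):
    """Subtract <a,s> and decode: residues near 0 mean 0, near q/2 mean 1."""
    a, b = ciphertext
    d = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return 0 if d < q // 4 or d > 3 * q // 4 else 1
```

Correct decryption only needs the accumulated noise to stay below q/4; controlling noise growth under homomorphic operations is precisely what the "leveled" machinery is about.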

I have also recently worked on constructing simpler and more secure program obfuscators: showing that security can be proved in a "generic" model (work with G. Rothblum), and trying to obfuscate circuits directly, without going through branching programs as in previous approaches (work with Applebaum).

Another line of work (mostly with G. Segev) is focused on the power of functional encryption (encryption where different secret keys decrypt the same ciphertext to different values, i.e. decrypting Enc(x) with sk_f will output f(x)). We showed that this primitive is strong enough that you can get many features "for free" without making additional cryptographic assumptions.

I have been working in the following directions:

1) Applications of information complexity:

a) To prove a nearly-tight direct product theorem for two-party randomized communication complexity B-Weinstein'14, B-Rao-Weinstein-Yehudayoff'12

b) To prove a tight small-value parallel repetition theorem for general games B-Garg'15

c) To obtain nearly-tight bounds on bounded round quantum CC of Disjointness B-Garg-Ko-Mao-Touchette'15

d) Lower bounds in the distributed number-in-hand model B-Ellen-Oshman-Pitassi-Vaikuntanathan'13, B-Oshman'15

2) Interactive coding theory. For example, list-decodable codes for interactive communication B-Efremenko'14

3) The effect of noise on the computational properties of dynamical systems B-Rojas-Schneider'15

4) Replacing planted clique hardness with ETH hardness for problems for which quasi-polynomial algorithms exist, such as computing good approximate Nash equilibria B-Ko-Weinstein'14 and approximating the densest subgraph assuming perfect completeness B-Ko-Rubinshtein-Weinstein'15

Some recent works (RW) and questions (Q) that I am interested in:

- RW: One-Way Functions (OWFs) from NP-hardness: Impossibility of basing regular and co-range verifiable OWFs on NP-complete problems via adaptive reductions unless NP is in coAM

- Indistinguishability obfuscation (iO):

* RW: iO is mutually exclusive with other cryptographic assumptions

* Q: relax correctness of iO to be approximate. Can we get statistical security? (Perfectly correct iO cannot be statistically secure unless NP is in coAM.) This would solve the long-standing open problem of building key agreement from OWFs in a non-black-box way. There is an interesting paper by Nir Bitansky and Vinod Vaikuntanathan on approximate iO.

- Q: How hard is it to learn symmetric juntas? Generally interested in a better understanding of juntas and predicates that are good/bad for juntas - and predicates that are good/bad for Goldreich's OWF

- RW: Arithmetic cryptography: Which cryptographic primitives over fields/rings can be computed merely by arithmetic operations?

All papers available at my webpage.

In the past few years I have been working in two areas:

1) Understanding better the geometric complexity theory approach (lower bounds). There has been quite a lot of activity stimulated by a semester at the Simons Institute. See my work with Ikenmeyer, Explicit Lower Bounds via Geometric Complexity Theory, which analyzes the GCT approach in the tensor setting (motivated by matrix multiplication). See also Permanent versus determinant: not via saturations, with Ikenmeyer and Hüttenhain, on the insight that representation-theoretic obstructions for the permanent versus determinant problem must be "holes" of the monoid of representations of det_n. In this context, I find the recent work On vanishing of Kronecker coefficients of Ikenmeyer, Mulmuley, and Walter exciting.

2) Probabilistic analysis of numerical algorithms (smoothed analysis), mostly via analyzing condition numbers. This is motivated by the wish to understand the complexity of numerical problems (which are sometimes ill-posed), but it is not complexity theory in the usual sense. I recently completed a monograph on this, Condition: the geometry of numerical algorithms, with Felipe Cucker (Springer). Curiously, even the basic problem of computing eigenpairs of matrices stably and efficiently is not completely understood. See A stable, polynomial-time algorithm for the eigenpair problem, joint with Armentano, Beltran, Cucker, and Shub.

Over the past couple of years my research has broadly focused on explicit constructions of randomness extractors and related applications. Here are the relevant results:

(1) Two-Source Extractors: In a recent work with David Zuckerman [http://eccc.hpi-web.de/report/2015/119/], we construct an explicit two-source extractor for polylogarithmic min-entropy. The previous best known construction was by Bourgain, which requires 0.499n min-entropy. Our explicit extractor directly implies explicit bipartite Ramsey graphs, improving on the parameters of a celebrated work by Barak et al. and matching recent work of Cohen.

Our techniques also have other implications, including better extractors for non-oblivious bit-fixing sources and explicit resilient functions (used in 1-round coin-flipping protocols).
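For contrast with these results: the classical two-source extractor of Chor and Goldreich, which needs min-entropy rate above 1/2 (the regime the construction above finally breaks out of), is just the inner product over GF(2). A minimal sketch (the function name is mine):

```python
def ip_extractor(x, y):
    """Chor-Goldreich two-source extractor: inner product mod 2.
    x, y: independent bit strings of equal length. The output bit is
    close to uniform whenever each source has min-entropy rate above 1/2."""
    return sum(xi & yi for xi, yi in zip(x, y)) % 2
```

Getting even one nearly uniform bit from two independent sources of polylogarithmic min-entropy, as in the result above, requires vastly more machinery than this one-liner.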

(2) Seeded Non-Malleable Extractors: Such extractors have applications in privacy amplification (introduced by Dodis-Wichs). In a work with Vipul Goyal and Xin Li [http://arxiv.org/abs/1505.00107], we construct explicit seeded non-malleable extractors for polylogarithmic min-entropy. The previous best known result (by Li) required min-entropy 0.499n. Our non-malleable extractor was used in the above mentioned 2-source extractor result.

(3) Seedless Non-Malleable Extractors: These extractors were introduced by Cheraghchi-Guruswami as a way of constructing non-malleable codes. In a work with David Zuckerman [http://eccc.hpi-web.de/report/2014/102/], we constructed explicit seedless non-malleable extractors for 10 sources based on techniques from additive combinatorics.

In subsequent work with Vipul Goyal and Xin Li (the same paper as mentioned above), we reduced the number of sources to 2. Previously, no explicit construction was known for even full min-entropy.

Applications to Non-Malleable Codes (information theoretic setting): We constructed the first constant rate non-malleable codes in the split-state model based on the 10-source seedless extractor mentioned above. Subsequently, the mentioned work with Vipul and Xin implies more robust versions of non-malleable codes where the adversary can tamper with the codeword multiple times.

(4) Extractors for Interleaved Sources: This is a generalization of 2-source extractors where the positions of the bits of each source are fixed but unknown. In joint work with David Zuckerman [http://www.cs.utexas.edu/~eshanc/interleaved.pdf], we obtain improved results and find some applications of such extractors.

In recent years my research focused on pseudo-randomness from both ends — constructions and applications. Examples include explicit construction of Ramsey graphs and of extractors for bit-fixing and zero-fixing sources as well as applications of pseudo-randomness to secure multiparty computation (MPC) and to matrix rigidity. Further, I am constantly trying to apply the deep theory of algebraic geometry to theoretical computer science. One such application concerns pseudorandom generators for low-degree polynomials.

I have been continuing to think about PCPs, parallel repetition and direct product & sum tests.

PCPs: with Kindler & Harsha we revisited a 1998 paper by DFKRS on small-soundness PCPs and pushed it a little further.

Parallel Repetition: In works with Steurer we studied an analytical approach to parallel repetition and to product tests. Later, with Vidick, we extended this to the quantum setting.

Some other topics I've been involved in:

- with Shafi Goldwasser and Huijia Lin on complexity of "correlated instances"
- with Or Meir on lower bounds for formula size using "composition" along the lines of the KRW conjecture.

In the past few years I have been mostly working on locally decodable codes, private information retrieval and incidence problems in discrete geometry. Some highlights are:

2-server PIR with subpolynomial communication: In this work (joint with my student S. Gopi) we give the first 2-server information-theoretic PIR scheme with subpolynomial communication, which bypasses the n^{1/3} barrier of Razborov and Yekhanin. Our schemes build on the recent MV (matching vector) code based schemes (Yekhanin/Efremenko) but reduce the number of servers by employing partial derivatives.
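For background, the folklore 2-server PIR scheme with O(n) communication, far from the subpolynomial bounds above but useful for intuition, secret-shares the query index between the two servers. A minimal sketch (function names are mine):

```python
import random

def pir_query(n, i):
    """Client: pick a random subset S of [n]; send S to server 1 and
    S xor {i} to server 2. Each query alone is a uniformly random subset,
    so neither server learns anything about i."""
    S1 = {j for j in range(n) if random.random() < 0.5}
    return S1, S1 ^ {i}

def pir_answer(db, S):
    """Server: XOR of the database bits indexed by S."""
    ans = 0
    for j in S:
        ans ^= db[j]
    return ans
```

XORing the two one-bit answers recovers x_i, since the two sets differ exactly in position i; the communication cost is dominated by sending the n-bit subsets, which is exactly the waste the schemes above eliminate.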

Super quadratic lower bounds for 3-query LCCs over the reals: LCCs (locally correctable codes) are a stronger variant of LDCs (locally decodable codes) but even for this stronger class we have only very weak lower bounds (and only exponential upper bounds). In this work we consider the problem of linear 3-LCCs over the real numbers and prove a super quadratic lower bound on their encoding length (quadratic lower bounds for 3-query LDCs/LCCs are relatively straightforward and have been known for a while). Working over the reals preserves much of the original difficulty of the problem but makes it more accessible. We introduce several new ideas and techniques that could be of use elsewhere. (Joint with Saraf and Wigderson).

Sylvester-Gallai for arrangements of subspaces: The classic Sylvester-Gallai theorem says that, in a finite set of points in the plane, if every line passing through two of the points contains a third point, then all points are on the same line. Robust versions of this theorem are known to be closely related to 2-query LCCs (see here). In this work we prove a generalization of this theorem in which points are replaced by subspaces (three k-dimensional subspaces are 'collinear' if every pair spans the third). This corresponds to linear LCCs with 'blocks'. The main technical lemma is a higher-dimensional generalization of a powerful theorem of Barthe.
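The point version of the Sylvester-Gallai condition is easy to check by brute force on small configurations, which makes the theorem concrete (illustrative code, unrelated to the proofs above; function names are mine):

```python
from itertools import combinations

def collinear(p, q, r):
    """Cross-product test: are three points in the plane on one line?"""
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

def sg_condition(points):
    """Does every line through two of the points contain a third point of the set?"""
    return all(any(collinear(p, q, r) for r in points if r != p and r != q)
               for p, q in combinations(points, 2))

def all_collinear(points):
    """Are all points on the line through the first two?"""
    return all(collinear(points[0], points[1], r) for r in points[2:])
```

The theorem asserts that over the reals, sg_condition forces all_collinear; a triangle fails sg_condition, which is why it is not a counterexample.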

I am working in the following directions:

1) Interactive coding theory, which includes:

a) list decoding of interactive communication

b) Coding for interactive communication in networks

2) I have also been working on trying to understand which problems in arithmetic complexity can be solved with current techniques from algebraic geometry.

My main area is algebraic complexity theory, with a focus on designing deterministic algorithms for the polynomial identity testing (PIT) problem (deciding whether a given algebraic circuit computes the zero polynomial). Much of my attention, joint with Shpilka, has been on PIT algorithms for read-once algebraic branching programs (an algebraic version of RL), and this led to some applications to geometric complexity theory (starting from a discussion at the last Oberwolfach meeting). Derandomization for this algebraic version of RL (in analogy to the derandomization of RL) has involved a variety of "linear-algebraic pseudorandom" objects, such as rank condensers. I worked with Guruswami to flesh out a theory of these objects, and this resulted in a new construction of constant-degree dimension expanders. More recently, I have been thinking about the new lower bound technique known as "shifted partial derivatives" and have shown that one can develop new PIT algorithms using this method.
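For context, the randomized baseline that deterministic PIT algorithms aim to match is the Schwartz-Zippel test: evaluate the black-box circuit at a random point of a large enough field. A minimal sketch (the field size and trial count are illustrative choices of mine):

```python
import random

def pit_schwartz_zippel(poly, n, degree, trials=20, field=10**9 + 7):
    """Randomized PIT via the Schwartz-Zippel lemma: a nonzero polynomial of
    the given total degree vanishes at a uniformly random point with
    probability at most degree/field, so surviving every trial means
    'identically zero' with high probability.
    poly: black box mapping an n-tuple of field elements to a field element."""
    for _ in range(trials):
        point = tuple(random.randrange(field) for _ in range(n))
        if poly(point) % field != 0:
            return False  # a nonzero evaluation certifies the polynomial is nonzero
    return True
```

The whole derandomization program is about replacing the random points here with a small explicit hitting set for the circuit class at hand.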

My research area is economics and computation, more specifically equilibrium analysis and computation. Apart from designing fast algorithms, recently I have worked on characterizing the complexity of computing equilibria in both markets and games, which leads to striking dichotomies (see Dichotomies in Equilibrium Computation and FIXP and ETR-hardness). For games, the dichotomy is between 2-player games vs. games with more than 2 players, and for markets, it is between separable vs. non-separable utility functions.

Gentry, Craig

In the last few years, while continuing to conduct research on property testing, I became involved in research on AC0. It started with my work with Avi Wigderson On the Size of Depth-Three Boolean Circuits for Computing Multilinear Functions, which later led to work with Avishay Tal on Matrix Rigidity of Random Toeplitz Matrices. Another direction pursued with Avi is On Derandomizing Algorithms that Err Extremely Rarely, which had a main component regarding AC0 and AC0[2], and led to a study of Randomness Extraction in AC0 (with Avi and Emanuele Viola). For a full list of my relatively recent publications see HERE.

Goldwasser, Shafi

Things I have been thinking about:

1) Pseudorandomness: Unconditional PRGs with logarithmic seed-length for restricted classes of bounded-space algorithms and small-depth circuits. Our approach gives near-optimal PRGs for halfspaces, combinatorial rectangles, read-once DNFs and more.

2) Boolean function analysis: I've been thinking about the degree sensitivity conjecture (with collaborators who apparently don't want to own up to it, so I won't name them). We recently showed that low-sensitivity functions have small NC circuits and other nice properties in line with the conjecture. We've also been exploring "robust" versions of the conjecture.

3) Coding theory: Codes with locality for data storage. The right notions of fault tolerance for such settings, explicit constructions and lower bounds.

Since the Nov 2012 meeting, I have continued working on coding theory and its interplay with pseudorandomness, as well as on approximate optimization and PCPs.

In coding theory, we made some progress on

(i) polar codes, showing that they enable reliable communication on discrete memoryless channels with complexity polynomial in the inverse of the gap to Shannon capacity; see the works with Xia (binary codes) and Velingker (general alphabets).

(ii) list decoding, via explicit constructions of subspace designs (with Kopparty, work initiated at last Oberwolfach), and their application to pruning the list in algorithms for algebraic-geometric and rank-metric codes (with Xing and Wang).

(iii) codes resilient to worst-case deletions, constructing codes in the high-rate and high-noise regimes (with Wang), low-redundancy codes to correct a fixed number of deletions (with Brakensiek and Zbarsky; even the two-deletion case was open), and binary codes to correct a deletion fraction approaching 1/3 (with Bukh).

(iv) non-malleable codes, where we identified their "capacity" (as a function of family size), constructed efficient non-malleable codes of rate close to 1 for bit-tampering adversaries, and put forth the (subsequently fruitful) notion of non-malleable two-source extractors (with Cheraghchi).

(v) repairing Reed-Solomon codes in the context of regenerating codes, via a communication-efficient method to interpolate a polynomial's value at a point based on traces of its value at other points (with Wootters).
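As a concrete instance of the single-deletion setting underlying item (iii): the classical Varshamov-Tenengolts code corrects any one deletion with only about log n bits of redundancy. The sketch below verifies this by brute force on small lengths (illustrative code, unrelated to the constructions in the papers above; function names are mine):

```python
from itertools import product

def vt_code(n, a=0):
    """The Varshamov-Tenengolts code VT_a(n): all length-n binary words x with
    sum_i i*x_i = a (mod n+1). It corrects any single deletion."""
    return [x for x in product((0, 1), repeat=n)
            if sum((i + 1) * xi for i, xi in enumerate(x)) % (n + 1) == a]

def vt_decode_one_deletion(y, n, a=0):
    """Brute-force decoder: reinsert one bit in every possible way and keep the
    candidates lying in VT_a(n); the deletion-correcting property of the code
    guarantees that exactly one codeword survives."""
    candidates = {x for x in (tuple(y[:i]) + (b,) + tuple(y[i:])
                              for i in range(n) for b in (0, 1))
                  if sum((i + 1) * xi for i, xi in enumerate(x)) % (n + 1) == a}
    assert len(candidates) == 1
    return candidates.pop()
```

Already the analogous two-deletion question, low-redundancy codes correcting two deletions, was open before the work with Brakensiek and Zbarsky mentioned in (iii).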

In approximability, we did some work on

(i) PCPs using the low-degree long code (renamed from the short code), and improved inapproximability results for hypergraph coloring and related problems (e.g. see the works with Dinur and Harsha, Hastad, Srinivasan, Varma)

(ii) CSPs and hypergraph coloring (promise) problems where the notion of approximation lies in the kind of predicates satisfied in the Yes and No cases (e.g. finding a normal satisfying assignment when a balanced assignment exists); see the works with Austrin, Hastad and Lee. A specific fallout of this line of investigation is the seemingly new result (with Brakensiek, in writing) that 6-coloring 4-colorable graphs is NP-hard.

(iii) exporting ideas to communication complexity, applying influence-based decoding and invariance principles to study communication lower bounds when two parties share correlated randomness (with Canonne, Meka, Sudan)

My main area of interest is geometric complexity theory (lower bounds in algebraic complexity theory via algebraic geometry and representation theory) in a broad sense. I am interested in Valiant's determinant vs permanent problem, but also in how to apply methods from geometric complexity theory in other settings. For example, with Peter Bürgisser we used methods from geometric complexity theory to obtain lower bounds on the border rank of matrix multiplication; see Explicit Lower Bounds via Geometric Complexity Theory. My main tool is representation theory. With Bürgisser and Hüttenhain we were able to rule out a direct asymptotic approach to finding so-called occurrence obstructions using saturations of semigroups; see Permanent versus determinant: not via saturations.

With Gesmundo, Hauenstein and Landsberg we studied the geometry of matrix rigidity problems, see Complexity of Linear Circuits and Geometry.

Very recently with Mulmuley and Walter On vanishing of Kronecker coefficients we showed that there exist superpolynomially many partition triples in the Kronecker cone with vanishing Kronecker coefficient (and additional properties). The construction is explicit as required in the geometric complexity theory approach. Besides representation theory, classical boolean complexity theory (polynomial-time reductions between NP-complete languages) is used to prove this result. We also proved that deciding the positivity of Kronecker coefficients is NP-hard.

Kayal, Neeraj

Klauck, Hartmut

My main research interest is arithmetic circuit complexity. My most recent work in this area is on lower bounds for “simple” models such as sums of powers of low degree univariate polynomials. Obtaining optimal lower bounds is still quite challenging and may lead to new insights. In joint work with Kayal, Pecatte and Saha we obtained lower bounds of the order of the square root of the degree; this is one square root away from optimal.

Very recently, in joint work with Garcia-Marco we obtained optimal lower bounds for sums of powers of degree 1 real univariate polynomials. This problem is still challenging because the degree 1 polynomials may be raised to different powers. Our results are based on polynomial interpolation, and more precisely Birkhoff interpolation (a.k.a. “lacunary interpolation”). This seems to be a new lower bound method in complexity theory, and we hope that it will find further applications.

I've been working on interactive information theory and communication complexity. Recently, my focus was on interactive compression. In a sequence of papers with Anat Ganor and Ran Raz, we show exponential separations between (internal and external) information cost and communication complexity (see here, here and here). In a recent work, I give a new compression protocol that works for product distributions and has communication polynomial in the information cost of the original protocol.

Mehta, Ruta

In the recent years, I have been working mostly on circuit lower bounds. Specifically, I have studied ways to analyze Karchmer-Wigderson relations, which are communication problems that are tightly related to questions on depth and formula lower bounds. I have used these techniques to study an old conjecture of Karchmer, Raz, and Wigderson on the composition of functions.

This study has led to a joint work with Gavinsky, Weinstein and Wigderson on the composition of a function with the universal relation, as well as to an upcoming joint work with Dinur, which provides an alternative proof for the cubic lower bound on formulas.

I have also continued to work on PCPs and locally testable codes, together with Ben-Sasson, Kaplan, Kopparty, Ron-Zewi, Saraf and Stichtenoth (see here, here, and here).

I’ve been mainly working on pseudorandomness, explicit constructions, and other connected areas:

1) Getting PRGs better than Nisan/INW generators for small-space algorithms: these two papers are perhaps the most relevant (the latter covers almost all 'linear tests'). The main theme here is the technique of "gradually increasing independence" or "iteratively simplifying functions (à la random restrictions, but not one-shot)".

2) Sum-of-square lower bounds for planted clique and other problems.

3) Communication complexity: Can we characterize the CC of F = f(g^n), where f is a Boolean function and g is a 'small' two-party gadget - such as inner-product on b bits - in terms of a 'query' measure of f? A recent work does this for most rectangle-based complexity measures (corruption, partition, etc.) when g is not too 'small'. Can the 'small-ness' (b = O(log n) vs. O(1)) restriction be removed?

4) Explicit resilient functions: Ajtai-Linial in 1993 showed the existence of a depth-3 function resilient to Omega(n/(log^2 n)) bad players. This paper gives an explicit function matching their work. It also includes a "sampler" preserving moment generating functions.

Three main topics I've worked on lately:

Quantum tomography: This is the quantum analogue of the classical problem of learning/testing probability distributions from samples. Together with John Wright, I've recently proved new lower and upper bounds for some of the fundamental problems: e.g., tomography and spectrum estimation (the analogue of distribution learning); testing being maximally mixed (the analogue of being the uniform distribution). After a short amount of representation theory, these quantum problems reduce to very interesting combinatorial/probabilistic problems concerning Young diagrams and longest increasing subsequences in random words.

Two relevant papers.

Easiness/hardness of random CSPs: Fix a type of CSP and then choose a uniformly random instance with n variables and m = m(n) constraints. Assuming m is sufficiently large, with high probability the CSP will be unsatisfiable (and in fact have optimum close to the random bound). The natural algorithmic task is to *refute* the instance (whp); i.e., output a proof of unsatisfiability. This becomes easier the larger m(n) is. I am interested in understanding how the easiness threshold m(n) depends on the type of the CSP. I am also interested in the following related but distinct question: For a random CSP with a *planted* satisfying assignment, how large must m(n) be before it's algorithmically easy to find the planted solution?

These sorts of problems have applications for cryptography in NC0 and for classic hardness-of-learning problems.

Two relevant papers.

Influences/query complexity/value maximization for low-degree Boolean functions: I've been thinking about these concepts in the context of DFKO and the two Aaronson-Ambainis papers, on the Aaronson-Ambainis Conjecture and on "Forrelation". I have some new results on these topics concerning beating the random assignment for CSPs and new results/applications on Decoupling (unpublished).

I am working on connections between communication complexity and distributed computing. The traditional focus of the communication complexity community is different from that of the distributed community (e.g.: number-on-forehead vs. number-in-hand, bounding the total communication cost vs. number of rounds, etc.) and the models are different as well, giving rise to new questions.

Recent work includes:

- Multi-party number-in-hand communication and information complexity. It turns out that information complexity gives us the added leverage we need here. Together with Braverman, Ellen, Pitassi and Vaikuntanathan, we showed tight lower bounds of Omega(nk) [STOC'13] and Omega(n*log k + k) [PODC'15], respectively, on the communication complexity of set disjointness in the message-passing and blackboard models.

- Applying communication complexity lower bounds to obtain lower bounds for various distributed problems (such as aggregation and spanning tree computation [DISC'11, PODC'12], subgraph detection [PODC'14] and others).

In the past couple of years, I have been working on lower bounds for extended formulations and Sum-of-Squares hierarchies.

1) Lower bounds on the size of SDP extended formulations: In joint work with James Lee & David Steurer, we obtain exponential lower bounds on the size of SDP formulations that solve TSP or 3-SAT or optimize over the cut polytope. We also show that for all constant d, among all SDP relaxations of size n^{d}, the O(d)-round sum-of-squares SDP yields the optimal approximation.

2) SoS lower bounds for planted clique: Building on earlier works by Meka-Wigderson-Potechin and Deshpande-Montanari, we show that the degree-4 SoS SDP hierarchy cannot detect a planted clique of size smaller than sqrt{n} in an Erdos-Renyi random graph.

I have been working on problems related to lower bounds in communication complexity. Mostly I have been failing at proving new lower bounds in the number-on-forehead model.

A few months ago Makrand Sinha and I simplified a beautiful proof of Ganor, Kol, and Raz showing that the information complexity of computing a function can be exponentially smaller than its communication complexity. I would be happy to give a blackboard talk on the proof if there is interest. It is still not clear what the best way to simulate a protocol with low information is, and I am very interested in this question.

I shall not trouble this learned society with my work in continuous and extremal combinatorics; in case you are curious anyway, you can check out What is a Flag Algebra and the references therein. Closer to the point, I have resumed more or less systematic work in proof complexity: An Ultimate Tradeoff in Propositional Proof Complexity is a spin-off of a larger text that, though, may or may not be available by the time of the workshop. Besides, I still love working on exciting problems in complexity when an opportunity presents itself; Real Advantage and On the $AC^0$ Complexity of Subgraph Isomorphism are two recent examples.

Lattice algorithms: In our STOC 2015 paper we gave the fastest known algorithm for the Shortest Vector Problem in lattices. The result is based on sampling from the discrete Gaussian distribution. We have also worked on a question of interest in cryptography, showing how to recover short generators of principal ideals given a generator (this latter result might be too specialized for the audience?).
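As background for the discrete-Gaussian ingredient: over the integers (as opposed to a general lattice, which is the hard case the algorithm addresses), the distribution D_{Z,s} can be sampled by simple rejection. A minimal sketch, with an illustrative tail cut-off and a function name of mine:

```python
import math
import random

def sample_discrete_gaussian(s, tail=6):
    """Rejection sampler for D_{Z,s}: the distribution on the integers with
    mass proportional to exp(-pi * x^2 / s^2), truncated to |x| <= tail*s
    (the truncated mass is negligible for tail >= 6)."""
    bound = int(math.ceil(tail * s))
    while True:
        x = random.randint(-bound, bound)
        # accept x with probability proportional to its Gaussian mass
        if random.random() < math.exp(-math.pi * x * x / (s * s)):
            return x
```

Sampling D_{Lattice,s} for an arbitrary lattice and small s, which is what the STOC 2015 algorithm needs, is far harder than this one-dimensional building block.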

Geometry of lattices: for nearly 3 years now, Dadush and I have been working on trying to establish a converse to Minkowski's theorem on lattices. Informally, Minkowski's theorem says that "global density implies local density", and we are trying to prove that "local density implies global density". This is still work in progress, but I can report on some progress we made, such as showing that our conjecture implies a conjecture of Kannan and Lovasz. I can also describe the many implications of this conjecture, to complexity, additive combinatorics, etc.

Hardness of the non-commutative Grothendieck problem: in our FOCS 2015 paper we proved NP-hardness for this optimization problem. The proof is based on a nice mathematical construction of a subspace of the space of matrices.

Two recent lines of research I have been involved with:

1. New constructions of PRGs fooling models of low space (log n-wise almost independent hash functions, epsilon-biased sequences, regular branching programs, generalizations of combinatorial rectangles, read-once CNFs and more) and hash functions for data structure applications (load balancing, Cuckoo hashing and the two-choice paradigm). These two lines of research turn out to be closely related. Relevant papers:

• Parikshit Gopalan, Raghu Meka, Omer Reingold, and David Zuckerman, Pseudorandom Generators for Combinatorial Shapes, in SIAM J. Comput. 42(3): 1051-1076 (2013). Preliminary version: STOC 2011

• Laura Elisa Celis, Omer Reingold, Gil Segev, and Udi Wieder, Balls and Bins: Smaller Hash Families and Faster Evaluation, in SIAM J. Comput. 42(3): 1030-1050 (2013). Preliminary version: FOCS 2011

• Parikshit Gopalan, Raghu Meka, and Omer Reingold, DNF Sparsification and a faster deterministic counting algorithm, in Computational Complexity 22(2): 275-310 (2013). Preliminary version in CCC 2012

• Parikshit Gopalan, Raghu Meka, Omer Reingold, Luca Trevisan, and Salil Vadhan, Better pseudorandom generators from milder pseudorandom restrictions, in FOCS 2012

• Omer Reingold, Ron D. Rothblum, Udi Wieder: Pseudorandom Graphs in Data Structures. ICALP (1) 2014: 943-954

• Raghu Meka, Omer Reingold, Guy N. Rothblum, Ron D. Rothblum: Fast Pseudorandomness for Independence and Load Balancing, ICALP (1) 2014: 859-870

• Raghu Meka, Omer Reingold, Yuan Zhou: Deterministic Coupon Collection and Better Strong Dispersers. APPROX-RANDOM 2014: 872-884

2. Protecting accuracy of adaptive statistical analysis via a somewhat surprising application of differential privacy. Along the way we study a notion that we refer to as max-information and its relation to generalizability in learning. Relevant papers:

• Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, Aaron Leon Roth: Preserving Statistical Validity in Adaptive Data Analysis. STOC 2015: 117-126

• Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, Aaron Leon Roth: The reusable holdout: Preserving validity in adaptive data analysis, Science, 7 August 2015: Vol. 349 no. 6248, pp. 636-638

• Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, Aaron Roth: Generalization in Adaptive Data Analysis and Holdout Reuse, to appear in NIPS.

Recent and ongoing work:

— Applying Håstad’s switching lemma in a new way to show a tight exp(Omega(d*n^{1/d})) lower bound on depth-(d+1) formulas computing PARITY.

— Generalizations of the switching lemma using random projections (joint work with Servedio and Tan)

— Pathset Complexity: a method for proving formula-size lower bounds for the average-case st-connectivity problem. Results so far for AC0 and monotone-NC1.

— Investigating the relationship between the complexity of subgraph isomorphism problems and parameters like treewidth and treedepth (joint work with Li and Razborov)
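For reference, the switching lemma mentioned above, in one standard formulation (the constant varies across versions): for a $k$-DNF (or $k$-CNF) $F$ and a random restriction $\rho$ that leaves each variable free independently with probability $p$,

```latex
\Pr_{\rho \sim R_p}\bigl[\mathrm{DT}(F\restriction_\rho) \ge t\bigr] \le (5pk)^t
```

where $\mathrm{DT}$ denotes decision-tree depth.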

My main research interest is in methods that allow a computationally weak client to outsource its computation to an untrusted server. In a recent sequence of works, I have studied methods that allow the client to verify the correctness of a computation performed by the server, where the client runs in only linear or even sublinear* time and the prover runs in polynomial time.

(* Here the soundness guarantee is only for inputs that are far from the language. This setting can be thought of as the interactive-proof analog of property testing.)

Some recent results from this line of work are:

(1) Linear-time verification for any language in P with only one round of communication (together with Yael Kalai and Ran Raz).

(2) A study of sublinear-time verification given a *non-interactive* proof (together with Tom Gur).

(3) Polylogarithmic-time verification for context-free languages and languages accepted by poly-size read-once branching programs (together with Oded Goldreich and Tom Gur).

(4) $\tilde{O}(n^{2/3})$-time verification for languages in P and a corresponding $\Omega(n^{1/2})$ lower bound (together with Yael Kalai).

I'd be happy to give a talk surveying this line of work or a more specific talk about one or more of these results.

In recent years I've been thinking mostly about some algorithmic problems.

Very efficient approximation algorithms for longest increasing subsequence and related problems (Ulam distance) in random access and streaming models (e.g., SODA 2015 paper).

Lower bounds for the file maintenance/online labeling problem (BKS1, BBCKS, BKS2).

Noisy population recovery: recently showed (with Anindya De and Sijian Tang) that noisy population recovery can be done in polynomial time (if the noise is not too large), building on work of Lovett and Zhang.

In combinatorial complexity: Justin Gilmer, Michal Koucky and I proposed an approach to the sensitivity conjecture based on a communication game (GKS).

Schnorr, Claus-Peter

Some recent interests have been:

— Small depth circuit complexity: I've worked on obtaining an average-case extension of Håstad/Yao's depth hierarchy theorem for Boolean circuits; on lower bounds for small-depth circuits that compute small-distance connectivity (not yet published); and on showing that constant-depth monotone circuits of MAJ gates can't efficiently simulate high-weight monotone linear threshold functions. (Joint with Chen, Oliveira, Rossman, and Tan in various combinations.)

— Boolean function property testing: I've been interested in Boolean function property testing for a while now; some of my more recent works in this area have been on monotonicity testing and junta testing. Some relevant papers. (Joint with Chen, De, Tan and Wright in various combinations.)

— Learning and testing probability distributions: I've worked on various problems involving learning different types of distributions (over {0,1}^n, over the integers, and over [0,1]). (Joint with Chan, Daskalakis, De, Diakonikolas, O'Donnell, Sun, and Tan in various combinations.)

Reed-Muller codes: In a joint work with Abbe and Wigderson we studied the behavior of Reed-Muller codes under random errors and erasures and proved that RM codes achieve capacity for the erasure channel at low and high rates. With Saptharishi and Volk we gave a decoding algorithm for RM codes from random errors.

Polynomial identity testing: In a joint work with Kopparty and Saraf we proved that deterministic factoring of multivariate polynomials is equivalent to derandomizing polynomial identity testing. With Oliveira and Volk we obtained a subexponential PIT for depth-3 multilinear circuits. With Forbes and Saptharishi we obtained PIT for read-once algebraic branching programs with unknown read order. This was later improved by Agrawal et al.

My work related to this workshop is mostly in property testing in sparse graphs. Jointly with Artur Czumaj and Pan Peng I developed a tester for k-clusterings of graphs that extends earlier work on testing expanders. I also studied some relations between two property testing models in directed graphs, and obtained some results on the problem of finding small graphs that have approximately the same distribution of k-disks (subgraphs induced by vertices at distance at most k) as a bounded-degree input graph. Together with Hendrik Fichtenberger and Pan Peng, I obtained explicit bounds on the size of the small graph when the input graph has high girth.

In the last couple of years, I have been investigating the sum-of-squares method, especially in the context of the Unique Games Conjecture, lower bounds for (semidefinite) extended formulations, and machine learning.

1. A joint work with Lee and Raghavendra shows exponential lower bounds on algorithms based on semidefinite programming relaxations for basic combinatorial optimization problems, like the Max Cut and Traveling Salesman problems. The proof shows that for every CSP, the sum-of-squares method gives an optimal approximation algorithm among polynomial-time algorithms based on SDP relaxations. It also reveals some connections between SDP-based algorithms and quantum information theory.

2. Joint works with Barak, Hopkins, Kelner, Schramm, and Shi use the sum-of-squares method to give new algorithms and lower bounds for problems about tensors and polynomials that play important roles in (unsupervised) machine learning. Some of the results indicate a surprising connection between SOS-based algorithms and algorithms used in practice.

In the past few years, I have been working mostly on Communication Amid Uncertainty - various problems where communicating players have to overcome some uncertainty about each other in addition to satisfying some standard goal of communication. Most recent problems are set in the framework of Communication Complexity (a la Yao). See Communication Complexity of Permutation-Invariant Functions, Communication with Contextual Uncertainty, and Communication with Imperfectly Shared Randomness for some of this line of work. I am also continuing to work on testing of algebraic properties, especially "lifted codes". See Absolutely Sound Testing of Lifted Codes and Robust testing of lifted codes with applications to low-degree testing.

De Morgan Formulae: I have been interested in lower bounds for De Morgan formulae, with the ultimate goal of separating NC1 from P. I gave a new proof of Håstad’s result showing that the shrinkage exponent of De Morgan formulae is 2. In a joint work with Komargodski and Raz, we gave average-case lower bounds on the formula size of explicit functions, matching the best known worst-case hardness.
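For context, the shrinkage statement in one common formulation (the lower-order term depends on the version): for a random restriction $\rho$ that leaves each variable free independently with probability $p$,

```latex
\mathop{\mathbb{E}}_{\rho}\bigl[L(f\restriction_\rho)\bigr] \le O\bigl(p^{2} L(f) + p\sqrt{L(f)}\bigr)
```

where $L(f)$ is the De Morgan formula size of $f$; the exponent 2 of $p$ is the shrinkage exponent.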

Small Depth Circuits: In a joint work with Goldreich we prove new rigidity bounds for random Toeplitz matrices. This yields an exp(n^{0.6}) lower bound for an explicit function in a restricted model of depth-3 circuits, whose study was initiated by Goldreich and Wigderson. In a recent work, I proved tight bounds on the Fourier tails of AC0 circuits, improving the seminal result of Linial, Mansour and Nisan.

Two topics I've worked on recently:

Small-depth circuit complexity: Together with Rossman and Servedio, I've worked on obtaining an average-case extension of Håstad/Yao's depth hierarchy theorem for Boolean circuits. A key ingredient in our proof is a notion of random projections which generalizes random restrictions. In recent work with Chen, Oliveira, and Servedio (not yet published), we apply the method of random projections to obtain near-optimal small-depth lower bounds for the small distance connectivity problem. I've also worked on the DNF complexity of approximating Boolean functions with Blais, Håstad, and Servedio (two papers).

Property testing of Boolean functions: Together with Chen, De, and Servedio, we've obtained a near-optimal lower bound on the nonadaptive query complexity of monotonicity testing using multidimensional CLTs (two papers). Together with Servedio and Wright we have also shown that adaptive testers for juntas are more powerful than nonadaptive ones, via a new lower bound for nonadaptive testers.

I've also worked on discrete Fourier analysis and its applications in complexity theory.

My main area is arithmetic circuit complexity. In particular, I have mainly studied lower and upper bounds for some constant-depth models. In the first part, I considered some univariate models; one can hope that lower bounds are easier to obtain since these models are simpler. I focused on some versions of the real tau-conjecture. With Koiran and Portier, we showed some non-trivial upper bounds on the number of real zeros of univariate sums of powers of sparse polynomials. I am also interested in some combinatorial versions of this conjecture, like the size of the convex hull of the Newton polygon (joint work with Koiran, Portier and Thomassé) or the log-concavity of the coefficients (joint work with Koiran and Garcia).

In the multivariate case, I obtained some optimal upper bounds on the size of homogeneous depth-4 circuits for any polynomial in VP. Moreover, we showed with Mahajan and Saurabh that exponential sums do not add power if the epsilon-variables are constrained to be multilinear. Recently, I worked with Kayal and Saha on lower bounds (via "shifted partial derivatives") for some models.

Trevisan, Luca

Over the past few years I have been working on differentially private data analysis, trying to understand how we can analyze a dataset of sensitive information without compromising the privacy of the individuals who contribute to that dataset. Much of this work is concerned with hardness results and lower bounds for differentially private algorithms (e.g. here and here), and the main technique we've developed is the use of cryptographic tools like fingerprinting codes and traitor-tracing schemes to understand the limits of privacy.

I've also recently become interested in the problem of adaptive data analysis, where we want to ensure statistical validity of our conclusions even when the choice of statistic/algorithm/query to compute on the dataset can depend on previous interactions with the same data. This problem has a somewhat surprising connection to differential privacy, and we were able to extend the hardness results for differential privacy to show strong computational bottlenecks to ensuring statistical validity in adaptive data analysis (e.g. here and here).

In recent years, most of my work has been on Differential Privacy, which is a framework for enabling statistical analysis of datasets with sensitive information about people while providing strong privacy protections to the individuals in the dataset. On the complexity side of differential privacy, a recent focus has been on proving lower bounds for differential privacy using cryptographic constructs such as fingerprinting codes (joint works with Bun and Ullman, and with Bun, Nissim, and Stemmer), as well as constructing new attacks on releases of aggregate statistics (joint work with Dwork, Smith, Steinke, and Ullman). It turns out that computational complexity also arises in composition theorems for differential privacy (joint work with Jack Murtagh).
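Not from these works, but for readers new to the area: the basic mechanism underlying much of differential privacy is Laplace noise addition. A minimal sketch:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Release true_value + Laplace(sensitivity/epsilon) noise.

    For a statistic that changes by at most `sensitivity` between
    neighboring datasets, this release satisfies
    epsilon-differential privacy.
    """
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform in [-0.5, 0.5)
    # inverse-CDF sampling of the Laplace(0, scale) distribution
    return true_value - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
```

For example, releasing a counting query (sensitivity 1) with epsilon = 1 adds noise of standard deviation sqrt(2), independent of the dataset size.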

I continue to also think about pseudorandomness, particularly models related to space-bounded computation. Some recent efforts in this vein are joint works with Reingold and Steinke, and with Steinke and Wan.

I have been working on several questions related to computing on encrypted data, e.g., fully homomorphic encryption, functional encryption, and program obfuscation. Two recent, representative results:

- At FOCS'15, with Nir Bitansky, we showed how to construct indistinguishability obfuscation from any functional encryption scheme [http://eprint.iacr.org/2015/163.pdf]
- At CRYPTO'15, with Hoeteck Wee and Sergey Gorbunov, we showed how to construct a one-sided version of functional encryption under the hardness of Regev's "learning with errors" problem (which, in turn, is as hard as approximating shortest vectors in worst-case integer lattices) [http://people.csail.mit.edu/vinodv/predenc.pdf]

These two results together bring us (apparently) closer to constructing indistinguishability obfuscation from well-studied hardness assumptions.

I also think about a handful of other topics in cryptography and complexity.

Just an example: Can we base the "security" of block ciphers (e.g., the Advanced Encryption Standard) on well-defined complexity assumptions? This relates to Rackoff's long-standing conjecture that the composition of sufficiently many "simple" permutations results in a pseudorandom permutation, and is also related to questions in the representation theory of symmetric groups.

Another example: Can we base (any form of) cryptography on NP-hardness? An observation along these lines (with Tianren Liu, to appear in TCC 2016) is that the security of (single server) private information retrieval protocols cannot be based on NP-hardness (under black-box reductions).

Here are some of the topics I have worked on since the last Oberwolfach meeting.

- SoS (Sum of Squares) lower bounds. We still know little about the limits of this general algorithmic technique. Some lower bounds are in Meka-Potechin-W (on planted clique) and Ma-W (on sparse PCA).

- Reed-Muller codes are old and familiar, but their behavior under random noise was far from understood. In Abbe-Shpilka-W we showed that they achieve capacity at very high and very low rates, for both erasures and errors. Shortly after came the breakthrough of Pfister and Kumar showing that they achieve capacity at constant rate for erasures. The main open problem remains constant-rate errors.

- Some attempts to better understand AC^0: randomness extraction in Goldreich-Viola-W, and lower bounds above exp(\sqrt{n}) in Goldreich-W (with beautiful progress by Goldreich-Tal on the model presented there).

- Some progress on relating VC-dimension to other standard learning concepts is in Moran-Shpilka-W-Yehudayoff. We still don’t really understand sets of VC-dimension 2!!

— The KRW conjecture about the composition of formulas would prove that P != NC^1 while avoiding the Natural Proofs barrier. Small progress towards it is in Gavinsky-Meir-Weinstein-W, with further progress made very recently by Dinur-Meir.

— In Dvir-Saraf-W we succeeded in breaking the quadratic barrier for 3-query locally correctable codes, reaching n^{2.1}… but to the best of anyone’s knowledge, such codes may not exist at all (they don’t for 2 queries).

— In Hrubes-W we initiated the study of non-commutative circuits and formulas with division (there is no known efficient elimination of divisions as in the commutative case). In ongoing work with Garg and Olivera we have extended Nisan’s exponential non-commutative formula lower bound to formulas with division (unless PH collapses). This entails a deterministic polynomial-time algorithm for identity testing of non-commutative rational expressions.

Over the past couple of years, I have participated in several threads of research connecting algorithm design and complexity lower bounds. Here are some results.

- With Rahul Santhanam, I showed some new uniform circuit lower bounds, such as: there is a language in P that is not in P-uniform SIZE[O(n)]. We also used structural properties of low-depth circuits to prove things like: if NC1 is in nonuniform TC0, then NC1 is in subexponential-time uniform TC0 (On Medium-Uniformity and Circuit Lower Bounds).

- I have extended previous work to show that NEXP needs superpolynomial-size ACC circuits even when one allows a layer of linear threshold gates at the bottom. At the heart is a faster algorithm for counting SAT assignments to such circuits (New Algorithms and Lower Bounds for Circuits With Threshold Gates).

- With student Cody Murray, I have shown several new results on the hardness (or lack thereof) of the Minimum Circuit Size Problem. For example, we prove the problem is not AC0[2]-hard under polylogarithmic time reductions, and if it is NP-hard under polytime reductions then EXP != ZPP (On the (Non) NP-hardness of Computing Circuit Complexity).

- Investigating closely how the polynomial method can be applied to speed up algorithms, I have improved longstanding running times for several core problems. The highlights include Faster All-Pairs Shortest Paths via Circuit Complexity, More Applications of the Polynomial Method to Algorithm Design (with students Amir Abboud and Huacheng Yu), and Probabilistic Polynomials and Hamming Nearest Neighbors (with student Josh Alman). The first two use very careful applications of Razborov-Smolensky and fast matrix multiplication to speed up repetitive subroutines; the last constructs an optimal-degree probabilistic polynomial for the majority function, and uses it to solve Closest Pair in Hamming space in subquadratic time (for spaces with slightly superlogarithmic dimension).

In the last couple of years I’ve been working in the following areas:

Randomness extraction: explicit extractors for two independent sources with polylogarithmic min-entropy, based on new explicit resilient functions; extractors for additive sources, which are sources with additive structure; and extractors for interleaved sources, which are independent sources whose bits or symbols are interleaved.
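For reference, a two-source extractor here means a function $\mathrm{Ext}\colon \{0,1\}^n \times \{0,1\}^n \to \{0,1\}^m$ such that for all independent sources $X, Y$ each with min-entropy at least $k$,

```latex
\Delta\bigl(\mathrm{Ext}(X, Y),\, U_m\bigr) \le \varepsilon
```

where $\Delta$ denotes statistical distance and $U_m$ is the uniform distribution on $m$ bits; the result mentioned above achieves $k = \mathrm{polylog}(n)$.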

Applications of pseudorandomness to coding theory and cryptography: building non-malleable extractors to construct non-malleable codes, which can handle severe types of corruption; and building robust pseudorandom generators, which have a pseudorandom output except for a few bits even when an adversary controls several wires of the implementation, and which have applications to cryptography.

Lower bounds: a new method for proving lower bounds on the communication complexity of composed functions, based on a structural theorem showing that each rectangle in the communication matrix is a nonnegative combination of juntas; and a method for using known and new circuit lower bounds to obtain compression algorithms.