Workshop Program

Sunday, Feb. 8

8:00  Shuttle from Hilton Chicago to TTIC
8:30-9:00  Breakfast
9:00-9:40  James Lee. Sparsest Cut and related issues.
9:40-10:20  Subhash Khot. Optimal Long Code Test with One Free Bit.
10:20-10:40  Break
10:40-11:20  Konstantin Makarychev. Integrality Gaps for Pricing Items for Single-Minded Bidders.
11:20-12:00  David Steurer. New Multicut Approximations from Unique Games Algorithms.
12:00-13:30  Lunch
13:30-14:10  Vijay Vazirani. Combinatorial Approximation Algorithms for Convex Programs?!
14:10-14:50  Avrim Blum. Approximate Clustering without the Approximation Algorithm.
14:50-15:30  Anupam Gupta. Stochastic Analysis of Online Problems.
15:30-16:00  Break
16:00-16:40  Sanjeev Khanna. Approximation Algorithms for Vertex-Connectivity Survivable Network Design.
16:40-17:20  Joseph Cheriyan. Approximation Algorithms for Network Design.
17:20-18:00  Rajsekar Manokaran. Inapproximability of Maximum Acyclic Subgraph.
18:20  Shuttle to Hilton Chicago hotel

Monday, Feb. 9

8:00  Shuttle from Hilton Chicago to TTIC
8:30-9:00  Breakfast
9:00-11:00  Dana Moshkovitz. Two Query PCP with Sub-Constant Error.
11:00-11:20  Break
11:20-12:00  Ryan O'Donnell. Zwick's Conjecture is implied by most of Khot's Conjectures.
12:00-13:30  Lunch
13:30-14:10  Moses Charikar. MaxMin Allocation via Degree Lower-Bounded Arborescences.
14:10-14:50  Matthew Andrews. Edge-Disjoint Paths via Raecke Decompositions.
14:50-15:30  Nitish Korula. A Graph Reduction Step Preserving Element-Connectivity and Applications.
15:30-16:00  Break
16:00-16:40  Nikhil Bansal. Unsplittable Flow on Line Graphs.
16:40-17:20  Irit Dinur. 2-query PCP Composition.
18:00  Shuttle downtown
18:30  Dinner

Tuesday, Feb. 10

8:00  Shuttle from Hilton Chicago to TTIC
8:30  Breakfast
9:00-9:40  Prasad Raghavendra. How to Round Any CSP.
9:40-10:20  Yury Makarychev. Integrality Gaps for Sherali-Adams Relaxations.
10:20-10:40  Break
10:40-11:20  Samir Khuller. On Broadcast Scheduling.
11:20-12:00  Seffi Naor. The Primal-Dual method in Online Computation.
12:00-13:30  Lunch
13:30-14:10  Gagan Goel. A key primitive for Internet ad auctions: Budgeted Allocation.
14:10-14:50  Chandra Chekuri. Orienteering and variants: a mini-survey and open problems.
14:50-15:30  Nisheeth Vishnoi. Learning Approximately Sparse Cuts in Graphs Quickly.
16:00  Shuttle to Hilton Chicago hotel


Abstracts

Sparsest Cut and related issues
James Lee

I will discuss integrality gaps for Sparsest Cut, why there has been little progress on them in the last 3 years, and possible directions for future work. I will start by presenting a simple example/analysis for Sparsest Cut with uniform demands, which achieves the best-known integrality gap.

Optimal Long Code Test with One Free Bit
Subhash Khot

For arbitrarily small constant \eps > 0, we present a long code test with one free bit, completeness 1-\eps and soundness \eps. Using the test, we prove the following two optimal inapproximability results:
1. Assuming the Unique Games Conjecture, given an n-vertex graph that has two disjoint independent sets of size (1/2-\eps)n each, it is NP-hard to find an independent set of size \eps n.
2. Assuming a (new) stronger version of the Unique Games Conjecture, the scheduling problem of "minimizing average waiting time on a single machine with precedence constraints" is inapproximable within factor 2-\eps.
Joint work with Nikhil Bansal.

Integrality Gaps for Pricing Items for Single-Minded Bidders
Konstantin Makarychev

We consider the following item pricing problem, which has received much attention recently. A seller has an unlimited supply of each of n items. There are m buyers, each with a budget and an intention to buy a fixed subset of the items. Given prices on the items, each buyer buys his subset of items, at the given prices, provided the total price of the subset is at most his budget. The objective of the seller is to determine the prices so that her total revenue is maximized.
We focus on the case where the buyers are interested in subsets of size at most two. Balcan and Blum gave a factor 4 approximation algorithm for this problem. We show that a natural linear programming relaxation of this problem has an integrality gap of 4. We then construct an SDP gap of 1.138 and prove a matching 1.138 hardness assuming the Unique Games Conjecture.
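As a toy illustration (a minimal sketch in Python; the instance and the brute-force price search are hypothetical, not from the paper), the seller's revenue for fixed prices can be evaluated directly, since each single-minded buyer purchases its whole subset iff the total price is within budget:

    from itertools import product

    def revenue(prices, buyers):
        # Each single-minded buyer purchases its whole subset iff the
        # subset's total price is within budget, paying that total.
        total = 0
        for subset, budget in buyers:
            cost = sum(prices[j] for j in subset)
            if cost <= budget:
                total += cost
        return total

    # Hypothetical instance: 2 items, subsets of size at most two.
    buyers = [({0, 1}, 3), ({1}, 2), ({0}, 1)]

    # Brute-force search over a small integer price grid (toy inputs only).
    best = max((revenue({0: p0, 1: p1}, buyers), (p0, p1))
               for p0, p1 in product(range(4), repeat=2))
    print(best)  # (6, (1, 2)): all three buyers purchase at full budget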
Joint work with Rohit Khandekar, Tracy Kimbrel, and Maxim Sviridenko.

New Multicut Approximations from Unique Games Algorithms
David Steurer

We show a tight connection between Unique Games and Multicut.
On the one hand, we give an approximation preserving reduction from so-called linear unique games to multicut with degree-bounded demands. The reduction is simple and its analysis does not rely on the usual Fourier-analytic tools.
On the other hand, rather surprisingly, we can transform many of the recently discovered unique games algorithms into new approximation algorithms for multicut with degree-bounded demands. For instance, one of our algorithms achieves an approximation ratio of sqrt(log n)*loglog n on a class of instances with flow-cut gap log n.
This connection also suggests that a disproval of the unique games conjecture would lead to better approximation algorithms for general multicut instances (without bounds on the demand degrees).
Joint work with Nisheeth Vishnoi.

Combinatorial Approximation Algorithms for Convex Programs?!
Vijay Vazirani

Over the years, a fascinating theory has started forming around a convex program given by Eisenberg and Gale in 1959. This theory touches on such disparate themes as market equilibria, Nash bargaining games, TCP congestion control and, most relevant to this talk, the efficient exact solvability of certain nonlinear programs by combinatorial means.
We will argue that the last aspect represents the tip of an iceberg and that the way to move forward is to find a systematic way of designing combinatorial approximation algorithms for solving certain classes of convex programs.

Approximate Clustering without the Approximation Algorithm
Avrim Blum

There has been substantial work on approximation algorithms for clustering data under distance-based objective functions such as k-median, k-means, and min-sum objectives. This work is fueled in part by the hope that approximating these objectives well will indeed yield more accurate solutions. That is, for problems such as clustering proteins by function, or clustering images by subject, there is some unknown correct ``target'' clustering and the implicit assumption is that clusterings that are approximately optimal in terms of these distance-based measures are also approximately correct in terms of error with respect to the target.
In this work we show that if we make this implicit assumption explicit---that is, if we assume that any c-approximation to the given clustering objective Phi is epsilon-close to the target---then we can produce clusterings that are O(epsilon)-close to the target, even for values c for which obtaining a c-approximation is NP-hard. In particular, for the k-median and k-means objectives, we show that we can achieve this guarantee for any constant c > 1, and for the min-sum objective we can do this for any constant c > 2. Our results also highlight an interesting conceptual difference between assuming that the *optimal* solution to these objectives is close to the target, and assuming that any *approximately optimal* solution is close. In the former case, finding a solution that is O(epsilon)-close to the target remains computationally hard, and yet for the latter we have an efficient algorithm.
This is joint work with Nina Balcan and Anupam Gupta.

Stochastic Analysis of Online Problems
Anupam Gupta

Online algorithms are usually studied in the framework of competitive analysis, which looks at the ratio of the algorithm's cost to the cost of the optimal solution, for the *worst* possible input sequence. But what if we care about the average-case instead of the worst-case?
For example, consider the Online Steiner Tree problem: in the worst-case setting, the Greedy algorithm is O(log n)-competitive, and this is the best possible. We will obtain better guarantees when the input sequence is not chosen by an adversary but instead consists of draws from some distribution over the vertices of the graph. We will also consider the online Set Cover problem, and show an average-case bound of O(log m + log n), beating the O(log m log n) worst-case results.
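For concreteness, here is a minimal sketch of the greedy algorithm mentioned above (in Python, on a metric given as a distance matrix; the instance is hypothetical): each arriving terminal is connected to the nearest already-connected vertex.

    def greedy_online_steiner(dist, arrivals):
        # dist[u][v]: metric distance; arrivals: online sequence of
        # terminals. Each new terminal is joined to the closest vertex
        # connected so far; in the worst case this is O(log n)-competitive.
        connected = {arrivals[0]}
        cost = 0.0
        for t in arrivals[1:]:
            if t in connected:
                continue
            cost += min(dist[t][u] for u in connected)
            connected.add(t)
        return cost

    # Four points on a line, terminals arriving in a fixed order.
    pts = [0.0, 1.0, 3.0, 7.0]
    dist = [[abs(a - b) for b in pts] for a in pts]
    print(greedy_online_steiner(dist, [0, 3, 1, 2]))  # 7 + 1 + 2 = 10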
This is based on joint works with Naveen Garg, Fabrizio Grandoni, Stefano Leonardi, Pauli Miettinen, Piotr Sankowski, and Mohit Singh.

Approximation Algorithms for Vertex-Connectivity Survivable Network Design
Sanjeev Khanna

In the Survivable Network Design problem (SNDP), we are given an undirected graph with costs on edges, along with a connectivity requirement for each pair of vertices. The goal is to find a minimum-cost subset of edges that satisfies the given set of pairwise connectivity requirements. In the edge-connectivity version we need to ensure that there are $r(u,v)$ edge-disjoint paths for every pair $u, v$ of vertices, where $r(u,v)$ denotes the connectivity requirement for the pair $u,v$. In the vertex-connectivity version the paths are required to be vertex-disjoint. The edge-connectivity version of SNDP is known to have a 2-approximation. However, no non-trivial approximation algorithm has been known so far for the vertex version of SNDP, except for special cases of the problem. We present an extremely simple algorithm that achieves an $O(k^3 \log n)$-approximation for this problem, where $k$ denotes the maximum connectivity requirement and $n$ denotes the number of vertices. We also give a simple proof of the recently discovered $O(k^2 \log n)$-approximation result for the single-source version of vertex-connectivity SNDP. We note that in both cases, our analysis in fact yields slightly better guarantees: the $\log n$ term in the approximation guarantee can be replaced with a $\log \tau$ term, where $\tau$ denotes the number of distinct vertices that participate in one or more pairs with a positive connectivity requirement.
This is joint work with Julia Chuzhoy.

Approximation Algorithms for Network Design
Joseph Cheriyan

Given a graph with costs on the edges, find a subgraph of minimum cost that satisfies some pre-specified connectivity requirements. This is a typical problem in network design. The talk will survey some recent results (positive and negative) on approximation algorithms for NP-hard problems in this area.

Inapproximability of Maximum Acyclic Subgraph
Rajsekar Manokaran

Given a directed acyclic graph G, one can efficiently order (topologically sort) its vertices so that all edges go forward, from a lower-ranked vertex to a higher-ranked vertex. But what if a few, say an epsilon fraction, of the edges of G are reversed? Can we detect these "errors" and find an ordering with few back edges? Formally, given a directed graph whose vertices admit an ordering in which a 1-epsilon fraction of the edges go forward, can we find an ordering in which a 1-o(1) fraction of the edges go forward? This is equivalent to finding a subgraph of G that is acyclic and has many edges, and hence this problem is called the Maximum Acyclic Subgraph problem.
It is trivial to find an ordering with fraction 1/2 of forward edges: take the better of an arbitrary ordering and its reverse. Despite much effort, no efficient c-approximation algorithm for a constant c greater than 1/2 has been found for the Maximum Acyclic Subgraph problem.
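The trivial algorithm is short enough to state in code (a minimal sketch in Python on a hypothetical instance); each edge goes forward in exactly one of the two orderings, so the better one has at least half the edges forward:

    def forward_fraction(order, edges):
        # Fraction of edges (u, v) with u ranked before v.
        pos = {v: i for i, v in enumerate(order)}
        return sum(pos[u] < pos[v] for u, v in edges) / len(edges)

    def half_approx_ordering(vertices, edges):
        # Take the better of an arbitrary order and its reverse.
        order = list(vertices)
        return max(order, order[::-1],
                   key=lambda o: forward_fraction(o, edges))

    edges = [(0, 1), (1, 2), (2, 0), (0, 2)]
    best = half_approx_ordering(range(3), edges)
    print(best, forward_fraction(best, edges))  # [0, 1, 2] 0.75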
In this talk, we will prove a strong hardness result that rules out the existence of such an approximation algorithm assuming the Unique-Games conjecture.
Joint work with Venkatesan Guruswami and Prasad Raghavendra.

Two Query PCP with Sub-Constant Error
Dana Moshkovitz

We show that the NP-Complete language 3Sat has a PCP verifier that makes two queries to a proof of almost-linear size and achieves sub-constant probability of error o(1). The verifier performs only projection tests, meaning that the answer to the first query determines at most one accepting answer to the second query.
Previously, by the parallel repetition theorem, there were PCP Theorems with two-query projection tests, but only (arbitrarily small) constant error and polynomial size. There were also PCP Theorems with sub-constant error and almost-linear size, but a constant number of queries that is larger than 2.
As a corollary, we obtain a host of new results. In particular, our theorem improves many of the hardness of approximation results that are proved using the parallel repetition theorem. A partial list includes the following:
1) 3Sat cannot be efficiently approximated to within a factor of 7/8+o(1), unless P=NP. This holds even under almost-linear reductions. Previously, the best known NP-hardness factor was 7/8+epsilon for any constant epsilon>0, under polynomial reductions (Hastad).
2) 3Lin cannot be efficiently approximated to within a factor of 1/2+o(1), unless P=NP. This holds even under almost-linear reductions. Previously, the best known NP-hardness factor was 1/2+epsilon for any constant epsilon>0, under polynomial reductions (Hastad).
3) A PCP Theorem with amortized query complexity 1+o(1) and amortized free bit complexity o(1). Previously, the best known amortized query complexity and free bit complexity were 1+epsilon and epsilon, respectively, for any constant epsilon>0 (Samorodnitsky and Trevisan).
One of the new ideas that we use is a new technique for doing the composition step in the (classical) proof of the PCP Theorem, without increasing the number of queries to the proof. We formalize this as a composition of new objects that we call Locally Decode/Reject Codes (LDRCs). The notion of an LDRC was implicit in several previous works, and we make it explicit in this work. We believe that the formulation of LDRCs and their construction are of independent interest.
This is joint work with Ran Raz.

Zwick's Conjecture is implied by most of Khot's Conjectures
Ryan O'Donnell

In 1998 Zwick proved that P = naPCP_{1, 5/8}(O(log n), 3) and conjectured that naPCP_{1, 5/8 + eps}(O(log n), 3) = NP for all eps > 0. Hastad's contemporary result NP = naPCP_{1, 3/4 + eps}(O(log n), 3) was not improved until 2006, when Khot and Saket lowered the 3/4 to 20/27.
We prove Zwick's 5/8 Conjecture, assuming Khot's "d-to-1 Conjecture" for any constant d. The necessary Long Code testing analysis uses a mix of older (Hastad-type) and new (Mossel-type) ideas.
This is joint work with Yi Wu of Carnegie Mellon University.

MaxMin Allocation via Degree Lower-Bounded Arborescences
Moses Charikar

We consider the problem of MaxMin allocation of indivisible goods. There are m items to be distributed among n players. Each player i has a nonnegative valuation p_ij for item j, and the goal is to allocate each item to a player so as to maximize the minimum total value received by a player. There is a large gap in our understanding of this problem. The best known positive result is an \tilde{O}(\sqrt{n})-approximation algorithm, while only a factor 2 hardness is known.
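To make the objective concrete, here is a brute-force evaluation on a toy instance (a hedged sketch in Python, exponential in the number of items; it is not the algorithm from the talk):

    from itertools import product

    def maxmin_value(p):
        # p[i][j]: player i's value for item j. Try every assignment
        # of items to players (n^m of them) and keep the best minimum.
        n, m = len(p), len(p[0])
        best = -1
        for assign in product(range(n), repeat=m):
            totals = [0] * n
            for j, i in enumerate(assign):
                totals[i] += p[i][j]
            best = max(best, min(totals))
        return best

    # Two players, three items; the objective is the minimum total
    # value received by any player.
    print(maxmin_value([[3, 1, 2], [1, 3, 2]]))  # 3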
Better algorithms are known for the restricted assignment case, where each item has exactly one nonzero value for the players. We study the effect of bounding the degree of items: each item has a nonzero value for at most B players. We show that, essentially, the case B=3 is equivalent to the general case, and we give a 4-approximation algorithm for B=2.
The current algorithmic results for MaxMin Allocation are based on a complicated LP relaxation called the configuration LP. We present a much simpler LP which is equivalent in power to the configuration LP. We focus on a special case of MaxMin Allocation: a family of instances on which this LP has a polynomially large gap. The technical core of our result for this case is an algorithm for an interesting new optimization problem on directed graphs, MaxMinDegree Arborescence, where the goal is to produce an arborescence of large outdegree. We develop an n^\eps-approximation for this problem that runs in n^{O(1/\eps)} time, and we obtain a polylogarithmic approximation that runs in quasipolynomial time, using a lift-and-project inspired LP formulation. In fact, we show that our results imply a rounding algorithm for the relaxations obtained by t rounds of the Sherali-Adams hierarchy applied to a natural LP relaxation for the problem. Roughly speaking, the integrality gap of the relaxation obtained from t rounds of Sherali-Adams is at most n^{1/t}.
Joint work with MohammadHossein Bateni and Venkatesan Guruswami.

Edge-Disjoint Paths via Raecke Decompositions
Matthew Andrews

In this talk we will describe how to obtain approximation ratios for the Edge-Disjoint Paths (EDP) problem that depend on the structure of the Raecke decomposition of the graph. In particular we show that if the subclusters of any cluster have similar outdegrees then we can obtain a quasipolylogarithmic approximation for EDP with a poly(loglog) increase in congestion.

A Graph Reduction Step Preserving Element-Connectivity and Applications
Nitish Korula

Given an undirected graph G = (V, E) and a set of terminals T \subseteq V, the element-connectivity of two terminals u, v \in T is the maximum number of u-v paths that pairwise share no edges and no non-terminals. The element-connectivity between a pair of terminals is at least their edge-connectivity and at most their vertex-connectivity; being more general than the former but less general than the latter, element-connectivity can help bridge the gap between edge- and vertex-connectivity problems. We show that a graph reduction operation of Hind and Oellermann preserves the element-connectivity of all pairs of terminals; repeated use of it leads to a {\it bipartite} graph with the same pairwise element-connectivities. We give two applications of this reduction step to connectivity and network design problems:
1. Given a graph G and disjoint terminal sets T_1, T_2, ..., T_h, we seek to pack a maximum number of element-disjoint Steiner forests, where each forest connects each T_i. (Two forests are element-disjoint if they share no edges or non-terminals.) Lap Chi Lau showed that k {\em edge-connectivity} of the terminal set T implies the existence of \Omega(k) {\em edge-disjoint} Steiner trees; extending an algorithm of Cheriyan and Salavatipour, we prove that if each T_i is k-element-connected then there exist \Omega(k / (\log h \log |T|)) element-disjoint Steiner forests. If G is planar, we show that one can find \Omega(k) Steiner forests; this extends to graphs of bounded genus or treewidth. These are the first non-trivial algorithms for packing element-disjoint Steiner forests.
2. We give a very short and intuitive proof of a spider-decomposition theorem of Chuzhoy and Khanna in the context of the single-sink k-vertex-connectivity problem; this yields a simple and alternative analysis of an O(k log n) approximation.
Joint work with Chandra Chekuri.

Unsplittable Flow on Line Graphs
Nikhil Bansal

In the unsplittable flow problem on a line (also referred to as Bandwidth Allocation or Call Admission Control), we are given a set of n tasks, each specified by a start time s_i, an end time t_i, a demand d_i, and a profit p_i. A task, if accepted, requires d_i units of bandwidth from s_i to t_i and accrues p_i profit. For every time t, we are also given the available bandwidth c_t. The goal is to find a feasible subset of tasks of maximum profit.
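As a concrete illustration of the model (a minimal sketch in Python; the feasibility check and instance are hypothetical, and the actual problem asks for a maximum-profit feasible subset):

    def feasible(tasks, capacity):
        # tasks: list of (s, t, d) triples with demand d needed on
        # every unit time step in [s, t); capacity[t]: bandwidth at t.
        load = [0] * len(capacity)
        for s, t, d in tasks:
            for step in range(s, t):
                load[step] += d
        return all(load[t] <= capacity[t] for t in range(len(capacity)))

    # Two overlapping tasks against a flat capacity of 3.
    print(feasible([(0, 3, 2), (1, 4, 1)], [3, 3, 3, 3]))  # True: peak load 3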
I will describe a logarithmic approximation for the problem. This is also the first o(n) approximation for it. I will also discuss various related previous works and techniques.

2-query PCP Composition
Irit Dinur

In a recent breakthrough, Moshkovitz and Raz proved that NP can be verified using two queries, sub-constant error, and nearly linear proof length. The main technical component of their result is a new composition theorem for certain specific 2-query PCPs. All earlier composition theorems suffered from incurring an additional query per composition.
We prove a general 2-query PCP composition theorem with low error. More formally, we define a certain (natural) type of PCP verifier and show how to compose such verifiers without incurring an extra query. Our composition works regardless of the way the verifiers themselves are constructed.
This composition theorem is then used to give a simpler and modular proof of the results of [MR] mentioned above. This is done by an iterative application of the composition theorem, using known components (namely, the "folklore" manifold vs. point construction). Our construction suffers from the same bottleneck as [MR] with respect to the error probability, in that it is inverse polynomial (not exponential) in the alphabet length.
Joint work with Prahladh Harsha.

How to Round Any CSP
Prasad Raghavendra

A large number of interesting combinatorial optimization problems, such as Max3SAT and MaxCut, fall under the class of constraint satisfaction problems (CSPs). Recent work identifies a semidefinite programming relaxation that yields the optimal approximation ratio for every CSP, under the Unique Games Conjecture. We present an efficient rounding scheme that achieves the integrality gap of this optimal SDP, unconditionally, for every CSP. This optimal SDP relaxation is at least as strong as any relaxation used in the literature to approximate a CSP. Thus, irrespective of the truth of the UGC, our work yields a generic algorithm that, for every CSP, achieves an approximation at least as good as that of the best known algorithm in the literature.
Our approach is simple in that it avoids the use of the machinery from unique games reductions such as dictatorship tests, Fourier analysis or the Invariance principle.
Joint work with David Steurer (Princeton).

Integrality Gaps for Sherali-Adams Relaxations
Yury Makarychev

We prove strong lower bounds on the integrality gaps of Sherali-Adams relaxations for MAX CUT, Vertex Cover, Maximum Acyclic Subgraph, Sparsest Cut, and other problems. Our constructions show gaps for Sherali-Adams relaxations that survive n^\delta rounds of lift and project. For MAX CUT, Vertex Cover, and Maximum Acyclic Subgraph, they show that even n^\delta rounds of Sherali-Adams do not yield an approximation better than 2-epsilon.
The main combinatorial challenge in constructing these gap examples is the construction of a fractional solution that is far from an integer solution, yet admits consistent distributions of local solutions for all small subsets of variables. Satisfying this consistency requirement is one of the major hurdles in constructing Sherali-Adams gap examples. We present a modular recipe for achieving this, building on previous work on metrics with a local-global structure. We develop a conceptually simple geometric approach to constructing Sherali-Adams gap examples via constructions of consistent local SDP solutions.
This geometric approach is surprisingly versatile. We construct Sherali-Adams gap examples for Unique Games based on our construction for MAX CUT together with a parallel-repetition-like procedure. This in turn allows us to obtain Sherali-Adams gap examples for any problem that has a Unique Games based hardness result (subject to some additional conditions on the reduction from Unique Games).
Joint work with Moses Charikar and Konstantin Makarychev.

On Broadcast Scheduling
Samir Khuller

Broadcasting over a wireless channel is a standard mechanism for disseminating information to a set of clients. Clients request different pieces of information, called "pages", and in this "pull-based" model the server responds with a broadcast schedule. The key property that distinguishes this problem from standard scheduling is that multiple requests may be satisfied by a single broadcast of the requested page. Surprisingly, this small change makes almost all the problems in this area NP-hard, whereas without this property these problems can be solved optimally in polynomial time for unit-sized pages. This overlap property arises in other contexts as well, such as multi-query processing.
We consider a variety of objective functions: minimizing the sum of response times, minimizing the maximum response time, and maximizing the number of satisfied requests when requests have deadlines. This is a survey talk based on several papers and will cover a collection of results using a variety of techniques. Finally, we propose some open problems. No background knowledge beyond undergraduate algorithms is expected.

The Primal-Dual method in Online Computation
Seffi Naor

A unified approach, based on the primal-dual method, is discussed for a wide range of online covering and packing problems with various objective functions. This approach has led to a simple alternative view and analysis of many previously suggested algorithms, as well as to new results; in particular, a randomized O(log k)-competitive online algorithm for the weighted paging problem, where there is a (page-dependent) weight for fetching each page into the cache, and k is the cache size. The focus of the talk will be on developing the general methodology and the special role of complementary slackness therein.
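To give the flavor of the method, here is a hedged sketch (in Python) of a multiplicative primal-dual update for online fractional set cover, in the spirit of this line of work; the exact update rule and the instance below are illustrative assumptions, not the weighted paging algorithm itself:

    def online_fractional_set_cover(sets, costs, elements):
        # x[i] is the fractional extent to which set i is bought.
        # When an uncovered element arrives, the variables of all sets
        # containing it grow multiplicatively until it is covered.
        x = [0.0] * len(sets)
        for e in elements:
            covering = [i for i, S in enumerate(sets) if e in S]
            d = len(covering)
            while sum(x[i] for i in covering) < 1:
                for i in covering:
                    x[i] = x[i] * (1 + 1 / costs[i]) + 1 / (d * costs[i])
        return x

    sets = [{1, 2}, {2, 3}, {1, 3}]
    print(online_fractional_set_cover(sets, [1, 1, 2], [1, 2, 3]))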
Based on papers with Nikhil Bansal, Niv Buchbinder and Kamal Jain.


A key primitive for Internet ad auctions: Budgeted Allocation
Gagan Goel

Internet ad auctions have inspired new allocation problems whose computational study has not only yielded new insights for this multi-billion-dollar industry but also added to the theory of computation itself. One such problem is Budgeted Allocation: we are given a set of m indivisible items and n agents; each agent i is willing to pay b_ij for item j and has a maximum overall budget of B_i. The goal is to allocate items to the agents so as to maximize the total revenue.
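As a concrete illustration (a minimal sketch in Python; the instance and helper function are hypothetical), the revenue of a fixed allocation is each agent's total bid for its items, capped at its budget:

    def budgeted_revenue(bids, budgets, assign):
        # bids[i][j]: agent i's bid for item j; assign[j]: the agent
        # receiving item j. Each agent pays its bids for the items it
        # gets, capped at its overall budget.
        spend = [0] * len(budgets)
        for j, i in enumerate(assign):
            spend[i] += bids[i][j]
        return sum(min(s, B) for s, B in zip(spend, budgets))

    # Two agents, three items; agent 0's payment is capped at 4.
    print(budgeted_revenue([[3, 2, 1], [1, 3, 2]], [4, 5], [0, 1, 1]))  # 8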
We study the approximability of the budgeted allocation problem. We give a 3/4-approximation algorithm and a hardness factor of 15/16. Utilizing our hardness technique, we also improve the hardness factors of several other well-studied problems: Submodular Welfare Maximization in the demand query model, the Generalized Assignment Problem, and Maximum Spanning Star Forest.
This is joint work with Deeparnab Chakrabarty.

Orienteering and variants: a mini-survey and open problems
Chandra Chekuri

In the Orienteering problem we are given a graph (undirected or directed), two vertices s and t, and a budget B. The goal is to find an s-t walk of length at most B that visits the maximum number of distinct vertices. Non-trivial approximation algorithms for this problem, its generalizations, and variants have been developed only over the last five years. Several interesting open problems remain. The talk will survey some of the main results and ideas in this area with the aim of highlighting the open problems.

Learning Approximately Sparse Cuts in Graphs Quickly
Nisheeth Vishnoi

A fundamental problem in computing is to partition a graph into two "roughly" equal parts so as to minimize the number of edges crossing the partition. Interest in this problem derives both from its numerous practical applications, such as image segmentation, VLSI layout, and clustering, and from its theoretical connections to a diverse set of technical areas such as spectral methods, linear/semidefinite programming, measure concentration, and metric embeddings.
From an applied point of view, the main challenge in graph partitioning is to design good-quality yet scalable algorithms. The reason is that, in practice, typical inputs to this problem are very large graphs; hence it is imperative to find algorithms that not only run very fast but also provide a guarantee on the quality of the cut they produce. The first major step in this direction was taken by Khandekar, Rao and Vazirani (KRV), who reduced the graph partitioning (sparsest cut) problem to the computation of a ``small'' number of max-flows and yet achieved an approximation factor of O(log^2 n).
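To fix the objective, here is a brute-force computation of the sparsest cut under uniform demands (a toy sketch in Python, exponential in the number of vertices; the instance is hypothetical):

    import itertools

    def sparsity(edges, n, S):
        # Sparsity of the cut (S, V \ S) with uniform demands:
        # edges crossing the cut divided by |S| * |V \ S|.
        S = set(S)
        crossing = sum((u in S) != (v in S) for u, v in edges)
        return crossing / (len(S) * (n - len(S)))

    def sparsest_cut_bruteforce(edges, n):
        # Exact but exponential-time; for tiny graphs only.
        cuts = (S for r in range(1, n)
                for S in itertools.combinations(range(n), r))
        return min(cuts, key=lambda S: sparsity(edges, n, S))

    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # 4-cycle plus a chord
    print(sparsest_cut_bruteforce(edges, 4))  # (1,): sparsity 2/3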
In this talk I will present an abstraction of the KRV framework which allows us to cast the problem of finding a sparse cut as an online learning problem. Then, motivated by techniques from that area, I will present an algorithm which achieves an O(log n) approximation factor and runs in essentially max-flow time.
Complementing the algorithmic results, I will show that in this framework of reducing the graph partitioning problem to max-flow, one cannot get an approximation factor better than sqrt(log n).