Mysore Park Workshops

Recent Advances in Algorithms and Complexity

The workshop on Recent Advances in Algorithms and Complexity was held during October 21-24, 2010 at the Infosys campus in Mysore. There were around 60 participants from academic institutes and research labs across India. The aim of the workshop was to bring together researchers and students working on algorithms and complexity in India in order to discuss recent developments in this area.

The main talks at the workshop were the following:

  • Manindra Agrawal on A recent attempt at proving P ≠ NP

  • Naveen Garg on A new breakthrough result on Steiner tree approximation

  • Navin Goyal and T. S. Jayram on Recent advances in communication complexity

  • Nisheeth Vishnoi on A new faster approximation for max flow

  • Venkatesh Raman's survey talk on Subexponential parameterized algorithms

  • Satya Lokam on Matrix norms and communication complexity

    A recent attempt at proving P ≠ NP

    The first talk of the workshop was by Manindra Agrawal on the recent attempt at proving P ≠ NP by Vinay Deolalikar. This attempt received a considerable amount of attention through various blogs, and several researchers got together to understand the arguments in the paper. The proof was eventually found to be flawed, but Manindra gave a summary of the events that took place along with some details of the proof.

    The main idea in the proof was to use tools from statistical mechanics to study the structure of the solution spaces of various NP-complete problems. Specifically, the author considers 9-CNF-SAT and uses a logical characterization of P to argue that the solution space is too bizarre for a polynomial-time algorithm to produce it.

    There were various flaws in the paper, as pointed out by several researchers, Neil Immerman in particular; Manindra described a few of them in the talk. The author later submitted a revised version, but the proofs were still very hairy and unsatisfactory. One claim, concerning the "parametrizability of distributions" and perhaps the most important part of the proof, has no proof at all and is supported only by a vague argument. Manindra then pointed out that this claim is essentially a reformulation of "P/poly", and hence the proof adds little to our knowledge.

    A new breakthrough result on Steiner tree approximation.

    Naveen Garg presented an improved LP-based approximation for Steiner trees due to Byrka, Grandoni, Rothvoss and Sanita, which won a best paper award at STOC 2010. Given an n-node graph G with edge costs and a subset of vertices called terminals, the Steiner tree problem asks for the least-cost tree in G that connects all the terminals. It is known that this problem is NP-hard to approximate to within a factor of 96/95. The previous best approximation was ≈ 1.55 + ε, due to Robins and Zelikovsky. The new algorithm achieves a 1.39-factor approximation for this problem.

    The algorithm works with full components of a Steiner tree: a full component is a maximal subtree whose terminals coincide with its leaves. More precisely, it works with k-restricted Steiner trees, in which every full component has at most k terminals. A linear program, the k-restricted directed-component cut relaxation, is then used, with one variable for each such component. This LP can be solved in polynomial time using the ellipsoid method. The algorithm solves the LP to obtain an optimal fractional solution, which gives the probabilities with which components are sampled for contraction. Once a component is contracted, we have a new graph; the LP corresponding to this graph is solved in the next iteration, and so on.
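
    For reference, the directed-component cut relaxation has roughly the following form (our paraphrase; notation is simplified, and the precise formulation is in the paper). Fix a root terminal r in the terminal set R; each k-restricted full component C is directed towards one of its terminals (its sink), c(C) denotes its cost, and δ⁺(U) is the set of directed components whose sink lies outside U but which contain a terminal of U:

        \min \ \sum_{C} c(C)\, x_C
        \quad \text{s.t.} \quad \sum_{C \in \delta^{+}(U)} x_C \ \ge\ 1
        \quad \text{for all } \emptyset \neq U \subseteq R \setminus \{r\},
        \qquad x_C \ge 0 .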

    In conclusion, the cost of the final solution has two components; balancing these two costs gives a 1.5-factor approximation algorithm, and the further improvement to 1.39 uses a more sophisticated analysis.

    Recent advances in communication complexity.

    Navin Goyal and T. S. Jayram gave talks on recent advances in communication complexity. Communication complexity is now a field central to complexity theory, with connections to areas such as Boolean circuits, VLSI circuit design, data stream algorithms and pseudorandomness, among others. The talks looked at two recent results in the field. Both results are in the framework of the two-party model, where two parties (say, Alice and Bob) have to compute a function f(x, y), but each holds only a part of the input: Alice has x and Bob has y, and they need to communicate in order to perform the computation. The common idea in the two results is to use information-theoretic techniques to obtain bounds on the amount of communication required.
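
    For concreteness, the two standard measures used below (these definitions are not spelled out in the report itself): the deterministic communication complexity D(f) is the worst-case number of bits exchanged by the best correct deterministic protocol, and the randomized communication complexity R(f) is the analogous quantity for protocols that may use randomness and must be correct on every input with probability at least 2/3:

        D(f) \;=\; \min_{\substack{\pi \ \text{deterministic,}\\ \text{correct}}} \ \max_{x,y}\ |\pi(x,y)|,
        \qquad
        R(f) \;=\; \min_{\substack{\pi \,:\, \Pr[\pi(x,y)=f(x,y)] \ge 2/3\\ \text{for all } x,y}} \ \max_{x,y}\ |\pi(x,y)|,

    where |π(x, y)| denotes the number of bits exchanged on input (x, y).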

    Navin Goyal's Talk: The direct sum problem in communication complexity asks how much communication is required between Alice and Bob to compute f^n(x_1, …, x_n, y_1, …, y_n) = (f(x_1, y_1), …, f(x_n, y_n)), given the communication complexity D(f) of computing f. We can see that D(f^n) ≤ n·D(f), since each coordinate can be treated independently. The result, due to Barak, Braverman, Chen and Rao (STOC 2010), concludes (roughly) that the randomized communication complexity satisfies R(f^n) ≥ Ω(√n · R(f)) (hiding logarithmic factors). They obtain this bound by means of a protocol compression mechanism.

    First, using Yao's min-max theorem, we can work with a distribution μ over the inputs and the error of the protocol with respect to this distribution. One then defines an appropriate measure of the information content of a protocol π (whose transcript Π is determined by the inputs and the public randomness): IC_μ(π) = I(X; Π | Y) + I(Y; Π | X). The compression mechanism turns a protocol π with low information content into a protocol τ with communication roughly √(C(π) · IC_μ(π)) (up to logarithmic factors), where C(·) denotes the communication complexity of the protocol at hand. Using this, a protocol for f^n can be translated into a protocol for f (by invoking the protocol compression mentioned above), whose communication complexity can then be lower-bounded.
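
    Put together, the direct-sum bound follows roughly along these lines (a back-of-the-envelope reconstruction; error parameters and logarithmic factors are suppressed):

        \text{protocol for } f^{n} \text{ with } C \text{ bits}
        \ \Longrightarrow\ \text{protocol for } f \text{ with information cost } \approx C/n \text{ and communication } \le C
        \ \Longrightarrow\ \text{(compression) protocol for } f \text{ with } \approx \sqrt{C \cdot C/n} = C/\sqrt{n} \text{ bits}
        \ \Longrightarrow\ C \ \ge\ \Omega\!\big(\sqrt{n}\, R(f)\big).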

    T. S. Jayram's Talk: This talk also focused on the direct-sum problem, using a slightly different measure of information cost. The aim was to prove a direct-sum theorem for functions that can be expressed as the logical OR of n primitive functions, and to illustrate the approach using the L∞ distance problem. More precisely, let f(x, y) = ⋁_{i=1..n} g(x_i, y_i) (the domains need not be Boolean or discrete, but the range is {0, 1}). Suppose μ is a distribution over the domain X × Y. The distribution μ is partitioned by η if there is a joint probability space (X, Y, F) with F ~ η such that Pr[X, Y | F] = Pr[X | F] · Pr[Y | F], i.e., X and Y are independent conditioned on F. If R denotes the public randomness available to the players and Π(X, Y, R) is the transcript of the protocol P in use by the players, then the conditional information cost of P is defined as I(X, Y; Π | F). The information complexity IC_μ(f | η) is the minimum conditional information cost of a correct protocol for f under (μ, η). One can show that R(f) ≥ IC_μ(f | η). The direct-sum theorem (Bar-Yossef, Jayram, Kumar and Sivakumar, 2002, and Chakrabarti, Shi, Wirth and Yao, 2001) then gives IC_{μ^n}(f | η^n) ≥ n · IC_μ(g | η), provided supp(μ) ⊆ g^{-1}(0). Thus, we can lower bound the communication complexity of f by lower bounding the information complexity of the primitive function. In the case of the L∞ distance problem, suitable μ and η are chosen, and properties of the Hellinger distance are used to obtain the lower bound R(L∞) = Ω(n). This can be generalized to the t-player model.
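
    A canonical instance of this template, taken from the Bar-Yossef et al. paper rather than from the talk itself, is set disjointness, which (up to complementation) is an OR of n copies of the two-bit AND; choosing μ supported on the zeros of AND, the framework gives the well-known linear lower bound:

        \mathrm{DISJ}(x,y) \;=\; \neg \bigvee_{i=1}^{n} \big(x_i \wedge y_i\big),
        \qquad
        R(\mathrm{DISJ}) \;\ge\; \mathrm{IC}_{\mu^n}\!\big(\mathrm{DISJ} \mid \eta^{n}\big) \;\ge\; n \cdot \mathrm{IC}_{\mu}\big(\mathrm{AND} \mid \eta\big) \;=\; \Omega(n).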

    A new faster approximation for max flow

    Nisheeth Vishnoi gave a talk on "Electrical Flows, Laplacian Systems, and Faster Approximation of Maximum Flow in Undirected Graphs", a very recent paper by Christiano, Kelner, Madry, Spielman and Teng on faster approximation algorithms for max flow in undirected graphs. The paper gives an algorithm with running time Õ_ε(m^{4/3}) that computes a (1 + ε)-approximate max flow, and an algorithm with running time Õ_ε(m + n^{4/3}) that computes a (1 + ε)-approximate s-t min cut, in a graph with m edges and n vertices. The previous best max flow algorithm (Goldberg and Rao, 1997) had a running time of Õ(m^{3/2}) (ignoring the dependence on edge capacities).

    The main ingredients in the new algorithm are the use of electrical flows and the near-linear-time algorithm of Spielman and Teng for approximately solving symmetric, diagonally dominant linear systems. An electrical flow is one that minimizes the energy of an electrical network; however, an electrical flow may violate the edge capacities of the network. The new algorithm iteratively computes an appropriate electrical flow (using the Spielman-Teng linear system solver) in the network with suitable electrical resistances, and these resistances are updated for the next iteration based on the edge congestions in the current iteration. A (suitably scaled) average of these electrical flows over Õ_ε(m^{1/2}) iterations is the desired approximate max flow. This takes Õ_ε(m^{3/2}) time; the running time can be further improved to Õ_ε(m^{4/3}).
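
    Below is a highly simplified sketch, in Python, of the multiplicative-weights/electrical-flow idea (our own illustration under simplifying assumptions, not the algorithm of the paper: the Spielman-Teng solver is replaced by a dense least-squares solve, the weight update and iteration count are crude, and the function names and example graph are invented).

        import numpy as np

        def electrical_flow(n, edges, resistances, s, t, value):
            """Electrical s-t flow of a given value: solve the Laplacian system
            L*phi = value*(e_s - e_t) and set flow_e = (phi_u - phi_v) / r_e."""
            m = len(edges)
            B = np.zeros((m, n))                          # edge-node incidence matrix
            for e, (u, v) in enumerate(edges):
                B[e, u], B[e, v] = 1.0, -1.0
            C = np.diag(1.0 / resistances)                # conductances
            L = B.T @ C @ B                               # graph Laplacian
            b = np.zeros(n)
            b[s], b[t] = value, -value
            phi, *_ = np.linalg.lstsq(L, b, rcond=None)   # L is singular; least squares
            return C @ B @ phi                            # current on each edge

        def approx_max_flow(n, edges, cap, s, t, value, eps=0.1, iters=200):
            """Multiplicative-weights loop: repeatedly route `value` units as an
            electrical flow, raise the resistance of congested edges, and average
            the flows.  Structural sketch only; not the algorithm from the paper."""
            w = np.ones(len(edges))                       # edge weights
            flows = []
            for _ in range(iters):
                r = w / cap ** 2                          # resistances from weights
                f = electrical_flow(n, edges, r, s, t, value)
                cong = np.abs(f) / cap                    # congestion of each edge
                w *= 1.0 + eps * cong / cong.max()        # simplified weight update
                flows.append(f)
            avg = np.mean(flows, axis=0)
            return avg / max(1.0, np.max(np.abs(avg) / cap))   # scale to respect capacities

        # Tiny usage example on a 4-node graph (made-up data, not from the paper).
        edges = [(0, 1), (0, 2), (1, 3), (2, 3), (1, 2)]
        cap = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
        flow = approx_max_flow(4, edges, cap, s=0, t=3, value=2.0)
        print(np.round(flow, 3))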

    A survey talk on subexponential parameterized algorithms.

    Venkatesh Raman gave a survey talk on subexponential parameterized algorithms. It is not completely understood for which problems, and on which graph classes, subexponential algorithms exist. The planar separator theorem can be used to find a maximum independent set of a planar graph in 2^{O(√n)} time, and the same approach works for planar dominating set, vertex cover, etc. In the parameterized setting, the goal is to decide in 2^{O(√k)}·n^{O(1)} time whether G has a solution of size at most k. The four-color theorem implies that planar independent set has a linear kernel, so a running time of 2^{O(√n)} on the kernel yields a running time of 2^{O(√k)}·n^{O(1)} for the parameterized instance. Planar dominating set, vertex cover and feedback vertex set all have linear kernels. From the monotonicity of k-paths and the grid theorem, a 2^{O(√k)}·n^{O(1)} algorithm can be obtained for finding a path of length k in a planar graph. Subexponential algorithms for minor-bidimensional and contraction-bidimensional parameters can be extended to non-bidimensional parameters such as partial dominating set and partial vertex cover as well. Color coding can be used to obtain an algorithm with running time 2^{O(√k log k)} + O(n) for feedback arc set on tournaments. Some major open problems in this field are subexponential algorithms for directed k-path and for Steiner tree on planar graphs, and better algorithms for feedback vertex set in general directed graphs.
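
    The kernelization step mentioned above is the following standard calculation (included here for completeness): if the problem admits a kernel with O(k) vertices computable in polynomial time, and the kernel can be solved in time 2^{O(√n')} where n' is its number of vertices, then the total running time is

        n^{O(1)} \;+\; 2^{O(\sqrt{O(k)})} \;=\; n^{O(1)} \;+\; 2^{O(\sqrt{k})}.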

    Matrix norms and communication complexity.

    Satya Lokam spoke on matrix norms and communication complexity. He reviewed the analytical methods of Sherstov and of Linial and Shraibman used to show the strict containments P^CC ⊂ BPP^CC ⊂ PP^CC ⊂ UPP^CC.