
From Weak to Strong LP Gaps for All CSPs

When: 

Aug 28, 2017 - 1:50pm to 2:55pm

Where: 

TBD

Speaker: 

Madhur Tulsiani
Toyota Technological Institute at Chicago
Department of Computer Science, University of Chicago

Description: 

We study the approximability of constraint satisfaction problems (CSPs) by linear programming (LP) relaxations. We show that for every CSP, the approximation obtained by a basic LP relaxation is no weaker than the approximation obtained using relaxations given by \( \Omega(\ln n / \ln\ln n) \) levels of the Sherali-Adams hierarchy on instances of size n. It was proved by Chan et al. (2013) that any polynomial-size LP extended formulation is no stronger than relaxations obtained by a super-constant number of levels of the Sherali-Adams hierarchy. Combining this with our result implies that any polynomial-size LP extended formulation is no stronger than the basic LP, which can be thought of as the base level of the Sherali-Adams hierarchy. This essentially gives a dichotomy result for the approximation of CSPs by polynomial-size LP extended formulations.
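For context, the basic LP relaxation referred to above can be written in a standard form (the notation here is illustrative, not taken from the talk). For a CSP instance on variables \( x_1, \dots, x_n \) over a domain \( [q] \), with weighted constraints \( C \) having scopes \( S_C \) and payoff functions \( P_C : [q]^{S_C} \to [0,1] \), the relaxation optimizes over a local distribution \( \lambda_C \) for each constraint and a marginal distribution \( \mu_i \) for each variable:

\[
\begin{aligned}
\text{maximize} \quad & \sum_{C} w_C \sum_{\alpha \in [q]^{S_C}} \lambda_C(\alpha)\, P_C(\alpha) \\
\text{subject to} \quad & \sum_{\alpha \in [q]^{S_C} :\ \alpha(i) = a} \lambda_C(\alpha) = \mu_i(a) \qquad \text{for all } C,\ i \in S_C,\ a \in [q], \\
& \lambda_C \text{ a probability distribution on } [q]^{S_C}, \qquad \mu_i \text{ a probability distribution on } [q].
\end{aligned}
\]

Level \( t \) of the Sherali-Adams hierarchy strengthens this by requiring a consistent local distribution over every set of at most \( t \) variables; in this language, the first result above says that for CSPs the basic LP (essentially the base level) already matches the approximation achieved at \( t = \Omega(\ln n / \ln\ln n) \).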

Using our techniques, we also simplify and strengthen the result by Khot et al. (2014) on (strong) approximation resistance for LPs. They provided a necessary and sufficient condition under which \( \Omega(\ln\ln n) \) levels of the Sherali-Adams hierarchy cannot achieve an approximation better than a random assignment. We simplify their proof and strengthen the bound to \( \Omega(\ln n / \ln\ln n) \) levels. Joint work with Mrinalkanti Ghosh.
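As a rough recap of the notion involved (again illustrative, not quoted from the talk): for a CSP whose constraints are applications of a predicate \( P : [q]^k \to \{0,1\} \), a uniformly random assignment satisfies an expected fraction

\[
\rho(P) \;=\; \mathop{\mathbb{E}}_{x \sim \mathrm{Unif}([q]^k)} \bigl[ P(x) \bigr]
\]

of the constraints, and a relaxation is called approximation resistant when it cannot certify an approximation ratio better than \( \rho(P) \) by any constant margin.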


Event Type: 

Department of Applied Mathematics - Colloquia