In this paper log denotes log_2.
1.2 Approximation by DNF

DNF formulas are one of the simplest and most natural representation classes for boolean functions. Although every function can be computed by a DNF, some functions on n bits may require DNFs of size Ω(2^n). The natural question we pursue in this paper is whether this size can be significantly reduced for a given function if we are only required to ε-approximate it, for some small constant ε. Positive results along these lines would have interesting applications in several research areas, including computational learning theory and the study of threshold phenomena in random graphs; these topics will be discussed in Sections 1.3 and 1.4, respectively. However, there do not seem to be many results on either upper or lower bounds for approximation by DNF in the literature.
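To fix the quantity being traded off against DNF size, the following sketch (our own illustration, not code from the paper; the helper names approximation_error and dnf are hypothetical) computes the disagreement probability Pr_x[f(x) ≠ g(x)] between a target function f and a candidate DNF g by brute force over {0,1}^n; f is ε-approximated by g exactly when this quantity is at most ε.

```python
from itertools import product

def approximation_error(f, g, n):
    """Pr_x[f(x) != g(x)] over a uniformly random x in {0,1}^n (brute force)."""
    disagreements = sum(1 for x in product((0, 1), repeat=n)
                        if bool(f(x)) != bool(g(x)))
    return disagreements / 2 ** n

def dnf(terms):
    """A DNF given as a list of terms; each term is a list of (index, bit)
    literals.  Size = number of terms, width = length of the longest term."""
    return lambda x: any(all(x[i] == b for i, b in term) for term in terms)

# Toy check: how well does a 2-term, width-2 DNF approximate parity on 4 bits?
parity = lambda x: sum(x) % 2
g = dnf([[(0, 1), (1, 0)], [(2, 1), (3, 0)]])
print(approximation_error(parity, g, 4))
```

The brute-force check takes 2^n evaluations of each function and is only feasible for toy n; the interesting question is how small the DNF g can be made, as a function of n and ε.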
A notable conjecture in this area was made 8 years ago by Benjamini, Kalai, and Schramm [BKS99] (published again in [Kal00, KS05]). To describe this conjecture, which we call the BKS Conjecture, we need to recall the notion of total influence [KKL88, LMN93]:

Definition 1.2 Given a function f : {0,1}^n → {0,1}, the influence of the ith coordinate on f is Inf_i(f) = Pr_x[f(x) ≠ f(σ_i x)], where σ_i x denotes x with its ith bit flipped. The total influence (or average sensitivity) of f is I(f) = Σ_{i=1}^n Inf_i(f) = E_x[#{y ∼ x : f(y) ≠ f(x)}], where the notation y ∼ x means that the Hamming distance between y and x is 1.

The total influence is an important measure of the complexity of a function, used frequently in learning theory, threshold phenomena, and complexity theory. One important result to note is that constant-depth circuits of small size have small total influence:

Theorem 1.3 Let f : {0,1}^n → {0,1} be computed by a circuit of depth d and size s. Then I(f) ≤ O(log^{d−1} s).

This was first proved by Boppana [Bop97], tightening an argument made by Linial, Mansour, and Nisan [LMN93] based on Håstad's Switching Lemma [Hås86]. Note that the d = 2 case of this theorem is quite easy, building on the simple result that I(f) ≤ 2w for any f computable by a DNF of width w.
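As a quick sanity check on Definition 1.2 and on the width bound just mentioned, the following sketch (our own illustration, not code from the paper; the helper names influence and total_influence are introduced only for this example) evaluates Inf_i(f) and I(f) by brute-force enumeration of {0,1}^n.

```python
from itertools import product

def influence(f, i, n):
    """Inf_i(f) = Pr_x[f(x) != f(sigma_i x)], where sigma_i flips bit i."""
    flip = lambda x: x[:i] + (1 - x[i],) + x[i + 1:]
    return sum(bool(f(x)) != bool(f(flip(x)))
               for x in product((0, 1), repeat=n)) / 2 ** n

def total_influence(f, n):
    """I(f) = sum_i Inf_i(f), i.e. the average sensitivity of f."""
    return sum(influence(f, i, n) for i in range(n))

n = 6
parity = lambda x: sum(x) % 2                          # every coordinate has influence 1
print(total_influence(parity, n))                      # 6.0, the maximum possible

width2 = lambda x: (x[0] and x[1]) or (x[2] and x[3])  # a width-2 DNF
print(total_influence(width2, n))                      # 1.5, comfortably below 2w = 4
```

Both computations take 2^n evaluations of f, so they are meant only to make the definitions concrete, not to be efficient.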
We can now state Benjamini, Kalai, and Schramm's conjecture, which essentially asserts a converse to Theorem 1.3 for monotone functions:

BKS Conjecture: For every ε > 0 there is a constant K = K(ε) < ∞ such that the following holds: Every monotone f : {0,1}^n → {0,1} can be ε-approximated by a depth-d circuit of size at most exp((K · I(f))^{1/(d−1)}), for some d ≥ 2.

(Recall that f is monotone if x ≤ y ⇒ f(x) ≤ f(y).) Observation 1.1 implies that the BKS Conjecture could also add the condition that the width is at most (K · I(f))^{1/(d−1)} without loss of generality. If this conjecture were true ........