Learning random monotone DNF

Jeffrey C. Jackson*
Duquesne University, Pittsburgh, PA 15282
[email protected]

Homin K. Lee
Columbia University, New York, NY 10027
[email protected]

Rocco A. Servedio†
Columbia University, New York, NY 10027
[email protected]

Andrew Wan
Columbia University, New York, NY 10027
[email protected]

May 22, 2008

Abstract

We give an algorithm that with high probability properly learns random monotone t(n)-term DNF under the uniform distribution on the Boolean cube {0, 1}^n. For any polynomially bounded function t(n) ≤ poly(n) the algorithm runs in time poly(n, 1/ε) and with high probability outputs an ε-accurate monotone DNF hypothesis. This is the first algorithm that can learn monotone DNF of arbitrary polynomial size in a reasonable average-case model of learning from random examples only.

* Supported in part by NSF award CCF-0209064.
† Supported in part by NSF award CCF-0347282, by NSF award CCF-0523664, and by a Sloan Foundation Fellowship.

1 Introduction

1.1 Motivation and background. Any Boolean function f : {0, 1}^n → {0, 1} can be expressed as a disjunction of conjunctions of Boolean literals, i.e. as an OR of ANDs. Such a logical formula is said to be in disjunctive normal form, or a DNF. Learning polynomial-size DNF formulas (disjunctions of poly(n) many conjunctions) from random examples is an outstanding open question in computational learning theory, dating back more than 20 years to Valiant's introduction of the PAC (Probably Approximately Correct) learning model [Val84].

The most intensively studied variant of the DNF learning problem is PAC learning DNF under the uniform distribution. In this problem the learner must generate a high-accuracy hypothesis with high probability when given uniform random examples labeled according to the unknown target DNF. (See Section 2 for a detailed definition of uniform distribution learning.) Despite much effort, no polynomial-time algorithms are known for this problem. A tantalizing question that has been posed as a goal by many authors (see e.g. [Jac97, JT97, BBL98, Blu03b, Ser04]) is to learn monotone DNF, which contain only unnegated Boolean variables, under the uniform distribution. Besides being a natural restriction of the uniform distribution DNF learning problem, this problem is interesting because several impediments to learning general DNF under uniform (known lower bounds for Statistical Query based algorithms [BFJ+94], and the apparent hardness of learning the subclass of log(n)-juntas [Blu03a]) do not apply in the monotone case. This paper solves a natural average-case version of this problem.

1.2 Previous work. Many partial results have been obtained on learning monotone DNF under the uniform distribution. Verbeurgt [Ver90] gave an n^{O(log n)}-time uniform distribution algorithm for learning any poly(n)-term DNF, monotone or not. Several authors [KMSP94, SM00, BT96] have given results on learning monotone t-term DNF for larger and larger values of t; most recently, [Ser04] gave a uniform distribution algorithm that learns any 2^{O(√log n)}-term monotone DNF to any constant accuracy ε = Θ(1) in poly(n) time. O'Donnell and Servedio [OS06] have recently shown that poly(n)-leaf decision trees that compute monotone functions (a subclass of poly(n)-term monotone DNF) can be learned to any constant accuracy under uniform in poly(n) time. Various other problems related to learning different types of monotone functions under uniform have also been studied; see e.g. [KLV94, BBL98, Ver98, HM91, AM02].

Aizenstein and Pitt [AP95] first proposed a model of random DNF formulas and gave an exact learning algorithm that learns random DNFs generated in this way. As noted in [AP95] and [JS06], this model admits a trivial learning algorithm in the uniform PAC setting. Jackson and Servedio [JS05] gave a uniform distribution algorithm that learns log-depth decision trees on average in a natural random model. Previous work on average-case uniform PAC DNF learning, also by Jackson and Servedio, is described below.

1.3 Our results. The main result of this paper is a polynomial-time algorithm that can learn random poly(n)-term monotone DNF with high probability. (We give a full description of the exact probability distribution defining our random DNFs in Section 5;
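To make the learning setup concrete, the following Python sketch models a random monotone t-term DNF and the uniform-distribution example oracle described above. It is only an illustration of the problem setting, not the paper's learning algorithm; in particular, drawing each term as a fixed-size set of k variables is a simplifying assumption here, whereas the paper's exact distribution over random DNFs is the one it defines in Section 5.

```python
import random

def random_monotone_dnf(n, t, k, rng):
    # A monotone DNF is a set of terms with no negated literals; here each
    # term is a size-k subset of the n variable indices. (Fixed-size terms
    # are an illustrative assumption, not the paper's exact distribution.)
    return [rng.sample(range(n), k) for _ in range(t)]

def evaluate(dnf, x):
    # An OR of ANDs: f(x) = 1 iff some term has all of its variables set to 1.
    return any(all(x[i] for i in term) for term in dnf)

def uniform_example(dnf, n, rng):
    # One draw from the uniform-distribution PAC example oracle:
    # x is uniform over {0, 1}^n and is labeled by the target DNF.
    x = [rng.randrange(2) for _ in range(n)]
    return x, evaluate(dnf, x)

rng = random.Random(0)
n, t, k = 20, 5, 4
f = random_monotone_dnf(n, t, k, rng)
sample = [uniform_example(f, n, rng) for _ in range(1000)]
```

A uniform-distribution learner receives only pairs like those in `sample` and must output a hypothesis that agrees with f on all but an ε fraction of {0, 1}^n; monotonicity guarantees, for instance, that flipping any input bit from 0 to 1 can never change the label from 1 to 0.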
