Machine Learning — Chapter 6 Homework 3 (Solutions)
1.1 Machine learning: face recognition, handwriting recognition, credit-card approval. Not machine learning: computing payroll, executing database queries, using Word.

2.1 Since any occurrence of "Ø" for an attribute of a hypothesis yields a hypothesis that accepts no instance, all such hypotheses are equivalent to the single one in which every attribute is "Ø". The number of semantically distinct hypotheses is therefore 4*3*3*3*3*3 + 1 = 973. With the additional attribute WaterCurrent, the number of instances becomes 3*2*2*2*2*2*3 = 288 and the number of hypotheses becomes 4*3*3*3*3*3*4 + 1 = 3889. In general, if the new attribute can take k values, the number of hypotheses is 4*3*3*3*3*3*(k+1) + 1.

2.3 Ans. (Candidate-Elimination where each hypothesis is a disjunction of two conjunctive hypotheses.)

S0 = (Ø, Ø, Ø, Ø, Ø, Ø) v (Ø, Ø, Ø, Ø, Ø, Ø)
G0 = (?, ?, ?, ?, ?, ?) v (?, ?, ?, ?, ?, ?)

Example 1:
S1 = (Sunny, Warm, Normal, Strong, Warm, Same) v (Ø, Ø, Ø, Ø, Ø, Ø)
G1 = (?, ?, ?, ?, ?, ?) v (?, ?, ?, ?, ?, ?)

Example 2:
S2 = { (Sunny, Warm, Normal, Strong, Warm, Same) v (Sunny, Warm, High, Strong, Warm, Same),
       (Sunny, Warm, ?, Strong, Warm, Same) v (Ø, Ø, Ø, Ø, Ø, Ø) }
G2 = (?, ?, ?, ?, ?, ?) v (?, ?, ?, ?, ?, ?)

Example 3:
S3 = { (Sunny, Warm, Normal, Strong, Warm, Same) v (Sunny, Warm, High, Strong, Warm, Same),
       (Sunny, Warm, ?, Strong, Warm, Same) v (Ø, Ø, Ø, Ø, Ø, Ø) }
G3 = { (Sunny, ?, ?, ?, ?, ?) v (?, Warm, ?, ?, ?, ?),
       (Sunny, ?, ?, ?, ?, ?) v (?, ?, ?, ?, ?, Same),
       (?, Warm, ?, ?, ?, ?) v (?, ?, ?, ?, ?, Same) }

Example 4:
S4 = { (Sunny, Warm, ?, Strong, ?, ?) v (Sunny, Warm, High, Strong, Warm, Same),
       (Sunny, Warm, Normal, Strong, Warm, Same) v (Sunny, Warm, High, Strong, ?, ?),
       (Sunny, Warm, ?, Strong, ?, ?) v (Ø, Ø, Ø, Ø, Ø, Ø),
       (Sunny, Warm, ?, Strong, Warm, Same) v (Sunny, Warm, High, Strong, Cool, Change) }
G4 = { (Sunny, ?, ?, ?, ?, ?) v (?, Warm, ?, ?, ?, ?),
       (Sunny, ?, ?, ?, ?, ?) v (?, ?, ?, ?, ?, Same),
       (?, Warm, ?, ?, ?, ?) v (?, ?, ?, ?, ?, Same) }

2.4 Ans. (a) S = (4, 6, 3, 5). (b) G = (3, 8, 2, 7). (c) e.g., (7, 6), (5, 4). (d) 4 points: (3, 2, +), (5, 9, +), (2, 1, -), (6, 10, -).

2.6 Proof: every member of VS_H,D satisfies the right-hand side of the expression. Let h be an arbitrary member of VS_H,D; then h is consistent with all training examples in D. Assume h does not satisfy the right-hand side, i.e. ¬(∃s∈S)(∃g∈G)(g ≥g h ≥g s), which is equivalent to ¬(∃g∈G)(g ≥g h) ∨ ¬(∃s∈S)(h ≥g s). So either there is no g in G that is more general than or equal to h, or there is no s in S such that h is more general than or equal to s. The former contradicts the definition of G, and the latter contradicts the definition of S. Therefore h satisfies the right-hand side of the expression. (Note: the assumption that the expression fails can only hold when S or G is empty, which in turn can only happen when the training examples are inconsistent, e.g. because of noise, or when the target concept is not a member of H.)

Bayesian learning:

6.1 When both laboratory tests on the patient return positive, the posterior probabilities of cancer and ¬cancer are P(cancer | +, +) and P(¬cancer | +, +). By Bayes' theorem,

P(cancer | +, +) = P(+, + | cancer) P(cancer) / P(+, +) = P(+ | cancer) P(+ | cancer) P(cancer) / P(+, +),

where the last equality holds because the two tests are assumed independent given the disease state, i.e. P(+, + | cancer) = P(+ | cancer) P(+ | cancer), and likewise for ¬cancer. Then:

P(+ | cancer) P(+ | cancer) P(cancer) = 0.98 * 0.98 * 0.008 = 0.0076832
P(+ | ¬cancer) P(+ | ¬cancer) P(¬cancer) = 0.03 * 0.03 * 0.992 = 0.0008928
P(+, +) = P(+, + | cancer) P(cancer) + P(+, + | ¬cancer) P(¬cancer) = 0.0076832 + 0.0008928 = 0.008576

Therefore:

P(cancer | +, +) = 0.0076832 / 0.008576 ≈ 0.895896
P(¬cancer | +, +) ≈ 0.104104
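The arithmetic in Exercise 6.1 can be checked with a short script. This is only a minimal sketch of the calculation (the variable names are ours, not from the exercise); the probabilities are the ones given in the problem: P(cancer) = 0.008, P(+ | cancer) = 0.98, P(+ | ¬cancer) = 0.03.

```python
# Verify Exercise 6.1: posterior of cancer vs. not-cancer after two positive tests.
p_cancer = 0.008                      # prior P(cancer)
p_not_cancer = 1.0 - p_cancer         # prior P(not cancer) = 0.992
p_pos_given_cancer = 0.98             # P(+ | cancer)
p_pos_given_not_cancer = 0.03         # P(+ | not cancer)

# The two tests are assumed conditionally independent given the disease state,
# so P(+,+ | h) = P(+ | h)^2 for each hypothesis h.
joint_cancer = p_pos_given_cancer ** 2 * p_cancer              # = 0.0076832
joint_not_cancer = p_pos_given_not_cancer ** 2 * p_not_cancer  # = 0.0008928
evidence = joint_cancer + joint_not_cancer                     # P(+,+) = 0.008576

print(joint_cancer / evidence)      # P(cancer | +,+)  ~= 0.8959
print(joint_not_cancer / evidence)  # P(~cancer | +,+) ~= 0.1041
```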
6.2 By Bayes' theorem, P(cancer | +) = P(+ | cancer) P(cancer) / P(+) and P(¬cancer | +) = P(+ | ¬cancer) P(¬cancer) / P(+). Because the events cancer and ¬cancer are mutually exclusive and P(cancer) + P(¬cancer) = 1, the law of total probability gives P(+) = P(+ | cancer) P(cancer) + P(+ | ¬cancer) P(¬cancer). Hence normalizing the two quantities P(+ | cancer) P(cancer) and P(+ | ¬cancer) P(¬cancer) so that they sum to one, as done in the text, is a correct way to obtain the posteriors.

6.3 (a) P(h): whenever hypothesis h1 is more general than h2, assign P(h1) = P(h2) (a uniform prior). (b) P(h): whenever h1 is more general than h2, assign P(h1) > P(h2) (a prior favouring more general hypotheses). In both cases P(D | h) = 1 if h is consistent with D, and 0 otherwise.

6.5 In naive Bayes classification the attributes are assumed to be conditionally independent of one another given the target value V. The corresponding Bayesian network therefore contains one arrow from the target node to each attribute node (the arrows point from top to bottom). Because the attribute Wind is independent of the other attributes, no arcs connect it to any other attribute.

Evaluating hypotheses (Chapter 5):

1. When testing a hypothesis h, it is found to commit r = 300 errors on a sample S of n = 1000 randomly drawn examples. What is the standard deviation of errorS(h)? How does this compare with the standard deviation in the example at the end of Section 5.3.4?

Solution: errorS(h) = r/n = 300/1000 = 0.3. Since r follows a binomial distribution, its variance is np(1-p). The true p is unknown, so substituting errorS(h) = r/n = 0.3 for p gives an estimated variance of r of 1000 * 0.3 * (1 - 0.3) = 210 and a standard deviation of sqrt(210) ≈ 14.5. The standard deviation of errorS(h) = r/n is therefore 14.5/1000 = 0.0145. In general, when a hypothesis commits r errors on n randomly drawn examples, the standard deviation of errorS(h) is sqrt(p(1-p)/n), approximated by substituting errorS(h) = r/n for p. Because the sample here is much larger than in the example at the end of Section 5.3.4, the resulting standard deviation is much smaller.

2. If no further information is available, the best estimate of the true error rate is the sample error rate errorS(h) = 17/100 = 0.17. Using the 95% confidence-interval formula errorS(h) ± 1.96 * sqrt(errorS(h)(1 - errorS(h))/n) and substituting the numbers, the 95% confidence interval is 0.17 ± (1.96 × 0.04).

3. If hypothesis h commits r = 10 errors on a sample of n = 65 independently drawn examples, what is the 90% (two-sided) confidence interval for the true error rate? What is the 95% one-sided confidence interval (i.e. an upper bound U such that errorD(h) ≤ U with 95% confidence)? What is the 90% one-sided interval?

Solution: With n = 65 and r = 10, the sample error rate is errorS(h) = r/n = 10/65 ≈ 0.16. The N% confidence interval for errorD(h) is errorS(h) ± zN * sqrt(errorS(h)(1 - errorS(h))/n). For N = 90, Table 5.1 gives zN = 1.64, so the 90% two-sided interval for the true error rate is 0.16 ± 0.073. The 95% one-sided interval is errorD(h) ≤ U with U = 0.16 + 0.073 ≈ 0.23 (the same zN = 1.64 applies, because a 95% one-sided bound corresponds to a 90% two-sided interval). The 90% one-sided interval is errorD(h) ≤ U with U = 0.16 + 1.28 * sqrt(0.16 * (1 - 0.16)/65) ≈ 0.22, where zN = 1.28 is the value for an 80% two-sided confidence level.

4. To test a hypothesis h whose errorD(h) is known to lie between 0.2 and 0.6, how many examples must be collected, at minimum, so that the width of the 95% two-sided confidence interval is less than 0.1?

Solution: The width of the 95% two-sided interval is 2 * zN * sqrt(errorD(h)(1 - errorD(h))/n) with zN = 1.96. Requiring this width to be less than 0.1 over the given range of errorD(h) and solving for n, at least 301 examples must be collected.

5.5 Let d̂ = errorS1(h1) - errorS2(h2); the parameter to be estimated is d = errorD(h1) - errorD(h2). For sufficiently large samples, (d̂ - d)/σ approximately follows the N(0, 1) distribution; equivalently, d̂ is approximately normal with mean d and variance σ², where σ² ≈ errorS1(h1)(1 - errorS1(h1))/n1 + errorS2(h2)(1 - errorS2(h2))/n2. The one-sided lower-bound confidence interval is [d̂ - zN σ, +∞); in the same way, the one-sided upper-bound interval is (-∞, d̂ + zN σ]. Substituting σ completes the answer.

5.6 First, recall the numerical characteristics of sample statistics: let … denote the population's …
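The interval estimates in Exercises 1-4 above all rest on the same normal approximation, errorS(h) ± zN * sqrt(errorS(h)(1 - errorS(h))/n). The sketch below reproduces those numbers; it is only an illustration, the helper name error_interval is ours, and the z-values are the usual entries from Table 5.1.

```python
from math import sqrt

def error_interval(r, n, z):
    """Return (sample error rate, z * estimated std dev of the error rate)."""
    p = r / n
    return p, z * sqrt(p * (1 - p) / n)

# Exercise 1: r = 300, n = 1000; z = 1 gives one standard deviation of errorS(h).
print(error_interval(300, 1000, 1.0))   # (0.3, ~0.0145)

# Exercise 2: r = 17, n = 100; 95% two-sided interval (zN = 1.96).
print(error_interval(17, 100, 1.96))    # (0.17, ~0.074); the text rounds the std dev to 0.04

# Exercise 3: r = 10, n = 65.
p, w90 = error_interval(10, 65, 1.64)   # 90% two-sided half-width
print(p, w90)                           # ~0.154, ~0.073
print(p + w90)                          # 95% one-sided upper bound, ~0.23
print(p + error_interval(10, 65, 1.28)[1])  # 90% one-sided upper bound, ~0.21 (~0.22 when errorS(h) is rounded to 0.16)
```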
