
Attack and Defense
Hung-yi Lee

Motivation
We want to deploy machine learning classifiers not only in the lab but also in the real world. Classifiers that are robust to random noise and work "most of the time" are not sufficient; we also want classifiers that are robust to inputs deliberately built to fool them. This matters especially for spam classification, malware detection, network intrusion detection and similar applications: being strong is not enough, the model has to withstand malicious humans.

What do we want to do?
Given a network that classifies the original image as Tiger Cat with confidence 0.64, we want to find an attacked image that stays close to the original but is classified as something else.

Loss Function for Attack
Let x0 be the original input and y0 the network's output for x0; let x' be the attacked input, y' the output for x', y_true the true label, and y_false a chosen target label. Training minimizes L_train(θ) = C(y0, y_true) over the network parameters θ with the input fixed. An attack does the opposite: the parameters θ are fixed and we search over the input x'.
Non-targeted attack: minimize L(x') = -C(y', y_true), i.e. push the output far from the true label (e.g. cat).
Targeted attack: minimize L(x') = -C(y', y_true) + C(y', y_false), i.e. push the output far from the true label and close to the target false label (e.g. fish).
Constraint: d(x0, x') ≤ ε, so that the attacked input stays close enough to the original that the attack is not noticed by a human.

Constraint
Common choices for d are the L2-norm ||x0 - x'||_2 and the L-infinity norm ||x0 - x'||_inf (the largest change to any single pixel). They behave differently: changing every pixel a little bit and changing one pixel a lot can give the same L2 distance, but the former has a small L-infinity norm and the latter a large one, which is why the L-infinity constraint is often the better match for what humans actually notice.
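To make the L2 versus L-infinity comparison concrete, here is a small sketch (not from the slides; the 100-pixel "image" and the perturbation values are made up for illustration):

import numpy as np

d = 100  # a hypothetical 100-dimensional "image"

# Perturbation 1: change every pixel a little bit.
delta_small_everywhere = np.full(d, 0.1)

# Perturbation 2: change a single pixel a lot,
# scaled so that its L2 norm matches perturbation 1.
delta_one_pixel = np.zeros(d)
delta_one_pixel[0] = np.linalg.norm(delta_small_everywhere)  # = 1.0

for name, delta in [("every pixel a little", delta_small_everywhere),
                    ("one pixel a lot", delta_one_pixel)]:
    l2 = np.linalg.norm(delta)        # L2-norm
    linf = np.max(np.abs(delta))      # L-infinity norm
    print(f"{name:>22}: L2 = {l2:.2f}, L-inf = {linf:.2f}")

# Both perturbations have L2 = 1.0, but their L-infinity norms differ by a
# factor of 10, which is why the L-infinity constraint better reflects
# how visible a perturbation is.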

How to Attack
The attack is solved with gradient descent, just like training a neural network, except that the network parameters are fixed and the thing being updated is the input:
x* = arg min over all x' with d(x0, x') ≤ ε of L(x')
Gradient descent (modified version):
Start from the original image x0.
For t = 1 to T: update x_t ← x_{t-1} - η ∇L(x_{t-1}); if d(x0, x_t) > ε, set x_t ← fix(x_t).
def fix(x_t): among all x that fulfill d(x0, x) ≤ ε, return the one closest to x_t. For the L2-norm this projects x_t back onto the ε-ball around x0; for the L-infinity norm it clips every pixel back into the range [x0 - ε, x0 + ε].
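A minimal PyTorch sketch of this loop, using the L-infinity version of fix (not from the slides; the model, the ε and step-size values, and the use of cross-entropy for C are assumptions made for illustration):

import torch

def gradient_descent_attack(model, x0, y_true, eps=8/255, lr=0.01, steps=10):
    """Attack by gradient descent on the input (the "modified" gradient descent above).

    model  : classifier returning logits (assumed, e.g. a torchvision ResNet-50)
    x0     : original image tensor of shape (1, C, H, W), values in [0, 1]
    y_true : tensor holding the true class index, shape (1,)
    """
    x = x0.clone().detach()
    for _ in range(steps):                       # For t = 1 to T
        x.requires_grad_(True)
        # non-targeted attack loss: L(x') = -C(y', y_true)
        loss = -torch.nn.functional.cross_entropy(model(x), y_true)
        grad, = torch.autograd.grad(loss, x)
        x = x.detach() - lr * grad               # gradient step on the input; parameters stay fixed
        # fix(x): project back into the set d(x0, x) <= eps (L-infinity version: clip per pixel)
        x = torch.max(torch.min(x, x0 + eps), x0 - eps)
        x = x.clamp(0.0, 1.0)                    # stay a valid image
    return x.detach()

For a targeted attack one would simply add + cross_entropy(model(x), y_false) to the loss, matching the targeted loss defined above.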

Example
With ResNet-50, an original image classified as Tiger Cat (confidence 0.64) can be turned into an attacked image classified as Star Fish with confidence 1.00 (true class = tiger cat, target false class = star fish), even though the two images look identical to a human; the perturbation only becomes visible when the difference between the attacked and the original image is magnified 50 times. With a different target, the same procedure produces an attacked image classified as Keyboard with confidence 0.98 (true class = tiger cat, target false class = keyboard). By contrast, adding plainly visible random noise only gradually degrades the prediction, shifting it through related labels such as tabby cat and Persian cat, and only for very large noise to something unrelated such as fire screen.

What happened?
In the very high-dimensional input space, the region around x0 that is classified as tiger cat tends to be wide along a randomly chosen direction, but can be extremely narrow along certain specific directions; a small step along such a specific direction is enough to push the input into another class.

Attack Approaches
FGSM, Basic Iterative Method, L-BFGS, DeepFool, JSMA, C&W, Elastic-net attack, Spatially Transformed attack, One Pixel Attack, and many more (only a few are listed here). These approaches mainly differ in the optimization method they use and in the constraint they place on the perturbation.

Attack Approaches: Fast Gradient Sign Method (FGSM)
FGSM uses a single update step. It takes the gradient of the attack loss with respect to the input and keeps only the sign of each element:
Δx = sign(∇_x L(x0)), so every element of Δx is either +1 or -1,
x' = x0 - ε Δx.
Under the L-infinity constraint this is equivalent to doing one step of gradient descent with a very large learning rate and then applying fix: the attacked image always lands on a corner of the ε-box around x0.
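A minimal FGSM sketch in the same assumed PyTorch setting as the previous sketch (an illustration, not a reference implementation):

import torch

def fgsm_attack(model, x0, y_true, eps=8/255):
    """One-step Fast Gradient Sign Method under an L-infinity budget eps."""
    x0 = x0.clone().detach().requires_grad_(True)
    loss = -torch.nn.functional.cross_entropy(model(x0), y_true)  # attack loss L(x')
    loss.backward()
    delta = x0.grad.sign()              # every element is +1 or -1
    x_adv = x0.detach() - eps * delta   # one big step, landing on a corner of the eps-box
    return x_adv.clamp(0.0, 1.0)        # keep pixel values valid

Because only the sign of the gradient is kept, every pixel moves by exactly ε, matching the "corner of the ε-box" picture above.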

White Box v.s. Black Box
In the attacks above, the network parameters are fixed and known, and we search for the optimal input x'; to run the attack we therefore need to know those parameters. This is called a White Box Attack. Are we safe if we simply do not release the model? After all, you cannot obtain the model parameters of most on-line APIs. Unfortunately no, because Black Box Attacks are also possible.

Black Box Attack
If you have the training data of the target (black) network, train a proxy network yourself on that data and use the proxy network to generate the attacked objects; such attacked objects frequently transfer, i.e. they also fool the black-box target. Otherwise, query the target network, collect the resulting input-output pairs, and use those pairs as training data for the proxy network.
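A rough sketch of the second case, fitting a proxy to query results and then attacking the proxy; the query set, the KL-divergence fitting loss and the optimizer settings are assumptions for illustration, not details from the slides:

import torch

def train_proxy_from_queries(black_box, proxy, inputs, epochs=5, lr=1e-3):
    """Fit a local proxy network to imitate a black-box classifier.

    black_box : callable returning class probabilities; we can only query it
    proxy     : a trainable model we own (architecture chosen by the attacker)
    inputs    : tensor of images used as queries (e.g. unlabeled data we collected)
    """
    with torch.no_grad():
        targets = black_box(inputs)                 # collect input-output pairs
    opt = torch.optim.Adam(proxy.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        # match the proxy's predicted distribution to the black box's outputs
        loss = torch.nn.functional.kl_div(
            torch.log_softmax(proxy(inputs), dim=1), targets, reduction="batchmean")
        loss.backward()
        opt.step()
    return proxy

# Once the proxy is trained, run any white-box attack (e.g. fgsm_attack above)
# against the proxy; the resulting attacked images are then sent to the black box.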

Universal Adversarial Attack
A single, fixed perturbation can be found that fools the network on most input images, and this kind of universal attack is also possible in the black-box setting.

Adversarial Reprogramming
Gamaleldin F. Elsayed, Ian Goodfellow, Jascha Sohl-Dickstein, "Adversarial Reprogramming of Neural Networks", ICLR, 2019.

Attack in the Real World
Adversarial examples are not limited to digital inputs; they can be realized as physical objects, for example eyeglass frames that cause a face-recognition system to misidentify the wearer (face-rec-ccs16.pdf), and such attacks have been carried out in the black-box setting as well. Three practical issues have to be handled:
1. An attacker needs to find perturbations that generalize beyond a single image.
2. Extreme differences between adjacent pixels in the perturbation are unlikely to be accurately captured by cameras.
3. It is desirable to craft perturbations that are comprised mostly of colors reproducible by the printer.
Physical perturbations on road signs (arXiv:1707.08945) are another well-known example.

Beyond Images
Attacks are not limited to images: you can attack audio, and you can attack text as well (reference: 07.07328.pdf).

Defense
Adversarial attacks cannot be defended against by weight regularization, dropout, or model ensembles. There are two types of defense:
Passive defense: find the attacked image without modifying the model; this is a special case of anomaly detection.
Proactive defense: train a model that is itself robust to adversarial attack.

Passive Defense
Put a filter in front of the network, for example a smoothing filter. Light smoothing barely changes a clean image, so it does not influence the classification of normal inputs, but it destroys the carefully-tuned attack signal and makes it much less harmful. In the keyboard example above, the attacked image that was classified as Keyboard with confidence 0.98 is classified as tiger cat again (confidence 0.37) after smoothing, while the original image's prediction only drops from tiger cat 0.64 to tiger cat 0.45.
Other passive defenses include Feature Squeezing (arXiv:1704.01155) and randomization at the inference phase, e.g. randomly resizing and padding the input (arXiv:1711.01991).
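A minimal sketch of such a filter-in-front-of-the-network defense; the choice of a Gaussian blur and its kernel size are assumptions, since the slides only say "e.g. smoothing":

import torch
import torchvision.transforms.functional as TF

def smooth_then_classify(model, x, kernel_size=3, sigma=0.8):
    """Passive defense: smooth the input before handing it to the classifier.

    x : image tensor of shape (1, C, H, W); the model itself is left unmodified.
    """
    x_smoothed = TF.gaussian_blur(x, kernel_size=kernel_size, sigma=sigma)
    with torch.no_grad():
        logits = model(x_smoothed)
    return logits.softmax(dim=1)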

Proactive Defense
The spirit of proactive defense is to find the model's own vulnerabilities and patch them, which leads to adversarial training, a form of data augmentation:
Given training data X = {(x1, y1), ..., (xN, yN)}, first train your model on X.
For t = 1 to T: for n = 1 to N, find the adversarial input x̃n for xn using an attack algorithm (find the vulnerability); this gives new training data X' = {(x̃1, y1), ..., (x̃N, yN)}; use both X and X' to update your model (patch the hole).
The adversarial examples are different in each iteration because the model changes after every update. Note that if the adversarial examples are generated with algorithm A, this method stops algorithm A but the model can still be vulnerable to a different algorithm B. A minimal sketch of the loop is given after the concluding remarks below.

Concluding Remarks
Attack: given the network parameters, it is straightforward to find attacked inputs that fool the network, and black-box attacks remove even the need for the parameters.
Defense: passive and proactive defenses exist, but none of them guarantees robustness against attacks they were not designed for.
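A minimal sketch of the adversarial-training loop, reusing the hypothetical fgsm_attack from the earlier sketch as "algorithm A"; the optimizer, the epoch count and the equal weighting of the clean and adversarial losses are assumptions for illustration:

import torch

def adversarial_training(model, dataset, epochs=10, lr=1e-3, eps=8/255):
    """Proactive defense: repeatedly find adversarial examples and train on them.

    dataset : iterable of (x, y) pairs, x of shape (1, C, H, W), y a class-index tensor of shape (1,)
    """
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    ce = torch.nn.functional.cross_entropy
    for _ in range(epochs):                               # "For t = 1 to T"
        for x, y in dataset:                              # "For n = 1 to N"
            x_adv = fgsm_attack(model, x, y, eps=eps)     # find the vulnerability (algorithm A)
            opt.zero_grad()
            loss = ce(model(x), y) + ce(model(x_adv), y)  # train on both clean and attacked data
            loss.backward()
            opt.step()                                    # patch the hole
    return model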
