科技無(wú)人機(jī)6openmv及ide安裝包例程解析_第1頁(yè)
科技無(wú)人機(jī)6openmv及ide安裝包例程解析_第2頁(yè)
科技無(wú)人機(jī)6openmv及ide安裝包例程解析_第3頁(yè)
科技無(wú)人機(jī)6openmv及ide安裝包例程解析_第4頁(yè)
科技無(wú)人機(jī)6openmv及ide安裝包例程解析_第5頁(yè)
已閱讀5頁(yè),還剩55頁(yè)未讀, 繼續(xù)免費(fèi)閱讀

下載本文檔

版權(quán)說(shuō)明:本文檔由用戶提供并上傳,收益歸屬內(nèi)容提供方,若內(nèi)容存在侵權(quán),請(qǐng)進(jìn)行舉報(bào)或認(rèn)領(lǐng)

文檔簡(jiǎn)介

01-Basics: helloworld.py — Hello World

Connect the board to your computer over USB, then open File → Examples → 01-Basics → helloworld.py.

# Hello World Example
#
# Welcome to the OpenMV IDE! Click on the start button below to run the script!

import sensor, image, time

sensor.reset()                      # Initialize the camera sensor.
sensor.set_pixformat(sensor.RGB565) # or sensor.GRAYSCALE
sensor.set_framesize(sensor.QVGA)   # or sensor.QQVGA (or others)
# Sets the image size: sensor.QQVGA is 160x120, sensor.QQVGA2 is 128x160
# (generally used with the LCD shield), sensor.QVGA is 320x240,
# sensor.QQCIF is 88x72, sensor.QCIF is 176x144, and sensor.CIF is 352x288.
sensor.skip_frames(10)              # Let new settings take effect.
clock = time.clock()                # Tracks FPS.

while(True):                        # Python while loop -- don't forget the colon!
    clock.tick()                    # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot()         # Take a picture and return the image.
    print(clock.fps())              # Note: Your OpenMV Cam runs about half as fast while
                                    # connected to your computer. The FPS should increase
                                    # once disconnected.

01-main.py — blinking an LED

# Main Module Example
#
# When your OpenMV Cam is disconnected from your computer it will either run the
# main.py script on the SD card (if attached) or the main.py script on
# your OpenMV Cam's internal flash.

import time, pyb

led = pyb.LED(3)    # Red LED = 1, Green LED = 2, Blue LED = 3, IR LEDs = 4.
usb = pyb.USB_VCP() # This is a serial port object that allows you to
# communicate with your computer. While it is not open the code below runs.

while(not usb.isconnected()):
    led.on()            # light the LED
    time.sleep(150)
    led.off()           # turn the LED off
    time.sleep(100)

led = pyb.LED(2)        # Switch to using the green LED.

while(usb.isconnected()):
    led.on()
    time.sleep(150)
    led.off()
    time.sleep(100)

03-Drawing: color_drawing.py

# Color Drawing Example
#
# This example shows off your OpenMV Cam's built-in drawing capabilities. This
# example was originally a test but serves as a good reference. Please put
# your IDE into non-JPEG mode to see the best drawing quality.

import sensor, image, time  # import the modules this script depends on

sensor.reset()
clock = time.clock()

while(True):
    # All drawing functions use the same code to pass color.
    # So we just need to test one function.

    # Test Draw Line (GRAYSCALE)
    sensor.set_pixformat(sensor.GRAYSCALE)
    for i in range(10):                 # loop variable i runs from 0 to 9 -- ten iterations
        img = sensor.snapshot()
        for i in range(img.width()):    # img.width() == 320 (the width of the QVGA format)
            c = ((i * 255) + (img.width()/2)) / img.width()
            img.draw_line((i, 0, i, img.height()-1), color=int(c))
            # draw_line((x0, y0, x1, y1), color=...) draws a line from (x0, y0)
            # to (x1, y1). For a grayscale image, color is a single 0-255 value
            # where 255 is white; for an RGB image, color is an (r, g, b) tuple
            # where r, g, b are the red, green, and blue components.

    # Test Draw Line (RGB565) -- red channel
    sensor.set_pixformat(sensor.RGB565)
    for i in range(10):
        img = sensor.snapshot()
        for i in range(img.width()):
            c = ((i * 255) + (img.width()/2)) / img.width()
            img.draw_line((i, 0, i, img.height()-1), color=[int(c), 0, 0])
            # Because this is an RGB image, color is [int(c), 0, 0] rather
            # than a single 0-255 number.
            # Note: a Python tuple/list must be wrapped in () or [];
            # otherwise the call raises an error.

    # Test Draw Line (RGB565) -- green channel
    for i in range(10):
        img = sensor.snapshot()
        for i in range(img.width()):
            c = ((i * 255) + (img.width()/2)) / img.width()
            img.draw_line([i, 0, i, img.height()-1], color=[0, int(c), 0])

    # Test Draw Line (RGB565) -- blue channel
    for i in range(10):
        img = sensor.snapshot()
        for i in range(img.width()):
            c = ((i * 255) + (img.width()/2)) / img.width()
            img.draw_line([i, 0, i, img.height()-1], color=[0, 0, int(c)])

03-Drawing: crazy_drawing.py

This example draws lines, rectangles, circles, and other shapes on the OpenMV image using the same draw_* calls shown above.

Color binary filter

# Color Binary Filter Example
#
# This script shows off the binary image filter. This script was originally a
# test script... but, it can be useful for showing how to use binary.

import pyb, sensor, image, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)

# Set the color thresholds. For an RGB image the six numbers are
# (minL, maxL, minA, maxA, minB, maxB) in LAB space; for a grayscale
# image only the two numbers (min, max) are needed.
red_threshold   = (0, 100,    0, 127,    0, 127) # L A B
green_threshold = (0, 100, -128,   0,    0, 127) # L A B
blue_threshold  = (0, 100, -128, 127, -128,   0) # L A B

while(True):
    # Test red
    for i in range(100):
        img = sensor.snapshot()
        img.binary([red_threshold])
        # image.binary(thresholds, invert=False) turns pixels that fall inside
        # thresholds white and everything else black; invert flips the
        # black/white result and defaults to False (no inversion).

    # Test green
    for i in range(100):
        img = sensor.snapshot()
        img.binary([green_threshold])

    # Test blue
    for i in range(100):
        img = sensor.snapshot()
        img.binary([blue_threshold])

    # Test not red
    for i in range(100):
        img = sensor.snapshot()
        img.binary([red_threshold], invert=1)

    # Test not green
    for i in range(100):
        img = sensor.snapshot()
        img.binary([green_threshold], invert=1)

    # Test not blue
    for i in range(100):
        img = sensor.snapshot()
        img.binary([blue_threshold], invert=1)

This performs green image segmentation: original image, invert=0, invert=1.

04 — edge detection (morph)

# Edge Detection Example
#
# This example demonstrates using the morph function on an image to do edge
# detection and then thresholding and filtering that image afterwards.

import sensor, image, time

kernel_size = 1 # kernel width = (size*2)+1, kernel height = (size*2)+1
kernel = [-1, -1, -1,
          -1, +8, -1,
          -1, -1, -1]
# This is a high pass filter kernel. see here for more: http://w...

thresholds = [(100, 255)] # grayscale thresholds

sensor.reset()                          # Initialize the camera sensor.
sensor.set_pixformat(sensor.GRAYSCALE)  # or sensor.RGB565
sensor.set_framesize(sensor.QQVGA)      # or sensor.QVGA (or others)
sensor.skip_frames(10)                  # Let new settings take effect.
clock = time.clock()                    # Tracks FPS.

# On the OV7725 sensor, edge detection can be enhanced
# significantly by setting the sharpness/edge registers.
# Note: This will be implemented as a function later.
if (sensor.get_id() == sensor.OV7725):
    sensor.write_reg(0xAC, 0xDF)
    sensor.write_reg(0x8F, 0xFF)

while(True):
    clock.tick()                # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot()     # Take a picture and return the image.
    img.morph(kernel_size, kernel)
    # morph(size, kernel, mul=Auto, add=0) runs the convolution kernel over
    # the image; mul scales the result according to image contrast.
    img.binary(thresholds)
    # Erode pixels with less than 2 neighbors using a 3x3 image kernel.
    img.erode(1, threshold=2)
    # erode(size, threshold=Auto) removes stray pixels along the edges;
    # threshold is the minimum number of set neighbors a pixel needs to survive.
    print(clock.fps())          # Note: Your OpenMV Cam runs about half as fast while
                                # connected to your computer. The FPS should increase
                                # once disconnected.

The image after edge detection: result of img.erode(1, threshold=2).

04 — erode_and_dilate.py: erosion and dilation

# Erode and Dilate Example
#
# This example shows off the erode and dilate functions which you can run on
# a binary image to remove noise. This example was originally a test but it's
# useful for showing off how these functions work.

import pyb, sensor, image

grayscale_thres = (170, 255)
rgb565_thres = (70, 100, -128, 127, -128, 127)

sensor.reset()

while(True):
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.QQVGA)

    for i in range(20):
        img = sensor.snapshot()
        img.binary([grayscale_thres])
        img.erode(2)
        # erode(size, threshold=Auto) erodes the blob edges, removing stray
        # pixels; size sets the kernel radius.

    for i in range(20):
        img = sensor.snapshot()
        img.binary([grayscale_thres])
        img.dilate(2)

    sensor.set_pixformat(sensor.RGB565)

    for i in range(20):
        img = sensor.snapshot()
        img.binary([rgb565_thres])
        img.erode(2)

    for i in range(20):
        img = sensor.snapshot()
        img.binary([rgb565_thres])
        img.dilate(2)
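To see why the [-1, -1, -1, -1, +8, -1, -1, -1, -1] kernel responds to edges, here is a minimal pure-Python sketch of the same 3x3 convolution that you can run on a PC without the camera. The `convolve3x3` helper is hypothetical, written for illustration; it is not part of the OpenMV API, and it skips border pixels for simplicity.

```python
# Minimal sketch of the 3x3 high-pass convolution that img.morph() applies.
# convolve3x3 is a hypothetical helper for illustration, not an OpenMV API.

def convolve3x3(img, kernel):
    """Apply a 3x3 kernel to a grayscale image (list of rows), clamping the
    result to the 0..255 pixel range. Border pixels are left at 0."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky * 3 + kx] * img[y + ky - 1][x + kx - 1]
            out[y][x] = max(0, min(255, acc))  # clamp like an 8-bit pixel
    return out

kernel = [-1, -1, -1,
          -1, +8, -1,
          -1, -1, -1]

# On a flat region the weights sum to zero, so there is no response.
flat = [[100] * 5 for _ in range(5)]
# A vertical step from 0 to 200 produces a strong response at the boundary.
step = [[0, 0, 200, 200, 200] for _ in range(5)]

print(convolve3x3(flat, kernel)[2][2])  # 0: no edge in a flat area
print(convolve3x3(step, kernel)[2][2])  # 255: the step edge saturates the output
```

Because the eight -1 weights cancel the +8 center on constant regions, only pixels whose neighborhood changes (edges) survive, which is exactly what the binary/erode steps above then clean up.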
Mean filter

# Mean Filter Example
#
# This example shows off mean filtering. Mean filtering is your standard average
# filter in a NxN neighborhood. Mean filtering removes noise in the image by
# bluring everything. But, it's the fastest kernel filter operation.

import sensor, image, time

sensor.reset()                      # Initialize the camera sensor.
sensor.set_pixformat(sensor.RGB565) # or sensor.GRAYSCALE
sensor.set_framesize(sensor.QQVGA)  # or sensor.QVGA (or others)
sensor.skip_frames(10)              # Let new settings take effect.
clock = time.clock()                # Tracks FPS.

while(True):
    clock.tick()                    # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot()         # Take a picture and return the image.
    # The only argument is the kernel size. N corresponds to a ((N*2)+1)^2
    # kernel size. E.g. 1 == 3x3 kernel, 2 == 5x5 kernel, etc. Note: You
    # shouldn't ever need to use a value bigger than 2.
    img.mean(1)
    print(clock.fps())              # Note: Your OpenMV Cam runs about half as fast while
                                    # connected to your computer. The FPS should increase
                                    # once disconnected.

Sharpen filter

# Sharpen Filter Example
#
# This example demonstrates using morph to sharpen images.

import sensor, image, time

kernel_size = 1 # kernel width = (size*2)+1, kernel height = (size*2)+1
kernel = [-1, -1, -1,
          -1, +9, -1,
          -1, -1, -1]
# This is a sharpen filter kernel.

sensor.reset()                         # Initialize the camera sensor.
sensor.set_pixformat(sensor.GRAYSCALE) # or sensor.RGB565
sensor.set_framesize(sensor.QQVGA)     # or sensor.QVGA (or others)
sensor.skip_frames(10)                 # Let new settings take effect.
clock = time.clock()                   # Tracks FPS.

while(True):
    clock.tick()
    img = sensor.snapshot()
    # Run the kernel on every pixel of the image.
    img.morph(kernel_size, kernel)
    print(clock.fps())

05-Snapshot: snapshot.py — the goal of this example is to save a captured image with the save function.

# Snapshot Example
#
# Note: You will need an SD card to run this example.
# You can use your OpenMV Cam to save image files.

import sensor, image, pyb

RED_LED_PIN = 1
BLUE_LED_PIN = 3

sensor.reset()                      # Initialize the camera sensor.
sensor.set_pixformat(sensor.RGB565) # or sensor.GRAYSCALE
sensor.set_framesize(sensor.QVGA)   # or sensor.QQVGA (or others)
sensor.skip_frames(10)              # Let new settings take effect.

pyb.LED(RED_LED_PIN).on()
sensor.skip_frames(30)              # Give the user time to get ready.

pyb.LED(RED_LED_PIN).off()
pyb.LED(BLUE_LED_PIN).on()

# Save the captured image to the SD card.
print("You're on camera!")
sensor.snapshot().save("example.jpg") # or "example.bmp" (or others)

pyb.LED(BLUE_LED_PIN).off()
print("Done! Reset the camera to see the saved image.")

06 — gif: recording an animated GIF

# GIF Recording Example
#
# Note: You will need an SD card to run this example.
# You can use your OpenMV Cam to record gif files. You can either feed the
# recorder object RGB565 frames or Grayscale frames. Use photo editing software
# like GIMP to compress and optimize the Gif before uploading it to the web.

import sensor, image, time, gif, pyb

RED_LED_PIN = 1
BLUE_LED_PIN = 3

sensor.reset()                      # Initialize the camera sensor.
sensor.set_pixformat(sensor.RGB565) # or sensor.GRAYSCALE
sensor.set_framesize(sensor.QQVGA)  # or sensor.QVGA (or others)
sensor.skip_frames(10)              # Let new settings take effect.
clock = time.clock()                # Tracks FPS.

pyb.LED(RED_LED_PIN).on()
sensor.skip_frames(30)              # Give the user time to get ready.

pyb.LED(RED_LED_PIN).off()
pyb.LED(BLUE_LED_PIN).on()

g = gif.Gif("example.gif", loop=True)
# gif.Gif(filename, width=Auto, height=Auto, color=Auto, loop=True) creates a
# gif object for recording.

print("You're on camera!")
for i in range(100):
    clock.tick()
    # clock.avg() returns the milliseconds between frames - gif delay is in
    g.add_frame(sensor.snapshot(), delay=10) # centiseconds.
    # gif.add_frame(image, delay=10) appends a frame to the gif; delay=10
    # means one frame every 10 centiseconds.
    print(clock.fps())

g.close()
pyb.LED(BLUE_LED_PIN).off()
print("Done! Reset the camera to see the saved recording.")

06 — mjpeg: recording video

# MJPEG Recording Example
#
# Note: You will need an SD card to run this demo.
# You can use your OpenMV Cam to record mjpeg files. You can either feed the
# recorder object JPEG frames or RGB565/Grayscale frames. Once you've finished
# recording a Mjpeg file you can use VLC to play it. If you are on Ubuntu then
# the built-in video player will work too.

import sensor, image, time, mjpeg, pyb

RED_LED_PIN = 1
BLUE_LED_PIN = 3

sensor.reset()                      # Initialize the camera sensor.
sensor.set_pixformat(sensor.RGB565) # or sensor.GRAYSCALE
sensor.set_framesize(sensor.QVGA)   # or sensor.QQVGA (or others)
sensor.skip_frames(10)              # Let new settings take effect.
clock = time.clock()                # Tracks FPS.

pyb.LED(RED_LED_PIN).on()
sensor.skip_frames(30)              # Give the user time to get ready.

pyb.LED(RED_LED_PIN).off()
pyb.LED(BLUE_LED_PIN).on()

m = mjpeg.Mjpeg("example.mjpeg")
# mjpeg.Mjpeg(filename, width=Auto, height=Auto) creates an mjpeg object for
# recording.

print("You're on camera!")
for i in range(200):
    clock.tick()
    m.add_frame(sensor.snapshot())
    print(clock.fps())

m.close(clock.fps())
pyb.LED(BLUE_LED_PIN).off()
print("Done! Reset the camera to see the saved recording.")

Face detection

# Face Detection Example
#
# This example shows off the built-in face detection feature of the OpenMV Cam.
#
# Face detection works by using the Haar Cascade feature detector on an image. A
# Haar Cascade is a series of simple area contrast checks. For the built-in
# frontal face detector there are 25 stages of checks with each stage having
# hundreds of checks apiece. Haar Cascades run fast because later stages are
# only evaluated if previous stages pass. Additionally, your OpenMV Cam uses
# a data structure called the integral image to quickly execute each area
# contrast check in constant time (the reason for feature detection being
# grayscale only is because of the space requirment for the integral image).

import sensor, time, image

# Reset sensor
sensor.reset()

# Sensor settings
sensor.set_contrast(1)
sensor.set_gainceiling(16)
# HQVGA and GRAYSCALE are the best for face tracking.
sensor.set_framesize(sensor.HQVGA)
sensor.set_pixformat(sensor.GRAYSCALE)

# Load Haar Cascade
# By default this will use all stages, lower satges is faster but less accurate.
face_cascade = image.HaarCascade("frontalface", stages=25)
# image.HaarCascade(path, stages=Auto) loads a Haar model, for example the
# built-in "frontalface" face model or the "eye" eye model.
print(face_cascade)

# FPS clock
clock = time.clock()

while(True):
    clock.tick()

    # Capture snapshot
    img = sensor.snapshot()

    # Find objects.
    # Note: Lower scale factor scales-down the image more and detects smaller objects.
    # Higher threshold results in a higher detection rate, with more false positives.
    objects = img.find_features(face_cascade, threshold=0.75)

    # Draw objects
    for r in objects:
        img.draw_rectangle(r)

    # Print FPS.
    # Note: Actual FPS is higher, streaming the FB makes it slower.
    print(clock.fps())

08 — eye tracking: detect the face, then use the Haar operator to find the eyes within the face to track the eyes.

# Iris Detection 2 Example
#
# This example shows how to find the eye gaze (pupil detection) after finding
# the eyes in an image. This script uses the find_eyes function which determines
# the center point of roi that should contain a pupil. It does this by basically
# finding the center of the darkest area in the eye roi which is the pupil center.
#
# Note: This script does not detect a face first, use it with the telephoto lens.

import sensor, time, image

# Reset sensor
sensor.reset()

# Sensor settings
sensor.set_contrast(3)
sensor.set_gainceiling(16)

# Set resolution to VGA.
sensor.set_framesize(sensor.VGA)

# Bin/Crop image to 200x100, which gives more details with less data to process
sensor.set_windowing((220, 190, 200, 100))

sensor.set_pixformat(sensor.GRAYSCALE)

# Load Haar Cascade
# By default this will use all stages, lower satges is faster but less accurate.
eyes_cascade = image.HaarCascade("eye", stages=24)
print(eyes_cascade)

# FPS clock
clock = time.clock()

while(True):
    clock.tick()

    # Capture snapshot
    img = sensor.snapshot()

    # Find eyes!
    # Note: Lower scale factor scales-down the image more and detects smaller objects.
    # Higher threshold results in a higher detection rate, with more false positives.
    eyes = img.find_features(eyes_cascade, threshold=0.5, scale=1.5)

    # Find iris
    for e in eyes:
        iris = img.find_eye(e)  # find the pupil center within the eye ROI
        img.draw_rectangle(e)
        img.draw_cross(iris[0], iris[1])

    # Print FPS.
    # Note: Actual FPS is higher, streaming the FB makes it slower.
    print(clock.fps())

Canny edge detection — the detection quality is not as good as the morph-based edge detection shown earlier.

# Edge detection with Canny:
#
# This example demonstrates the Canny edge detector.

import sensor, image, time

sensor.reset()                         # Initialize the camera sensor.
sensor.set_pixformat(sensor.GRAYSCALE) # or sensor.RGB565
sensor.set_framesize(sensor.QQVGA)     # or sensor.QVGA (or others)
sensor.skip_frames(30)                 # Let new settings take effect.
clock = time.clock()                   # Tracks FPS.

while(True):
    clock.tick()                 # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot()      # Take a picture and return the image.
    # Use Canny edge detector
    img.find_edges(image.EDGE_CANNY, threshold=(50, 80))
    # Faster simpler edge detection:
    # img.find_edges(image.EDGE_SIMPLE, threshold=(100, 255))
    print(clock.fps())           # Note: Your OpenMV Cam runs about half as fast
                                 # while connected to your computer.

Keypoint tracking (FREAK)

# Freak Example
#
# This script shows off keypoint tracking by itself. Put an object in front of
# your OpenMV Cam without anything else in the image (i.e. camera should be
# facing a smooth wall) and the camera will learn the keypoints for and track
# whatever object is in the image. You can save keypoints to disk either via
# the OpenMV IDE or from in your script.
#
# Matching keypoints works by first extracting keypoints from an image. Once
# they are extracted then the OpenMV Cam compares the extracted keypoints against all
# the keypoints in an image. It tries to find the center matching point between
# the two sets of keypoints.
#
# Keep in mind that keypoint matching with just one training example isn't very
# robust. If you want professional quality results then stick with the
# professionally generated Haar Cascades like the frontal face or eye cascades.
# That said, if you're in a very controlled enviroment then keypoint tracking
# allows your OpenMV Cam to learn objects on the fly.
#
# If... you want really good keypoint matching results we suggest you gather
# keypoints from all faces of an object and with multiple rotations and scales.
# Comparing against all these sets of keypoints helps versus just one.
#
# NOTE: LOTS OF KEYPOINTS MAY CAUSE THE SYSTEM TO RUN OUT OF MEMORY!

import sensor, time, image

# Normalized keypoints are not rotation invariant.
NORMALIZED = False
# Keypoint extractor threshold, range from 0 to any number.
# This threshold is used when extracting keypoints, the lower
# the threshold the higher the number of keypoints extracted.
KEYPOINTS_THRESH = 32
# Keypoint-level threshold, range from 0 to 100.
# This threshold is used when matching two keypoint descriptors, it's the
# percentage of the distance between two descriptors to the max distance.
# In other words, the minimum matching percentage between 2 keypoints.
MATCHING_THRESH = 80

# Reset sensor
sensor.reset()

# Sensor settings
sensor.set_contrast(1)
sensor.set_gainceiling(16)
sensor.set_framesize(sensor.QQVGA)
sensor.set_pixformat(sensor.GRAYSCALE)

# Skip a few frames to allow the sensor settle down
# Note: This takes more time when exec from the IDE.
for i in range(0, 30):
    img = sensor.snapshot()
    img.draw_string(0, 0, "Please wait...")

kpts1 = None
# kpts1 stores the target object's features. You can also load the features
# from a file -- uncomment to load keypoints from file:
# kpts1 = image.load_descriptor(image.FREAK, "/desc.freak")

clock = time.clock()

while(True):
    clock.tick()
    img = sensor.snapshot()
    kpts2 = img.find_keypoints(threshold=KEYPOINTS_THRESH, normalized=NORMALIZED)
    if kpts1 == None:
        kpts1 = kpts2   # first frame: remember the object's keypoints
    elif kpts2:
        c = image.match_descriptor(image.FREAK, kpts1, kpts2,
                                   threshold=MATCHING_THRESH)
        # match_descriptor(type, descriptor0, descriptor1, threshold=60)

        # C[2] contains the percentage of matching keypoints.
        # If more than 25% of the keypoints match, draw stuff.
        if (c[2] > 25):
            img.draw_string(0, 10, "Match %d%%" % (c[2]))

    # Draw FPS
    img.draw_string(0, 0, "FPS: %.2f" % (clock.fps()))

09 — line detection: straight lines

This example uses the Hough transform for straight-line recognition; functions for recognizing other shapes with the Hough transform should be added in later firmware.

# Canny Edge and Hough Transform Example
#
# This example demonstrates using the Canny edge detector
# and the Hough transform to find straight lines in an image.

import sensor, image, time

sensor.reset()                         # Initialize the camera sensor.
sensor.set_pixformat(sensor.GRAYSCALE) # or sensor.RGB565
sensor.set_framesize(sensor.QQVGA)     # or sensor.QVGA (or others)
clock = time.clock()                   # Tracks FPS.

while(True):
    clock.tick()                # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot()     # Take a picture and return the image.
    img.find_edges(image.EDGE_CANNY, threshold=(50, 80)) # Find edges
    lines = img.find_lines(threshold=50)                 # Find lines.
    for l in lines:
        img.draw_line(l, color=(127)) # Draw lines
    print(clock.fps())

Optical flow

# Optical Flow Example
#
# Your OpenMV Cam can use optical flow to determine the displacement between
# two images. This allows your OpenMV Cam to track movement like how your laser
# mouse tracks movement. By tracking the difference between successive images
# you can determine instaneous displacement with your OpenMV Cam too!

import sensor, image, time

sensor.reset()                         # Initialize the camera sensor.
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.B64x32)    # or B40x30 or B64x64
clock = time.clock()                   # Tracks FPS.

# NOTE: The find_displacement function works by taking the 2D FFTs of the old
# and new images and comparing them using phase correlation. Your OpenMV Cam
# only has enough memory to work on two 64x64 FFTs (or 128x32, 32x128, or etc).

old = sensor.snapshot()  # grab an initial frame first

while(True):
    clock.tick()         # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot()
    [delta_x, delta_y, response] = old.find_displacement(img)
    # delta_x and delta_y are the number of pixels this frame has moved,
    # relative to the previous frame, in the x and y directions.
    old = img.copy()
    print("%0.1f X\t%0.1f Y\t%0.2f QoR\t%0.2f FPS" % \
          (delta_x, delta_y, response, clock.fps()))

09 — QR code recognition

# QRCode Example
#
# This example shows the power of the OpenMV Cam to detect QR Codes
# without needing lens correction.

import sensor, image, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.VGA)
sensor.set_windowing((240, 240)) # look at center 240x240 pixels of the VGA resolution.
sensor.skip_frames(10)
sensor.set_auto_gain(False)      # must turn this off to prevent image washout.
clock = time.clock()

while(True):
    clock.tick()
    img = sensor.snapshot()
    for code in img.find_qrcodes():
        print(code)
    print(clock.fps())

09-Feature_detection: template_matching.py — the goal of this example is template matching.
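The template-matching listing itself is not reproduced in this document, so the following is a rough sketch built on the OpenMV `find_template` API. The template path `/template.pgm`, the 0.70 threshold, and the `clamp_roi` helper are illustrative assumptions, not the document's exact values; the import guard lets the pure helper be exercised off-board.

```python
# Rough sketch of an OpenMV template-matching loop (illustrative values, not
# the document's exact listing).

def clamp_roi(x, y, w, h, img_w, img_h):
    """Clamp a search ROI to the image bounds (pure helper, testable off-board)."""
    x = max(0, min(x, img_w - 1))
    y = max(0, min(y, img_h - 1))
    w = min(w, img_w - x)
    h = min(h, img_h - y)
    return (x, y, w, h)

try:
    import sensor, image, time
    from image import SEARCH_EX  # exhaustive search
    HAVE_OPENMV = True
except ImportError:
    HAVE_OPENMV = False  # running off-board: skip the camera loop below

if HAVE_OPENMV:
    sensor.reset()
    sensor.set_contrast(1)
    sensor.set_gainceiling(16)
    sensor.set_framesize(sensor.QQVGA)      # template matching works on grayscale
    sensor.set_pixformat(sensor.GRAYSCALE)

    # Load the template saved on the SD card (requires firmware 1.6+).
    template = image.Image("/template.pgm")

    clock = time.clock()
    while(True):
        clock.tick()
        img = sensor.snapshot()
        # find_template returns the matching bounding box, or None if no match.
        r = img.find_template(template, 0.70, step=4, search=SEARCH_EX,
                              roi=clamp_roi(10, 0, 140, 120,
                                            img.width(), img.height()))
        if r:
            img.draw_rectangle(r)
        print(clock.fps())
```

Restricting the `roi` to the region where the target can actually appear speeds up the exhaustive search considerably, since `find_template` slides the template over every `step`-th position inside it.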

Insert the SD card before performing the steps below (note: insert the SD card first, then power on). Template matching also requires firmware version 1.6 or later; otherwise the script stops at runtime with a "cannot find ..." error. To grab a template image, you can first run the helloworld.py example so the frame buffer displays the image, then save part of it as the template.

10-Color_Tracking: blob_detection — multi-color recognition

# Blob Detection Example
#
# This example shows off how to use the find_blobs function to find color
# blobs in the image. This example in particular looks for dark green objects.

import sensor, image, time

# For color tracking to work really well you should ideally be in a very, very,
# very, controlled enviroment where the lighting is constant...
green_threshold = (0, 80, -70, -10, -0, 30)
# Green threshold: the six numbers are the LAB min/max pairs
# (minL, maxL, minA, maxA, minB, maxB); pick the LAB values from the three
# histogram plots on the left side of the IDE. For a grayscale image only the
# two numbers (min, max) are needed.
red_threshold = (15, 50, 40, 80, 20, 60)
# You may need to tweak the above settings for tracking green and red things...
# Select an area in the Framebuffer to copy the color settings.

sensor.reset()                       # Initialize the camera sensor.
sensor.set_pixformat(sensor.RGB565)  # use RGB565.
sensor.set_framesize(sensor.QQVGA)   # use QQVGA for speed.
sensor.skip_frames(10)               # Let new settings take affect.
sensor.set_auto_whitebal(False)      # turn this off.
clock = time.clock()                 # Tracks FPS.

while(True):
    clock.tick()
    img = sensor.snapshot()

    green_blobs = img.find_blobs([green_threshold])
    # find_blobs(thresholds, invert=False, roi=Auto): thresholds is the list of
    # color thresholds; invert reverses the thresholds and defaults to False
    # (no inversion); roi sets the region of the image that is searched, given
    # as a tuple (x, y, w, h).
    if green_blobs:
        for b in green_blobs:
            # Draw a rect around the blob.
            img.draw_rectangle(b[0:4], color=(255, 255, 255)) # rect
            img.draw_cross(b[5], b[6]) # cx, cy

    red_blobs = img.find_blobs([red_threshold])
    if red_blobs:
        for b in red_blobs:
            # Draw a rect around the blob.
            img.draw_rectangle(b[0:4], color=(0, 0, 0)) # rect
            img.draw_cross(b[5], b[6]) # cx, cy

    print(clock.fps()) # Note: Your OpenMV Cam runs about half as fast while
                       # connected to your computer. The FPS should increase
                       # once disconnected.

10-Color_Tracking: blob_detection — color recognition

This version uses find_blobs to recognize a single color and prints the blob centroid; OpenMV can also recognize several colors at the same time.

# Blob Detection Example
#
# This example shows off how to use the find_blobs function to find color
# blobs in the image. This example in particular looks for dark green objects.

import sensor, image, time

# For color tracking to work really well you should ideally be in a very, very,
# very, controlled enviroment where the lighting is constant...
green_threshold = (0, 80, -70, -10, -0, 30)
# Green threshold: the six numbers are the LAB min/max pairs
# (minL, maxL, minA, maxA, minB, maxB), picked from the three histogram plots
# on the left side of the IDE. A grayscale image needs only (min, max).
# You may need to tweak the above settings for tracking green things...
# Select an area in the Framebuffer to copy the color settings.

sensor.reset()                       # Initialize the camera sensor.
sensor.set_pixformat(sensor.RGB565)  # use RGB565.
sensor.set_framesize(sensor.QQVGA)   # use QQVGA for speed.
sensor.skip_frames(10)               # Let new settings take affect.
sensor.set_auto_whitebal(False)      # turn this off.
clock = time.clock()                 # Tracks FPS.

while(True):
    clock.tick()
    img = sensor.snapshot()

    blobs = img.find_blobs([green_threshold])
    # find_blobs(thresholds, invert=False, roi=Auto): thresholds is the color
    # threshold list; invert reverses the thresholds (default False, no
    # inversion); roi is the searched region, a tuple (x, y, w, h).
    if blobs:
        for b in blobs:
            # Draw a rect around the blob.
            img.draw_rectangle(b[0:4]) # rect
            img.draw_cross(b[5], b[6]) # cx, cy
            print(b[5], b[6])

    print(clock.fps()) # Note: Your OpenMV Cam runs about half as fast while
                       # connected to your computer. The FPS should increase
                       # once disconnected.

10-Color_Tracking: line_following.py — robot line following

# Black Grayscale Line Following Example
#
# Making a line following robot requires a lot of effort. This example script
# shows how to do the machine vision part of the line following robot. You
# can use the output from this script to drive a differential drive robot to
# follow a line. This script just generates a single turn value that tells
# your robot to go left or right.
#
# For this script to work properly you should point the camera at a line at a
# 45 or so degree angle.
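The "single turn value" idea above can be sketched without the camera: weight the line-blob centroid found in each horizontal band of the image, then convert the weighted average into a left/right deflection. The ROI layout, band weights, and pixel-to-angle mapping below are illustrative assumptions, not the script's exact values.

```python
# Sketch of the single-turn-value computation for line following.
# Band positions, weights, and the angle mapping are illustrative.
import math

# (x, y, w, h, weight) per horizontal band; bottom bands get more weight
# because they matter most for immediate steering.
ROIS = [(0,   0, 160, 20, 0.1),  # top band
        (0,  50, 160, 20, 0.3),  # middle band
        (0, 100, 160, 20, 0.7)]  # bottom band

def turn_value(centroids_x, img_width=160):
    """centroids_x: line-blob centroid x (or None) for each ROI, top to bottom.
    Returns a deflection in degrees; positive means the line is right of center."""
    num = 0.0
    den = 0.0
    for cx, roi in zip(centroids_x, ROIS):
        if cx is None:          # no line blob found in this band
            continue
        num += cx * roi[4]      # weight the centroid by the band's weight
        den += roi[4]
    if den == 0:
        return 0.0              # no line anywhere: steer straight
    center_pos = num / den      # weighted x position of the line
    # Map the pixel offset from the image center to a steering angle.
    return math.degrees(math.atan((center_pos - img_width / 2) / (img_width / 2)))

print(turn_value([80, 80, 80]))      # 0.0 -> line centered, go straight
print(turn_value([120, None, 120]))  # positive -> steer right
```

On the robot, this value would come from running find_blobs with a black-line threshold inside each ROI every frame and feeding the result to the differential-drive controller.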
