Salient Simplification

There is a large body of literature on mesh simplification using diverse error metrics and simplification operators [Luebke et al. 2003]. Some simplification methods use estimates of mesh curvature to guide the process and improve geometric fidelity for a given triangle budget [Turk 1992; Kim et al. 2002]. Others, such as Watanabe and Belyaev [2001], treat curvature extrema as salient features and preserve them better during simplification. To evaluate the effectiveness of our saliency method, we have modified the quadric-based simplification of Garland and Heckbert [1997] by weighting the quadrics with mesh saliency; it should be just as simple to combine saliency with any other mesh simplification scheme. Garland and Heckbert's method contracts vertex pairs in order of increasing quadric error, where the error of a vertex v with respect to a plane p is computed as vᵀQpv and the quadric Q of v is the sum of the quadrics of all its neighboring planes: Q = Σ_{p∈P} Qp. After the quadrics of all vertices are computed, the optimal contraction point v̄ for each vertex pair (vi, vj) is the one that minimizes the quadric error v̄ᵀ(Qi + Qj)v̄, and the quadric of the contracted vertex is the sum Qi + Qj.

We use the saliency-derived weight map W to determine the order of simplification contractions, so that salient regions are preserved longer. Specifically, we define a saliency amplification operator A with a threshold and an amplification parameter λ: saliency values greater than or equal to the threshold are amplified by a factor of λ. For all the saliency-based simplification results in this paper, we use λ = 100 with the threshold at the 30th percentile of the saliency values. When the quadric Q of a vertex v is initially computed, it is multiplied by the simplification weight W(v): Q ← W(v)Q. Analogous to the quadric update after a pair contraction, the simplification weight of the new vertex v̄ is the sum of the two vertex weights, W(vi) + W(vj). Even when we use saliency directly as the weight without the amplification factor (i.e., λ = 1), salient regions are better preserved; amplifying high-saliency regions lets the simplification concentrate further on the selected regions of interest. We smooth the amplified saliency with a Gaussian filter at scale σ3, i.e. W = A(G(S, σ3)); Figure 8 shows this process. The saliency map is computed only once and reused throughout the simplification.

Salient Viewpoint Selection

Much previous work has addressed the problem of automatically finding good viewpoints for viewing an object. Kamada and Kawai [1988] proposed an early method for computing general, non-degenerate viewing positions for three-dimensional objects. Stoev and Straßer [2002] consider an approach better suited to viewing terrains, where most surface normals are similar, and also maximize the visible depth. In the context of computer vision, Weinshall and Werman [1997] studied view likelihood and stability; visibility (and occlusion) is one of the factors that influence the selection of preferred views. Gooch et al. [2001] draw on artistic inspiration, together with factors identified by Blanz et al. [1999], to automatically compute initial viewpoints for three-dimensional objects.

We define the visible saliency U(v) of a viewpoint v as the sum of the saliency of the vertices visible from v, and select the viewpoint v_max = argmax_v U(v), as shown in Figure 10. Evaluating U exhaustively, however, is computationally expensive, so we use gradient descent with the longitude and latitude (θ, φ) as the optimization variables and the visible saliency U(θ, φ) as the objective function, starting from randomly selected initial points to approach the global maximum. Figure 11 shows the result of this method for a Stanford model.

Figure 11: Image (a) shows the viewpoint selected by maximizing visible saliency, and image (d) shows the viewpoint selected by maximizing mean curvature.

Results and Discussion

Figure 9 shows results for the models. Note that our saliency-based methods preserve and emphasize features, such as the ears, better than previous approaches, both in simplification (Figure 7) and in viewpoint selection (Figure 11). Table 1 lists the running times of the saliency computation on regular meshes on a PC with a 3.0 GHz processor and 2 GB of memory.

Conclusions and Future Work

Acknowledgments

We thank the reviewers for the time and effort they spent in significantly improving the quality of this paper, and we also thank Youngmin Kim. This work was supported by NSF grants IIS00-81847, ITR03-25867, CCF04-29753, and CNS04-03313.

References

[1] AL-REGIB, G., ALTUNBASAK, Y., AND ROSSIGNAC, J. 2005. Error-resilient transmission of 3D models. ACM Transactions on Graphics 24, 2, 182-208.
[2] BLANZ, V., TARR, M. J., AND BÜLTHOFF, H. H. 1999. What object attributes determine canonical views? Perception 28, 5, 575-599.
[3] CHEN, L., XIE, X., FAN, X., MA, W., ZHANG, H., AND ZHOU, H. 2003. A visual attention model for adapting images on small displays. ACM Multimedia Systems Journal 9, 4, 353-364.
[4] DECARLO, D., AND SANTELLA, A. 2002. Stylization and abstraction of photographs. ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2002) 21, 3, 769-776.
[5] FLEISHMAN, S., COHEN-OR, D., AND LISCHINSKI, D. 1999. Automatic camera placement for image-based modeling. In Proceedings of the 7th Pacific Conference on Computer Graphics and Applications (PG 1999), 12-20.
[6] FRINTROP, S., NÜCHTER, A., SURMANN, H., AND HERTZBERG, J. 2004. Visual attention for object recognition in spatial 3D data. In 2nd International Workshop on Attention and Performance in Computational Vision (WAPCV 2004), 75-82.
[7] GARLAND, M., AND HECKBERT, P. 1997. Surface simplification using quadric error metrics. In Proceedings of ACM SIGGRAPH, 209-216.
[8] GOOCH, B., REINHARD, E., MOULDING, C., AND SHIRLEY, P. 2001. Artistic composition for image creation. In Proceedings of Eurographics Workshop on Rendering Techniques, 83-88.
[9] GUY, G., AND MEDIONI, G. 1996. Inferring global perceptual contours from local features. International Journal of Computer Vision 20, 1-2, 113-133.
[10] HECKBERT, P., AND GARLAND, M. 1999. Optimal triangulation and quadric-based surface simplification. Computational Geometry 14, 49-65.
[11] HISADA, M., BELYAEV, A. G., AND KUNII, T. L. 2002. A skeleton-based approach for detection of perceptually salient features on polygonal surfaces. Computer Graphics Forum 21, 4.
[12] HOPPE, H. 1996. Progressive meshes. In Proceedings of ACM SIGGRAPH, 99-108.
[13] HOWLETT, S., HAMILL, J., AND O'SULLIVAN, C. 2004. An experimental approach to predicting saliency for simplified polygonal models. In Proceedings of the 1st Symposium on Applied Perception in Graphics and Visualization, 57-64.
[14] ITTI, L., KOCH, C., AND NIEBUR, E. 1998. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 20, 11, 1254-1259.
[15] KAMADA, T., AND KAWAI, S. 1988. A simple method for computing general position in displaying three-dimensional objects. Computer Vision, Graphics, and Image Processing 41, 1.
[16] KARNI, Z., AND GOTSMAN, C. 2000. Spectral compression of mesh geometry. In Proceedings of ACM SIGGRAPH, 279-286.
[17] KATZ, S., AND TAL, A. 2003. Hierarchical mesh decomposition using fuzzy clustering and cuts. ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2003) 22, 3.
[18] KIM, S.-J., KIM, S.-K., AND KIM, C.-H. 2002. Discrete differential error metric for surface simplification. In Proceedings of 10th Pacific Conference on Computer Graphics and Applications (PG 2002), 276-283.
[19] KOBBELT, L., CAMPAGNA, S., VORSATZ, J., AND SEIDEL, H.-P. 1998. Interactive multi-resolution modeling on arbitrary meshes. In Proceedings of ACM SIGGRAPH, 105-114.
[20] KOCH, C., AND ULLMAN, S. 1985. Shifts in selective visual attention: towards the underlying neural circuitry. Human Neurobiology 4, 219-227.
[21] LEE, C. H., HAO, X., AND VARSHNEY, A. 2004. Light collages: Lighting design for effective visualization. In Proceedings of IEEE Visualization, 281-288.
[22] LUEBKE, D., AND HALLEN, B. 2001. Perceptually driven simplification for interactive rendering. In Proceedings of Eurographics Workshop on Rendering Techniques, 223-234.
[23] LUEBKE, D., REDDY, M., COHEN, J., VARSHNEY, A., WATSON, B., AND HUEBNER, R. 2003. Level of Detail for 3D Graphics. Morgan Kaufmann.
[24] MANTIUK, R., MYSZKOWSKI, K., AND PATTANAIK, S. 2003. Attention guided MPEG compression for computer animations. In Proceedings of the 19th Spring Conference on Computer Graphics, 239-244.
[25] MEDIONI, G., AND GUY, G. 1997. Inference of surfaces, curves and junctions from sparse, noisy 3-D data. IEEE Transactions on Pattern Analysis and Machine Intelligence 19, 11, 1265-1277.
[26] MEYER, M., DESBRUN, M., SCHRÖDER, P., AND BARR, A. 2003. Discrete differential-geometry operators for triangulated 2-manifolds. In Visualization and Mathematics III (Proceedings of VisMath 2002), Springer-Verlag, Berlin, 35-57.
[27] MILANESE, R., WECHSLER, H., GIL, S., BOST, J., AND PUN, T. 1994. Integration of bottom-up and top-down cues for visual attention using non-linear relaxation. In Proceedings of IEEE Computer Vision and Pattern Recognition, 781-785.
[28] PRIVITERA, C., AND STARK, L. 1999. Focused JPEG encoding based upon automatic preidentified regions of interest. In Proceedings of SPIE, Human Vision and Electronic Imaging IV, 552-558.
[29] REDDY, M. 2001. Perceptually optimized 3D graphics. IEEE Computer Graphics and Applications 21, 5, 68-75.
[30] ROSENHOLTZ, R. 1999. A simple saliency model predicts a number of motion popout phenomena. Vision Research 39, 19, 3157-3163.
[31] SHASHUA, A., AND ULLMAN, S. 1988. Structural saliency: The detection of globally salient structures using a locally connected network. In Proceedings of IEEE International Conference on Computer Vision, 321-327.
[32] STOEV, S., AND STRASSER, W. 2002. A case study on automatic camera placement and motion for visualizing historical data. In Proceedings of IEEE Visualization, 545-548.
[33] SUH, B., LING, H., BEDERSON, B. B., AND JACOBS, D. W. 2003. Automatic thumbnail cropping and its effectiveness. CHI Letters (UIST 2003) 5, 2.
[34] TAUBIN, G. 1995. Estimating the tensor of curvature of a surface from a polyhedral approximation. In Proceedings of IEEE International Conference on Computer Vision, 902-907.
[35] TSOTSOS, J. K., CULHANE, S. M., WAI, W. Y. K., LAI, Y., DAVIS, N., AND NUFLO, F. 1995. Modeling visual attention via selective tuning. Artificial Intelligence 78, 1-2, 507-545.
[36] TURK, G. 1992. Re-tiling polygonal surfaces. In Proceedings of ACM SIGGRAPH, 55-64.
[37] VÁZQUEZ, P.-P., FEIXAS, M., SBERT, M., AND LLOBET, A. 2002. Viewpoint entropy: a new tool for obtaining good views of molecules. In Proceedings of the Symposium on Data Visualisation (VISSYM 2002), 183-188.
[38] WATANABE, K., AND BELYAEV, A. G. 2001. Detection of salient curvature features on polygonal surfaces. Computer Graphics Forum (Eurographics 2001) 20, 3, 385-392.
[39] WATSON, B., WALKER, N., AND HODGES, L. F. 2004. Supra-threshold control of peripheral LOD. ACM Transactions on Graphics 23, 3, 750-759.
[40] WEINSHALL, D., AND WERMAN, M. 1997. On view likelihood and stability. IEEE Transactions on Pattern Analysis and Machine Intelligence 19, 2, 97-108.
[41] YEE, H., PATTANAIK, S., AND GREENBERG, D. P. 2001. Spatiotemporal sensitivity and visual attention for efficient rendering of dynamic environments. ACM Transactions on Graphics 20, 1, 39-65.

Mesh Saliency

Chang Ha Lee    Amitabh Varshney    David W. Jacobs
Department of Computer Science, University of Maryland

Abstract

Research over the last decade has built a solid mathematical foundation for representation and analysis of 3D meshes in graphics and geometric modeling. Much of this work, however, does not incorporate models of low-level human visual attention. In this paper we introduce the idea of mesh saliency as a measure of regional importance for graphics meshes. Our notion of saliency is inspired by low-level human visual system cues. We define mesh saliency in a scale-dependent manner using a center-surround operator on Gaussian-weighted mean curvatures. We observe that such a definition of mesh saliency is able to capture what most would classify as visually interesting regions on a mesh. The human-perception-inspired importance measure computed by our mesh saliency operator results in more visually pleasing results in processing and viewing of 3D meshes, compared to using a purely geometric measure of shape, such as curvature. We discuss how mesh saliency can be incorporated in graphics applications such as mesh simplification and viewpoint selection and present examples that show visually appealing results from using mesh saliency.

CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling; I.3.m [Computer Graphics]: Perception; I.3.m [Computer Graphics]: Applications

Keywords: saliency, visual attention, perception, simplification, viewpoint selection

Introduction

We have witnessed significant advances in the theory and practice of 3D graphics meshes over the last decade. These advances include efficient and progressive representation [Hoppe 1996; Karni and Gotsman 2000], analysis [Taubin 1995; Kobbelt et al. 1998; Meyer et al. 2003], transmission [Al-Regib et al. 2005], and rendering [Luebke et al. 2003] of very large meshes. Much of this work has focussed on using mathematical measures of shape, such as curvature. The rapid growth in the number and quality of graphics meshes, and their ubiquitous use in a large number of human-centered visual computing applications, suggest the need for incorporating insights from human perception into mesh processing. Although excellent work has been done in incorporating principles of perception in managing level of detail for rendering meshes [Luebke and Hallen 2001; Reddy 2001; Watson et al. 2004], there has been less attention paid to the use of perception-inspired metrics for processing of meshes.

© 2005 ACM 0730-0301/05/0700-0659 $5.00

Figure 1: Mesh Saliency: Image (a) shows the Stanford Armadillo model, and image (b) shows its mesh saliency.

Our goal in this paper is to bring perception-based metrics to bear on the problem of processing and viewing 3D meshes. Purely geometric measures of shape such as curvature have a rich history of use in the mesh processing literature. For instance, Heckbert and Garland [1999] show that their quadric error metric is directly related to the surface curvature. Mesh simplifications resulting from minimizing the quadric errors result in provably optimum aspect ratio of triangles in the L2 norm, as the triangle areas approach zero. However, a purely curvature-based metric may not necessarily be a good metric of perceptual importance. For example, a high-curvature spike in the middle of a largely flat region will likely be perceived to be important. However, it is also likely that a flat region in the middle of densely repeated high-curvature bumps will be perceived to be important as well. Repeated patterns, even if high in curvature, are visually monotonous. It is the unusual or unexpected that delights and interests. As an example, the textured region with repeated bumps in the leg of the Armadillo shown in Figure 2(a) is arguably visually less interesting than an isolated but smooth feature such as its knee (Figure 2(c)).

In this paper, we introduce the concept of mesh saliency, a measure of regional importance, for 3D meshes, and present a method to compute it. Our method to compute mesh saliency uses a center-surround mechanism. We use the center-surround mechanism because it has the intuitive appeal of being able to identify regions that are different from their surrounding context. We are also encouraged by the success of these mechanisms on 2D problems. We expect a good model of saliency to operate at multiple scales, since what is interesting at one scale need not remain so at a different scale. A good saliency map should capture the interesting features at all perceptually meaningful scales. Figure 3(a) shows a saliency map at a fine scale where small features such as the nose and mouth have high saliency, while a saliency map at a larger scale (Figure 3(b)) shows the eye to have a higher saliency. We use these observations to define a multi-scale model of mesh saliency using the center-surround mechanism in Section 3. A number of tasks in graphics can benefit from a computational model of mesh saliency.

Figure 2: Curvature alone is inadequate for assessing saliency since it does not adequately consider the local context. Image (a) shows a part of the right leg of the Stanford Armadillo model. Image (b) visualizes the magnitude of mean curvatures and (c) shows our saliency values. While (b) captures repeated textures and fails to capture the knee, (c) successfully highlights the knee.

In this paper we explore the application of mesh saliency to mesh simplification and view selection in Sections 4 and 5.

Saliency Computation: There can be a number of definitions of saliency for meshes. We outline one such method for graphics meshes based on the Gaussian-weighted center-surround evaluation of surface curvatures. Our method has given us very promising results on several 3D meshes.

Salient Simplification: We discuss how traditional mesh simplification methods can be modified to accommodate saliency in the simplification process. Our results show that saliency-guided simplification can easily preserve visually salient regions in meshes that conventional simplification methods typically do not.

Salient Viewpoint Selection: As databases of 3D models evolve to very large collections, it becomes important to automatically select viewpoints that capture the most salient attributes of objects. We present a saliency-guided method for viewpoint selection that maximizes visible saliency.

We foresee the computation and use of mesh saliency as an increasingly important area in 3D graphics. As we engage in image synthesis and analysis for ever larger graphics datasets, and as the gap between processing capabilities and memory-access times grows ever wider, the need for prioritizing and selectively processing graphics datasets will increase. Saliency can provide an effective tool to help achieve this.

Related Work

Low-level cues influence where in an image people will look and pay attention. Many computational models of this have been proposed. Koch and Ullman's [1985] early model suggested that salient image locations will be distinct from their surroundings. Our approach is explicitly based on the model of Itti et al. [1998]. They combine information from center-surround mechanisms applied to different feature maps, computed at different scales, to compute a saliency map that assigns a saliency value to each image pixel. Tsotsos et al. [1995], Milanese et al. [1994], Rosenholtz [1999], and many others describe other interesting saliency models. Among their many applications, 2D saliency maps have been applied to selectively compress [Privitera and Stark 1999] or shrink [Chen et al. 2003; Suh et al. 2003] images. DeCarlo and Santella [2002] use saliency determined from a person's eye movements to simplify an image, producing a non-photorealistic, painterly rendering.

Figure 3: Saliency is relative to the scale. Image (a) shows the saliency map of the Cyberware Dinosaur head at a small scale, and image (b) shows the map of its saliency at a larger scale. In image (a), the small-scale saliency highlights the small features such as the nose and mouth, and in image (b), the large-scale saliency identifies a larger feature such as the eye.

More recently, saliency algorithms have been applied to views of 3D models. Yee et al. [2001] use Itti et al.'s algorithm to compute a saliency map of a coarsely rendered 2D projection of a 3D dynamic scene. They use this to help decide where to focus computational resources in producing a more accurate rendering. Mantiuk et al. [2003] use a real-time, 2D saliency algorithm to guide MPEG compression of an animation of a 3D scene. Frintrop et al. [2004] use a saliency map to speed up the detection of objects in 3D data. They combine saliency maps computed from 2D images representing scene depth and intensities. Howlett et al. [2004] demonstrate the potential value of saliency for the simplification of 3D models. Their work captures saliency by using an eye-tracker to record where a person has looked at a 2D image of a 3D model.

These prior works determine saliency for a 3D model by finding saliency in its 2D projection. There is little work that determines saliency directly from 3D structure. Guy and Medioni [1996] proposed a method for computing a saliency map for edges in a 2D image (such edge-based saliency maps were previously explored by Shashua and Ullman [1988]). In [Medioni and Guy 1997] they extend this approach to 3D; their method, however, is mainly designed to smoothly interpolate sparse, noisy 3D data to find surfaces. They do not compute an analog to the saliency map for a 3D object. Watanabe and Belyaev [2001] have proposed a method to identify regions in meshes where principal curvatures have locally maximal values along one of the principal directions (typically along ridges and ravines). Hisada et al. [2002] have proposed a method to detect salient ridges and ravines by computing the 3D skeleton and finding non-manifold points on the skeletal edges and associated surface points.

Mesh Saliency

Itti et al. [1998]'s method is one of the most effective techniques for computing saliency for 2D images. Our method for computing saliency for 3D meshes uses their center-surround operation. Unlike images, where color is the most important attribute, we consider geometry of meshes to be the most important contributor to saliency. At present our method for mesh saliency uses only geometry, but it should be easy to incorporate other surface appearance attributes into it as well. There are several possible characteristics of mesh geometry that could be used for saliency. Before we decide on one, let us compare the desiderata of saliency in a 2D image with the saliency of a 3D object. In an image, the key image property whose variations are critical is the intensity; zero saliency in an image corresponds to a region of constant intensity. In an image, intensity is a function of shape and lighting. For 3D objects however, we have the opportunity to determine the saliency based on shape, independent of lighting. For 3D objects, we feel that a sphere is the canonical zero-saliency feature. This is in spite of the fact that, depending on the lighting, a sphere may not produce a uniform intensity image. In the case of the sphere the property that is invariant is the curvature. Therefore we are guided by the intuition that it is changes in the curvature that lead to saliency or non-saliency. This has led us to formulate mesh saliency in terms of the mean curvature used with the center-surround mechanism. Figure 4 gives an overview of our saliency computation.

The first step of our saliency computation involves computing surface curvatures. There are a number of excellent approaches that generalize differential-geometry-based definitions of curvatures to discrete meshes [Taubin 1995; Meyer et al. 2003]. One can use any of these to compute the curvature of a mesh at a vertex v. Let the curvature map C define a map from each vertex of a mesh to its mean curvature, i.e. let C(v) denote the mean curvature of vertex v. We use Taubin [1995]'s method for curvature computation. Let the neighborhood N(v, σ) for a vertex v be the set of points within a distance σ. One can consider several distance functions to define the neighborhood, such as the geodesic or the Euclidean distance. We have tried both and found that the Euclidean distance gave us better results, and that is what we use here. Thus, N(v, σ) = {x : ‖x − v‖ < σ, x is a mesh point}. Let G(C(v), σ) denote the Gaussian-weighted average of the mean curvature. We compute this as:

$$G(C(v),\sigma)=\frac{\sum_{x\in N(v,2\sigma)} C(x)\,\exp\left[-\|x-v\|^{2}/(2\sigma^{2})\right]}{\sum_{x\in N(v,2\sigma)} \exp\left[-\|x-v\|^{2}/(2\sigma^{2})\right]}$$

Note that with the above formulation, we are assuming a cut-off for the Gaussian filter at a distance 2σ. We compute the saliency S(v) of a vertex v as the absolute difference between the Gaussian-weighted averages computed at a fine and a coarse scale; we use the standard deviation for the coarse scale as twice that of the fine scale:

$$S(v)=\left|\,G(C(v),\sigma)-G(C(v),2\sigma)\,\right|$$

Figure 4: Mesh Saliency Computation: We first compute mean curvature at mesh vertices. For each vertex, saliency is computed as the difference between mean curvatures filtered with a narrow and a broad Gaussian. For each Gaussian, we compute the Gaussian-weighted average of the curvatures of vertices within a radius 2σ, where σ is the Gaussian's standard deviation. We compute saliency at different scales by varying σ. The final saliency is the aggregate of the saliency at all scales with a non-linear normalization.

Figure 5: Images (a)-(e) show the saliency at scales of 2ε, 3ε, 4ε, 5ε, and 6ε. Image (f) shows the final mesh saliency after aggregating the saliency over multiple scales. Here, ε is 0.3% of the length of the diagonal of the bounding box of the model.

We define the saliency of a vertex v at a scale level i as Si(v):

$$S_{i}(v)=\left|\,G(C(v),\sigma_{i})-G(C(v),2\sigma_{i})\,\right|$$

where σi is the standard deviation of the Gaussian filter at scale i. For all the results in this paper we have used five scales σi ∈ {2ε, 3ε, 4ε, 5ε, 6ε}, where ε is 0.3% of the length of the diagonal of the bounding box of the model.

Figure 6: We show mesh saliency for the Cyberware Dinosaur model (a) in figure (c) and for the Cyberware Isis model (b) in figure (d). Warmer colors (reds and yellows) show high saliency and cooler colors (greens and blues) show low saliency.

Figure 7: Simplification results for the Stanford Armadillo (simplification by 98%, 98.5%, and 99%; the original 346K triangles reduced to 6.9K, 5.2K, and 3.5K triangles): (a) shows simplified models using Qslim and (b) shows different levels of simplification using saliency. The three right columns show the zoomed-in face of the Armadillo. The eyes and the nose are preserved with our method while the bumps on the legs are smoothed faster.

For combining saliency maps Si at different scales, we apply a non-linear suppression operator 𝒮 similar to the one proposed by Itti et al. [1998]. This suppression operator promotes saliency maps with a small number of high peaks (Figure 5(e)) while suppressing saliency maps with a large number of similar peaks (Figure 5(a)). Thus, non-linear suppression helps us in reducing the number of salient points. If we do not use suppression, we get far too many salient points; suppression helps to define what makes something unique, and therefore potentially salient. For each saliency map Si, we first normalize Si. We then compute the maximum saliency value Mi and the average m̄i of the local maxima excluding the global maximum at that scale. Finally, we multiply Si by the factor (Mi − m̄i)². The final mesh saliency S is computed by adding the saliency at all scales after applying the non-linear normalization of suppression:

$$S=\sum_{i}\mathcal{S}(S_{i})$$

Salient Simplification

There is a large and growing body of literature on simplification of meshes using a diverse set of error metrics and simplification operators [Luebke et al. 2003]. Several simplification approaches use estimates of mesh curvature to guide the simplification process and achieve high geometric fidelity for a given triangle budget [Turk 1992; Kim et al. 2002]. Other simplification approaches, such as QSlim [Garland and Heckbert 1997], use error metrics that, while not directly computing curvature, are related to curvature [Heckbert and Garland 1999]. Curvature has also been directly used to identify salient regions on meshes. Watanabe and Belyaev [2001] classify extrema of the principal curvatures as salient features and preserve them better during simplification. Their method, however, does not use a center-surround mechanism to identify regions on a mesh that are different from their local context.
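To make the multi-scale saliency computation above concrete, the following is a minimal NumPy sketch, not the authors' implementation: it treats the mesh as a point set, assumes the mean curvatures `curv` have already been estimated (e.g., with Taubin's method), and uses a crude fixed-radius ball search for the local maxima needed by the suppression factor. The dense O(N²) distance matrix limits it to small meshes.

```python
import numpy as np

def gaussian_weighted_curvature(verts, curv, sigma):
    # G(C(v), sigma): Gaussian-weighted average of mean curvature over the
    # neighborhood N(v, 2*sigma), with the filter cut off at distance 2*sigma.
    d2 = np.sum((verts[:, None, :] - verts[None, :, :]) ** 2, axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w[d2 >= (2.0 * sigma) ** 2] = 0.0          # Gaussian cut-off at 2*sigma
    return (w @ curv) / w.sum(axis=1)          # self-weight 1 keeps sums > 0

def mesh_saliency(verts, curv, eps):
    # S = sum_i S(S_i), where S_i = |G(C, sigma_i) - G(C, 2*sigma_i)| with
    # sigma_i in {2,3,4,5,6}*eps; each map is normalized and multiplied by
    # the suppression factor (M_i - mbar_i)^2 before summation.
    d2 = np.sum((verts[:, None, :] - verts[None, :, :]) ** 2, axis=-1)
    nbr = d2 < eps ** 2                        # crude peak neighborhoods
    total = np.zeros(len(verts))
    for k in (2, 3, 4, 5, 6):
        sigma = k * eps
        s = np.abs(gaussian_weighted_curvature(verts, curv, sigma)
                   - gaussian_weighted_curvature(verts, curv, 2.0 * sigma))
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)   # normalize to [0, 1]
        # local maxima: vertices whose saliency tops their eps-ball
        is_peak = np.array([s[i] >= s[nbr[i]].max() for i in range(len(s))])
        peaks, M = s[is_peak], s.max()
        mbar = peaks[peaks < M].mean() if np.any(peaks < M) else 0.0
        total += s * (M - mbar) ** 2           # non-linear suppression
    return total
```

A production version would replace the dense distance matrix with a spatial index (e.g., a k-d tree) and restrict neighborhoods to actual surface points.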

For evaluating the effectiveness of our mesh saliency method, we have modified the quadrics-based simplification method (Qslim) of Garland and Heckbert [1997] by weighting the quadrics with mesh saliency. However, it should be equally easy to integrate our mesh saliency with any other mesh simplification scheme. Garland and Heckbert's method simplifies a mesh by repeatedly contracting vertex pairs ordered by increasing quadric errors. Let P be the set of planes of triangles incident at a vertex v, where the plane p ∈ P is defined by the equation ax + by + cz + d = 0 with a² + b² + c² = 1, and is represented as p = (a b c d)ᵀ. Then the quadric for the plane p is defined as Qp = ppᵀ. They define the error of v with respect to p as the squared distance of v to p, which is computed by vᵀQpv. The quadric Q of v is the sum of all the quadrics of the neighboring planes: Q = Σ_{p∈P} Qp. After computing the quadrics of all vertices, they compute the optimal contraction point v̄ for each pair (vi, vj) which minimizes the quadric error v̄ᵀ(Qi + Qj)v̄, where Qi and Qj are the quadrics of vi and vj, respectively. The algorithm iteratively contracts the pair with the minimum contraction cost v̄ᵀ(Qi + Qj)v̄. After a pair is contracted, the quadric for the new point v̄ is computed simply by adding the two quadrics, Qi + Qj.

We guide the order of simplification contractions using a weight map W derived from the mesh saliency map S. We have found that using simplification weights based on a non-linear amplification of the saliency gives us good results.

Figure 8: We show the saliency-based weights and the quality of the 99% simplification (3.5K triangles) for the Stanford Armadillo model for three choices of the simplification weights: (a) the original mesh saliency (W = S), (b) the amplified mesh saliency (W = A(S)), and (c) the smoothed and amplified mesh saliency (W = A(G(S, σ3))).

Figure 9: Simplification results for the Cyberware Male (original: 606K triangles; simplified to 4K and 2K triangles, shown with smooth shading): (a) shows simplifications by Garland and Heckbert's method, and (b) shows simplifications by our method using saliency. The eyes, nose, ears, and mouth are preserved better with our method.
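The quadric bookkeeping above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not Qslim itself: `plane_quadric`, `quadric_error`, `amplified_weight`, and `weighted_pair_cost` are hypothetical helper names, and the amplification uses the λ = 100 and 30th-percentile threshold stated in the translated simplification section.

```python
import numpy as np

def plane_quadric(a, b, c, d):
    # Q_p = p p^T for the plane ax + by + cz + d = 0, with a^2+b^2+c^2 = 1
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def quadric_error(Q, v):
    # squared distance of vertex v to the planes accumulated in Q: v^T Q v
    vh = np.append(v, 1.0)                     # homogeneous coordinates
    return float(vh @ Q @ vh)

def amplified_weight(s, lam=100.0, pct=30.0):
    # A(S): amplify saliency values at or above the pct-th percentile by lam
    thresh = np.percentile(s, pct)
    return np.where(s >= thresh, lam * s, s)

def weighted_pair_cost(Qi, Qj, Wi, Wj, vbar):
    # with Q <- W(v) Q applied when quadrics are initialized, the cost of
    # contracting (vi, vj) at the optimal point vbar is
    # vbar^T (Wi Qi + Wj Qj) vbar; the new vertex inherits weight Wi + Wj.
    return quadric_error(Wi * Qi + Wj * Qj, vbar)
```

For example, for the plane z = 0, `quadric_error(plane_quadric(0, 0, 1, 0), np.array([0.0, 0.0, 2.0]))` is the squared distance 4.0.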

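The saliency-guided viewpoint selection described earlier (gradient descent over longitude and latitude on the visible saliency U, restarted from random initial viewpoints) can be sketched as follows. This is a minimal illustration, not the authors' code: `select_viewpoint` and `visible_saliency` are hypothetical names, the occlusion test is a caller-supplied function, and gradients are taken by finite differences, which assumes the visibility function returns smooth per-vertex weights rather than a hard binary mask.

```python
import numpy as np

def visible_saliency(theta, phi, verts, saliency, radius, visible):
    # U(theta, phi): total saliency of vertices visible from a viewpoint at
    # longitude theta, latitude phi on a sphere of the given radius.
    eye = radius * np.array([np.cos(phi) * np.cos(theta),
                             np.cos(phi) * np.sin(theta),
                             np.sin(phi)])
    w = visible(eye, verts)        # boolean mask or soft visibility weights
    return float(np.sum(saliency * w))

def select_viewpoint(verts, saliency, radius, visible,
                     starts=8, iters=50, step=0.05, h=1e-3, seed=0):
    # Gradient ascent on U over (theta, phi), restarted from random points
    # to approach the global maximum.
    rng = np.random.default_rng(seed)
    best, best_u = (0.0, 0.0), -np.inf
    for _ in range(starts):
        th = rng.uniform(0.0, 2.0 * np.pi)
        ph = rng.uniform(-0.4 * np.pi, 0.4 * np.pi)
        for _ in range(iters):
            u0 = visible_saliency(th, ph, verts, saliency, radius, visible)
            gt = (visible_saliency(th + h, ph, verts, saliency, radius,
                                   visible) - u0) / h
            gp = (visible_saliency(th, ph + h, verts, saliency, radius,
                                   visible) - u0) / h
            th, ph = th + step * gt, ph + step * gp
        u = visible_saliency(th, ph, verts, saliency, radius, visible)
        if u > best_u:
            best, best_u = (th, ph), u
    return best, best_u
```

A simple soft visibility for a roughly star-shaped model is the clipped dot product between each vertex direction and the eye direction; a renderer-based occlusion query could be substituted without changing the search loop.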