Video Special Effects
Peng Huang

2.1.2 Object-Space NPAR

Meier was the first to produce painterly animations from object-space scenes [53]. He triangulated surfaces in object-space and distributed strokes over each triangle in proportion to its area. Since his initial work, many object-space NPAR systems have been presented. Hall's Q-maps [29] (a Q-map is a 3D texture which adapts to the intensity of light to give the object in the image a 3D look; for example, more marks are made where an object is darker) may be applied to create coherent pen-and-ink shaded animations. A system capable of rendering object-space geometries in a sketchy style was outlined by Curtis [15]; it operates by tracing the paths of particles traveling stochastically around contours of a depth image generated from a 3D object. See Figure 2.3 for some examples.
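The core of this idea can be caricatured in a few lines: seed particles on contour pixels of the depth image and let them wander stochastically, depositing ink as they go. The Python sketch below is only an illustration of that idea, not Curtis's implementation; the use of a Canny edge map, the parameter values and the random-walk rule are all assumptions.

```python
import numpy as np
import cv2

def sketchy_render(depth_img, n_strokes=400, steps=30, seed=0):
    """Loose, sketchy strokes traced by particles wandering stochastically around
    contours of a depth image. Illustrative sketch only; depth_img is assumed to be
    a single-channel uint8 depth image."""
    rng = np.random.default_rng(seed)
    edges = cv2.Canny(depth_img, 50, 150)             # contours of the depth image (assumed detector)
    ys, xs = np.nonzero(edges)
    canvas = np.full(depth_img.shape, 255, np.uint8)  # white canvas
    if len(xs) == 0:
        return canvas
    for _ in range(n_strokes):
        i = rng.integers(len(xs))
        x, y = float(xs[i]), float(ys[i])              # seed a particle on a contour pixel
        vx, vy = rng.normal(0.0, 1.0, 2)               # initial drift direction
        for _ in range(steps):
            xi, yi = int(round(x)), int(round(y))
            if not (0 <= yi < canvas.shape[0] and 0 <= xi < canvas.shape[1]):
                break
            canvas[yi, xi] = 0                         # deposit ink
            vx += rng.normal(0.0, 0.4)                 # stochastic wander
            vy += rng.normal(0.0, 0.4)
            n = max(np.hypot(vx, vy), 1e-6)
            x += vx / n
            y += vy / n
    return canvas
```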
In addition, most modern graphical modeling packages (3D Studio Max, Maya, Softimage XSI) support plug-ins which offer the option of rendering object-space scenes with a flat-shaded, cartoon-like appearance.

2.1.3 Image-Space NPAR

Most NPAR systems in image-space are still based on static painterly rendering techniques, brushing strokes frame by frame and trying to avoid the unappealing “swimming” which distracts the audience from the content of the animation. Litwinowicz extends his static method and makes use of optical flow to estimate a motion vector field that translates the strokes painted on the first frame to successive frames [47]. A similar method is employed by Kovacs and Sziranyi [42].
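A minimal sketch of the optical-flow idea follows, using OpenCV's Farneback dense flow as a stand-in for whichever estimator these systems actually use: the flow vector at each stroke position carries that stroke into the next frame. Stroke insertion, deletion and orientation handling are omitted, and the parameter values are illustrative assumptions.

```python
import numpy as np
import cv2

def advect_strokes(prev_gray, next_gray, strokes):
    """Minimal sketch: translate stroke centres from one frame to the next using
    dense optical flow. prev_gray, next_gray: consecutive greyscale frames;
    strokes: list of (x, y) stroke positions in pixels."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    h, w = prev_gray.shape
    moved = []
    for x, y in strokes:
        xi = int(np.clip(round(x), 0, w - 1))
        yi = int(np.clip(round(y), 0, h - 1))
        dx, dy = flow[yi, xi]                          # motion vector at the stroke position
        moved.append((x + dx, y + dy))
    return moved
```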
A simpler solution is proposed by Hertzmann [33], who differences consecutive frames of video, re-painting only those areas which have changed above some global (user-defined) threshold.
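That frame-differencing step reduces to a per-pixel threshold test. The sketch below (the threshold value and greyscale conversion are arbitrary choices, not taken from the paper) marks the regions that would be repainted while the rest of the canvas is left untouched.

```python
import cv2

def repaint_mask(prev_frame, cur_frame, threshold=20):
    """Sketch of frame differencing: boolean mask of pixels whose change between
    consecutive frames exceeds a global, user-defined threshold; only these
    regions are repainted, the rest of the painted canvas is kept."""
    prev_g = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    cur_g = cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_g, cur_g)
    return diff > threshold
```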
Hays and Essa's approach [32] builds on and improves these techniques by using edges to guide painterly refinement. See Figure 2.4 for some examples. In their current work they are studying region-based methods that extend beyond pixels to cell-based renderings, reflecting the trend from low-level analysis towards higher-level scene understanding. We also find various highly interactive image-space tools that assist users in the process of creating digital non-photorealistic animations. Fekete et al. describe a system [23] to assist in the creation of line-art cartoons. Agarwala proposes an interactive system [2] that allows children and others untrained in “cel animation” to create 2D cartoons from images and video. Users have to hand-segment the first image, and active contours (snakes) are used to track the segmentation boundaries from frame to frame. The approach is labor intensive (users need to correct the contours every frame), unstable (snakes are susceptible to local minima and tracking fails under occlusion) and limited to video material with distinct objects and well-defined edges.
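To make the snake-based propagation concrete, here is a hedged sketch using scikit-image's active_contour: the user-drawn boundary on the first frame initialises a snake that is re-relaxed on each subsequent frame. The smoothing, the weights and the (row, col) point convention are assumptions rather than Agarwala's settings, and the sketch inherits exactly the weaknesses noted above (local minima, occlusion).

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def track_boundary(frames, init_contour):
    """Sketch: propagate a hand-drawn segmentation boundary through a clip by
    relaxing an active contour (snake) on every frame, initialised from the
    previous result. frames: iterable of RGB frames; init_contour: (N, 2) array
    of (row, col) points."""
    snake = np.asarray(init_contour, dtype=float)
    tracked = []
    for frame in frames:
        img = gaussian(rgb2gray(frame), sigma=2)       # smooth to widen the capture range
        snake = active_contour(img, snake, alpha=0.015, beta=10.0, gamma=0.001)
        tracked.append(snake.copy())                   # in practice users correct this per frame
    return tracked
```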
Another technique, called “advanced rotoscoping” by the computer graphics community, requires artists to draw a shape in key-frames and then interpolate the shape over the interval between key-frames, a process referred to as “in-betweening” by animators. The film “Waking Life” [26] used this technique. See Figure 2.5 for some examples.
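In its simplest form, in-betweening is just interpolation of corresponding outline points between two key drawings. The following sketch assumes the artist's key shapes are given as point-to-point corresponding polylines and interpolates them linearly; production rotoscoping tools use far richer curve and timing models.

```python
import numpy as np

def inbetween(key_a, key_b, n_frames):
    """Linearly interpolate a drawn shape over the interval between two key-frames.
    key_a, key_b: (N, 2) arrays of corresponding outline points; returns one
    outline per intermediate frame (the key-frames themselves are excluded)."""
    key_a = np.asarray(key_a, dtype=float)
    key_b = np.asarray(key_b, dtype=float)
    ts = np.linspace(0.0, 1.0, n_frames + 2)[1:-1]   # interior time steps only
    return [(1.0 - t) * key_a + t * key_b for t in ts]
```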
NPAR techniques in image-space, as well as commercial video effects software such as Adobe Premiere (which provides low-level effects, e.g. slow-motion, spatial warping and motion blur), fail to perform high-level video analysis and are unable to create more complicated visual effects (e.g. motion emphasis). Lake et al. present techniques for emphasizing the motion of cartoon objects by introducing geometry into the cartoon scene [43]. However, their work is limited to object-space, avoiding the complex high-level video analysis, and their “motion lines” are quite simple. In their current work they are trying to integrate other traditional cartoon effects into their system. Collomosse and Hall first introduce high-level computer vision analysis to NPAR in the “VideoPaintbox” [12]. They argue that comprehensive video analysis should be the first step in the artistic rendering (AR) process; salient information (such as object boundaries or trajectories) must be extracted prior to representation in an artistic style. By developing novel computer vision techniques for AR, they are able to emphasize motion using traditional animation cues [44] such as streak-lines, anticipation and deformation. Their unique contribution is a video-based NPR system which processes video over an extended period of time rather than on a per-frame basis. This advance allows them to analyze trajectories, make decisions regarding occlusions and collisions, and perform motion emphasis. In this work we will also regard video as a whole rather than as the sum of individual frames. However, the segmentation in their “computer vision component” is labor intensive, since users have to manually identify polygons, which are “shrink wrapped” to the feature's edge contour using snake relaxation [72] before tracking. Their tracking also assumes that contour motion can be modeled by a linear conformal affine transform (LCAT) in the image plane.
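An LCAT has four degrees of freedom: uniform scale, rotation and a 2D translation. The small helper below (the function name and parameterisation are ours, purely for illustration) applies such a transform to contour points in the image plane, which is the motion model their tracker assumes.

```python
import numpy as np

def apply_lcat(points, scale, theta, tx, ty):
    """Apply a linear conformal affine transform (uniform scale, rotation theta,
    translation (tx, ty)) to an (N, 2) array of 2D contour points:
    p' = scale * R(theta) * p + t. Illustrative helper, not the authors' code."""
    c, s = np.cos(theta), np.sin(theta)
    R = scale * np.array([[c, -s],
                          [s,  c]])
    pts = np.asarray(points, dtype=float)
    return pts @ R.T + np.array([tx, ty])
```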
We try to use a more automatic segmentation and non-rigid region tracking to improve the capability of video analysis. See Figure 2.6 for some examples. Another high-level video-based NPAR system is provided by Wang et al. [69]. They regard video as a 3D lattice (x, y, t) and implement spatio-temporal segmentation of homogeneous regions using mean shift [14] or improved mean shift [70] to obtain volumes of contiguous pixels with similar colour. Users define salient regions by manually sketching on key-frames, and the system then automatically generates salient regions for every frame. This naturally builds the correspondence between successive frames, avoiding non-rigid region tracking. Their rendering is based on mean-shift-guided interpolation; the rendering style is limited to a few approaches, such as changing segment colour and placing strokes, and it ignores motion analysis and motion emphasis. Our system segments key-frames using 2D mean shift, identifies salient regions and then tracks them over the whole sequence. We extract motion information from the results and then perform motion emphasis. See Figure 2.6 for some examples.
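As a rough sketch of the key-frame step, OpenCV's pyramid mean shift filtering can stand in for a 2D mean shift segmenter: it smooths the key-frame so that contiguous pixels of similar colour collapse into near-constant patches, from which candidate salient regions can be grouped. The radii below and the use of this particular OpenCV routine are assumptions, not the actual segmenter used in either system.

```python
import cv2

def segment_keyframe(frame_bgr, spatial_radius=15, colour_radius=30):
    """Mean-shift colour smoothing of a single key-frame; connected patches of
    near-constant colour in the result are candidate regions to mark as salient
    and then track over the rest of the sequence."""
    return cv2.pyrMeanShiftFiltering(frame_bgr, sp=spatial_radius, sr=colour_radius)
```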
2.1.4 Some Other NPAR Techniques

Bregler et al. present a technique called “cartoon capture and retargeting” in [7], which tracks the motion of a traditional animated cartoon and then retargets it onto different output media, including 2D cartoon animation, 3D CG models, and photo-realistic output. They describe vision-based tracking techniques and new modeling techniques. Our research tries to borrow this idea to extract motion information, from general video rather than from a cartoon, using different computer vision algorithms, and then to represent it in different output media. See Figure 2.7 for some examples.

2.1.5 NPR Application to Sports Analysis

Due to an increasingly competitive sports market, sports media companies try to attract audiences by providing more special or more specific graphics and effects. Sports statistics are often presented graphically on TV during sporting events, such as the percentage of time in a soccer game that the ball has been in one half compared to the other. These statistics are collected in many ways, both manually and automatically, and it is desirable to be able to generate many of them directly from the video of the game. There are many products in the broadcast environment that provide this capability. The Telestrator [78] is a simple but efficient tool that allows users to draw lines within a 2D video image using a mouse. The product is sold as a dedicated box with a touch screen and a video input, and it outputs the video with the graphics produced. Typically, four very simple controls such as “draw arrow” and “draw dotted line” are provided.