



Video Special Effects
Peng Huang

2.1.2 Object-space NPAR
Meier was the first to produce painterly animations from object-space scenes [53], triangulating surfaces in object-space and distributing strokes over each triangle in proportion to its area. Since this initial work, many object-space NPAR systems have been presented. Hall's Q-maps [29] (a Q-map is a 3D texture which adapts to the intensity of light to give the object in the image a 3D look; for example, more marks are made where the object is darker) may be applied to create coherent pen-and-ink shaded animations. A system capable of rendering object-space geometries in a sketchy style was outlined by Curtis [15]; it operates by tracing the paths of particles traveling stochastically around contours of a depth image generated from a 3D object. See Figure 2.3 for some examples.
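To make the particle-tracing idea concrete, here is a minimal sketch under assumed inputs and parameters; it is our own simplification, not Curtis's implementation. A pre-rendered depth image (the filename is an assumption) seeds particles on its contour pixels, and each particle takes a short random walk constrained to the contour, leaving sketchy marks.

```python
import random
import cv2
import numpy as np

# Load a depth image rendered from the 3D object (the filename is an assumption).
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE)

# Contours of the depth image approximate silhouettes and creases.
edges = cv2.Canny(depth, 30, 100)
ys, xs = np.nonzero(edges)

random.seed(0)
canvas = np.full(depth.shape, 255, np.uint8)   # white "paper"

# Seed particles on contour pixels; each takes a short, biased random walk
# that prefers to stay on contour pixels, producing loose, sketchy strokes.
for _ in range(2000):
    i = random.randrange(len(xs))
    x, y = int(xs[i]), int(ys[i])
    for _ in range(15):                        # steps per particle
        canvas[y, x] = 0                       # deposit ink
        moves = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if dx or dy]
        random.shuffle(moves)
        for dx, dy in moves:
            nx, ny = x + dx, y + dy
            if (0 <= nx < edges.shape[1] and 0 <= ny < edges.shape[0]
                    and edges[ny, nx]):
                x, y = nx, ny
                break
        else:
            break                              # the particle left the contour

cv2.imwrite("sketchy.png", canvas)
```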
In addition, most modern graphical modeling packages (3D Studio Max!, Maya, XSI Soft-Image) support plug-ins which offer the option of rendering object-space scenes with a flat-shaded, cartoon-like appearance.

2.1.3 Image-space NPAR
Most NPAR systems in image-space are still based on static painterly rendering techniques, brushing strokes frame by frame and trying to avoid the unappealing "swimming" which distracts the audience from the content of the animation. Litwinowicz extends his static method and makes use of optical flow to estimate a motion vector field that translates the strokes painted on the first frame to successive frames [47]. A similar method is employed by Kovacs and Sziranyi [42]. A simpler solution is proposed by Hertzmann [33], who differences consecutive frames of video, re-painting only those areas which have changed by more than some global (user-defined) threshold. Hays and Essa's approach [32] builds on and improves these techniques by using edges to guide painterly refinement. See Figure 2.4 for some examples. In their current work they are investigating region-based methods to move beyond pixels to cell-based renderings, which reflects the trend from low-level analysis to higher-level scene understanding.
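The two low-level strategies just described can be sketched compactly. The snippet below is a minimal illustration, not Litwinowicz's or Hertzmann's code: dense optical flow advects the stroke centres painted on the first frame, and a global difference threshold marks which pixels would be repainted. The video filename, grid spacing, flow parameters and threshold value are all assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")            # filename is an assumption
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Stroke centres painted on the first frame (here: a coarse regular grid).
h, w = prev_gray.shape
strokes = np.stack(np.meshgrid(np.arange(4, w, 8, dtype=np.float32),
                               np.arange(4, h, 8, dtype=np.float32)),
                   axis=-1).reshape(-1, 2)

THRESH = 25                                    # global, user-defined threshold

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Litwinowicz-style: translate existing strokes along the motion field.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    xs = np.clip(strokes[:, 0].astype(int), 0, w - 1)
    ys = np.clip(strokes[:, 1].astype(int), 0, h - 1)
    strokes += flow[ys, xs]                    # advect each stroke centre

    # Hertzmann-style: only pixels whose difference exceeds the threshold
    # would be repainted; everywhere else the previous paint is kept.
    changed = cv2.absdiff(gray, prev_gray) > THRESH

    prev_gray = gray
```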
We also find various highly interactive image-space tools that assist users in the process of creating digital non-photorealistic animations. Fekete et al. describe a system [23] to assist in the creation of line-art cartoons. Agarwala proposes an interactive system [2] that allows children and others untrained in "cel animation" to create 2D cartoons from images and video. Users have to hand-segment the first image, and active contours (snakes) are used to track the segmentation boundaries from frame to frame. The approach is labor intensive (users need to correct the contours in every frame), unstable (snakes are susceptible to local minima and tracking fails under occlusion) and limited to video material with distinct objects and well-defined edges.
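A minimal sketch of the snake-tracking idea (not Agarwala's system): the contour relaxed on one frame initialises the active contour on the next, so the boundary is carried from frame to frame. The frame filenames, the initial circle and the snake parameters are assumptions.

```python
import numpy as np
from skimage import io, color, filters
from skimage.segmentation import active_contour

# Hand-drawn initial contour on the first frame (a circle here, purely illustrative).
t = np.linspace(0, 2 * np.pi, 200)
snake = np.stack([120 + 60 * np.sin(t), 160 + 60 * np.cos(t)], axis=1)  # (row, col)

frames = ["frame000.png", "frame001.png", "frame002.png"]   # assumed RGB frames

contours = []
for path in frames:
    gray = color.rgb2gray(io.imread(path))
    smooth = filters.gaussian(gray, sigma=3)
    # Re-relax the snake on each frame, initialised from the previous result,
    # so the boundary is tracked over time. Small inter-frame motion is assumed;
    # occlusion or large jumps push the snake into local minima, which is
    # exactly the failure mode noted above.
    snake = active_contour(smooth, snake, alpha=0.015, beta=10, gamma=0.001)
    contours.append(snake)
```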
Another technique, called "advanced rotoscoping" by the computer graphics community, requires artists to draw a shape in key-frames and then interpolates the shape over the interval between key-frames, a process referred to as "in-betweening" by animators. The film "Waking Life" [26] used this technique. See Figure 2.5 for some examples.

NPAR techniques in image-space, as well as commercial video effects software such as Adobe Premiere, provide low-level effects (e.g. slow-motion, spatial warping and motion blur) but fail to perform high-level video analysis and are unable to create more complicated visual effects (e.g. motion emphasis). Lake et al. present techniques for emphasizing the motion of cartoon objects by introducing geometry into the cartoon scene [43]. However, their work is limited to object-space, avoiding the complex high-level video analysis, and their "motion lines" are quite simple. In their current work they are trying to integrate other traditional cartoon effects into their system.

Collomosse and Hall first introduced high-level computer vision analysis to NPAR in the "VideoPaintbox" [12]. They argue that comprehensive video analysis should be the first step in the artistic rendering (AR) process; salient information (such as object boundaries or trajectories) must be extracted prior to representation in an artistic style. By developing novel computer vision techniques for AR, they are able to emphasize motion using traditional animation cues [44] such as streak-lines, anticipation and deformation. Their unique contribution is to build a video-based NPR system which processes an extended period of time rather than working on a per-frame basis. This advance allows them to analyze trajectories, make decisions regarding occlusions and collisions, and perform motion emphasis. In this work we will also regard video as a whole rather than as the sum of individual frames. However, the segmentation in their "computer vision component" is labor intensive, since users have to manually identify polygons, which are "shrink wrapped" to the feature's edge contour using snake relaxation [72] before tracking. Their tracking is also based on the assumption that contour motion can be modeled by a linear conformal affine transform (LCAT) in the image plane (see the sketch below). We try to use more automatic segmentation and non-rigid region tracking to improve the capability of video analysis. See Figure 2.6 for some examples.
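For reference, an LCAT has only four degrees of freedom (uniform scale, rotation and 2D translation), which is what makes it too rigid for genuinely non-rigid regions. The sketch below fits such a transform to tracked contour points with OpenCV's partial affine estimator; it is our own illustration with placeholder point data, not Collomosse and Hall's implementation.

```python
import cv2
import numpy as np

# Corresponding contour points in frame t and frame t+1 (placeholder data).
pts_prev = np.array([[100, 120], [140, 118], [150, 160], [105, 165]], np.float32)
pts_next = np.array([[104, 122], [144, 119], [155, 163], [110, 168]], np.float32)

# estimateAffinePartial2D fits a 4-DOF transform: uniform scale s, rotation
# theta and translation (tx, ty), i.e. x' = s*R(theta)*x + t -- the LCAT
# assumption. Non-rigid deformation of the region cannot be captured.
M, inliers = cv2.estimateAffinePartial2D(pts_prev, pts_next)

s = np.hypot(M[0, 0], M[1, 0])                        # recovered uniform scale
theta = np.degrees(np.arctan2(M[1, 0], M[0, 0]))      # recovered rotation (degrees)
tx, ty = M[0, 2], M[1, 2]                             # recovered translation
print(f"scale={s:.3f}, rotation={theta:.1f} deg, translation=({tx:.1f}, {ty:.1f})")
```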
Another high-level video-based NPAR system is provided by Wang et al. [69]. They regard video as a 3D lattice (x, y, t) and perform spatio-temporal segmentation of homogeneous regions using mean shift [14] or improved mean shift [70] to obtain volumes of contiguous pixels with similar colour. Users define salient regions by manually sketching on key-frames, and the system then automatically generates salient regions for every frame. This naturally builds the correspondence between successive frames, avoiding non-rigid region tracking. Their rendering is based on mean-shift-guided interpolation; the rendering style is limited to a few approaches, such as changing segment colour and placing strokes, and ignores motion analysis and motion emphasis. Our system segments each key-frame using 2D mean shift, identifies salient regions and then tracks them over the whole sequence. We extract motion information from the results and then perform motion emphasis. See Figure 2.6 for some examples.
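A minimal sketch of the kind of 2D mean-shift key-frame segmentation referred to above, using OpenCV's mean-shift filtering followed by per-colour connected-component labelling. The filename, spatial/colour bandwidths and quantisation step are illustrative assumptions, not the exact parameters of our system.

```python
import cv2
import numpy as np

key_frame = cv2.imread("keyframe.png")            # filename is an assumption

# Mean-shift filtering flattens colour within homogeneous regions:
# second argument = spatial bandwidth, third = colour (range) bandwidth.
flat = cv2.pyrMeanShiftFiltering(key_frame, 21, 30)

# Group the flattened image into regions of near-constant colour by
# quantizing the colours and labelling connected components per colour.
quant = (flat // 16).astype(np.uint8)
labels = np.zeros(flat.shape[:2], np.int32)
n_regions = 0
for colour in np.unique(quant.reshape(-1, 3), axis=0):
    mask = np.all(quant == colour, axis=2).astype(np.uint8)
    n, comp = cv2.connectedComponents(mask)       # 0 = background, 1..n-1 = regions
    fg = comp > 0
    labels[fg] = comp[fg] + n_regions             # keep labels unique across colours
    n_regions += n - 1

print("number of candidate regions in the key-frame:", n_regions)
```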
2.1.4 Some other NPAR techniques
Bregler et al. present a technique called "cartoon capture and retargeting" [7], which tracks the motion of a traditional animated cartoon and retargets it onto different output media, including 2D cartoon animation, 3D CG models and photo-realistic output. They describe vision-based tracking techniques and new modeling techniques. Our research borrows this idea to extract motion information, from general video rather than from a cartoon, using different computer vision algorithms, and then represents it in different output media. See Figure 2.7 for some examples.

2.1.5 NPR applications in sports analysis
In an increasingly competitive sports market, sports media companies try to attract audiences by providing ever more special or more specific graphics and effects. Sports statistics are often presented graphically on TV during sporting events, such as the percentage of time in a soccer game that the ball has been in one half compared to the other. These statistics are collected in many ways, both manually and automatically, and it is desirable to be able to generate many of them directly from the video of the game. There are many products in the broadcast environment that provide this capability.
The telestrator [78] is a simple but efficient tool that allows users to draw lines within a 2D video image using a mouse. The product is sold as a dedicated box with a touch screen and a video input, and it outputs the video with the graphics composited over it. Typically, four very simple controls, such as draw arrow and draw dotted line, are provided.
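To make the concept concrete, the snippet below is our own minimal illustration, not the commercial product: a mouse callback lets the user drag lines over a paused frame, and the annotated frame is what would be sent to the video output. The filename and annotation colour are assumptions.

```python
import cv2

frame = cv2.imread("freeze_frame.png")          # paused broadcast frame (assumed file)
drawing, last = False, None

def on_mouse(event, x, y, flags, param):
    # Draw while the left mouse button is held, telestrator-style.
    global drawing, last
    if event == cv2.EVENT_LBUTTONDOWN:
        drawing, last = True, (x, y)
    elif event == cv2.EVENT_MOUSEMOVE and drawing:
        cv2.line(frame, last, (x, y), (0, 255, 255), 3)   # yellow annotation
        last = (x, y)
    elif event == cv2.EVENT_LBUTTONUP:
        drawing = False

cv2.namedWindow("telestrator")
cv2.setMouseCallback("telestrator", on_mouse)
while True:
    cv2.imshow("telestrator", frame)            # this composite is the video output
    if cv2.waitKey(20) & 0xFF == 27:            # Esc to quit
        break
cv2.destroyAllWindows()
```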