Proc. of the 10th Int. Conference on Digital Audio Effects (DAFx-07), Bordeaux, France, September 10-15, 2007

A MATLAB TOOLBOX FOR MUSICAL FEATURE EXTRACTION FROM AUDIO

Olivier Lartillot, Petri Toiviainen
University of Jyväskylä, Finland
lartillot@campus.jyu.fi

Abstract: We present MIRtoolbox, an integrated set of functions written in Matlab, dedicated to the extraction of musical features from audio files. The design is based on a modular framework: the different algorithms are decomposed into stages, formalized using a minimal set of elementary mechanisms, and integrating different variants proposed by alternative approaches, including new strategies we have developed, that users can select and parametrize. This paper offers an overview of the set of features, related among others to timbre, tonality, rhythm and form, that can be extracted with MIRtoolbox. Four particular analyses are provided as examples. The toolbox also includes functions for statistical analysis, segmentation and clustering. Particular attention has been paid to the design of a syntax that offers both simplicity of use and transparent adaptiveness to a multiplicity of possible input types. Each feature extraction method can accept as argument an audio file, or any preliminary result from intermediary stages of the chain of operations. Also, the same syntax can be used for analyses of single audio files, batches of files, series of audio segments, multichannel signals, etc. For that purpose, the data and
methods of the toolbox are organised in an object-oriented architecture.

1. MOTIVATION AND APPROACH

MIRtoolbox is a Matlab toolbox dedicated to the extraction of musically-related features from audio recordings. It has been designed in particular with the objective of enabling the computation of a large range of features from databases of audio files, which can then be subjected to statistical analyses.

Few software packages have been proposed in this area. The most important one, Marsyas [1], provides a general architecture for connecting audio, sound files, signal processing blocks and machine learning (see section 5 for more details). One particularity of our own approach lies in the use of the Matlab computing environment, which offers good visualisation capabilities and gives access to a large variety of other toolboxes. In particular, MIRtoolbox makes use of functions available in recommended public-domain toolboxes such as the Auditory Toolbox [2], NetLab [3] and the SOM Toolbox [4]. Other toolboxes, such as the Statistics Toolbox or the Neural Network Toolbox from MathWorks, can be used directly for further analyses of the features extracted by MIRtoolbox, without having to export the data from one software package to another.

Such a computational framework, because of its general objectives, could be useful to the research community in Music Information Retrieval (MIR), but also for educational purposes. For that reason, particular attention has been paid to the ease of use of the toolbox. In particular, complex analytic processes can be designed using a very simple syntax, whose expressive power comes from the use of an object-oriented paradigm.

The different musical features extracted from the audio files are highly interdependent: in particular, as can be seen in Figure 1, some features are based on the same initial computations. In order to improve computational efficiency, it is important to avoid redundant computations of these common components. Each of these intermediary components, and the final musical features, are therefore considered as building blocks that can be freely
articulated with each other. Besides, in keeping with the objective of optimal ease of use of the toolbox, each building block has been conceived in such a way that it can adapt to the type of input data. For instance, the computation of the MFCCs can be based on the waveform of the initial audio signal, or on intermediary representations such as the spectrum or the mel-scale spectrum (see Fig. 1). Similarly, autocorrelation is computed for different ranges of delays depending on the type of input data (audio waveform, envelope, spectrum). This decomposition of the whole set of feature extraction algorithms into a common set of building blocks has the advantage of offering a synthetic overview of the different approaches studied in this domain of research.

2. FEATURE EXTRACTION

2.1. Feature overview

Figure 1 shows an overview of the main features implemented in the toolbox. All the different processes
start from the audio signal (on the left) and form a chain of operations proceeding to the right. The vertical disposition of the processes indicates an increasing order of complexity of the operations, from the simplest computations (top) to more detailed auditory modelling (bottom). Each musical feature is related to one of the musical dimensions traditionally defined in music theory. Boldface characters highlight features related to pitch, to tonality (chromagram, key strength and key Self-Organising Map, or SOM) and to dynamics (Root Mean Square, or RMS, energy). Bold italics indicate features related
to rhythm, namely tempo, pulse clarity and fluctuation. Simple italics highlight a large set of features that can be associated with timbre. Among them, all the operators in grey italics can in fact be applied to many other representations: for instance, statistical moments such as centroid, kurtosis, etc., can be applied not only to spectra and envelopes, but also to histograms based on any given feature.

One of the simplest features, zero-crossing rate, is based on a simple description of the audio waveform itself: it counts the number of sign changes of the waveform. Signal energy is computed using root mean square, or RMS [5]. The envelope of the audio signal offers timbral characteristics of isolated sonic events.

An FFT-based spectrum can be computed along the frequency domain or along mel-bands, with a linear or decibel energy scale, and applying various windowing methods. The results can be multiplied with diverse resonance curves in order to highlight different aspects, such as metrical pulsation (when computing the FFT of envelopes) or fluctuation [6]. Many features can be derived from the FFT. Basic statistics of the spectrum give some timbral characteristics (such as spectral centroid, roll-off [5], brightness, flatness, etc.). The temporal derivative of the spectrum gives the spectral flux. An estimation of roughness, or sensory dissonance, can be assessed by adding the beating provoked by each couple of energy peaks in the spectrum [7]. A conversion of the spectrum into the mel scale can lead to the computation of Mel-Frequency Cepstral Coefficients (MFCC) (cf. example 2.2), and of fluctuation [6]. Tonality can also be estimated.
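The two simplest descriptors above, zero-crossing rate and RMS energy, reduce to a few lines of numpy. The sketch below is an illustrative standalone re-implementation for clarity, not MIRtoolbox code:

```python
import numpy as np

def zero_crossing_rate(x):
    """Fraction of consecutive sample pairs whose signs differ."""
    signs = np.sign(x)
    return np.mean(signs[1:] != signs[:-1])

def rms(x):
    """Root-mean-square energy of the signal."""
    return np.sqrt(np.mean(np.asarray(x, dtype=float) ** 2))

# A 440 Hz sine sampled at 8 kHz crosses zero 880 times per second.
sr = 8000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)
print(zero_crossing_rate(x))   # close to 2 * 440 / 8000 = 0.11
print(rms(x))                  # close to 1/sqrt(2), about 0.707
```

In MIRtoolbox the corresponding results adapt to whatever input object they are given; this sketch only shows the underlying arithmetic on a raw sample array.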
The computation of the autocorrelation can use diverse normalization strategies, and integrates the improvement proposed by Boersma [8] in order to compensate for the side-effects due to windowing. Resonance curves are also available here. Autocorrelation can be generalized through a compression of the spectral representation [9]. The estimation of pitch is usually based on the spectrum, the autocorrelation, or the cepstrum, or a mixture of these strategies [10]. A distinct approach consists of designing a complete chain of processes based on the modelling of auditory perception of sound and music [2] (circled in Figure 1). This approach can be used in particular for the computation of rhythmic pulsation (cf. example 2.3).

2.2. Example: Timbre analysis

One common way of describing timbre is based on MFCCs [11, 2]. Figure 2 shows the diagram of operations. First, the audio sequence is loaded (1) and decomposed into successive frames (2), which are then converted into the spectral domain using the mirspectrum function (3). The spectra are converted from the frequency domain to the mel-scale domain: the frequencies are rearranged into 40 frequency bands called mel-bands (4). The envelope of the mel-scale spectrum is described with the MFCCs, which are obtained by applying the Discrete Cosine Transform to the mel-scale spectrum. Usually only a restricted number of them (for instance the first 13) are selected (5).

a = miraudio('audiofile.wav') (1)
f = mirframe(a) (2)
s = mirspectrum(f) (3)
m = mirspectrum(s,'Mel') (4)
c = mirmfcc(m,'Rank',1:13) (5)

The computation can be carried out in a window sliding through the audio signal (this corresponds to the frame decomposition in code line (2)), resulting in a series of MFCC vectors, one for each successive frame, that can be represented column-wise in a matrix. Figure 2 shows an example of such a matrix. The MFCCs do not convey very intuitive meaning per se, but are generally applied to distance computation between frames, and therefore to segmentation tasks (cf. section 2.4). The whole process can be executed in a single line by calling the mirmfcc function directly, with the frame decomposition as argument:

mirmfcc(f,'Rank',1:13) (6)
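The chain of steps (1)-(5) — spectrum, mel-band rearrangement, logarithm, and Discrete Cosine Transform — can be sketched in plain numpy for a single frame. This is an illustrative re-implementation, not the toolbox's code: the 40 bands and 13 coefficients follow the text, while details such as the Hann window, the triangular filter shape and the log floor are assumptions:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_bands, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    edges = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_bands + 2))
    bins = np.floor((n_fft + 1) * edges / sr).astype(int)
    fb = np.zeros((n_bands, n_fft // 2 + 1))
    for b in range(n_bands):
        lo, mid, hi = bins[b], bins[b + 1], bins[b + 2]
        for k in range(lo, mid):
            fb[b, k] = (k - lo) / max(mid - lo, 1)   # rising slope
        for k in range(mid, hi):
            fb[b, k] = (hi - k) / max(hi - mid, 1)   # falling slope
    return fb

def dct2(x):
    """Orthonormal DCT-II, which decorrelates the log-mel energies."""
    n = len(x)
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    coef = basis @ x * np.sqrt(2.0 / n)
    coef[0] /= np.sqrt(2.0)
    return coef

def mfcc(frame, sr, n_bands=40, n_coef=13):
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2  # (3)
    mel = mel_filterbank(n_bands, len(frame), sr) @ spec             # (4)
    return dct2(np.log(mel + 1e-10))[:n_coef]                        # (5)

sr = 8000
t = np.arange(512) / sr
c = mfcc(np.sin(2 * np.pi * 440 * t), sr)
print(c.shape)  # (13,)
```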
2.3. Example: Rhythm analysis

One common way of estimating the rhythmic pulsation, described in Figure 6, is based on auditory modelling [5]. The audio signal is first decomposed into auditory channels using a bank of filters. Diverse types of filterbanks are proposed, and the number of channels can be changed, to 20 for instance (8). The envelope of each channel is extracted (9). As pulsation is generally related to increases of energy only, the envelopes are differentiated and half-wave rectified, before being finally summed together again (10). This gives a precise description of the variation of energy produced by each note event in the different auditory channels. After this onset detection, the periodicity is estimated through autocorrelation (12). However, if the tempo varies throughout the piece, an autocorrelation of the whole sequence will not show clear periodicities. In such cases it is better to compute the autocorrelation for a frame decomposition (11). This yields a periodogram that highlights the different periodicities, as shown in Figure 6. In order to focus on the periodicities that are most perceptible, the periodogram is filtered using a resonance curve [16] (12), after which the best tempos are estimated through peak picking (13), and the results are converted into beats per minute (14). Due to
the difficulty of choosing among the possible multiples of the tempo, several candidates (three, for instance) may be selected for each frame, and a histogram of all the candidates for all the frames, called a periodicity histogram, can be drawn (15).

fb = mirfilterbank(a,20) (8)
e = mirenvelope(fb,'Diff','Halfwave') (9)
s = mirsum(e) (10)
fr = mirframe(s,3,1) (11)
ac = mirautocor(fr,'Resonance') (12)
p = mirpeaks(ac,'Total',1,'NoEnd') (13)
t = mirtempo(p) (14)
h = mirhisto(t) (15)

The whole process can be executed in a single line by calling the mirtempo function directly with the audio input as argument:

mirtempo(a,'Frame') (16)

In this case, the different options available throughout the process can be specified directly as arguments of the tempo function. For instance, a frame-based tempo estimation, with a selection of the 3 best tempo candidates in each frame, a range of admissible tempi between 60 and 120 beats per minute, and an estimation strategy based on a mixture of spectrum and autocorrelation applied to the spectral flux, will be executed with the syntax:

mirtempo(a,'Frame','Total',3,'Min',60,'Max',120,'Spectrum','Autocor','SpectralFlux') (17)
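The core of steps (9)-(14) — envelope differentiation, half-wave rectification, autocorrelation and peak picking — can be sketched on an already-extracted envelope. This is an illustrative numpy sketch, not MIRtoolbox code; the envelope sampling rate and the admissible tempo range below are assumptions made for the example:

```python
import numpy as np

def tempo_from_envelope(env, sr, bpm_min=40, bpm_max=200):
    """Differentiate the envelope, half-wave rectify, autocorrelate,
    and pick the strongest lag in the admissible period range."""
    onset = np.maximum(np.diff(env), 0.0)          # (9)-(10): diff + half-wave
    onset = onset - onset.mean()
    ac = np.correlate(onset, onset, mode='full')[len(onset) - 1:]  # (12)
    lo = int(sr * 60.0 / bpm_max)                  # shortest admissible period
    hi = int(sr * 60.0 / bpm_min)                  # longest admissible period
    lag = lo + int(np.argmax(ac[lo:hi]))           # (13): peak picking
    return 60.0 * sr / lag                         # (14): lag -> BPM

# Synthetic envelope: one energy burst every 0.5 s, i.e. 120 BPM.
sr = 100                       # envelope sampling rate in Hz (assumed)
env = np.zeros(sr * 10)
env[::sr // 2] = 1.0
print(round(tempo_from_envelope(env, sr)))  # 120
```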
2.4. Segmentation

More elaborate tools have also been implemented that can carry out higher-level analyses and transformations. In particular, audio files can be automatically segmented into a series of homogeneous sections, through the estimation of temporal discontinuities along diverse alternative features, such as timbre in particular [17]. First the audio signal is decomposed into frames (18), and one chosen feature, such as MFCC (19), is computed along these frames. The feature-based distances between all possible frame pairs are stored in a similarity matrix (20). Convolution along the main diagonal of the similarity matrix using a Gaussian checkerboard kernel yields
a novelty curve that indicates the temporal locations of significant textural changes (21). Peak detection applied to the novelty curve returns the temporal positions of feature discontinuities (22), which can be used for the actual segmentation of the audio sequence (23).

fr = mirframe(a) (18)
fe = mirmfcc(fr) (19)
sm = mirsimatrix(fe) (20)
nv = mirnovelty(sm) (21)
ps = mirpeaks(nv) (22)
sg = mirsegment(a,ps) (23)

The whole segmentation process can be executed in a single line by calling the mirsegment function directly with the audio input as argument:

mirsegment(a,'Novelty') (25)

By default, the novelty curve is based on MFCC, but other features can be selected as well using an additional option:

mirsegment(a,'Novelty','Spectrum') (26)

A second similarity matrix can be computed in order to show the distance, according to the same feature as the one used for the segmentation, between all possible segment pairs (28):

fesg = mirmfcc(sg) (27)
smsg = mirsimatrix(fesg) (28)
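The novelty computation of steps (20)-(21) — correlating a Gaussian checkerboard kernel along the main diagonal of the similarity matrix — can be sketched as follows. This is an illustrative numpy sketch, not MIRtoolbox's implementation; the kernel half-width and the Gaussian taper are assumptions:

```python
import numpy as np

def novelty(sim, w=8):
    """Slide a 2w x 2w Gaussian checkerboard kernel along the main
    diagonal of a (frames x frames) similarity matrix; large values
    mark boundaries between two internally similar regions."""
    g = np.exp(-np.linspace(-2, 2, 2 * w) ** 2)
    checker = np.kron(np.array([[1, -1], [-1, 1]]), np.ones((w, w)))
    kernel = np.outer(g, g) * checker
    n = sim.shape[0]
    pad = np.zeros((n + 2 * w, n + 2 * w))          # zero-pad the edges
    pad[w:w + n, w:w + n] = sim
    return np.array([np.sum(kernel * pad[i:i + 2 * w, i:i + 2 * w])
                     for i in range(n)])

# Two homogeneous halves: self-similar blocks, dissimilar across the boundary.
n = 40
sim = np.zeros((n, n))
sim[:n // 2, :n // 2] = 1.0
sim[n // 2:, n // 2:] = 1.0
nov = novelty(sim)
print(int(np.argmax(nov)))  # the novelty curve peaks at the boundary frame, 20
```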
2.5. Data analysis

The toolbox includes diverse tools for data analysis, such as a peak extractor, and functions that compute histograms, entropy, zero-crossing rates, irregularity, or various statistical moments (centroid, spread, skewness, kurtosis, flatness) on data of various types, such as spectra, envelopes or histograms.

The mirpeaks function can accept the data returned by any other function of MIRtoolbox, and can adapt to different kinds of data of any number of dimensions. In the graphical representation of the results, the peaks are automatically located on the corresponding curves (for 1D data) or bit-map images (for 2D data).

The mirpeaks function offers alternative possible heuristics. It is possible to define a global threshold that peaks must exceed in order to be selected. We have also designed a new strategy of peak selection based on a notion of contrast, discarding peaks that are not sufficiently contrastive (with respect to a certain threshold) with the neighbouring peaks. This adaptive filtering strategy hence adapts to the local particularities of the curves. Its articulation with other, more conventional thresholding strategies leads to an efficient peak-picking module that can be applied throughout MIRtoolbox.

Supervised classification of musical samples can also be performed, using techniques such as K-Nearest Neighbours or Gaussian Mixture Models. One possible application is the classification of audio recordings into musical genres.
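The contrast idea can be illustrated with a minimal sketch: keep only those local maxima that exceed the lowest value in their neighbourhood by a given contrast threshold. This is a simplified illustration of the principle, not MIRtoolbox's exact rule; the neighbourhood radius is an assumption:

```python
import numpy as np

def contrast_peaks(curve, contrast, radius=5):
    """Return indices of local maxima whose height exceeds the minimum
    of their neighbourhood by at least `contrast`."""
    peaks = []
    for i in range(1, len(curve) - 1):
        if curve[i] > curve[i - 1] and curve[i] >= curve[i + 1]:  # local max
            lo = max(0, i - radius)
            hi = min(len(curve), i + radius + 1)
            if curve[i] - min(curve[lo:hi]) >= contrast:          # contrastive?
                peaks.append(i)
    return peaks

# One prominent peak (index 5) among small ripples.
curve = np.array([0.0, 0.05, 0.02, 0.06, 0.03, 0.9, 0.1, 0.08, 0.12, 0.05, 0.0])
print(contrast_peaks(curve, contrast=0.2))  # [5]
```

Lowering the contrast threshold admits weaker peaks, which mimics the trade-off the text describes between a global threshold and the adaptive, contrast-based selection.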
3. DESIGN OF THE TOOLBOX

3.1. Data encapsulation

All the data returned by the functions in the toolbox are encapsulated into typed objects. The default display method associated with all these objects is a graphical display of the corresponding curves. In this way, when the display of the values of a given analysis is requested, what is printed is not a listing of long vectors or matrices, but rather a correctly formatted graphical representation. The actual data matrices associated with those objects can be obtained by calling a method called mirgetdata, which constructs the simplest possible data structure associated with the data (cf. paragraph 4.1).

3.2. Frame analysis

Frame-based analyses (i.e., based on the use of a sliding window) can be specified using two alternative methods. The first method is based on the use of the mirframe function, which decomposes an audio signal into successive frames. Optional arguments can specify the frame size (in seconds, by default) and the hop factor (between 0 and 1, by default). For instance, in the following code (line 29), the frames have a size of 50 milliseconds and are half-overlapped. The results of that function could then be directly sent as
input of any other function of the toolbox (30):

f = mirframe(a,.05,.5) (29)
mirtempo(f) (30)

Yet this first method does not work correctly when dealing, for instance, with tempo estimation as described in section 2.3. Following this first method, as shown in Figure 7, the frame decomposition is the first step performed in the chain of processes. As a result, the input of the filterbank decomposition is a series of short frames, which induces two main difficulties. Firstly, in order to avoid the presence of an undesirable transitory state at the beginning of each filtered frame, the initial state of each filter would need to be tuned depending on the state of the filter at one particular instant of the previous frame (depending on the overlapping factor). Secondly, the demultiplication of the redundancies of the frame decomposition (if the frames are overlapped) thro
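The sliding-window decomposition that mirframe performs can be sketched as follows, mirroring the mirframe(a,.05,.5) call above (frame size in seconds, hop as a fraction of the frame length). This is an illustrative numpy re-implementation, not the toolbox's code:

```python
import numpy as np

def frames(x, sr, size=0.05, hop=0.5):
    """Slice a signal into frames of `size` seconds with hop factor
    `hop` (0.5 means half-overlapping frames); trailing samples that
    do not fill a whole frame are dropped."""
    n = int(round(size * sr))                       # samples per frame
    step = max(int(round(n * hop)), 1)              # samples between frame starts
    starts = range(0, len(x) - n + 1, step)
    return np.stack([x[s:s + n] for s in starts])

x = np.arange(8000, dtype=float)   # one second of signal at sr = 8000
f = frames(x, 8000)                # 50 ms frames, half-overlapped
print(f.shape)                     # (39, 400)
```

Any frame-wise feature can then be computed column by column on such a matrix, which is the role the frame decomposition plays throughout the toolbox.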