Vehicle Type Recognition Based on a BP Neural Network (Chinese-English Translation)

Hunan University of Science and Technology
Intelligent Control Theory Paper
Name: _  College: _  Class: _  Student ID: _

License Plate Recognition Based on Prior Knowledge

Abstract

In this paper, a new algorithm based on an improved BP (back-propagation) neural network for Chinese vehicle license plate recognition (LPR) is described. The proposed approach provides a solution for vehicle license plates (VLPs) that are severely degraded. What remarkably distinguishes it from traditional methods is the application of prior knowledge of the license plate to the procedures of location, segmentation and recognition. Color collocation is used to locate the license plate in the image.
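The color-collocation cue mentioned above can be illustrated with a small sketch. This is not the paper's algorithm: the thresholds, the function name `locate_blue_region`, and the assumption of a blue-background plate (the common mainland-Chinese style, blue with white characters) are all illustrative.

```python
import numpy as np

def locate_blue_region(img):
    """Crude color-collocation stand-in: bounding box (top, bottom, left, right)
    of the blue-dominant pixels in an H x W x 3 RGB uint8 image."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    # Plate candidate: blue channel clearly dominates red and green.
    mask = (b > 100) & (b > r + 50) & (b > g + 50)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # no blue-dominant region found
    return ys.min(), ys.max(), xs.min(), xs.max()

# Synthetic image: gray background with a blue "plate" at rows 40-59, cols 30-119.
img = np.full((100, 200, 3), 128, dtype=np.uint8)
img[40:60, 30:120] = (20, 40, 200)
print(locate_blue_region(img))  # (40, 59, 30, 119)
```

A real system would refine the candidate box afterwards, for example with aspect-ratio checks, since a plate has a fixed width-to-height ratio.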

Dimensions of each character are constant, which is used to segment the characters of VLPs. The layout of the Chinese VLP is an important feature, which is used to construct a classifier for recognition. The experimental results show that the improved algorithm is effective under the condition that the license plates are severely degraded.

Vehicle license plate (VLP) recognition is a very interesting but difficult problem. It is important in a number of applications such as weight-and-speed-limit enforcement, red traffic light infringement, road surveys and park security [1]. A VLP recognition system consists of the
plate location, character segmentation and character recognition stages. These tasks become more sophisticated when dealing with plate images taken at various inclined angles or under varying lighting, weather and plate-cleanliness conditions. Because this problem usually arises in real-time systems, it requires not only accuracy but also fast processing. Most existing VLP recognition methods [2]-[5] reduce the complexity and increase the recognition rate by using specific features of local VLPs and establishing constraints on the position, the distance from the camera to vehicles, and the inclined angles. In addition, neural networks have been used to increase the recognition rate [6], [7], but the traditional recognition methods seldom consider the prior knowledge of local VLPs. In this paper, we propose a new improved learning method for the BP algorithm based on specific features of Chinese VLPs. The proposed algorithm overcomes the slow convergence of the BP neural network [8] and remarkably increases the recognition rate, especially under the condition that the license plate images are severely degraded.

Index Terms - License plate recognition, prior knowledge, vehicle license plates, neural network.

1. Neural Network Introduction

Objective

As you read these words you are using a complex biological neural network. You have a highly interconnected set of some 10^11 neurons to facilitate your reading, breathing, motion and thinking. Each of your biological neurons, a rich
assembly of tissue and chemistry, has the complexity, if not the speed, of a microprocessor. Some of your neural structure was with you at birth. Other parts have been established by experience.

Scientists have only just begun to understand how biological neural networks operate. It is generally understood that all biological neural functions, including memory, are stored in the neurons and in the connections between them. Learning is viewed as the establishment of new connections between neurons or the modification of existing connections. This leads to the following question: although we have only a rudimentary understanding of biological neural networks, is it possible to construct a small set of simple artificial "neurons" and perhaps train them to serve a useful function? The answer is "yes." This book, then, is about artificial neural networks. The neurons that we consider here are not biological. They are extremely simple abstractions of biological neurons, realized as elements in a program or perhaps as circuits made of silicon. Networks of these artificial neurons do not have a fraction of the power of the human brain, but they can be trained to perform useful functions. This
book is about such neurons, the networks that contain them and their training.

History

The history of artificial neural networks is filled with colorful, creative individuals from many different fields, many of whom struggled for decades to develop concepts that we now take for granted. This history has been documented by various authors. One particularly interesting book is Neurocomputing: Foundations of Research by John Anderson and Edward Rosenfeld. They have collected and edited a set of some 43 papers of special historical interest. Each paper is preceded by an introduction that puts the
paper in historical perspective.

Histories of some of the main neural network contributors are included at the beginning of various chapters throughout this text and will not be repeated here. However, it seems appropriate to give a brief overview, a sample of the major developments.

At least two ingredients are necessary for the advancement of a technology: concept and implementation. First, one must have a concept, a way of thinking about a topic, some view of it that gives a clarity not there before. This may involve a simple idea, or it may be more specific and include a mathematical description. To illustrate this point, consider the history of the heart. It was thought to be, at various times, the center of the soul or a source of heat. In the 17th century medical practitioners finally began to view the heart as a pump, and they designed experiments to study its pumping action. These experiments revolutionized our view of the circulatory system. Without the pump concept, an understanding of the heart was out of grasp.

Concepts and their accompanying mathematics are not sufficient for a technology to mature unless there is some way to implement the system. For instance, the mathematics necessary for the reconstruction of images from computer-aided tomography (CAT) scans was known many years before the availability of high-speed computers and efficient algorithms finally made it practical to implement a useful CAT system.

The history of neural networks has progressed through both conceptual innovations and implementation developments. These advancements, however, seem to have occurred in fits and starts rather than by steady evolution.

Some of the background work for the field of neural networks occurred in the late 19th and early 20th centuries. This consisted primarily of interdisciplinary work in physics, psychology and neurophysiology by such scientists as Hermann von Helmholtz, Ernst Mach and Ivan Pavlov. This early work emphasized general theories of learning, vision, conditioning, etc., and did not include specific mathematical models of neuron operation.

The modern view of neural networks began in the 1940s with the work of Warren McCulloch and Walter Pitts [McPi43], who showed that networks of artificial neurons could, in principle, compute any arithmetic or logical function. Their work is often acknowledged as the origin of the neural network field.

McCulloch and Pitts were followed by Donald Hebb [Hebb49], who proposed that classical conditioning (as discovered by Pavlov) is present because of the properties of individual neurons. He proposed a mechanism for learning in biological neurons.

The first practical application of artificial neural networks came in the late 1950s, with the invention of the perceptron network and associated learning rule by Frank Rosenblatt [Rose58]. Rosenblatt and his colleagues built a perceptron network and demonstrated its ability to perform pattern recognition. This early success generated a great deal of interest in
neural network research. Unfortunately, it was later shown that the basic perceptron network could solve only a limited class of problems. (See Chapter 4 for more on Rosenblatt and the perceptron learning rule.)

At about the same time, Bernard Widrow and Ted Hoff [WiHo60] introduced a new learning algorithm and used it to train adaptive linear neural networks, which were similar in structure and capability to Rosenblatt's perceptron. The Widrow-Hoff learning rule is still in use today. (See Chapter 10 for more on Widrow-Hoff learning.)

Unfortunately, both Rosenblatt's and Widrow's networks suffered from the same inherent limitations, which were widely publicized in a book by Marvin Minsky and Seymour Papert [MiPa69]. Rosenblatt and Widrow were aware of these limitations and proposed new networks that would overcome them. However, they were not able to successfully modify their learning algorithms
to train the more complex networks.

Many people, influenced by Minsky and Papert, believed that further research on neural networks was a dead end. This, combined with the fact that there were no powerful digital computers on which to experiment, caused many researchers to leave the field. For a decade neural network research was largely suspended.

Some important work, however, did continue during the 1970s. In 1972 Teuvo Kohonen [Koho72] and James Anderson [Ande72] independently and separately developed new neural networks that could act as memories. Stephen Grossberg [Gros76] was also very active during this period in the investigation of self-organizing networks.

Interest in neural networks had faltered during the late 1960s because of the lack of new ideas and powerful computers with which to experiment. During the 1980s both of these impediments were overcome, and research in neural networks
increased dramatically. New personal computers and workstations, which rapidly grew in capability, became widely available. In addition, important new concepts were introduced.

Two new concepts were most responsible for the rebirth of neural networks. The first was the use of statistical mechanics to explain the operation of a certain class of recurrent network, which could be used as an associative memory. This was described in a seminal paper by physicist John Hopfield [Hopf82].

The second key development of the 1980s was the backpropagation algorithm for training multilayer perceptron networks, which was discovered independently by several different researchers. The most influential publication of the backpropagation algorithm was by David Rumelhart and James McClelland [RuMc86]. This algorithm was the answer to the criticisms Minsky and Papert had made in the 1960s. (See Chapters 11 and 12 for a development of the backpropagation algorithm.)

These new developments reinvigorated the field of neural networks. In the last ten years, thousands of papers have been written, and neural networks have found many applications. The field is buzzing with new theoretical and practical work. As noted below, it is not clear where all of this will lead us.

The brief historical account given above is not intended to identify all of the major contributors, but is simply to give the reader some feel for how knowledge in the neural network field has progressed. As one might note, the progress has not always been slow but sure. There have been periods of dramatic progress and periods when relatively little has been accomplished.

Many of the advances in neural networks have had to do with new concepts, such as innovative architectures and training. Just as important has been the availability of powerful new computers on which to test these new concepts.

Well, so much for the history of neural networks to this date. The real question is, what will happen in the next ten to twenty years? Will neural networks take a permanent place as a mathematical/engineering tool, or will they fade away as have so many promising technologies? At present, the answer seems to be that neural networks will not only have their day but will have a permanent place, not as a solution to every problem, but as a tool to be used in appropriate situations. In addition, remember that we still know very little about how the brain works. The most important advances in neural networks almost certainly lie in the future.

Although it is difficult to predict the future success of neural networks, the large number and wide variety of applications of this new technology are very encouraging. The next section describes some of these applications.

Applications

A recent newspaper article described the use of neural networks in literature research by Aston University. It stated that the network can be taught to recognize individual writing styles, and the researchers used it to compare works attributed to Shakespeare and his contemporaries. A popular science television program recently documented the use of neural networks by an Italian research institute to test the purity of olive oil. These examples are indicative of the broad range of applications that can be found for neural networks. The applications
are expanding because neural networks are good at solving problems, not just in engineering, science and mathematics, but in medicine, business, finance and literature as well. Their application to a wide variety of problems in many fields makes them very attractive. Also, faster computers and faster algorithms have made it possible to use neural networks to solve complex industrial problems that formerly required too much computation.

The following note and Table of Neural Network Applications are reproduced here from the Neural Network Toolbox for MATLAB with the permission of The MathWorks, Inc.

The 1988 DARPA Neural Network Study [DARP88] lists various neural network applications, beginning with the adaptive channel equalizer in about 1984. This device, which is an outstanding commercial success, is a single-neuron network used in long-distance telephone systems to stabilize voice signals. The DARPA report goes on to list other commercial applications, including a small word recognizer, a process monitor, a sonar classifier and a risk analysis system.

Neural networks have been applied in many fields since the DARPA report was written. A list of some applications mentioned in the literature follows.

Aerospace: High-performance aircraft autopilots, flight path simulations, aircraft control systems, autopilot enhancements, aircraft component simulations, aircraft component fault detectors

Automotive: Automobile automatic guidance systems, warranty activity analyzers

Banking: Check and other document readers, credit application evaluators

Defense: Weapon steering, target tracking, object discrimination, facial recognition, new kinds of sensors, sonar, radar and image signal processing including data compression, feature extraction and noise suppression, signal/image identification

Electronics: Code sequence prediction, integrated circuit chip layout, process control, chip failure analysis, machine vision, voice synthesis, nonlinear modeling

Entertainment: Animation, special effects, market forecasting

Financial: Real estate appraisal, loan advisor, mortgage screening, corporate bond rating, credit line use analysis, portfolio trading program, corporate financial analysis, currency price prediction

Insurance: Policy application evaluation, product optimization

Manufacturing: Manufacturing process control, product design and analysis, process and machine diagnosis, real-time particle identification, visual quality inspection systems, beer testing, welding quality analysis, paper quality prediction, computer chip quality analysis, analysis of grinding operations, chemical product design analysis, machine maintenance analysis, project bidding, planning and management, dynamic modeling
of chemical process systems

Medical: Breast cancer cell analysis, EEG and ECG analysis, prosthesis design, optimization of transplant times, hospital expense reduction, hospital quality improvement, emergency room test advisement

Oil and Gas: Exploration

Robotics: Trajectory control, forklift robot, manipulator controllers, vision systems

Speech: Speech recognition, speech compression, vowel classification, text-to-speech synthesis

Securities: Market analysis, automatic bond rating, stock trading advisory systems

Telecommunications: Image and data compression, automated information services, real-time translation of spoken language, customer payment processing systems

Transportation: Truck brake diagnosis systems, vehicle scheduling, routing systems

Conclusion

The number of neural network applications, the money that has been invested in neural network software and hardware, and the depth and breadth of interest in these devices have been growing rapidly.

Biological Inspiration

The artificial neural networks discussed in this text are only remotely related to their biological counterparts. In this section we will briefly describe those characteristics of brain function that have inspired the development of artificial neural networks.

The brain consists of a large number (approximately 10^11) of highly connected elements (approximately 10^4 connections per element) called neurons. For our purposes these neurons have three principal components: the dendrites, the cell body and the axon. The dendrites are
tree-like receptive networks of nerve fibers that carry electrical signals into the cell body. The cell body effectively sums and thresholds these incoming signals. The axon is a single long fiber that carries the signal from the cell body out to other neurons. The point of contact between an axon of one cell and a dendrite of another cell is called a synapse. It is the arrangement of neurons and the strengths of the individual synapses, determined by a complex chemical process, that establishes the function of the neural network. Figure 6.1 is a simplified schematic diagram of two biological neurons.

Figure 6.1 Schematic Drawing of Biological Neurons

Some of the neural structure is defined at birth. Other parts are developed through learning, as new connections are made and others waste away.
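The sum-and-threshold behavior of the cell body described above maps directly onto the simplest artificial neuron. A minimal sketch follows; the weights and threshold are arbitrary illustrative values, not taken from the text.

```python
import numpy as np

def neuron(inputs, weights, threshold):
    """Sum-and-threshold abstraction of a biological neuron: the weighted
    inputs play the role of synapses/dendrites, the sum is the cell body,
    and the 0/1 output is the signal sent down the axon."""
    net = float(np.dot(inputs, weights))  # cell body sums the weighted inputs
    return 1 if net >= threshold else 0   # fires only above the threshold

# Two "synapses": one excitatory (+0.7), one inhibitory (-0.4).
w = np.array([0.7, -0.4])
print(neuron(np.array([1, 0]), w, 0.5))  # 1: excitatory input alone fires
print(neuron(np.array([1, 1]), w, 0.5))  # 0: inhibition keeps it below threshold
```

In a trainable network, the synapse strengths (the weights) are exactly what learning adjusts, mirroring the modification of connections described above.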
