人工智能外文翻譯文獻_第1頁
人工智能外文翻譯文獻_第2頁
人工智能外文翻譯文獻_第3頁
人工智能外文翻譯文獻_第4頁
人工智能外文翻譯文獻_第5頁
已閱讀5頁,還剩5頁未讀, 繼續(xù)免費閱讀

下載本文檔

版權(quán)說明:本文檔由用戶提供并上傳,收益歸屬內(nèi)容提供方,若內(nèi)容存在侵權(quán),請進行舉報或認領(lǐng)

文檔簡介

1、文獻信息:文獻標題:Research Priorities for Robust and Beneficial Artificial Intelligence(穩(wěn)健和有益的人工智能的爭辯重點)國外作者:Stuart Russell, Daniel Dewey, Max Tegmark文 獻 出 處 : AssociationfortheAdvancementofArtificial Intelligence,2015,36(4):105-114字數(shù)統(tǒng)計:英文 2887 單詞,16400 字符;中文 5430 漢字外文文獻:Research Priorities for Robust and B

2、eneficial Artificial IntelligenceAbstractSuccess in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to investigate how to maximize these benefits while avoiding potential pitfalls. This article gives numerous example

3、s (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial.Keywords: artificial intelligence, superintelligence, robust, beneficial, safety, societyArtificial intelligence (AI) research has explored a variety of

4、 problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents systems that perceive and act in some environment. In this context, the criterion for intelligence is related to statistical and economic

5、notions of rationality colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic representations and statisticallearningmethodshasledtoalargedegreeofintegrationand cross-fertilizationbetweenAI,machinelearning,statistics,controltheory, neuroscience, and oth

6、er fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkablesuccesses in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and qu

7、estion-answering systems.As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research.

8、There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intell

9、igence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it isvaluable to investigate how to reap its benefits while avoiding potential pitfalls.Short-term Research Priorities Optimizing AIs Economic Impac

10、tThesuccessesofindustrialapplicationsofAI,frommanufacturingto information services, demonstrate a growing impact on the economy, although there is disagreement about the exact nature of this impact and on how to distinguish between the effects of AI and those of other information technologies. Many

11、economists and computer scientists agree that there is valuable research to be done on how to maximize the economic benefits of AI while mitigating adverse effects, which could includeincreasedinequalityandunemployment(Mokyr 2014;Brynjolfssonand McAfee 2014; Frey and Osborne 2013; Glaeser 2014; Shan

12、ahan 2015; Nilsson 1984; Manyika et al. 2013). Such considerations motivate a range of research directions, spanning areas from economics to psychology. Below are a few examples that should by no means be interpreted as an exhaustive list.Labor market forecasting: When and in what order should we ex

13、pect variousjobs to become automated (Frey and Osborne 2013)? How will this affect the wages of less skilled workers, the creative professions, and different kinds of informationworkers? Some have have argued that AI is likely to greatly increase the overall wealth of humanity as a whole (Brynjolfss

14、on and McAfee 2014). However, increased automation may push income distribution further towards a power law (Brynjolfsson, McAfee, and Spence 2014), and the resulting disparity may fall disproportionately along lines of race, class, and gender; research anticipating the economic and societal impact

15、of such disparity could be useful.Other market disruptions:Significant parts of the economy, including finance, insurance, actuarial, and many consumer markets, could be susceptible to disruption through the use of AI techniques to learn, model, and predict human and market behaviors. These markets

16、might be identified by a combination of high complexity and high rewards for navigating that complexity (Manyika et al. 2013).Policy for managing adverse effects: What policies could help increasingly automated societies flourish? For example, Brynjolfsson and McAfee (Brynjolfsson andMcAfee2014)expl

17、orevariouspoliciesforincentivizingdevelopmentof labor-intensive sectors and for using AI -generated wealth to support underemployed populations. What are the pros and cons of interventions such as educational reform, apprenticeship programs, labor-demanding infrastructure projects, and changes to mi

18、nimum wage law, tax structure, and the social safety net (Glaeser 2014)? History provides many examples of subpopulations not needing to work for economic security, ranging from aristocrats in antiquity to many present-day citizens of Qatar. What societal structures and other factors determine wheth

19、er such populations flourish? Unemploymentisnotthesameasleisure,andtherearedeeplinksbetween unemployment and unhappiness, self-doubt, and isolation (Hetschko, Knabe, and Scho b 2014; Clark and Oswald 1994); understanding what policies and norms can break these links could significantly improve the m

20、edian quality of life. Empirical and theoretical research on topics such as the basic income proposal could clarify our options (Van Parijs 1992; Widerquist et al. 2013).Economic measures: It is possible that economic measures such as real GDP per capita do not accurately capture the benefits and de

21、triments of heavily AI-and-automation-based economies, making these metrics unsuitable for policypurposes(Mokyr2014).Researchonimprovedmetricscouldbeusefulfor decision-making.Law and Ethics ResearchThe development of systems that embody significant amounts of intelligence and autonomy leads to impor

22、tant legal and ethical questions whose answers impact both producers and consumers of AI technology. These questions span law, public policy, professional ethics, and philosophical ethics, and will require expertise from computer scientists, legal experts, political scientists, and ethicists. For ex

23、ample:Liability and law for autonomous vehicles:If self -driving cars cut the roughly 40,000 annual US traffic fatalities in half, the car makers might get not 20,000 thank-you notes, but 20,000 lawsuits. In what legal framework can the safety benefits of autonomous vehiclessuch as drone aircraft an

24、d self-driving cars best be realized (Vladeck 2014)? Should legal questions about AI be handled by existing (software- and internet-focused) cyberlaw, or should they be treated separately (Calo 2014b)? In both military and commercial applications, governments will need to decide how best to bring th

25、e relevant expertise to bear; for example, a panel or committee of professionals and academics could be created, and Calo has proposed the creation of a Federal Robotics Commission (Calo 2014a).Machine ethics: How should an autonomous vehicle trade off, say, a small probability of injury toa human a

26、gainst the near -certainty of a large material cost? How should lawyers, ethicists, and policymakers engage the public on these issues?Should such trade-offs be the subject of national standards?Autonomous weapons: Can lethal autonomous weapons be made to comply with humanitarian law (Churchill and

27、Ulfstein 2000)? If, as some organizations have suggested, autonomous weapons should be banned (Docherty 2012), is it possible to develop a precise definition of autonomy for this purpose, and can such a ban practically be enforced? If it is permissible or legal to use lethal autonomous weapons, how

28、should these weapons be integrated into the existing command-and-control structure so that responsibility and liability remain associated with specific human actors? What technical realities and forecasts should inform these questions, and howshould meaningful human control over weapons be defined (

29、Roff 2013, 2014; Anderson, Reisner, and Waxman 2014)? Are autonomous weapons likely to reduce political aversion to conflict, or perhaps result in accidental battles or wars (Asaro 2008)? Would such weapons become the tool of choice for oppressors or terrorists?Finally, how can transparency and publ

30、ic discourse best be encouraged on these issues?Privacy: How should the ability of AI systems to interpret the data obtained from surveillance cameras, phone lines, emails, etc., interact with the right to privacy? How will privacy risks interact with cybersecurity and cyberwarfare (Singer and Fried

31、man 2014)? Our ability to take full advantage of the synergy between AI and big data will depend in part on our ability to manage and preserve privacy (Manyika et al. 2011; Agrawal and Srikant 2000).Professional ethics: What role should computer scientists play in the law and ethics of AI developmen

32、t and use? Past and current projects to explore these questions include the AAAI 200809 Presidential Panel on Long -Term AI Futures (Horvitz and Selman 2009), the EPSRC Principles of Robotics (Boden et al. 2011), and recently announced programs such as Stanfords One-Hundred Year Study of AI and the

33、AAAI Committee on AI Impact and Ethical Issues.Long-term research prioritiesA frequently discussed long-term goal of some AI researchers is to develop systems that can learn from experience with human-like breadth and surpass human performance in most cognitive tasks, thereby having a major impact o

34、n society. If there is a non -negligible probability that these efforts will succeed in the foreseeable future, then additional current research beyond that mentioned in the previous sections will be motivated as exemplified below, to help ensure that the resulting AI will be robust and beneficial.V

35、erificationReprisingthethemesofshort-termresearch,researchenablingverifiable low-level software and hardware can eliminate large classes of bugs and problems ingeneral AI systems; if such systems become increasingly powerful and safety -critical, verifiablesafetypropertieswillbecomeincreasinglyvalua

36、ble.Ifthetheoryof extending verifiable properties from components to entire systems is well understood, then even very large systems can enjoy certain kinds of safety guarantees, potentially aided by techniques designed explicitly to handle learning agents and high-level properties. Theoretical rese

37、arch, especially if it is done explicitly with very general and capable AI systems in mind, could be particularly useful.A related verification research topic that isdistinctive to long-term concerns is the verifiability of systems that modify, extend, or improve themselves, possibly many times in s

38、uccession (Good 1965; Vinge 1993). Attempting to straightforwardly apply formalverificationtoolstothismoregeneralsettingpresentsnewdifficulties, including the challenge that a formal system that is sufficiently powerful cannot use formalmethodsintheobviouswaytogainassuranceabouttheaccuracyof functio

39、nallysimilarformalsystems,onpainofinconsistencyviaGo dels incompleteness (Fallenstein and Soares 2014; Weaver 2013). It is not yet clear whether or how this problem can be overcome, or whether similar problems will arise with other verification methods of similar strength.Finally, it is often diffic

40、ult to actually apply formal verification techniques tophysical systems, especially systems that have not been designed with verification in mind.Thismotivatesresearchpursuingageneraltheorythatlinksfunctional specification to physical states of affairs. This type of theory would allow use of formal

41、tools to anticipate and control behaviors of systems that approximate rational agents, alternate designs such as satisficing agents, and systems that cannot be easily describedinthestandardagentformalism(powerfulpredictionsystems, theorem-provers, limited-purpose science or engineering systems, etc.

42、). It may also be that such a theory could allow rigorous demonstrations that systems are constrained from taking certain kinds of actions or performing certain kinds of reasoning.ValidityAs in the short-term research priorities, validity is concerned with undesirable behaviors that can arise despit

43、e a systems formalcorrectness. In the long term, AIsystems might become more powerful and autonomous, in which case failures of validity could carry correspondingly higher costs.Strong guarantees for machine learning methods, an area we highlighted for short-term validity research, will also be impo

44、rtant for long -term safety. To maximize the long-term value of this work, machine learning research might focus on the types of unexpected generalization that would be most problematic for very general and capableAIsystems.In particular, itmightaimtounderstandtheoreticallyand practicallyhowlearnedr

45、epresentationsofhigh-levelhumanconceptscouldbe expectedtogeneralize(orfailto)inradicallynewcontexts(Tegmark 2015). Additionally, if some concepts could be learned reliably, it might be possible to use themtodefinetasksand constraints that minimize the chances of unintended consequences even when aut

46、onomous AI systems become very general and capable.Little work has been done on this topic, which suggests that both theoretical and experimental research may be useful.Mathematical tools such as formal logic, probability, and decision theory have yielded significant insight into the foundations of

47、reasoning and decision-making. However, there are still many open problems in the foundations of reasoning and decision. Solutions to these problems may make the behavior of very capable systems much more reliable and predictable. Example research topics in this area include reasoninganddecisionunde

48、rboundedcomputationalresourcesasHorvitzand Russell (Horvitz 1987; Russell and Subramanian 1995), how to take into account correlations between AI systems behaviors and those of their environments or of other agents (Tennenholtz 2004; LaVictoire et al. 2014; Hintze 2014; Halpern and Pass 2013; Soares

49、 and Fallenstein 2014c), how agents that are embedded in their environments should reason (Soares 2014a; Orseau and Ring 2012), and how to reason about uncertainty over logicalconsequences of beliefs or other deterministic computations (Soares and Fallenstein 2014b). These topics may benefit from be

50、ing considered together, since they appear deeply linked (Halpern and Pass 2011; Halpern, Pass, and Seeman 2014).In the long term, it is plausible that we will want to make agents that actautonomouslyandpowerfullyacrossmanydomains.Explicitlyspecifyingour preferences in broad domains in the style of

51、near-future machine ethics may not be practical, making aligning the values of powerful AI systems with our own values and preferences difficult (Soares 2014b; Soares and Fallenstein 2014a).SecurityIt is unclear whether long -term progress in AI will make the overall problem of security easier or ha

52、rder; on one hand, systems will become increasingly complex in construction and behavior and AI-based cyberattacks may be extremely effective, while on the other hand, the use of AI and machine learning techniques along with significant progress in low-level system reliability may render hardened sy

53、stems much less vulnerable than todays. From a cryptographic perspective, it appears that this conflict favors defenders over attackers; this may be a reason to pursue effective defense research wholeheartedly.Although the topics described in the near-term security research section above may become

54、increasingly important in the long term, very general and capable systems will pose distinctive security problems. In particular, if the problems of validity and control are not solved, it may be useful to create containers” for AI systems that could have undesirable behaviors and consequences in le

55、ss controlled environments (Yampolskiy 2012). Both theoretical and practical sides of this question warrantinvestigation.IfthegeneralcaseofAIcontainmentturnsouttobe prohibitively difficult, then it may be that designing an AI system and a container in parallel is more successful, allowing the weakne

56、sses and strengths of the design to inform the containment strategy (Bostrom 2014). The design of anomaly detection systems and automated exploit-checkers could be of significant help. Overall, it seems reasonable to expect this additional perspective defending against attacks from within” a system

57、aswellasfrom externalactorswillraise interesting and profitable questions in the field of computer security.ControlIthasbeenarguedthatverygeneralandcapableAIsystemsoperating autonomously to accomplish some task will often be subject to effects that increasethe difficulty of maintaining meaningful hu

58、man control (Omohundro 2007; Bostrom 2012, 2014; Shanahan 2015). Research on systems that are not subject to these effects, minimize their impact, or allow for reliable human control could be valuable in preventing undesired consequences, as could work on reliable and secure test -beds for AI system

59、s at a variety of capability levels.If an AI system is selecting the actions that best allow it to complete a given task, then avoiding conditions that prevent the system from continuing to pursue the task is anaturalsubgoal(Omohundro2007;Bostrom2012)(andconversely,seeking unconstrained situations is sometimes a useful heuristic (Wissner-Gross and Freer 2013). This could become problematic, however, if we wish to repurpose the system, to deactivate it, or to significantly alter its decision-making process; such a system would rationally avoid these change

溫馨提示

  • 1. 本站所有資源如無特殊說明,都需要本地電腦安裝OFFICE2007和PDF閱讀器。圖紙軟件為CAD,CAXA,PROE,UG,SolidWorks等.壓縮文件請下載最新的WinRAR軟件解壓。
  • 2. 本站的文檔不包含任何第三方提供的附件圖紙等,如果需要附件,請聯(lián)系上傳者。文件的所有權(quán)益歸上傳用戶所有。
  • 3. 本站RAR壓縮包中若帶圖紙,網(wǎng)頁內(nèi)容里面會有圖紙預(yù)覽,若沒有圖紙預(yù)覽就沒有圖紙。
  • 4. 未經(jīng)權(quán)益所有人同意不得將文件中的內(nèi)容挪作商業(yè)或盈利用途。
  • 5. 人人文庫網(wǎng)僅提供信息存儲空間,僅對用戶上傳內(nèi)容的表現(xiàn)方式做保護處理,對用戶上傳分享的文檔內(nèi)容本身不做任何修改或編輯,并不能對任何下載內(nèi)容負責(zé)。
  • 6. 下載文件中如有侵權(quán)或不適當內(nèi)容,請與我們聯(lián)系,我們立即糾正。
  • 7. 本站不保證下載資源的準確性、安全性和完整性, 同時也不承擔(dān)用戶因使用這些下載資源對自己和他人造成任何形式的傷害或損失。

評論

0/150

提交評論