Artificial Intelligence in the Palm of Your Hand (6ページ)
原文(げんぶん)
Artificial intelligence techniques are increasingly showing up in smartphone applications. For example, Google has developed Google Goggles, a smartphone application providing a visual search engine. Just take a picture of a book, landmark, or sign using a smartphone's camera and Goggles will perform image processing, image analysis, and text recognition, and then initiate a Web search to identify the object. If you are an English speaker visiting in France, you can take a picture of a sign, menu, or other text and have it translated to English. Beyond Goggles, Google is actively working on voice-to-voice language translation. Soon you will be able to speak English into your phone and have your words spoken in Spanish, Chinese, or another language. Smartphones will undoubtedly get smarter as AI continues to be utilized in innovative ways.
翻訳文(ほんやくぶん)
智能手机应用中逐渐展现出了越来越多的人工智能技术。例如,谷歌研发了 Google Goggles,它是一个提供视觉搜索引擎的智能手机应用。只要用智能手机的摄像头拍摄一本书、某一地标或某一标志,Google Goggles 就会执行图像处理、图像分析以及文本识别,然后启动 Web 搜索来识别对象。如果讲英语的你正身处法国,你可以拍摄一张标志、菜单或其他文本的照片,然后 Google Goggles 会将其翻译为英文。除了 Google Goggles 以外,谷歌正在积极地研究语音到语音的语言翻译,很快你就可以用英语对着手机说话,然后让手机将其用西班牙语、中文或其他语言说出来。随着不断以创新的方式使用 AI,智能手机无疑会越来越智能。
単語(たんご)
初心者(しょしんしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| application | /ˌæplɪˈkeɪʃən/ | 应用;应用程序 |
| camera | /ˈkæmərə/ | 相机 |
| picture | /ˈpɪktʃər/ | 图片;照片 |
| object | /ˈɒbdʒɪkt/ | 物体;对象 |
| text | /tɛkst/ | 文本;文字 |
| translation | /trænzˈleɪʃən/ | 翻译 |
| language | /ˈlæŋɡwɪdʒ/ | 语言 |
| voice | /vɔɪs/ | 声音 |
| identify | /aɪˈdɛntɪfaɪ/ | 识别;鉴定 |
| search | /sɜːrtʃ/ | 搜索 |
| engine | /ˈɛndʒɪn/ | 引擎 |
| image | /ˈɪmɪdʒ/ | 图像 |
| processing | /ˈprɑːsɛsɪŋ/ | 处理 |
| analysis | /əˈnæləsɪs/ | 分析 |
| recognition | /ˌrɛkəɡˈnɪʃən/ | 识别;认可 |
| initiate | /ɪˈnɪʃieɪt/ | 开始;发起 |
| Web | /wɛb/ | 网络;网页 |
| smartphone | /ˈsmɑːrtfoʊn/ | 智能手机 |
| sign | /saɪn/ | 标志;符号 |
| menu | /ˈmɛnjuː/ | 菜单 |
| speaker | /ˈspiːkər/ | 说话者 |
| visit | /ˈvɪzɪt/ | 访问;参观 |
| develop | /dɪˈvɛləp/ | 开发;发展 |
| provide | /prəˈvaɪd/ | 提供 |
| work | /wɜːrk/ | 工作;研究 |
| continue | /kənˈtɪnjuː/ | 继续 |
| use | /juːz/ | 使用;利用 |
| way | /weɪ/ | 方式;方法 |
| get | /ɡɛt/ | 变得;获得 |
| smarter | /ˈsmɑːrtər/ | 更聪明的 |
上級者(じょうきゅうしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| artificial | /ˌɑːrtɪˈfɪʃəl/ | 人工的;人造的 |
| intelligence | /ɪnˈtɛlɪdʒəns/ | 智能;智力 |
| techniques | /tɛkˈniːks/ | 技术;技巧 |
| increasingly | /ɪnˈkriːsɪŋli/ | 越来越多地 |
| visual | /ˈvɪʒuəl/ | 视觉的 |
| landmark | /ˈlændˌmɑːrk/ | 地标 |
| undoubtedly | /ʌnˈdaʊtɪdli/ | 无疑地;肯定地 |
| utilized | /ˈjuːtəˌlaɪzd/ | 利用;使用 |
| innovative | /ˈɪnəˌveɪtɪv/ | 创新的;革新的 |
Physical Agents (11ページ)
原文(げんぶん)
A physical agent (robot) is a programmable system that can be used to perform a variety of tasks. Simple robots can be used in manufacturing to do routine jobs such as assembling, welding, or painting. Some organizations use mobile robots that do delivery jobs such as distributing mail or correspondence to different rooms. There are mobile robots that are used underwater for prospecting for oil.
A humanoid robot is an autonomous mobile robot that is supposed to behave like a human. Although humanoid robots are prevalent in science fiction, there is still a lot of work to do before such robots will be able to interact properly with their surroundings and learn from events that occur there.
翻訳文(ほんやくぶん)
物理智能体(机器人)是一个用来完成各项任务的可编程系统。简单的机器人可以用在制造行业,从事一些日常的工作,如装配、焊接或油漆。有些组织使用移动机器人去做一些日常的分发工作,如分发邮件或信件到不同的房间。还有移动机器人可以在水下探测石油。
人形机器人是一种自治的移动机器人,它模仿人类的行为。虽然人形机器人在科幻小说中很流行,但是要使这种机器人能够合理地与周围环境交互并从环境中发生的事件中学习,还有很多工作要做。
単語(たんご)
初心者(しょしんしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| physical | /ˈfɪzɪkəl/ | 物理的 |
| agent | /ˈeɪdʒənt/ | 代理 |
| system | /ˈsɪstəm/ | 系统 |
| task | /tæsk/ | 任务 |
| simple | /ˈsɪmpəl/ | 简单的 |
| manufacturing | /ˌmænjuˈfæktʃərɪŋ/ | 制造 |
| job | /dʒɒb/ | 工作 |
| organization | /ˌɔːɡənaɪˈzeɪʃən/ | 组织 |
| mobile | /ˈmoʊbəl/ | 移动的 |
| delivery | /dɪˈlɪvəri/ | 交付 |
| mail | /meɪl/ | 邮件 |
| room | /ruːm/ | 房间 |
| underwater | /ˌʌndərˈwɔːtər/ | 水下的 |
| oil | /ɔɪl/ | 石油 |
| autonomous | /ɔːˈtɒnəməs/ | 自主的 |
| behave | /bɪˈheɪv/ | 表现 |
| human | /ˈhjuːmən/ | 人类 |
| work | /wɜːrk/ | 工作 |
| interact | /ˌɪntərˈækt/ | 互动 |
| properly | /ˈprɑːpərli/ | 正确地 |
| surroundings | /səˈraʊndɪŋz/ | 环境 |
| learn | /lɜːrn/ | 学习 |
| event | /ɪˈvɛnt/ | 事件 |
上級者(じょうきゅうしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| programmable | /ˈproʊɡræməbəl/ | 可编程的 |
| assembling | /əˈsɛmblɪŋ/ | 装配 |
| welding | /ˈwɛldɪŋ/ | 焊接 |
| painting | /ˈpeɪntɪŋ/ | 涂装 |
| correspondence | /ˌkɔːrəˈspɒndəns/ | 通信 |
| variety | /vəˈraɪəti/ | 多样性 |
| humanoid | /ˈhjuːmənˌɔɪd/ | 类人 |
| prevalent | /ˈprɛvələnt/ | 流行的 |
| prospecting | /ˈprɒspɛktɪŋ/ | 勘探 |
| science fiction | /ˌsaɪəns ˈfɪkʃən/ | 科幻 |
Reasoning and Logic (26ページ)
原文(げんぶん)
Reasoning is the action of constructing thoughts into a valid argument. This is something you probably do every day. When you make a decision, you are using reasoning, taking different thoughts and making those thoughts into reasons why you should go with one option over the other options available. When you construct an argument, that argument will be either valid or invalid. A valid argument is reasoning that is comprehensive on the foundation of logic or fact.
Inductive and deductive reasoning are both forms of propositional logic. Propositional logic is the branch of logic that studies ways of joining and/or modifying entire propositions, statements or sentences to form more complicated propositions, statements or sentences. Inductive and deductive reasoning use propositional logic to develop valid arguments based on fact and reasoning. Both types of reasoning have a premise and a conclusion. How each type of reasoning gets to the conclusion is different.
翻訳文(ほんやくぶん)
推理是把思想构造成有效论点的行为。这可能是你每天都要做的事。当你作决定时,你在推理并构思不同的想法,然后把这些想法变成为什么你应该作出某个选择而非其他可选项的理由。构造论点时,该论点可能是有效的也可能是无效的。一个有效的论点是在逻辑或事实基础上综合的推理。
归纳推理和演绎推理是命题逻辑的两种形式。命题逻辑是逻辑学的一个分支,它研究如何连接和/或修改整个命题、语句或句子,以形成更复杂的命题、语句或句子。归纳推理和演绎推理都使用命题逻辑来组织基于事实和推理的有效论据。这两种推理都有前提和结论,每种类型的推理如何得出结论的过程是不同的。
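上面提到的"有效论证"可以用真值表机械地检验。下面是一段极简的 Python 示意(仅为说明,函数名均为假设,并非原文内容):枚举两个命题变量的所有真值组合,验证"肯定前件"(modus ponens)是有效的演绎形式,而"肯定后件"则是无效形式。

```python
from itertools import product

def implies(p, q):
    # Material implication: "p -> q" is false only when p is true and q is false.
    return (not p) or q

def is_valid(argument):
    # A propositional argument form is valid if it comes out true
    # under every assignment of truth values to its variables.
    return all(argument(p, q) for p, q in product([True, False], repeat=2))

# Modus ponens: ((p -> q) and p) -> q  -- a valid deductive form.
def modus_ponens(p, q):
    return implies(implies(p, q) and p, q)

# Affirming the consequent: ((p -> q) and q) -> p  -- an invalid form.
def affirming_consequent(p, q):
    return implies(implies(p, q) and q, p)

print(is_valid(modus_ponens))          # True
print(is_valid(affirming_consequent))  # False
```

肯定后件在 p 为假、q 为真时不成立,所以真值表枚举会把它判为无效,这正对应正文所说"论点可能是有效的也可能是无效的"。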
単語(たんご)
初心者(しょしんしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| reasoning | /ˈriːzənɪŋ/ | 推理 |
| action | /ˈækʃən/ | 行动 |
| construct | /kənˈstrʌkt/ | 构建 |
| thought | /θɔːt/ | 思想 |
| valid | /ˈvælɪd/ | 有效的 |
| argument | /ˈɑːrɡjumənt/ | 论点 |
| decision | /dɪˈsɪʒən/ | 决定 |
| different | /ˈdɪfrənt/ | 不同的 |
| option | /ˈɑːpʃən/ | 选项 |
| available | /əˈveɪləbəl/ | 可用的 |
| invalid | /ɪnˈvælɪd/ | 无效的 |
| foundation | /faʊnˈdeɪʃən/ | 基础 |
| logic | /ˈlɒdʒɪk/ | 逻辑 |
| fact | /fækt/ | 事实 |
| conclusion | /kənˈkluːʒən/ | 结论 |
| premise | /ˈprɛmɪs/ | 前提 |
上級者(じょうきゅうしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| comprehensive | /ˌkɒmprɪˈhensɪv/ | 综合的 |
| propositional | /ˌprɒpəˈzɪʃənəl/ | 命题的 |
| modifying | /ˈmɒdɪˌfaɪɪŋ/ | 修改 |
| proposition | /ˌprɒpəˈzɪʃən/ | 命题 |
| statement | /ˈsteɪtmənt/ | 陈述 |
| sentence | /ˈsɛntəns/ | 句子 |
| develop | /dɪˈvɛləp/ | 发展 |
| inductive | /ɪnˈdʌktɪv/ | 归纳的 |
| deductive | /dɪˈdʌktɪv/ | 演绎的 |
Semantic Network(30ページ)
原文(げんぶん)
Semantic network or semantic net was proposed by Quillian in 1967 in order to represent the knowledge in a form of graph. Semantic network is a technique of knowledge representation that is used for propositional information, and sometimes called a propositional net. In knowledge representation the semantic networks are two dimensional. In terms of mathematics a semantic network is defined as a labeled directed graph. The semantic network is composed of links, nodes and link labels. In the diagram the semantic network nodes are described as ellipses, circles or rectangles to show objects such as physical objects, situations or concepts. The links can be used to express the relationships between objects. A particular relation is specified by link labels. The basic structure of knowledge organizing is provided by relationships.
翻訳文(ほんやくぶん)
语义网络(Semantic network,简称语义网)是 Quillian 于 1967 年提出的一种以图形形式表示知识的网络。语义网络是一种用于命题信息的知识表示技术,有时也称为命题网络。在知识表示中,语义网络是二维的。在数学上,语义网络被定义为带有标签的有向图。语义网络由链接、节点和链接标签组成。在图中,语义网络节点被描述为椭圆、圆或矩形,以表示物理对象、情景或概念等对象。链接可用于表示对象之间的关系。特定关系由链接标签指定。知识组织的基本结构由关系提供。
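正文把语义网络定义为带标签的有向图。下面用 Python 给出一个极简示意(其中的节点与关系名称均为举例假设):用三元组表示"节点—关系标签—节点",并沿 is_a 链接做最基本的继承推理。

```python
# A semantic network as a labeled directed graph: nodes are objects or
# concepts, each edge carries a relation label.
edges = [
    ("canary", "is_a", "bird"),
    ("bird",   "is_a", "animal"),
    ("bird",   "has",  "wings"),
]

def related(node, label):
    # All nodes reachable from `node` via one edge with this label.
    return [dst for src, rel, dst in edges if src == node and rel == label]

def is_a(node, category):
    # Follow "is_a" links transitively -- the basic inheritance inference.
    if category in related(node, "is_a"):
        return True
    return any(is_a(parent, category) for parent in related(node, "is_a"))

print(related("bird", "has"))    # ['wings']
print(is_a("canary", "animal"))  # True
```

这里链接标签(is_a、has)指定了特定关系,而 is_a 链的传递查询体现了"知识组织的基本结构由关系提供"这一点。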
単語(たんご)
初心者(しょしんしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| network | /ˈnɛtwɜrk/ | 网络 |
| propose | /prəˈpoʊz/ | 提出 |
| knowledge | /ˈnɒlɪdʒ/ | 知识 |
| form | /fɔrm/ | 形式 |
| graph | /ɡræf/ | 图形 |
| technique | /tɛkˈnik/ | 技术 |
| representation | /ˌrɛprɪzɛnˈteɪʃən/ | 表示 |
| information | /ˌɪnfərˈmeɪʃən/ | 信息 |
| define | /dɪˈfaɪn/ | 定义 |
| compose | /kəmˈpoʊz/ | 组成 |
| link | /lɪŋk/ | 链接 |
| node | /noʊd/ | 节点 |
| label | /ˈleɪbəl/ | 标签 |
| diagram | /ˈdaɪəˌɡræm/ | 图表 |
| describe | /dɪˈskraɪb/ | 描述 |
| circle | /ˈsɜrkəl/ | 圆 |
| rectangle | /ˈrɛkˌtæŋɡəl/ | 矩形 |
| object | /ˈɑbdʒɪkt/ | 物体 |
| situation | /ˌsɪtʃuˈeɪʃən/ | 情况 |
| concept | /ˈkɒnsɛpt/ | 概念 |
| express | /ɪkˈsprɛs/ | 表达 |
| relation | /rɪˈleɪʃən/ | 关系 |
| basic | /ˈbeɪsɪk/ | 基本的 |
| structure | /ˈstrʌktʃər/ | 结构 |
| organize | /ˈɔrɡəˌnaɪz/ | 组织 |
上級者(じょうきゅうしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| semantic | /sɪˈmæntɪk/ | 语义的 |
| dimensional | /dɪˈmɛnʃənəl/ | 维度的 |
| propositional | /ˌprɒpəˈzɪʃənəl/ | 命题的 |
| mathematics | /ˌmæθəˈmætɪks/ | 数学 |
| ellipse | /ɪˈlɪps/ | 椭圆 |
Non-monotonic Logic (45ページ)
原文(げんぶん)
Everyday reasoning is mostly non-monotonic because it involves risk: we jump to conclusions from deductively insufficient premises. We know when it is worthwhile or even necessary (for example, in medical diagnosis) to take the risk. Yet we are also aware that such inference is defeasible, that new information may undermine old conclusions. Various kinds of defeasible but remarkably successful inference have traditionally captured the attention of philosophers (theories of induction, Peirce's theory of abduction, inference to the best explanation, and so on). More recently, logicians have begun to approach the phenomenon from a formal point of view. The result is a large body of theories at the interface of philosophy, logic, and artificial intelligence.
翻訳文(ほんやくぶん)
日常推理大多是非单调的,因为它涉及风险,即我们会在演绎不足的前提下得出结论。我们知道什么时候冒这个险是值得的,甚至是必要的(例如,在医学诊断中)。然而,我们也意识到这样的推论是"可废止的",因为新的信息可能会破坏旧的结论。传统上,各种可废止的但非常成功的推论(归纳理论、皮尔斯的溯因推理理论、最佳解释推论等)引起了哲学家们的注意。最近,逻辑学家开始从形式的角度来探讨这一现象,并在哲学、逻辑学和人工智能的交叉领域形成了一个庞大的理论体系。
単語(たんご)
初心者(しょしんしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| reasoning | /ˈriːzənɪŋ/ | 推理 |
| risk | /rɪsk/ | 风险 |
| jump | /dʒʌmp/ | 跳 |
| conclusion | /kənˈkluːʒən/ | 结论 |
| premises | /ˈprɛmɪsɪz/ | 前提 |
| worthwhile | /ˌwɜrθˈwaɪl/ | 值得的 |
| necessary | /ˈnɛsəˌsɛri/ | 必要的 |
| medical | /ˈmɛdɪkəl/ | 医疗的 |
| diagnosis | /ˌdaɪəɡˈnoʊsɪs/ | 诊断 |
| take | /teɪk/ | 承受 |
| aware | /əˈwɛr/ | 意识到 |
| new | /nu/ | 新的 |
| information | /ˌɪnfərˈmeɪʃən/ | 信息 |
| undermine | /ˌʌndərˈmaɪn/ | 破坏 |
| old | /oʊld/ | 旧的 |
| philosophers | /fɪˈlɒsəfərz/ | 哲学家 |
| approach | /əˈproʊtʃ/ | 接近 |
| logic | /ˈlɑdʒɪk/ | 逻辑 |
上級者(じょうきゅうしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| non-monotonic | /nɒn-məˈnɒtənɪk/ | 非单调的 |
| deductively | /dɪˈdʌktɪvli/ | 演绎地 |
| insufficient | /ˌɪnsəˈfɪʃənt/ | 不足的 |
| defeasible | /dɪˈfiːsəbl/ | 可废止的 |
| induction | /ɪnˈdʌkʃən/ | 归纳 |
| abduction | /æbˈdʌkʃən/ | 溯因(推理) |
| interface | /ˈɪntərˌfeɪs/ | 界面;交叉点 |
| artificial intelligence | /ˌɑːrtɪˈfɪʃəl ɪnˈtɛlɪdʒəns/ | 人工智能 |
Probabilistic Reasoning(50ページ)
原文(げんぶん)
Probabilistic reasoning is a way of knowledge representation where we apply the concept of probability to indicate the uncertainty in knowledge. In probabilistic reasoning, we combine probability theory with logic to handle the uncertainty.
We use probability in probabilistic reasoning because it provides a way to handle the uncertainty that is the result of someone's laziness and ignorance. In the real world, there are lots of scenarios where the certainty of something is not confirmed, such as "It will rain today," "the behavior of someone in some situations," or "a match between two teams or two players." These are probable sentences for which we can assume that it will happen but not sure about it, so here we use probabilistic reasoning.
翻訳文(ほんやくぶん)
概率推理是应用概率概念来表示知识不确定性的一种知识表示方法。在概率推理中,我们将概率论与逻辑相结合来处理不确定性。
我们在概率推理中使用概率,是因为它提供了一种方法来处理由某人的懒惰或无知造成的不确定性。在现实世界中,有很多情况是某些事情的确定性无法确认的,比如"今天会下雨""某个人在某些情况下的行为""两队或两名球员之间的比赛"。这些都是不确定的描述,我们可以假设它会发生,但并不确定,所以此时我们使用概率推理。
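上文说概率推理用概率来量化"今天会下雨"这类不确定描述。下面用贝叶斯公式给出一个极简示意(所有数值均为虚构):观察到"多云"这一证据后,更新"下雨"的概率。

```python
# Bayesian update: revise the probability of rain after seeing clouds.
p_rain = 0.3                   # prior P(rain), a made-up number
p_clouds_given_rain = 0.9      # likelihood P(clouds | rain)
p_clouds_given_dry = 0.4       # P(clouds | no rain)

# Total probability of the evidence "clouds".
p_clouds = (p_clouds_given_rain * p_rain
            + p_clouds_given_dry * (1 - p_rain))

# Posterior: P(rain | clouds) = P(clouds | rain) * P(rain) / P(clouds)
p_rain_given_clouds = p_clouds_given_rain * p_rain / p_clouds

print(round(p_rain_given_clouds, 3))  # 0.491
```

先验 0.3 在看到证据后上升到约 0.49:这正是"将概率论与逻辑相结合来处理不确定性"的最小例子。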
単語(たんご)
初心者(しょしんしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| reasoning | /ˈriːzənɪŋ/ | 推理 |
| way | /weɪ/ | 方式 |
| knowledge | /ˈnɒlɪdʒ/ | 知识 |
| representation | /ˌrɛprɪˌzɛnˈteɪʃən/ | 表示 |
| apply | /əˈplaɪ/ | 应用 |
| concept | /ˈkɒnsɛpt/ | 概念 |
| probability | /ˌprɒbəˈbɪlɪti/ | 概率 |
| indicate | /ˈɪndɪˌkeɪt/ | 表示 |
| uncertainty | /ʌnˈsɜrtnti/ | 不确定性 |
| combine | /kəmˈbaɪn/ | 结合 |
| theory | /ˈθɪəri/ | 理论 |
| logic | /ˈlɒdʒɪk/ | 逻辑 |
| handle | /ˈhændəl/ | 处理 |
| result | /rɪˈzʌlt/ | 结果 |
| real | /rɪəl/ | 真实的 |
| world | /wɜːrld/ | 世界 |
| scenario | /səˈnɑːrioʊ/ | 情景 |
| confirm | /kənˈfɜrm/ | 确认 |
| match | /mætʃ/ | 比赛 |
| team | /tiːm/ | 队 |
| player | /ˈpleɪər/ | 球员 |
| probable | /ˈprɒbəbəl/ | 可能的 |
| assume | /əˈsjuːm/ | 假设 |
| happen | /ˈhæpən/ | 发生 |
| sure | /ʃʊr/ | 确定的 |
上級者(じょうきゅうしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| probabilistic | /ˌprɒbəˈbɪlɪstɪk/ | 概率的 |
| laziness | /ˈleɪzinəs/ | 懒惰 |
| ignorance | /ˈɪɡnərəns/ | 无知 |
| sentence | /ˈsɛntəns/ | 句子 |
Travelling Salesman Problem(63ページ)
原文(げんぶん)
A salesman wants to travel to N cities (he should pass by each city). How can we order the cities so that the salesman's journey will be the shortest? The objective function to minimize here is the length of the journey (the sum of the distances between all the cities in a specified order).
To start solving this problem, we need:
(1)Configuration setting: This is the permutation of the cities from 1 to N, given in all orders. Selecting an optimal one between these permutations is our aim.
(2)Rearrangement strategy: The strategy that we will follow here is replacing sections of the path and replacing them with random ones to retest if this modified one is optimal or not.
(3)The objective function (which is the aim of the minimization): This is the sum of the distances between all the cities for a specific order.
翻訳文(ほんやくぶん)
一名推销员想到访N个城市(需要经过每个城市)。我们怎样排列到访顺序才能使推销员的行程最短?此处需要最小化的目标函数是行程的长度(所有城市之间的距离按指定顺序的总和)。
要解决这个问题,需要:
(1) 配置设置:城市从1到N的所有可能排列。我们的目标是在这些排列中选择一个最佳排列。
(2) 重新安排策略:我们将遵循的策略是用随机路径替换部分路径,然后重新测试这个修改后的路径是否是最优的。
(3) 目标函数(这是最小化的目标):某个特定顺序的所有城市之间距离的总和。
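上面的三个要素可以用如下 Python 草图对应起来(城市坐标为虚构示例):配置是城市的一个排列,重排策略是随机反转路径中的一段并只保留更优者,目标函数是总行程长度。

```python
import math, random

# Made-up coordinates for five cities.
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]

def tour_length(order):
    # Objective function: sum of distances between consecutive cities,
    # returning to the start at the end.
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

random.seed(0)
order = list(range(len(cities)))          # initial configuration
best = tour_length(order)

for _ in range(1000):                     # rearrangement strategy
    i, j = sorted(random.sample(range(len(cities)), 2))
    candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
    if tour_length(candidate) < best:     # retest: keep only improvements
        order, best = candidate, tour_length(candidate)

print(order, round(best, 2))
```

这种"反转一段再重测"的局部搜索只是示意;实际求解中它常作为模拟退火等方法的邻域操作出现。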
単語(たんご)
初心者(しょしんしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| travelling | /ˈtrævəlɪŋ/ | 旅行 |
| salesman | /ˈseɪlzmən/ | 推销员 |
| want | /wɑːnt/ | 想要 |
| travel | /ˈtrævəl/ | 旅行 |
| city | /ˈsɪti/ | 城市 |
| pass | /pæs/ | 经过 |
| each | /iːtʃ/ | 每个 |
| order | /ˈɔːrdər/ | 排列 |
| journey | /ˈdʒɜːrni/ | 旅程 |
| shortest | /ˈʃɔːrtəst/ | 最短的 |
| length | /lɛŋkθ/ | 长度 |
| sum | /sʌm/ | 总和 |
| distance | /ˈdɪstəns/ | 距离 |
| between | /bɪˈtwiːn/ | 在...之间 |
| solve | /sɑlv/ | 解决 |
| problem | /ˈprɑbləm/ | 问题 |
| need | /niːd/ | 需要 |
| optimal | /ˈɑptəməl/ | 最优的 |
| strategy | /ˈstrætədʒi/ | 策略 |
| replace | /rɪˈpleɪs/ | 替换 |
| section | /ˈsɛkʃən/ | 部分 |
| path | /pæθ/ | 路径 |
| random | /ˈrændəm/ | 随机的 |
| retest | /ˌriˈtɛst/ | 重新测试 |
| objective | /əbˈdʒɛktɪv/ | 目标 |
| specific | /spɪˈsɪfɪk/ | 特定的 |
上級者(じょうきゅうしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| permutation | /ˌpɜːrmjuˈteɪʃən/ | 排列 |
| configuration | /kənˌfɪɡjʊˈreɪʃən/ | 配置 |
| rearrangement | /ˌriːəˈreɪndʒmənt/ | 重新安排 |
| minimization | /ˌmɪnɪmaɪˈzeɪʃən/ | 最小化 |
Evolutionary Programming(67ページ)
原文(げんぶん)
When applied to the task of program development, the genetic algorithm approach is known as evolutionary programming. Here the goal is to develop programs by allowing them to evolve rather than by explicitly writing them. Researchers have applied evolutionary programming techniques to the program development process using functional programming languages. The approach has been to start with a collection of programs that contain a rich variety of functions. The functions in this starting collection form the "gene pool" from which future generations of programs will be constructed. One then allows the evolutionary process to run for many generations, hoping that by producing each generation from the best performers in the previous generation, a solution to the target problem will evolve.
翻訳文(ほんやくぶん)
当应用于程序开发时,使用遗传算法的方法称为进化规划(evolutionary programming)。此时,我们的目标是通过模拟进化过程开发程序,而不是直接编写程序。研究人员已经用函数式程序设计语言将进化规划技术应用于程序开发过程。该方法首先创建一个包含各种函数的程序集合。初始集合中的函数构成了"基因池",之后的各代程序将由"基因池"构建。接下来,我们让进化过程执行很多代,期望每一代都由上一代中的最佳表现者产生,从而让目标问题的解决方案逐步进化出来。
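正文描述的进化循环(由上一代的最佳表现者繁衍下一代)可以用下面的 Python 草图演示。完整地进化"程序"篇幅较长,这里用比特串代替程序、以 1 的个数代替"解决目标问题"的程度,仅为示意,数值均为假设。

```python
import random

random.seed(1)
GENOME_LEN = 20

def fitness(genome):
    # Toy objective: number of 1-bits stands in for "solves the problem".
    return sum(genome)

def breed(parent_a, parent_b):
    # One-point crossover plus a small per-bit mutation chance.
    cut = random.randrange(GENOME_LEN)
    child = parent_a[:cut] + parent_b[cut:]
    return [bit ^ (random.random() < 0.02) for bit in child]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(30)]

for generation in range(100):
    # Select the best performers of this generation as the parent pool.
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = [breed(random.choice(parents), random.choice(parents))
                  for _ in range(30)]

print(max(fitness(g) for g in population))
```

经过若干代选择与交叉,种群最优个体的适应度会明显高于随机初始值,体现"由最佳表现者产生下一代"的作用。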
単語(たんご)
初心者(しょしんしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| evolutionary | /ˌɛvəˈluːʃənəri/ | 进化的 |
| programming | /ˈproʊˌɡræmɪŋ/ | 编程 |
| apply | /əˈplaɪ/ | 应用 |
| task | /tæsk/ | 任务 |
| program | /ˈproʊɡræm/ | 程序 |
| development | /dɪˈvɛləpmənt/ | 开发 |
| genetic | /dʒɪˈnɛtɪk/ | 遗传的 |
| algorithm | /ˈælɡəˌrɪðəm/ | 算法 |
| approach | /əˈproʊtʃ/ | 方法 |
| goal | /ɡoʊl/ | 目标 |
| develop | /dɪˈvɛləp/ | 开发 |
| allow | /əˈlaʊ/ | 允许 |
| evolve | /ɪˈvɑlv/ | 进化 |
| write | /raɪt/ | 写 |
| researcher | /rɪˈsɜrtʃər/ | 研究员 |
| technique | /tɛkˈnik/ | 技术 |
| process | /ˈprɑsɛs/ | 过程 |
| use | /juːz/ | 使用 |
| language | /ˈlæŋɡwɪdʒ/ | 语言 |
| start | /stɑrt/ | 开始 |
| collection | /kəˈlɛkʃən/ | 集合 |
| contain | /kənˈteɪn/ | 包含 |
| function | /ˈfʌŋkʃən/ | 功能 |
| form | /fɔrm/ | 形成 |
| generation | /ˌdʒɛnəˈreɪʃən/ | 代 |
| construct | /kənˈstrʌkt/ | 构建 |
| hope | /hoʊp/ | 希望 |
| produce | /prəˈdus/ | 生产 |
| performer | /pərˈfɔrmər/ | 表现者 |
| previous | /ˈpriviəs/ | 先前的 |
| solution | /səˈluʃən/ | 解决方案 |
| target | /ˈtɑrɡət/ | 目标 |
| problem | /ˈprɑbləm/ | 问题 |
上級者(じょうきゅうしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| explicitly | /ɪkˈsplɪsɪtli/ | 明确地 |
| functional | /ˈfʌŋkʃənl/ | 函数式的 |
| gene pool | /dʒin pul/ | 基因池 |
Support Vector Machine (SVM)(84ページ)
原文(げんぶん)
A support vector machine is a supervised learning algorithm that sorts data into two categories. It is trained with a series of data already classified into two categories, building the model as it is initially trained. The task of an SVM algorithm is to determine which category a new data point belongs to. This makes SVM a kind of non-binary linear classifier.
An SVM algorithm should not only place objects into categories, but have the margins between them on a graph as wide as possible.
翻訳文(ほんやくぶん)
支持向量机是一种有监督的学习算法,它将数据分为两类。它使用一系列已经分为两类的数据进行训练,并在最初的训练过程中建立模型。支持向量机算法的任务是确定一个新的数据点应归入哪一类。这使得支持向量机成为一种非二元性的线性分类器。
支持向量机算法不仅要将对象分类,而且要使图中的类别间的边界尽可能宽。
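支持向量机训练完成后的判别规则,就是看数据点落在超平面 w·x + b = 0 的哪一侧;"间隔"则是平面到最近样本点的距离。下面的 Python 草图只演示这一判别规则与间隔的计算(权重与数据点均为虚构,并非训练得到的模型):

```python
import math

# A hypothetical already-learned hyperplane w.x + b = 0.
w, b = [1.0, -1.0], 0.0

def classify(x):
    # Assign a point to one of the two categories by the sign of the score.
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return +1 if score >= 0 else -1

def distance_to_plane(x):
    # Geometric distance from the point to the separating hyperplane.
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return abs(score) / math.hypot(*w)

points = [[3, 1], [1, 3], [4, 0]]
print([classify(p) for p in points])                         # [1, -1, 1]
print(round(min(distance_to_plane(p) for p in points), 3))   # the margin
```

SVM 训练的目标正是在所有能分开两类的超平面中,选出使这个最小距离(间隔)最大的那一个。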
単語(たんご)
初心者(しょしんしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| support | /səˈpɔːrt/ | 支持 |
| vector | /ˈvɛktər/ | 向量 |
| machine | /məˈʃiːn/ | 机器 |
| supervised | /ˈsuːpərˌvaɪzd/ | 有监督的 |
| learning | /ˈlɜrnɪŋ/ | 学习 |
| algorithm | /ˈælɡəˌrɪðəm/ | 算法 |
| sort | /sɔrt/ | 分类 |
| data | /ˈdeɪtə/ | 数据 |
| category | /ˈkætəɡəri/ | 类别 |
| train | /treɪn/ | 训练 |
| series | /ˈsɪriz/ | 系列 |
| classify | /ˈklæsɪˌfaɪ/ | 分类 |
| build | /bɪld/ | 构建 |
| model | /ˈmɑdəl/ | 模型 |
| determine | /dɪˈtɜrmɪn/ | 确定 |
| new | /nu/ | 新的 |
| point | /pɔɪnt/ | 点 |
| belong | /bɪˈlɔŋ/ | 属于 |
| kind | /kaɪnd/ | 种类 |
| linear | /ˈlɪniər/ | 线性的 |
| place | /pleɪs/ | 放置 |
| object | /ˈɑbdʒɪkt/ | 物体 |
| graph | /ɡræf/ | 图形 |
| wide | /waɪd/ | 宽的 |
上級者(じょうきゅうしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| non-binary | /nɒn-ˈbaɪnəri/ | 非二元的 |
| margin | /ˈmɑrdʒɪn/ | 边缘 |
| initially | /ɪˈnɪʃəli/ | 最初 |
| classifier | /ˈklæsɪˌfaɪər/ | 分类器 |
Ensemble Learning(90ページ)
原文(げんぶん)
Many ensemble learning tools can be trained to produce various results. Individual algorithms may be stacked on top of each other, or rely on a bucket of models method of evaluating multiple methods for one system. In some cases, multiple data sets are aggregated and combined. For example, a geographic research program may use multiple methods to assess the prevalence of items in a geographic space. One of the issues with this type of research involves making sure that various models are independent, and that the combination of data is practical and works in a particular scenario.
Ensemble learning methods are included in different types of statistical software packages. Some experts describe ensemble learning as crowdsourcing of data aggregation.
翻訳文(ほんやくぶん)
可以通过训练许多集成学习工具以产生多种结果。单个的算法可以相互叠加,也可以依赖"模型桶"(bucket of models)方法,即为一个系统评估多种方法。在某些情况下,可以聚合或组合多个数据集。例如,地理研究项目可以使用多种方法来评估某种物质在地理空间中的分布。这类研究的一个问题是需要确保各种模型是独立的,并且数据的组合是实用的,还需要能在特定的情境中使用。
不同类型的统计软件包中都有集成学习方法。一些专家将集成学习描述为数据聚合的“众包”(crowdsourcing)。
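"将多个模型的结果组合"最简单的形式是多数投票。下面的 Python 草图用三个虚构的简单规则当作相互独立的模型,演示集成后的判定(规则内容纯属举例):

```python
# Three toy "models", each a crude rule for deciding about a number x.
def model_a(x): return x > 5
def model_b(x): return x % 2 == 0
def model_c(x): return x > 3

def ensemble(x):
    # Majority vote: the ensemble answers yes if at least 2 of 3 models do.
    votes = [model_a(x), model_b(x), model_c(x)]
    return sum(votes) >= 2

print(ensemble(8))   # True  (all three vote yes)
print(ensemble(1))   # False (all three vote no)
print(ensemble(4))   # True  (b and c outvote a)
```

当各模型的错误彼此独立时,多数投票往往比任何单个模型更稳健,这正是正文强调"确保各种模型是独立的"的原因。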
単語(たんご)
初心者(しょしんしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| many | /ˈmɛni/ | 许多 |
| learning | /ˈlɜrnɪŋ/ | 学习 |
| tool | /tul/ | 工具 |
| train | /treɪn/ | 训练 |
| produce | /prəˈdus/ | 产生 |
| various | /ˈvɛriəs/ | 各种各样的 |
| result | /rɪˈzʌlt/ | 结果 |
| individual | /ˌɪndɪˈvɪdʒuəl/ | 个别的 |
| algorithm | /ˈælɡəˌrɪðəm/ | 算法 |
| stack | /stæk/ | 堆叠 |
| rely | /rɪˈlaɪ/ | 依赖 |
| model | /ˈmɑdəl/ | 模型 |
| evaluate | /ɪˈvæljueɪt/ | 评估 |
| method | /ˈmɛθəd/ | 方法 |
| multiple | /ˈmʌltɪpəl/ | 多种的 |
| system | /ˈsɪstəm/ | 系统 |
| data | /ˈdeɪtə/ | 数据 |
| combine | /kəmˈbaɪn/ | 结合 |
| example | /ɪɡˈzæmpəl/ | 例子 |
| use | /juːz/ | 使用 |
| assess | /əˈsɛs/ | 评估 |
| item | /ˈaɪtəm/ | 项目 |
| issue | /ˈɪʃu/ | 问题 |
| make sure | /meɪk ʃʊr/ | 确保 |
| independent | /ˌɪndɪˈpɛndənt/ | 独立的 |
| practical | /ˈpræktɪkəl/ | 实用的 |
| work | /wɜrk/ | 工作 |
| type | /taɪp/ | 类型 |
| expert | /ˈɛkspərt/ | 专家 |
上級者(じょうきゅうしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| ensemble | /ɒnˈsɒmbəl/ | 集成 |
| aggregate | /ˈæɡrɪˌɡeɪt/ | 聚合 |
| prevalence | /ˈprɛvələns/ | 流行 |
| scenario | /səˈnærioʊ/ | 情景 |
| statistical | /stəˈtɪstɪkəl/ | 统计的 |
| crowdsourcing | /ˈkraʊdˌsɔrsɪŋ/ | 众包 |
| aggregation | /ˌæɡrɪˈɡeɪʃən/ | 聚合 |
Non-Linear Activation Function(104ページ)
原文(げんぶん)
Activation functions are any functions that define the output of a neuron. The activation function associated with each neuron in a neural network determines whether it should be activated or not, based on the output of that function. There are three types of activation functions: binary, linear and non-linear activation functions.
Input to the neural network is usually a linear transformation (i.e., input * weight + bias), but most of the real-world data are non-linear. So, to make that input non-linear, non-linear activation functions are used. Non-linear activation functions are the functions that add non-linearity into the network.
翻訳文(ほんやくぶん)
激活函数是定义神经元输出的任意函数。与神经网络中每个神经元相关联的激活函数根据该函数的输出来决定是否应该激活它。激活函数有三种类型:二元激活函数、线性激活函数和非线性激活函数。
神经网络的输入通常是线性变换(即输入×权重+偏差),但大多数实际数据都是非线性的。因此,为了使输入非线性,使用了非线性激活函数。非线性激活函数就是在网络中加入非线性的函数。
単語(たんご)
初心者(しょしんしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| function | /ˈfʌŋkʃən/ | 函数 |
| define | /dɪˈfaɪn/ | 定义 |
| output | /ˈaʊtpʊt/ | 输出 |
| neuron | /ˈnʊrɒn/ | 神经元 |
| network | /ˈnɛtwɜrk/ | 网络 |
| determine | /dɪˈtɜrmɪn/ | 决定 |
| whether | /ˈwɛðər/ | 是否 |
| should | /ʃʊd/ | 应该 |
| activate | /ˈæktɪˌveɪt/ | 激活 |
| based on | /beɪst ɑn/ | 基于 |
| type | /taɪp/ | 类型 |
| binary | /ˈbaɪnəri/ | 二元的 |
| linear | /ˈlɪniər/ | 线性的 |
| non-linear | /nɒn-ˈlɪniər/ | 非线性的 |
| input | /ˈɪnpʊt/ | 输入 |
| usually | /ˈjuːʒuəli/ | 通常 |
| transformation | /ˌtrænsfərˈmeɪʃən/ | 变换 |
| weight | /weɪt/ | 权重 |
| bias | /ˈbaɪəs/ | 偏差 |
| real | /rɪəl/ | 真实的 |
| world | /wɜːrld/ | 世界 |
| data | /ˈdeɪtə/ | 数据 |
| make | /meɪk/ | 使 |
| add | /æd/ | 加入 |
| non-linearity | /nɒn-lɪˈnærɪti/ | 非线性 |
上級者(じょうきゅうしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| associated | /əˈsoʊsieɪtɪd/ | 关联的 |
| transformation | /ˌtrænsfərˈmeɪʃən/ | 变换 |
Feedforward Neural Network(108ページ)
原文(げんぶん)
The feedforward neural network, as a primary example of neural network design, has a limited architecture. Signals go from an input layer to additional layers. Some examples of feedforward designs are even simpler. For example, a single-layer perceptron model has only one layer, with a feedforward signal moving from a layer to an individual node. Multi-layer perceptron models, with more layers, are also feedforward.
In the days since scientists devised the first artificial neural networks, the technology world has made all sorts of progress in building more sophisticated models. There are recurrent neural networks and other designs that contain loops or cycles. There are models that involve backpropagation, where the machine learning system essentially optimizes by sending data back through a system. The feedforward neural network does not involve any of this type of design, and so it is a unique type of system that is good for learning these designs for the first time.
翻訳文(ほんやくぶん)
前馈神经网络作为神经网络设计的一个主要例子,其结构是有限的。信号从输入层传到后续各层。一些前馈网络的结构甚至更简单。例如,单层感知器模型只有一层,前馈信号从一层移动到单个节点。具有更多层的多层感知器模型,也是前馈的。
自从科学家发明了第一个人工神经网络以来,科技界在建立更复杂的模型方面取得了各种进展:有循环神经网络和其他包含回路或循环的设计,还有些模型涉及反向传播,反向传播模型中的机器学习系统本质上是通过把数据沿系统传回来进行优化的。前馈神经网络不涉及任何此类设计,因此它是一种独特的系统类型,有利于首次学习这些设计。
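"信号从输入层逐层向前传到输出节点、没有任何回路"这一点,可以用一次前向传播的 Python 草图说明(网络结构与权重均为虚构):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each row of `weights` feeds one neuron in the next layer;
    # signals only move forward, never back.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [1.0, 0.0]                                           # input layer
hidden = layer(x, weights=[[0.5, -0.5], [-0.5, 0.5]],    # hidden layer
               biases=[0.0, 0.0])
output = layer(hidden, weights=[[1.0, -1.0]], biases=[0.0])  # output node

print([round(h, 3) for h in hidden], round(output[0], 3))
```

整个计算就是两次"线性变换 + 激活"的串联;循环网络或反向传播会在此基础上增加回路或反向的数据流,而前馈网络只有这条单向路径。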
単語(たんご)
初心者(しょしんしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| feedforward | /ˈfidˌfɔrwərd/ | 前馈 |
| neural | /ˈnʊrəl/ | 神经的 |
| network | /ˈnɛtwɜrk/ | 网络 |
| example | /ɪɡˈzæmpəl/ | 例子 |
| design | /dɪˈzaɪn/ | 设计 |
| limited | /ˈlɪmɪtɪd/ | 有限的 |
| architecture | /ˈɑrkɪˌtɛktʃər/ | 架构 |
| signal | /ˈsɪɡnəl/ | 信号 |
| input | /ˈɪnpʊt/ | 输入 |
| layer | /ˈleɪər/ | 层 |
| additional | /əˈdɪʃənəl/ | 额外的 |
| simple | /ˈsɪmpəl/ | 简单的 |
| single | /ˈsɪŋɡəl/ | 单一的 |
| perceptron | /ˈpɜrsɛptrɒn/ | 感知器 |
| model | /ˈmɑdəl/ | 模型 |
| move | /muv/ | 移动 |
| individual | /ˌɪndɪˈvɪdʒuəl/ | 单个的 |
| node | /noʊd/ | 节点 |
| multi-layer | /ˈmʌlti-ˈleɪər/ | 多层 |
| scientist | /ˈsaɪəntɪst/ | 科学家 |
| artificial | /ˌɑrtɪˈfɪʃəl/ | 人工的 |
| sophisticated | /səˈfɪstɪˌkeɪtɪd/ | 复杂的 |
| loop | /lup/ | 循环 |
| cycle | /ˈsaɪkəl/ | 循环 |
| involve | /ɪnˈvɑlv/ | 涉及 |
| system | /ˈsɪstəm/ | 系统 |
| unique | /juˈnik/ | 独特的 |
| learn | /lɜrn/ | 学习 |
| time | /taɪm/ | 时间 |
上級者(じょうきゅうしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| backpropagation | /ˈbækˌprɒpəˈɡeɪʃən/ | 反向传播 |
| optimize | /ˈɑptəˌmaɪz/ | 优化 |
| devise | /dɪˈvaɪz/ | 发明 |
| recurrent | /rɪˈkɜrənt/ | 循环的;递归的 |
LSTM Networks(123ページ)
原文(げんぶん)
Long Short Term Memory networks—usually just called LSTMs—are a special kind of RNN, capable of learning long-term dependencies. They were introduced by Hochreiter & Schmidhuber (1997), and were refined and popularized by many people in following work. They work tremendously well on a large variety of problems, and are now widely used.
LSTMs are explicitly designed to avoid the long-term dependency problem. Remembering information for long periods of time is practically their default behavior, not something they struggle to learn!
翻訳文(ほんやくぶん)
长短期记忆网络通常被称为LSTM,是一种特殊的RNN,它能够学习长期依赖关系。它由Hochreiter & Schmidhuber(1997)提出,并在随后的工作中被许多人改进和推广。它们在各种各样的问题上表现得非常好,因此现在被广泛使用。
LSTM是为了避免长期依赖性问题而专门设计的。记住长期信息实际上是这种模型的默认行为,而无须专门学习。
単語(たんご)
初心者(しょしんしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| networks | /ˈnɛtwɜrks/ | 网络 |
| usually | /ˈjuːʒuəli/ | 通常 |
| called | /kɔːld/ | 被叫做 |
| kind | /kaɪnd/ | 种类 |
| learning | /ˈlɜrnɪŋ/ | 学习 |
| long-term | /lɔːŋ-tɜrm/ | 长期的 |
| they | /ðeɪ/ | 他们 |
| were | /wɜr/ | 是(过去式) |
| and | /ænd/ | 和 |
| refined | /rɪˈfaɪnd/ | 精炼 |
| work | /wɜrk/ | 工作 |
| problems | /ˈprɑbləmz/ | 问题 |
| now | /naʊ/ | 现在 |
| used | /juːzd/ | 使用 |
| designed | /dɪˈzaɪnd/ | 设计 |
| avoid | /əˈvɔɪd/ | 避免 |
| information | /ˌɪnfərˈmeɪʃən/ | 信息 |
| default | /dɪˈfɔlt/ | 默认 |
| not | /nɑt/ | 不 |
| something | /ˈsʌmθɪŋ/ | 一些事情 |
上級者(じょうきゅうしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| LSTM | /ɛl ɛs ti ɛm/ | 长短期记忆网络 |
| RNN | /ɑr ɛn ɛn/ | 循环神经网络 |
| specific | /spəˈsɪfɪk/ | 特定的 |
| Hochreiter | /hɔkˌraɪtər/ | 霍赫赖特 |
| Schmidhuber | /ʃmɪtˌhjuːbər/ | 施密德胡伯 |
| explicitly | /ɪkˈsplɪsɪtli/ | 明确地 |
| dependency | /dɪˈpɛndənsi/ | 依赖性 |
| remembering | /rɪˈmɛmbərɪŋ/ | 记住 |
| practically | /ˈpræktɪkli/ | 实际上 |
Regularization(127ページ)
原文(げんぶん)
Regularization is a way to avoid overfitting by penalizing high-valued regression coefficients. In simple terms, it reduces parameters and shrinks (simplifies) the model. This more streamlined, more parsimonious model will likely perform better at predictions. Regularization adds penalties to more complex models and then sorts potential models from least overfit to greatest; the model with the lowest overfitting score is usually the best choice for predictive power.
翻訳文(ほんやくぶん)
正则化是通过惩罚高值回归系数来避免过拟合的一种方法。简单地说,它减少了参数并缩小(简化)了模型。这种更精简、更简约的模型可能在预测方面表现更好。正则化将惩罚加到更复杂的模型上,然后将候选模型按照过拟合程度从小到大排序,过拟合得分最低的模型通常具有最佳的预测能力。
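"给更复杂的模型加上惩罚"可以用岭回归式的 L2 惩罚直观演示。下面的 Python 草图中(数据与系数均为虚构),复杂模型的拟合误差更小,但加上"系数平方和"的惩罚项后总损失反而更高,因此在排序中被排到后面:

```python
def mse(coeffs, data):
    # Plain squared-error fit of y = c0 + c1*x + c2*x^2 + ...
    return sum((y - sum(c * x**i for i, c in enumerate(coeffs)))**2
               for x, y in data) / len(data)

def ridge_loss(coeffs, data, lam=1.0):
    # Regularized loss: fit error + lambda * sum of squared coefficients.
    return mse(coeffs, data) + lam * sum(c * c for c in coeffs)

data = [(0, 0.0), (1, 1.1), (2, 1.9), (3, 3.2)]   # made-up noisy points
simple = [0.0, 1.0]                               # y ~ x, small coefficients
complex_ = [0.0, 1.5167, -0.55, 0.1333]           # cubic with larger coefficients

for name, c in [("simple", simple), ("complex", complex_)]:
    print(name, round(mse(c, data), 4), round(ridge_loss(c, data), 4))
```

复杂模型把训练点几乎拟合到零误差,但其系数更大、惩罚更重;正则化正是借此在"拟合好"与"模型简"之间取舍。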
単語(たんご)
初心者(しょしんしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| way | /weɪ/ | 方法 |
| avoid | /əˈvɔɪd/ | 避免 |
| reduce | /rɪˈdus/ | 减少 |
| model | /ˈmɑdəl/ | 模型 |
| add | /æd/ | 添加 |
| score | /skɔr/ | 分数 |
| better | /ˈbɛtər/ | 更好的 |
| usually | /ˈjuːʒuəli/ | 通常 |
| choice | /tʃɔɪs/ | 选择 |
| complex | /ˈkɑmplɛks/ | 复杂的 |
| result | /rɪˈzʌlt/ | 结果 |
| parameter | /pəˈræmɪtər/ | 参数 |
| shrink | /ʃrɪŋk/ | 缩小 |
| perform | /pərˈfɔrm/ | 表现 |
| potential | /pəˈtɛnʃəl/ | 潜在的 |
| high | /haɪ/ | 高的 |
| least | /list/ | 最少的 |
| likely | /ˈlaɪkli/ | 可能的 |
| greatest | /ˈɡreɪtəst/ | 最大的 |
| predictive | /prɪˈdɪktɪv/ | 预测的 |
| simpler | /ˈsɪmplər/ | 更简单的 |
上級者(じょうきゅうしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| regularization | /ˌrɛɡjələrɪˈzeɪʃən/ | 正则化 |
| overfitting | /ˈoʊvərˌfɪtɪŋ/ | 过拟合 |
| penalize | /ˈpinəˌlaɪz/ | 惩罚 |
| regression | /rɪˈɡrɛʃən/ | 回归 |
| coefficients | /ˌkoʊɪˈfɪʃənts/ | 系数 |
| parsimonious | /ˌpɑrsɪˈmoʊniəs/ | 简约的 |
| streamlined | /ˈstrimˌlaɪnd/ | 流线型的 |
Markov Decision Process(143ページ)
原文(げんぶん)
Reinforcement Learning is a type of Machine Learning. It allows machines and software agents to automatically determine the ideal behavior within a specific context, in order to maximize their performance. Simple reward feedback is required for the agent to learn its behavior; this is known as the reinforcement signal.
There are many different algorithms that tackle this issue. As a matter of fact, Reinforcement Learning is defined by a specific type of problem, and all its solutions are classed as Reinforcement Learning algorithms. In the problem, an agent is supposed to decide the best action to select based on its current state. When this step is repeated, the problem is known as a Markov Decision Process.
翻訳文(ほんやくぶん)
强化学习是机器学习的一种。它允许机器和软件代理自动确定在特定上下文中的理想行为,从而最大限度地提高其性能。简单的奖励反馈是代理学习其行为所必需的,被称为强化信号。
有许多不同的算法可以解决这个问题。事实上,强化学习是由一类特定的问题定义的,它的所有解都被归类为强化学习算法。在这类问题中,代理应该根据其当前状态来决定要选择的最佳操作。当这一步骤重复时,这类问题被称为马尔可夫决策过程。
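正文所说"代理根据当前状态选择最佳动作、并不断重复"的过程,可以用表格式 Q-learning 在一个虚构的 5 状态走廊环境上演示(环境与所有参数均为示意):到达最右端的状态获得奖励 1。

```python
import random

N_STATES, ACTIONS = 5, [-1, 1]           # states 0..4; move left or right
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    s = 0
    while s < N_STATES - 1:
        # Epsilon-greedy: best known action for the current state,
        # with random tie-breaking and occasional exploration.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: (Q[(s, act)], random.random()))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0      # reward only at the goal
        # Q-learning update toward reward + discounted best future value.
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

"奖励 + 折扣后的未来最优值"这条更新规则,就是把简单奖励反馈(强化信号)逐步传播回较早的状态;学到的策略在每个状态都向右移动。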
単語(たんご)
初心者(しょしんしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| Markov | /ˈmɑrkɒf/ | 马尔可夫 |
| decision | /dɪˈsɪʒən/ | 决策 |
| process | /ˈprɑsɛs/ | 过程 |
| reinforcement | /ˌriˌɪnˈfɔrsmənt/ | 强化 |
| learning | /ˈlɜrnɪŋ/ | 学习 |
| type | /taɪp/ | 类型 |
| machine | /məˈʃiːn/ | 机器 |
| software | /ˈsɔftwɛr/ | 软件 |
| agent | /ˈeɪdʒənt/ | 代理 |
| automatically | /ˌɔtəmˈætɪkli/ | 自动地 |
| determine | /dɪˈtɜrmɪn/ | 确定 |
| ideal | /aɪˈdiəl/ | 理想的 |
| behavior | /bɪˈheɪvjər/ | 行为 |
| specific | /spəˈsɪfɪk/ | 特定的 |
| context | /ˈkɒntɛkst/ | 环境 |
| maximize | /ˈmæksɪˌmaɪz/ | 最大化 |
| performance | /pərˈfɔrməns/ | 性能 |
| simple | /ˈsɪmpəl/ | 简单的 |
| reward | /rɪˈwɔrd/ | 奖励 |
| feedback | /ˈfidˌbæk/ | 反馈 |
| learn | /lɜrn/ | 学习 |
| signal | /ˈsɪɡnəl/ | 信号 |
上級者(じょうきゅうしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| algorithm | /ˈælɡəˌrɪðəm/ | 算法 |
| tackle | /ˈtækəl/ | 解决 |
| issue | /ˈɪʃu/ | 问题 |
| problem | /ˈprɑbləm/ | 问题 |
| classed | /klæsɪd/ | 分类的 |
| select | /sɪˈlɛkt/ | 选择 |
Why "Deep" Q-Learning?(147ページ)
原文(げんぶん)
Q-learning is a simple yet quite powerful algorithm to create a cheat sheet for our agent. This helps the agent figure out exactly which action to perform.
But what if this cheat sheet is too long? Imagine an environment with 10,000 states and 1,000 actions per state. This would create a table of 10 million cells. Things will quickly get out of control!
It is pretty clear that we can't infer the Q-value of new states from already explored states. This presents two problems:
First, the amount of memory required to save and update that table would increase as the number of states increases.
Second, the amount of time required to explore each state to create the required Q-table would be unrealistic.
Here's a thought—what if we approximate these Q-values with machine learning models such as a neural network? Well, this was the idea behind DeepMind's algorithm that led to its acquisition by Google for 500 million dollars!
翻訳文(ほんやくぶん)
为什么是"深度"Q-learning?Q-learning 是一个简单但功能强大的算法,可以为我们的代理创建一个备忘单,从而有助于代理确定要执行的操作。
但是如果这个备忘录太长呢?假设一个环境有 10000个状态,每个状态有1000个操作。这将创建一个包含1000万个单元格的表。事态很快就会失控!
很明显,我们不能从已经探索的状态中推断出新状态的Q值。这带来了两个问题: 首先,存储和更新该表所需的内存量将随着状态数的增加而增加。
其次,探索每个状态以创建所需 Q 表的时间将不切实际。
一个想法应运而生:如果我们用机器学习模型(比如神经网络)来估计这些 Q 值呢?这正是 DeepMind 算法背后的想法,DeepMind 也因此被谷歌以 5 亿美元收购!
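"用机器学习模型近似 Q 值"的思路可以用一个极小的草图体会:真正的深度 Q 网络用的是神经网络,这里为了几行能跑通,用线性模型近似一个虚构的 20 状态走廊中的 Q(状态, 动作),以体现"参数个数远小于状态数"这一要点(环境与参数均为假设)。

```python
import random

random.seed(0)
N = 20                                  # 20 states, but only 4 weights total
w = {a: [0.0, 0.0] for a in (-1, 1)}    # per-action weights [w0, w1]

def q(s, a):
    # Linear function approximation of Q(s, a) on one feature: s / N.
    return w[a][0] + w[a][1] * (s / N)

alpha, gamma = 0.1, 0.9
for episode in range(200):
    s = random.randrange(N - 1)
    while s < N - 1:
        a = random.choice([-1, 1])      # pure exploration, for simplicity
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0
        target = r + gamma * max(q(s2, b) for b in (-1, 1))
        err = target - q(s, a)
        # Gradient step on this transition's squared TD error.
        w[a][0] += alpha * err
        w[a][1] += alpha * err * (s / N)
        s = s2

# Learned values rise toward the rewarding right end of the corridor.
print(round(q(1, 1), 3), round(q(N - 2, 1), 3))
```

无论状态有多少,这个近似器始终只有 4 个参数,并且能对很少访问的状态给出估计:这正是正文所说 Q 表过大时改用模型近似的动机。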
単語(たんご)
初心者(しょしんしゃ)
| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| simple | /ˈsɪmpəl/ | 简单的 |
| powerful | /ˈpaʊərfəl/ | 强大的 |
| algorithm | /ˈælɡəˌrɪðəm/ | 算法 |
| create | /kriˈeɪt/ | 创建 |
| cheat | /tʃit/ | 作弊;欺骗 |
| help | /hɛlp/ | 帮助 |
| figure out | /ˈfɪɡjər aʊt/ | 弄清楚 |
| action | /ˈækʃən/ | 动作 |
| perform | /pərˈfɔrm/ | 执行 |
| environment | /ɪnˈvaɪrənmənt/ | 环境 |
| state | /steɪt/ | 状态 |
| increase | /ɪnˈkris/ | 增加 |
| memory | /ˈmɛməri/ | 内存 |
| save | /seɪv/ | 保存 |
| update | /ˈʌpdeɪt/ | 更新 |
| time | /taɪm/ | 时间 |
| unrealistic | /ˌʌnrɪəˈlɪstɪk/ | 不切实际的 |
| thought | /θɔt/ | 想法 |
| approximate | /əˈprɑksɪˌmeɪt/ | 估计 |
| neural | /ˈnʊrəl/ | 神经的 |
| network | /ˈnɛtwɜrk/ | 网络 |
| idea | /aɪˈdiə/ | 想法 |
| acquisition | /ˌækwəˈzɪʃən/ | 收购 |
上級者(じょうきゅうしゃ)

| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| Q-learning | /kju-ˈlɜrnɪŋ/ | Q学习 |
| cheat sheet | /tʃit ʃit/ | 备忘单 |
| Q-value | /kju ˈvælju/ | Q值 |
| explore | /ɪkˈsplɔr/ | 探索 |
| DeepMind | /dipˈmaɪnd/ | 深度思维(公司名) |

Pattern Recognition(162ページ)
原文(げんぶん)
Pattern recognition is the process of recognizing patterns by using machine learning algorithms. Pattern recognition can be defined as the classification of data based on knowledge already gained or on statistical information extracted from patterns and/or their representation. One of the important aspects of pattern recognition is its application potential.
In a typical pattern recognition application, the raw data is processed and converted into a form that is amenable for a machine to use. Pattern recognition involves classification and clustering of patterns.
翻訳文(ほんやくぶん)
模式识别是利用机器学习算法识别模式的过程。模式识别可以定义为:基于已经获得的知识,或基于从模式及其表示中提取的统计信息,对数据进行分类。模式识别的一个重要方面是它的应用潜力。
在典型的模式识别应用中,原始数据被处理并转换成一种适合机器使用的形式。模式识别涉及模式的分类和聚类。
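原文提到模式识别涉及分类:最简单的分类思路之一是"把新样本归入最相似的已知模式所属的类别"。下面是一个纯 Python 的 1 近邻分类器玩具草图(数据与函数名均为示意性假设,并非原文代码):

```python
# 示意:用 1 近邻(1-NN)做最简单的模式分类
def dist2(p, q):
    # 两个特征向量之间的平方欧氏距离
    return sum((a - b) ** 2 for a, b in zip(p, q))

def nearest_neighbor(sample, labeled_data):
    # 在"已经获得的知识"(带标签的模式)中找最近的一个,返回其类别
    return min(labeled_data, key=lambda item: dist2(sample, item[0]))[1]

# 带标签的模式:(特征向量, 类别)
data = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
        ((5.0, 5.0), "B"), ((5.2, 4.8), "B")]

print(nearest_neighbor((0.3, 0.1), data))  # A
print(nearest_neighbor((4.9, 5.1), data))  # B
```

聚类则相反:事先没有标签,算法需要自己把相近的模式归为一簇。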
単語(たんご)
初心者(しょしんしゃ)

| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| pattern | /ˈpætərn/ | 模式 |
| recognition | /ˌrɛkəɡˈnɪʃən/ | 识别 |
| process | /ˈprɑsɛs/ | 过程 |
| recognize | /ˈrɛkəɡˌnaɪz/ | 识别 |
| using | /ˈjuzɪŋ/ | 使用 |
| machine | /məˈʃiːn/ | 机器 |
| learning | /ˈlɜrnɪŋ/ | 学习 |
| algorithm | /ˈælɡəˌrɪðəm/ | 算法 |
| classification | /ˌklæsɪfɪˈkeɪʃən/ | 分类 |
| data | /ˈdeɪtə/ | 数据 |
| based on | /beɪst ɑn/ | 基于 |
| knowledge | /ˈnɒlɪdʒ/ | 知识 |
| gain | /ɡeɪn/ | 获得 |
| statistical | /stəˈtɪstɪkəl/ | 统计的 |
| information | /ˌɪnfərˈmeɪʃən/ | 信息 |
| aspect | /ˈæspɛkt/ | 方面 |
| application | /ˌæplɪˈkeɪʃən/ | 应用 |
| potential | /pəˈtɛnʃəl/ | 潜力 |
| raw | /rɔ/ | 原始的 |
| convert | /kənˈvɜrt/ | 转换 |
| form | /fɔrm/ | 形式 |
| amenable | /əˈminəbəl/ | 适合的 |
| use | /juz/ | 使用 |
| involve | /ɪnˈvɑlv/ | 涉及 |
| clustering | /ˈklʌstərɪŋ/ | 聚类 |

上級者(じょうきゅうしゃ)

| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| representation | /ˌrɛprɪzɛnˈteɪʃən/ | 表示 |
| extracted | /ɪkˈstræktɪd/ | 提取的 |

Image Segmentation in Deep Learning(166ページ)
原文(げんぶん)
Many computer vision tasks require intelligent segmentation of an image, to understand what is in the image and enable easier analysis of each part. Today's image segmentation techniques use models of deep learning for computer vision to understand, at a level unimaginable only a decade ago, exactly which real-world object is represented by each pixel of an image.
Deep learning can learn patterns in visual inputs in order to predict object classes that make up an image. The main deep learning architecture used for image processing is a Convolutional Neural Network (CNN), or specific CNN frameworks like AlexNet, VGG, Inception, and ResNet. Models of deep learning for computer vision are typically trained and executed on specialized Graphics Processing Units (GPUs) to reduce computation time.
翻訳文(ほんやくぶん)
许多计算机视觉任务需要对图像进行智能分割,以了解图像中的内容,并使对每个部分的分析更容易。现代的图像分割技术使用计算机视觉的深度学习模型,以十年前难以想象的水平,准确地理解图像的每个像素代表了哪个真实世界的对象。
深度学习可以学习视觉输入中的模式,以便预测构成图像的对象类。用于图像处理的主要深度学习模型结构是卷积神经网络(CNN),或特定的CNN框架,如AlexNet、VGG、Inception和ResNet。计算机视觉的深度学习模型通常在专门的图形处理单元(GPU)上进行训练和运行,以减少计算时间。
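卷积神经网络(CNN)的基本构件是二维卷积:用一个小核在图像上滑动,对应位置相乘再求和。下面是一个纯 Python 的玩具实现(图像与卷积核均为假设的示例数据,仅用于说明原理,并非 AlexNet 等真实框架的代码):

```python
# 示意:二维"有效"卷积(CNN 中常用的互相关形式)
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1      # 输出高度
    ow = len(image[0]) - kw + 1   # 输出宽度
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            # 核覆盖区域内逐元素相乘再求和
            out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out

# 一个 1x2 的简单核 [1, -1]:输出为左像素减右像素,可检测竖直边缘
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[1, -1]]
print(conv2d(image, kernel))  # [[0, -1, 0], [0, -1, 0], [0, -1, 0]]
```

输出中的 -1 恰好出现在 0 与 1 的交界列上,这正是分割任务需要的"哪个像素属于哪个对象"的底层信号。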
単語(たんご)
初心者(しょしんしゃ)

| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| image | /ˈɪmɪdʒ/ | 图像 |
| segmentation | /ˌsɛɡmɛnˈteɪʃən/ | 分割 |
| deep | /dip/ | 深度 |
| learning | /ˈlɜrnɪŋ/ | 学习 |
| many | /ˈmɛni/ | 许多 |
| computer | /kəmˈpjutər/ | 计算机 |
| vision | /ˈvɪʒən/ | 视觉 |
| task | /tæsk/ | 任务 |
| require | /rɪˈkwaɪər/ | 需要 |
| intelligent | /ɪnˈtɛlɪdʒənt/ | 智能的 |
| understand | /ˌʌndərˈstænd/ | 理解 |
| enable | /ɪˈneɪbəl/ | 使能够 |
| analysis | /əˈnæləsɪs/ | 分析 |
| today | /təˈdeɪ/ | 今天 |
| technique | /tɛkˈnik/ | 技术 |
| model | /ˈmɑdəl/ | 模型 |
| predict | /prɪˈdɪkt/ | 预测 |
| main | /meɪn/ | 主要的 |
| architecture | /ˈɑrkɪˌtɛktʃər/ | 架构 |
| specific | /spəˈsɪfɪk/ | 特定的 |
| framework | /ˈfreɪmˌwɜrk/ | 框架 |
| reduce | /rɪˈdus/ | 减少 |
| time | /taɪm/ | 时间 |

上級者(じょうきゅうしゃ)

| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| convolutional | /ˌkɑnvəˈluʃənəl/ | 卷积的 |
| AlexNet | /ˈæləksˌnɛt/ | 亚历克斯网络 |
| VGG | /viː dʒiː dʒiː/ | 视觉几何组 |
| Inception | /ɪnˈsɛpʃən/ | 启发网络 |
| ResNet | /ˈrɛzˌnɛt/ | 残差网络 |
| typically | /ˈtɪpɪkli/ | 通常 |
| specialized | /ˈspɛʃəˌlaɪzd/ | 专门的 |
| graphics | /ˈɡræfɪks/ | 图形 |
| processing | /ˈprɑsɛsɪŋ/ | 处理 |
| unit | /ˈjunɪt/ | 单元 |

Machine Translation(180ページ)
原文(げんぶん)
Machine translation systems are applications or online services that use machine-learning technologies to translate large amounts of text from and to any of their supported languages. The service translates a "source" text from one language to a different "target" language.
Although the concepts behind machine translation technology and the interfaces to use it are relatively simple, the science and technologies behind it are extremely complex and bring together several leading-edge technologies, in particular, deep learning (artificial intelligence), big data, linguistics, cloud computing, and Web APIs.
翻訳文(ほんやくぶん)
机器翻译系统是一类应用程序或在线服务,它们使用机器学习技术,在其支持的任意语言之间翻译大量文本。该服务将"源"文本从一种语言翻译成另一种"目标"语言。
尽管机器翻译技术背后的概念和使用它的接口相对简单,但背后的科学和技术却极其复杂。它汇集了一些前沿技术,特别是深度学习(人工智能)、大数据、语言学、云计算和Web API。
単語(たんご)
初心者(しょしんしゃ)

| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| machine | /məˈʃiːn/ | 机器 |
| translation | /trænsˈleɪʃən/ | 翻译 |
| system | /ˈsɪstəm/ | 系统 |
| application | /ˌæplɪˈkeɪʃən/ | 应用程序 |
| online | /ˈɒnˌlaɪn/ | 在线的 |
| service | /ˈsɜːrvɪs/ | 服务 |
| technology | /tɛkˈnɑlədʒi/ | 技术 |
| translate | /trænsˈleɪt/ | 翻译 |
| text | /tɛkst/ | 文本 |
| source | /sɔrs/ | 来源 |
| target | /ˈtɑrɡɪt/ | 目标 |
| language | /ˈlæŋɡwɪdʒ/ | 语言 |
| simple | /ˈsɪmpəl/ | 简单的 |
| science | /ˈsaɪəns/ | 科学 |
| complex | /ˈkɑmplɛks/ | 复杂的 |
| bring | /brɪŋ/ | 带来 |
| together | /təˈɡɛðər/ | 一起 |
| deep | /dip/ | 深度 |
| learning | /ˈlɜrnɪŋ/ | 学习 |
| big data | /bɪɡ ˈdeɪtə/ | 大数据 |
| linguistics | /lɪŋˈɡwɪstɪks/ | 语言学 |
| cloud | /klaʊd/ | 云 |
| computing | /kəmˈpjutɪŋ/ | 计算 |
| API | /ˌeɪˌpiːˈaɪ/ | 应用程序接口 |

上級者(じょうきゅうしゃ)

| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| interface | /ˈɪntərˌfeɪs/ | 接口 |
| relatively | /ˈrɛlətɪvli/ | 相对地 |
| supported | /səˈpɔrtɪd/ | 支持的 |
| unimaginable | /ˌʌnɪˈmædʒɪnəbl/ | 难以想象的 |
| leading-edge | /ˈliːdɪŋ ɛdʒ/ | 前沿的 |

Word2vec(184ページ)
原文(げんぶん)
Word2vec is a two-layer neural net that processes text. Its input is a text corpus and its output is a set of vectors: feature vectors for words in that corpus. While Word2vec is not a deep neural network, it turns text into a numerical form that deep nets can understand. Deeplearning4j implements a distributed form of Word2vec for Java and Scala, which works on Spark with GPUs.
Word2vec's applications extend beyond parsing sentences in the wild. It can be applied just as well to genes, code, likes, playlists, social media graphs, and other verbal or symbolic series in which patterns may be discerned.
翻訳文(ほんやくぶん)
Word2vec 是一个处理文本的双层神经网络。它的输入是一个文本语料库,输出是一组向量,即该语料库中单词的特征向量。虽然 Word2vec 不是一种深度神经网络,但它可将文本转换为深度网络可以理解的数值形式。Deeplearning4j 为 Java 和 Scala 实现了 Word2vec 的一种分布式形式,它在 Spark 上通过 GPU 运行。
Word2vec 的应用不仅仅是解析句子。它也可以应用于基因、代码、喜好、播放列表、社交媒体图和其他可以识别模式的语言或符号序列。
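原文说 Word2vec 是一个"双层神经网络":它的第一层本质上就是"独热向量乘以权重矩阵,等于取出矩阵的某一行",这一行就是该单词的特征向量。下面用纯 Python 演示这一点(词表和矩阵数值均为假设的玩具数据,并非 Word2vec 的真实训练代码):

```python
# 示意:独热向量 x 权重矩阵 = 查出对应单词的特征向量(嵌入)
vocab = ["deep", "net", "text"]
# 假设的 3x4 权重矩阵:每行对应一个单词的 4 维特征向量
W = [[0.1, 0.2, 0.3, 0.4],
     [0.5, 0.6, 0.7, 0.8],
     [0.9, 1.0, 1.1, 1.2]]

def one_hot(i, n):
    # 长度为 n、第 i 位为 1 的独热向量
    return [1 if j == i else 0 for j in range(n)]

def matvec(v, M):
    # 行向量 v (1xN) 乘以矩阵 M (NxD),得到 1xD 向量
    return [sum(v[i] * M[i][d] for i in range(len(v))) for d in range(len(M[0]))]

idx = vocab.index("net")
embedding = matvec(one_hot(idx, len(vocab)), W)
print(embedding)  # [0.5, 0.6, 0.7, 0.8],恰好是 W 的第 idx 行
```

真正的 Word2vec 通过在语料库上训练来学出 W 中的数值,使相似单词的向量彼此接近。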
単語(たんご)
初心者(しょしんしゃ)

| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| Word2vec | /wɜrd tuː vɛk/ | 词向量工具 |
| neural | /ˈnʊrəl/ | 神经的 |
| net | /nɛt/ | 网络 |
| process | /ˈprɑsɛs/ | 处理 |
| text | /tɛkst/ | 文本 |
| input | /ˈɪnpʊt/ | 输入 |
| output | /ˈaʊtpʊt/ | 输出 |
| set | /sɛt/ | 一组 |
| vector | /ˈvɛktər/ | 向量 |
| feature | /ˈfiːtʃər/ | 特征 |
| numerical | /nuˈmɛrɪkəl/ | 数值的 |
| understand | /ˌʌndərˈstænd/ | 理解 |
| Java | /ˈdʒɑvə/ | Java 编程语言 |
| Scala | /ˈskɑlə/ | Scala 编程语言 |
| Spark | /spɑrk/ | Spark 平台 |
| application | /ˌæplɪˈkeɪʃən/ | 应用 |
| extend | /ɪkˈstɛnd/ | 扩展 |
| sentence | /ˈsɛntəns/ | 句子 |
| pattern | /ˈpætərn/ | 模式 |
| symbolic | /sɪmˈbɑlɪk/ | 符号的 |

上級者(じょうきゅうしゃ)

| 单词 | 音标 | 翻译 |
| --- | --- | --- |
| corpus | /ˈkɔrpəs/ | 语料库 |
| distributed | /dɪˈstrɪbjuːtɪd/ | 分布式的 |
| parse | /pɑrs/ | 解析 |
| discern | /dɪˈsɜrn/ | 辨别 |
| playlist | /ˈpleɪˌlɪst/ | 播放列表 |
| graph | /ɡræf/ | 图 |
