

Meta-Learning (元学习)


Contents

1. Background

2. Meta-Learning

3. Applications

3.1 Event Extraction (Zero-shot Transfer Learning for Event Extraction)


1. Background

Artificial Intelligence --> Machine Learning --> Deep Learning --> Deep Reinforcement Learning --> Deep Meta Learning

In the machine learning era, anything beyond simple classification problems performed poorly. The arrival of deep learning essentially solved one-to-one mapping problems such as image classification (one input, one output), producing milestone results like AlexNet. But what if the output also affects the next input? That is the sequential decision-making setting, which deep learning alone cannot solve. This is where reinforcement learning came in: Deep Learning + Reinforcement Learning = Deep Reinforcement Learning. With deep reinforcement learning, sequential decision making made its first real progress, producing milestone results like AlphaGo. However:

  • Deep reinforcement learning relies on an enormous amount of training and requires a precise reward signal. For many real-world problems, such as robot learning, there is no good reward and no way to train without limit. What then?
  • Would AlphaGo still work if the board were made a bit larger? With current methods, clearly not: AlphaGo would instantly become helpless, whereas humans, having seen plenty, can adapt to the new board in minutes.

Take face recognition as another example: a human can remember and recognize a face after a single glance, while today's deep learning needs thousands of images to achieve the same.

The fast-learning ability humans possess is exactly what current AI lacks. The key is that humans know how to learn: we draw fully on prior knowledge and experience to guide learning on new tasks. How to give AI this fast-learning ability has therefore become a frontier research question, namely Meta-Learning.

Problem: deep learning depends on large amounts of high-quality annotated training data and on heavy compute; it has poor portability; models are task-specific, trained and applied independently per task, yet new concepts and things keep emerging.

References:

[1] https://zhuanlan.zhihu.com/p/27629294 (the author expresses human intuition through a learned "value network" of weights; a creative and interesting idea)

[2] https://blog.csdn.net/langb2014/article/details/84953307

2. Meta-Learning

Solution: learn quickly; use prior knowledge and experience to guide learning on new tasks; learning to learn; inference and reasoning.

Definition: meta-learning, also known as learning to learn (Schmidhuber, 1987; Bengio et al., 1991; Thrun and Pratt, 1998), is an alternative paradigm that draws on past experience in order to learn and adapt to new tasks quickly: the model is trained on a number of related tasks such that it can solve unseen tasks using only a small number of training examples.
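To make the definition concrete, here is a minimal sketch of episodic few-shot training in the spirit of Prototypical Networks (Snell et al., 2017), one common meta-learning recipe; the function names and the choice of prototype-based classification are illustrative assumptions, not something from this post:

```python
# Minimal sketch of episodic few-shot training (Prototypical-Networks style).
# `encoder` is any embedding network; all names here are illustrative.
import torch
import torch.nn.functional as F

def episode_loss(encoder, support_x, support_y, query_x, query_y, n_classes):
    """One few-shot episode: build one prototype per class from the support
    set, then classify queries by distance to those prototypes."""
    z_support = encoder(support_x)   # (n_support, d)
    z_query = encoder(query_x)       # (n_query, d)
    # Class prototype = mean embedding of that class's support examples.
    prototypes = torch.stack(
        [z_support[support_y == c].mean(dim=0) for c in range(n_classes)])
    # Negative squared Euclidean distance serves as the classification logit.
    logits = -torch.cdist(z_query, prototypes) ** 2
    return F.cross_entropy(logits, query_y)
```

Training repeats this over many episodes sampled from many related tasks; at test time the same encoder can classify classes it never saw, given only a few support examples, which is exactly the "solve unseen tasks from a small number of examples" behavior described above.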

3. Applications

3.1 Event Extraction (Zero-shot Transfer Learning for Event Extraction)

  • Problem: Most previous event extraction studies have relied heavily on features derived from annotated event mentions, and therefore cannot be applied to new event types without additional annotation effort.
  • Solution: A transferable neural architecture that jointly maps event mentions and types into a shared semantic space using structural and compositional neural networks, where the type of each event mention is determined by the closest of all candidate types.
  • Scheme: By leveraging (1) available manual annotation for a small set of existing event types and (2) existing event ontologies, the framework applies to new event types without requiring additional annotation.

(1) Goal of event extraction: extract event triggers and event arguments from unstructured data.

---> Motivation: the poor portability of traditional supervised methods and the limited coverage of available event annotations.

---> Problem: handling new event types means starting from scratch, without being able to reuse annotations for old event types.

       Reason: these approaches modeled event extraction as a classification problem, encoding features only by measuring the similarity between rich features encoded for test event mentions and annotated event mentions.

---> We observed that both event mentions and event types can be represented with structures:

       event mention structure <--- constructed from the trigger and its candidate arguments

       event type structure <--- consists of the event type and its predefined roles

---> Figure 2.

Figure 2: Examples of Event Mention and Type Structures from ERE.

       AMR --> Abstract Meaning Representation, used to identify candidate arguments and construct event mention structures.

       ERE --> Entities, Relations, Events; event types can also be represented with structures from ERE.

       Besides the lexical semantics that relates a trigger to its type, their structures also tend to be similar.

       This observation is consistent with the theory that the semantics of an event structure can be generalized and mapped to event mention structures in a semantic and predictable way.

       Event extraction task --> map each mention to its semantically closest event type in the ontology.

---> One possible implementation: Zero-Shot Learning (ZSL), which has been successfully exploited in visual object classification.

       Main idea of ZSL for vision tasks: represent both images and type labels in a multi-dimensional vector space separately, then learn a regression model mapping from the image semantic space to the type-label semantic space, trained on annotated images of seen labels. This regression model can then be used to predict the unseen label of any given image.
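A minimal sketch of that classic ZSL recipe, assuming precomputed image embeddings and label embeddings (e.g. word vectors of the label names); the ridge-regression form is one standard choice assumed here, not necessarily the exact one from the vision papers:

```python
# Sketch of the ZSL recipe above: fit a linear map from image embeddings to
# label embeddings on seen classes, then label images of unseen classes by
# nearest label embedding. All names are illustrative.
import numpy as np

def fit_zsl_map(X_seen, Y_seen, reg=1e-3):
    """Ridge regression W: image space -> label-embedding space.
    X_seen: (n, d_img) image embeddings; Y_seen: (n, d_lbl) embeddings of
    each image's seen class label."""
    d = X_seen.shape[1]
    return np.linalg.solve(X_seen.T @ X_seen + reg * np.eye(d), X_seen.T @ Y_seen)

def predict_unseen(W, x, unseen_label_embs):
    """Project one image into label space; return the index of the nearest
    (cosine-most-similar) unseen label."""
    y_hat = x @ W
    sims = unseen_label_embs @ y_hat / (
        np.linalg.norm(unseen_label_embs, axis=1) * np.linalg.norm(y_hat) + 1e-9)
    return int(np.argmax(sims))
```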

---> One goal is to effectively transfer knowledge of events from seen types to unseen types, so that we can extract event mentions of any type defined in the ontology.

       We design a transferable neural architecture which jointly learns and maps the structural representations of both event mentions and types into a shared semantic space, by minimizing the distance between each event mention and its corresponding type.

       For event mentions of unseen types, their structures are projected into the same semantic space using the same framework and assigned the types with top-ranked similarity values.

(2) Approach

Event Extraction: triggers; arguments

Figure 3: Architecture Overview

  1) Given a sentence S, start by identifying candidate triggers and arguments based on AMR parsing.

    e.g., dispatching is the trigger of a Transport_Person event with four arguments (0: China; 1: troops; 2: Himalayas; 3: time).

    We build a structure St using AMR, as shown in Figure 3 (e.g. rooted at dispatch-01).

  2) Each structure is composed of a set of tuples, e.g. <dispatch-01, :ARG0, China>.


    We use a matrix to represent each AMR relation, composing its semantics with the two concepts of each tuple, then feed all tuple representations into a CNN to generate the event mention structure representation V_St for the candidate trigger.

    Pipeline: St --> Structure Composition Layer --> shared CNN (convolution layer) --> pooling & concatenation --> V_St
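A minimal sketch of one way to implement this matrix-based tuple composition; the exact parameterization in the paper may differ, and all names here are assumptions:

```python
# Sketch of the matrix-based tuple composition: each AMR relation gets a
# learned matrix that composes the embeddings of a tuple's two concepts.
import torch
import torch.nn as nn

class TupleComposition(nn.Module):
    def __init__(self, n_relations, d_word, d_out):
        super().__init__()
        # One composition matrix per AMR relation (e.g. :ARG0, :ARG1, ...).
        self.rel_mats = nn.Parameter(
            torch.randn(n_relations, d_out, 2 * d_word) * 0.01)

    def forward(self, head_emb, rel_id, tail_emb):
        # head_emb, tail_emb: (d_word,) embeddings of the two concepts in a
        # tuple such as <dispatch-01, :ARG0, China>.
        pair = torch.cat([head_emb, tail_emb])           # (2 * d_word,)
        return torch.tanh(self.rel_mats[rel_id] @ pair)  # (d_out,)
```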

  3) Given a target event ontology, for each type y, e.g. Transport_Person, we construct a type structure Sy by incorporating its predefined roles, and use a tensor to denote the implicit relation between any type and its arguments.

    We compose the semantics of the type and each argument role with this tensor, tuple by tuple, e.g. <Transport_Person, Destination>.

    We generate the event type structure representation V_Sy using the same CNN.
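Correspondingly, a sketch of the tensor-based composition for type-role tuples such as <Transport_Person, Destination>, implemented here with a bilinear layer (one slice of a 3-way tensor per output dimension); this parameterization is an assumption for illustration:

```python
# Sketch of the tensor composition for type-role tuples: a shared 3-way
# tensor models the implicit relation between event types and argument roles.
import torch
import torch.nn as nn

class TypeRoleComposition(nn.Module):
    def __init__(self, d_label, d_out):
        super().__init__()
        # nn.Bilinear applies x1^T W_k x2 for each of d_out tensor slices W_k.
        self.bilinear = nn.Bilinear(d_label, d_label, d_out)

    def forward(self, type_emb, role_emb):
        # type_emb: embedding of the event type (e.g. Transport_Person);
        # role_emb: embedding of the role (e.g. Destination).
        out = self.bilinear(type_emb.unsqueeze(0), role_emb.unsqueeze(0))
        return torch.tanh(out).squeeze(0)  # (d_out,)
```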

  4) By minimizing the semantic distance between dispatch-01 and Transport_Person, i.e. between V_St and V_Sy, we jointly map the representations of event mentions and event types into a shared semantic space, where each mention lies closest to its annotated type.

  5) After training, the composition functions and CNNs can be further used to project any new event mention (e.g. donate-01) into the same semantic space and find its closest event type.
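A sketch of steps 4) and 5): a hinge-style ranking loss that pulls each mention embedding toward its gold type embedding and pushes it away from other types, followed by nearest-type inference for unseen mentions. The cosine-distance and margin form is a common choice assumed here, not quoted from the paper:

```python
# Sketch of the shared-space training objective and zero-shot inference.
import torch
import torch.nn.functional as F

def ranking_loss(v_mention, v_gold_type, v_other_types, margin=1.0):
    """v_mention: (d,); v_gold_type: (d,); v_other_types: (k, d).
    The gold type must be closer than every other type by at least `margin`."""
    pos = 1.0 - F.cosine_similarity(v_mention, v_gold_type, dim=0)
    neg = 1.0 - F.cosine_similarity(v_mention.unsqueeze(0), v_other_types, dim=1)
    return torch.clamp(margin + pos - neg, min=0.0).mean()

def closest_type(v_mention, all_type_vecs):
    """Zero-shot inference: assign the type whose structure embedding is most
    similar, including types never seen during training."""
    sims = F.cosine_similarity(v_mention.unsqueeze(0), all_type_vecs, dim=1)
    return int(sims.argmax())
```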

(3) Joint Event Mention and Type Label Embedding

    CNNs are good at capturing sentence-level information in various NLP tasks,

    --> so we use one to generate the structure representations for both mentions and type labels.

        Consider an event mention structure St = (u1, u2, ..., uh) and an event type structure Sy = (u1', u2', ..., up'), which contain h and p tuples respectively.

    --> We apply a weight-sharing CNN to each input structure to jointly learn event mention and type structural representations, which are later used to learn the ranking function for zero-shot event extraction.

    --> Input layer: a sequence of tuples, each tuple represented by a d x 2 matrix (its two composed concept vectors), so each mention structure and each type structure become feature maps of dimensionality d x 2h and d x 2p respectively.

    --> Convolution layer

    --> Max-pooling

    --> Learning (the ranking objective)
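A sketch of the weight-sharing CNN encoder over tuple sequences: the same convolution weights encode both mention structures (h tuples) and type structures (p tuples) into fixed-size vectors, so both embeddings land in one shared space. Hyperparameters are illustrative:

```python
# Sketch of the weight-sharing CNN over tuple feature maps.
import torch
import torch.nn as nn

class StructureCNN(nn.Module):
    def __init__(self, d, n_filters=128, width=2):
        super().__init__()
        # Input: a structure as a (batch, d, 2 * n_tuples) feature map,
        # i.e. the d x 2h (or d x 2p) layout described above.
        self.conv = nn.Conv1d(d, n_filters, kernel_size=width)

    def forward(self, x):
        h = torch.relu(self.conv(x))  # (batch, n_filters, L')
        return h.max(dim=2).values    # global max-pooling -> (batch, n_filters)

# The same instance encodes mentions and types, so their embeddings share
# one space:
#   cnn = StructureCNN(d=100)
#   v_st, v_sy = cnn(mention_map), cnn(type_map)
```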

(4) Joint Event Argument and Role Embedding

(5) Zero-Shot Classification


