

Biocomputation (Neural Networks) 2017 Exam Paper — Essential Last-Minute Revision! Chinese-English Translated Version!

Published: 2025/3/18

This article introduces the Biocomputation (Neural Networks) 2017 exam paper, shared here as a revision reference.

If you want a copy to work through, print it yourself from the link below:

https://student.csc.liv.ac.uk/internal/exams/papers/Jan2018/COMP305.pdf

PAPER CODE NO.

COMP 305

EXAMINER : Dr Irina V. Biktasheva DEPARTMENT : Computer Science Tel. No. 54267

First Semester Examinations 2017/18

Biocomputation

TIME ALLOWED: Two and a Half Hours

INSTRUCTIONS TO CANDIDATES

Answer FOUR questions.

If you attempt to answer more questions than the required number of

questions, the marks awarded for the excess questions answered will be

discarded (starting with the lowest mark).

Each question is worth 25 marks

1 History and Concepts.

1(a) Why are biology-inspired Artificial Neural Networks and Genetic Algorithms
now considered part of Computer Science, not Computational Biology?
[2 marks]

1(b) What general problems are solved by Artificial Neural Networks? Give a
couple of examples.
[4 marks]

1(c) What problems can be solved by Genetic Algorithms? Give a couple of
examples.
[4 marks]

1(d) Large numbers of academic texts are stored and made available in online
repositories. For older texts, it is necessary to convert from a paper document to
an electronic document. An Optical Character Recognition (OCR) system can
be used for that purpose.

i) What kind of problem is solved by the OCR system? [3 marks]
ii) What are the inputs and outputs of the OCR system? Illustrate your answer with an example. [5 marks]
iii) Why does such a system require supervised learning? [3 marks]
iv) What data would be used to train an Artificial Neural Network for this task? [4 marks]

2 The McCulloch-Pitts neuron.

2 (a) Draw a flow chart for the McCulloch-Pitts neuron (MP-neuron) algorithm that
is used to compute an output in response to a particular input.
NB. Assume that all weights of connections and the neuron threshold are set up
in advance.
[8 marks]

2 (b) Draw a diagram and explain the workings of an MP-neuron realisation of an
“OR” logical gate.
[4 marks]

2(c) Draw a diagram and explain the workings of an MP-neuron realisation of a
“NOT” logical gate.
[5 marks]

2(d) Apply your answers for parts (a-c) of this question to deduce an output X of
the MP-neuron network below in response to the input a1 = 0, a2 = 1, a3 = .

[8 marks]
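As a quick self-check for parts (a)-(c), the MP-neuron's forward computation can be written out directly. The weights and thresholds chosen below for the OR and NOT gates are common textbook values, not taken from the exam's (missing) diagram.

```python
def mp_neuron(inputs, weights, theta):
    """Fire (output 1) if the weighted input sum reaches the threshold theta."""
    s = sum(w * a for w, a in zip(weights, inputs))
    return 1 if s >= theta else 0

# "OR" gate: both weights 1, threshold 1 -- fires when either input is 1.
or_outputs = [mp_neuron([a1, a2], [1, 1], 1) for a1 in (0, 1) for a2 in (0, 1)]
assert or_outputs == [0, 1, 1, 1]

# "NOT" gate: a single inhibitory weight -1, threshold 0.
not_outputs = [mp_neuron([a], [-1], 0) for a in (0, 1)]
assert not_outputs == [1, 0]
```

The same `mp_neuron` call, applied layer by layer, answers 2(d) once the network's weights and thresholds are read off the diagram.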

3 Learning rules of Artificial Neural Networks. Hebb’s Rule.

3(a) What is a learning rule of an artificial neural network? [3 marks]

3(b) Give the simplest mathematical formulation of Hebb’s learning rule. Explain
how to compute a correction to the weight of a connection according to the
instant input and output. [4 marks]

3(c) Why is Hebb’s rule called the “activity product rule”? [1 mark]

3(d) Why does Hebb’s rule represent unsupervised learning? [2 marks]

3(e) The neural network below uses Hebb’s learning rule.

Let the initial weights of connections at the time step t=1 be w1(t=1) = 1, w2(t=1) = 0, w3(t=1) = -1, and let the learning rate C of the network be 0.25, that is, C = 0.25. Complete the following table

| Time step | a1(t) | a2(t) | a3(t) | w1(t) | w2(t) | w3(t) | X(t) | Δw1(t) | Δw2(t) | Δw3(t) | w1(t+1) | w2(t+1) | w3(t+1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| t=1 | 1 | 1 | 1 | 1 | 0 | -1 | | | | | | | |
| t=2 | 1 | 1 | 0 | | | | | | | | | | |
| t=3 | 1 | 0 | 0 | | | | | | | | | | |

by calculating

- the network output value X(t),
- the changes in each of the three weights of connections Δw1(t), Δw2(t), and Δw3(t),
- the new weights wn(t+1)

i) at the time step t=1 [5 marks]
ii) at the time step t=2 [5 marks]
iii) at the time step t=3 [5 marks]
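Since this is a revision aid, a quick way to check your table answers for 3(e) is to run the update in code. The sketch below assumes the simplest Hebb formulation asked for in 3(b): a linear output X = Σ wi·ai and the activity-product update Δwi = C·ai·X. If the course variant thresholds the output, the numbers will differ.

```python
# Hebb's rule worked through the three table rows of 3(e).
C = 0.25
w = [1.0, 0.0, -1.0]                         # w1, w2, w3 at t=1
inputs = [[1, 1, 1], [1, 1, 0], [1, 0, 0]]   # input rows for t=1..3

for t, a in enumerate(inputs, start=1):
    X = sum(wi * ai for wi, ai in zip(w, a))  # linear output (assumption)
    dw = [C * ai * X for ai in a]             # activity product: dw_i = C*a_i*X
    w = [wi + dwi for wi, dwi in zip(w, dw)]
    print(f"t={t}: X={X}, dw={dw}, new w={w}")
```

Under these assumptions X = 0 at t=1 (so the weights do not change), X = 1 at t=2, X = 1.25 at t=3, and the final weights are (1.5625, 0.25, -1).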
4 Supervised learning. Perceptron.

4(a) Describe the two-layer fully interconnected architecture of a Perceptron. What is
a bias input unit?
[3 marks]

    4(b) What is the Perceptron training set? How is it used during the error-correction
    training of the Perceptron? How is an output unit’s error computed and used to
    define corrections to the Perceptron weights of connections (i.e. what is the
    Perceptron learning rule)?
    [7 marks]
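The error-correction training described in 4(b) can be sketched in a few lines: the output unit's error e = target - X defines the weight update Δwi = C·e·ai, with the bias treated as a weight on a constant input of 1. The learning rate, epoch count, and AND-gate training set below are illustrative choices, not values from the paper.

```python
def step(s):
    """Threshold activation: 1 if the state is non-negative."""
    return 1 if s >= 0 else 0

def train_perceptron(pairs, n_inputs, C=0.25, epochs=20):
    """Perceptron learning rule: dw_i = C * (target - output) * a_i."""
    w = [0.0] * (n_inputs + 1)           # last weight acts on the bias input
    for _ in range(epochs):
        for a, target in pairs:
            x = a + [1]                  # append the constant bias input
            X = step(sum(wi * xi for wi, xi in zip(w, x)))
            e = target - X               # output unit's error
            w = [wi + C * e * xi for wi, xi in zip(w, x)]
    return w

# Linearly separable example: the AND gate.
training_set = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w = train_perceptron(training_set, n_inputs=2)
```

Because AND is linearly separable, the perceptron convergence theorem guarantees these updates settle on weights that classify all four patterns correctly.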

4(c) A Perceptron can compute only linearly separable functions, that is, functions
for which the points of the input space with function value (output) “0” can
be separated from the points with function value “1” by a line.

Using a coordinate plane for inputs a1 and a2, show that the “IDENTITY” gate (see the table below) is a linearly inseparable function.

| a1 | a2 | “IDENTITY” |
|---|---|---|
| 1 | 1 | 1 |
| 1 | 0 | 0 |
| 0 | 1 | 0 |
| 0 | 0 | 1 |

Explain your answer. [4 marks]

4(d) The 3-layer network shown below implements the linearly inseparable
“IDENTITY” gate. The network has weights of connections and thresholds of
the processing units as shown below, and it uses the feed-forward scheme to
produce an output.

The output unit and both hidden units use the threshold activation step-function

$$X_j^l = f\left(S_j^l\right) = \begin{cases} 1, & S_j^l \ge \theta_j^l \\ 0, & S_j^l < \theta_j^l \end{cases}$$

where
l = h for a hidden unit,
l = o for the output unit.
Following the feedforward scheme of input processing, show that this network produces the correct IDENTITY output in response to the input a1 = 0, a2 = 1. For that, answer the following questions:
i) What is the correct output of the IDENTITY gate for the input a1 = 0, a2 = 1? [2 marks]
ii) Find the outputs of the hidden units for the input a1 = 0, a2 = 1. [5 marks]

iii) Find the network output for the input a1 = 0, a2 = 1.
[3 marks]
iv) Does the network produce the correct IDENTITY output in
response to the input a1 = 0, a2 = 1?
[1 mark]
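The diagram for 4(d) (its weights and thresholds) did not survive the transcription, so the sketch below uses one standard choice of weights that realises the linearly inseparable IDENTITY (XNOR) gate with the threshold step activation: hidden unit 1 computes AND, hidden unit 2 computes NOR, and the output unit ORs them together. The exam's actual numbers may differ, but the feedforward procedure is the same.

```python
def step_unit(inputs, weights, theta):
    """One processing unit: fire if the weighted sum reaches the threshold."""
    s = sum(w * a for w, a in zip(weights, inputs))
    return 1 if s >= theta else 0

def identity_net(a1, a2):
    """3-layer feedforward pass for the IDENTITY (XNOR) gate."""
    h1 = step_unit([a1, a2], [1, 1], 1.5)     # hidden unit 1: AND(a1, a2)
    h2 = step_unit([a1, a2], [-1, -1], -0.5)  # hidden unit 2: NOR(a1, a2)
    return step_unit([h1, h2], [1, 1], 0.5)   # output unit: OR(h1, h2)

# Feedforward pass for the exam's input a1=0, a2=1:
# h1 = 0 (S = 1 < 1.5), h2 = 0 (S = -1 < -0.5), X = 0 -- the correct
# IDENTITY output, since the two inputs differ.
assert identity_net(0, 1) == 0
```

Tracing the two hidden states first and then the output state is exactly the structure of answers ii)-iv) above.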

5 Artificial Neural Networks Unsupervised Learning.
Competitive Learning Rule, Kohonen Self-Organised Map.

5(a) Give the simplest mathematical formulation of the Kohonen competitive
learning rule. Explain how to calculate the correction to the weight of a
connection according to the instant input. Why is the rule called “winner-
takes-it-all”? Why does the Kohonen Self-Organised Map represent
unsupervised learning?
[8 marks]

5(b) The neural network below uses the “winner-takes-it-all” learning rule. At
some instant t during the network training, inputs to the network and the
weights of connections are as shown below.

[Network diagram: inputs a1 = 1, a2 = 2, a3 = 2 feed two output units X1 and X2 with thresholds θ1 = 1 and θ2 = 1, through connections w11 = 5, w12 = 1, w13 = 5, w21 = 1, w22 = 1, w23 = 1.]

Thus,

the instant input vector is a = {a1; a2; a3} = {1; 2; 2};

the fan-in vector of the weights of connections to the 1st output unit is

w1 = {w11; w12; w13} = {5; 1; 5};

the fan-in vector of the weights of connections to the 2nd output unit is

w2 = {w21; w22; w23} = {1; 1; 1};

i) Calculate the state S1 of the first output unit, and the state S2 of the second output unit, at that instant. [4 marks]

ii) What instant output X = {X1, X2} will the network produce? [2 marks]

iii) Let the network learning rate C be set to 0.25. Calculate the changes Δwji to the weights of connections at that instant. [4 marks]

iv) What will be the new updated weights of connections wji at that instant? [1 mark]

v) Let the norm of the network weights of connections be defined as

$$\|w\| = \sqrt{\sum_{j=1}^{2}\sum_{i=1}^{3} w_{ji}^{2}}$$

What will be the new normalised weights of connections wji, j = 1, 2, i = 1, 2, 3? [6 marks]
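For revision, the arithmetic of 5(b) can be checked with a short script. It assumes the usual conventions for this rule: each output unit's state is the dot product Sj = wj·a, the unit with the larger state outputs 1 (the other 0), and only the winner's fan-in weights are corrected, by Δwji = C·(ai - wji).

```python
import math

a = [1, 2, 2]          # instant input vector
w = [[5, 1, 5],        # fan-in weights of output unit 1
     [1, 1, 1]]        # fan-in weights of output unit 2
C = 0.25               # learning rate

# i) states of the two output units
S = [sum(wi * ai for wi, ai in zip(wj, a)) for wj in w]

# ii) winner-takes-it-all output
winner = S.index(max(S))
X = [1 if j == winner else 0 for j in range(2)]

# iii)-iv) only the winner's weights move toward the input
w[winner] = [wji + C * (ai - wji) for wji, ai in zip(w[winner], a)]

# v) normalise by the norm over ALL weights, as defined above
norm = math.sqrt(sum(wji ** 2 for wj in w for wji in wj))
w_norm = [[wji / norm for wji in wj] for wj in w]
```

Under these conventions S1 = 17 and S2 = 5, so unit 1 wins and X = {1, 0}; its updated fan-in weights are (4, 1.25, 4.25) before normalisation.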


6 Genetic Algorithms.

    6(a) Discuss the computational appeal of natural evolution. In particular, consider
    parallelism, adaptation to changing environment, and optimisation of possible
    “solutions”.
    [6 marks]

    6(b) Describe the basic structure of a Genetic Algorithm.
    [6 marks]

6(c) What is a Genetic Algorithm chromosome building block, i.e. a schema? What
characters are used to describe schemas of a binary chromosome? What are the
order and the defining length of a schema?
[5 marks]

6(d) Fill in the table below with all the schemas of the chromosome “CH” and their
corresponding orders and defining lengths.

| Schema | Order | Defining Length |
|---|---|---|

[2 marks]

6(e) Define the fitness f of a bit string x of length l = 4 to be the integer represented by
the binary number x (e.g., f(0011) = 3, f(1111) = 15).

    i) What is the average fitness of the schema 10 under f?
    [3 marks]

    ii) What is the average fitness of the schema 0*1* under f? [3 marks]
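Part ii) can be checked by brute force. (The schema "10" in part i) appears to have lost its * wildcards to the page's markdown rendering, so only the intact schema is computed here.) The helper below enumerates every 4-bit string matched by a schema and averages their fitnesses under f.

```python
from itertools import product

def fitness(x):
    """Fitness of a bit string: the integer it encodes in binary."""
    return int(x, 2)

def schema_average(schema):
    """Average fitness over all bit strings matched by the schema.

    A schema character '*' matches either bit; a fixed '0'/'1' must agree.
    """
    matches = ["".join(bits) for bits in product("01", repeat=len(schema))
               if all(s in ("*", b) for s, b in zip(schema, bits))]
    return sum(fitness(x) for x in matches) / len(matches)

# Schema 0*1* matches 0010, 0011, 0110, 0111 (fitnesses 2, 3, 6, 7),
# so its average fitness under f is 4.5.
assert schema_average("0*1*") == 4.5
```

The same helper answers part i) once the intended wildcard positions of that schema are known.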

