

Introduction to GANs


TABLE OF CONTENTS:

  • INTRODUCTION
  • HISTORY OF GANs
  • INTUITIVE EXPLANATION OF GANs
  • TRAINING GANs
  • GAN TRAINING PROCESS
  • GAN BLOCK DIAGRAM
  • KERAS IMPLEMENTATION OF GAN ON MNIST DATASET
INTRODUCTION

Generative Adversarial Networks, commonly referred to as GANs, are used to generate images with very little or no input. GANs allow us to generate images created by our neural networks, completely removing a human (yes, you) from the loop. Before we dive into the theory, I'd like to show you the abilities of GANs to build your excitement: they can turn horses into zebras (and vice versa).

HISTORY OF GANs

Generative adversarial networks (GANs) were introduced by Ian Goodfellow (the GANFather of GANs) et al. in 2014, in his paper appropriately titled "Generative Adversarial Networks". They were proposed as an alternative to Variational Autoencoders (VAEs), which learn the latent spaces of images, as a way to generate synthetic images. The aim is to create realistic artificial images that are almost indistinguishable from real ones.

INTUITIVE EXPLANATION OF GANs

Imagine there's an ambitious young criminal who wants to counterfeit money and sell it to a mobster who specializes in handling counterfeit money. At first, the young counterfeiter is not very good, and our expert mobster tells him his money is way off from looking real. Slowly he gets better and every so often makes a good 'copy'. The mobster tells him when it's good. After some time, both the forger (our counterfeiter) and the expert mobster get better at their jobs, and together they have created almost real-looking, but fake, money.

The Generator & Discriminator Networks:

● The purpose of the Generator Network is to take a random initialization (noise) and decode it into a synthetic image.
● The purpose of the Discriminator Network is to take this input and predict whether this image came from a real dataset or is synthetic.

● As we just saw, this is effectively what GANs are: two antagonistic networks contesting against each other. The two components are called:

  • Generator Network — in our example this was the young criminal creating counterfeit money.

  • Discriminator Network — the mobster in our example.

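In the MNIST implementation later in this post, these two roles become concrete mappings: the generator is a function G: R^100 → R^784 that decodes a 100-dimensional noise vector into a flattened 28×28 image, and the discriminator is a function D: R^784 → [0, 1] that maps an image to the probability that it came from the real dataset.
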
TRAINING GANs

● Training GANs is notoriously difficult. In CNNs, we used gradient descent to change our weights to reduce our loss.

● However, in a GAN, every weight change changes the entire balance of our dynamic system.

● In GANs, we are not seeking to minimize the loss, but to find an equilibrium between our two opposing networks.

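Formally, this equilibrium is the solution of the minimax game from the original paper: the discriminator D tries to maximize, and the generator G tries to minimize, the value function

V(D, G) = E_{x~p_data}[log D(x)] + E_{z~p_z}[log(1 − D(G(z)))]

At the ideal equilibrium, the generator's samples match the data distribution and the best the discriminator can do is output 1/2 for every image.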

THE GAN TRAINING PROCESS

1. Input randomly generated noise into our Generator Network to generate sample images.

2. We take some sample images from our real data and mix them with some of our generated images.

3. Input these mixed images to our Discriminator, which will then be trained on this mixed set and will update its weights accordingly.

4. We then make some more fake images and input them into the Discriminator, but we label them all as real. This is done to train the Generator. We freeze the weights of the discriminator at this stage (discriminator learning stops), and we use the feedback from the discriminator to update the weights of the generator. This is how we teach both our Generator (to make better synthetic images) and our Discriminator (to get better at spotting fakes).

GAN Block Diagram

[Figure: GAN block diagram]

For this article, we will be generating handwritten digits using the MNIST dataset. The architecture for this GAN is:

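Reading the architecture off the implementation below: the generator passes a 100-dimensional noise vector through fully connected layers of 256, 512, and 1024 units, each followed by LeakyReLU, into a 784-unit tanh output layer; the discriminator passes the 784-dimensional image through fully connected layers of 1024, 512, and 256 units, each followed by LeakyReLU and dropout, into a single sigmoid output unit.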

KERAS IMPLEMENTATION OF GAN ON MNIST DATASET

    The entire code for the project can be found here.

First, we load all the necessary libraries.

    import os
    os.environ["KERAS_BACKEND"] = "tensorflow"
    import numpy as np
    from tqdm import tqdm
    import matplotlib.pyplot as plt
    from keras.layers import Input
    from keras.models import Model, Sequential
    from keras.layers.core import Reshape, Dense, Dropout, Flatten
    from keras.layers.advanced_activations import LeakyReLU
    from keras.layers.convolutional import Convolution2D, UpSampling2D
    from keras.layers.normalization import BatchNormalization
    from keras.datasets import mnist
    from keras.optimizers import Adam
    from keras import backend as K
    from keras import initializers
    K.set_image_dim_ordering('th')
    # Deterministic output.
    # Tired of seeing the same results every time? Remove the line below.
    np.random.seed(1000)
    # The results are a little better when the dimensionality of the random vector is only 10.
    # The dimensionality has been left at 100 for consistency with other GAN implementations.
    randomDim = 100

Now we load our dataset. For this blog, the MNIST dataset is being used, so no dataset needs to be downloaded separately.

(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Scale pixel values from [0, 255] to [-1, 1] to match the generator's tanh output
X_train = (X_train.astype(np.float32) - 127.5)/127.5
# Flatten each 28x28 image into a 784-dimensional vector
X_train = X_train.reshape(60000, 784)
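As a quick sanity check (optional, not in the original post), the training matrix should now contain 60,000 flattened images with values in [-1, 1]:

print(X_train.shape)                 # (60000, 784)
print(X_train.min(), X_train.max())  # -1.0 1.0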

Next, we define the architecture of our generator and discriminator.

# Optimizer
adam = Adam(lr=0.0002, beta_1=0.5)

# Generator: decodes a 100-dim noise vector into a 784-dim (28x28) image
generator = Sequential()
generator.add(Dense(256, input_dim=randomDim, kernel_initializer=initializers.RandomNormal(stddev=0.02)))
generator.add(LeakyReLU(0.2))
generator.add(Dense(512))
generator.add(LeakyReLU(0.2))
generator.add(Dense(1024))
generator.add(LeakyReLU(0.2))
generator.add(Dense(784, activation='tanh'))
generator.compile(loss='binary_crossentropy', optimizer=adam)

# Discriminator: maps a 784-dim image to the probability that it is real
discriminator = Sequential()
discriminator.add(Dense(1024, input_dim=784, kernel_initializer=initializers.RandomNormal(stddev=0.02)))
discriminator.add(LeakyReLU(0.2))
discriminator.add(Dropout(0.3))
discriminator.add(Dense(512))
discriminator.add(LeakyReLU(0.2))
discriminator.add(Dropout(0.3))
discriminator.add(Dense(256))
discriminator.add(LeakyReLU(0.2))
discriminator.add(Dropout(0.3))
discriminator.add(Dense(1, activation='sigmoid'))
discriminator.compile(loss='binary_crossentropy', optimizer=adam)

Now we combine our generator and discriminator to train them simultaneously.

# Combined network. Note: Keras reads the trainable flag at compile time.
# The discriminator was compiled above while trainable, so
# discriminator.train_on_batch() still updates its weights; the combined
# model below is compiled with the discriminator frozen, so
# gan.train_on_batch() updates only the generator.
discriminator.trainable = False
ganInput = Input(shape=(randomDim,))
x = generator(ganInput)
ganOutput = discriminator(x)
gan = Model(inputs=ganInput, outputs=ganOutput)
gan.compile(loss='binary_crossentropy', optimizer=adam)

# Losses recorded for plotting
dLosses = []
gLosses = []

Next, three helper functions: one to plot the losses, one to plot and save generated images (called every 20 epochs), and one to save the models.

# Plot the loss from each batch
def plotLoss(epoch):
    plt.figure(figsize=(10, 8))
    plt.plot(dLosses, label='Discriminative loss')
    plt.plot(gLosses, label='Generative loss')
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.legend()
    plt.savefig('images/gan_loss_epoch_%d.png' % epoch)

# Create a wall of generated MNIST images
def plotGeneratedImages(epoch, examples=100, dim=(10, 10), figsize=(10, 10)):
    noise = np.random.normal(0, 1, size=[examples, randomDim])
    generatedImages = generator.predict(noise)
    generatedImages = generatedImages.reshape(examples, 28, 28)
    plt.figure(figsize=figsize)
    for i in range(generatedImages.shape[0]):
        plt.subplot(dim[0], dim[1], i+1)
        plt.imshow(generatedImages[i], interpolation='nearest', cmap='gray_r')
        plt.axis('off')
    plt.tight_layout()
    plt.savefig('images/gan_generated_image_epoch_%d.png' % epoch)

# Save the generator and discriminator networks (and weights) for later use
def saveModels(epoch):
    generator.save('models/gan_generator_epoch_%d.h5' % epoch)
    discriminator.save('models/gan_discriminator_epoch_%d.h5' % epoch)

    The train function

def train(epochs=1, batchSize=128):
    batchCount = X_train.shape[0] // batchSize
    print('Epochs:', epochs)
    print('Batch size:', batchSize)
    print('Batches per epoch:', batchCount)

    for e in range(1, epochs+1):
        print('-'*15, 'Epoch %d' % e, '-'*15)
        for _ in tqdm(range(batchCount)):
            # Get a random set of input noise and images
            noise = np.random.normal(0, 1, size=[batchSize, randomDim])
            imageBatch = X_train[np.random.randint(0, X_train.shape[0], size=batchSize)]

            # Generate fake MNIST images
            generatedImages = generator.predict(noise)
            X = np.concatenate([imageBatch, generatedImages])

            # Labels for generated and real data
            yDis = np.zeros(2*batchSize)
            # One-sided label smoothing
            yDis[:batchSize] = 0.9

            # Train discriminator
            discriminator.trainable = True
            dloss = discriminator.train_on_batch(X, yDis)

            # Train generator
            noise = np.random.normal(0, 1, size=[batchSize, randomDim])
            yGen = np.ones(batchSize)
            discriminator.trainable = False
            gloss = gan.train_on_batch(noise, yGen)

        # Store loss of most recent batch from this epoch
        dLosses.append(dloss)
        gLosses.append(gloss)

        if e == 1 or e % 20 == 0:
            plotGeneratedImages(e)
            saveModels(e)

    # Plot losses from every epoch
    plotLoss(e)

train(200, 128)
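One practical note: plotLoss, plotGeneratedImages, and saveModels write into images/ and models/ directories, which the code assumes already exist. If you are running this standalone, create them before training (a small addition, not in the original code):

import os
os.makedirs('images', exist_ok=True)
os.makedirs('models', exist_ok=True)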

To stay connected, follow me here.

    READ MY PREVIOUS BLOG: UNDERSTANDING U-Net from here.

Translated from: https://medium.com/analytics-vidhya/introduction-to-gans-38a7a990a538
