
Implementing U-Net Remote Sensing Image Segmentation with TensorFlow

Published: 2024/1/23 by 豆豆

U-Net is a U-shaped network with two halves of convolutions: the left (contracting) path downsamples to extract high-level features, and the right (expanding) path upsamples and fuses features from the left side to produce a segmentation. Here we implement U-Net in TensorFlow to segment roads in remote sensing imagery.
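Because every convolution in this network uses 'valid' padding, each 3×3 conv trims two pixels from the spatial size and each 2×2 pooling halves it. A quick plain-Python trace (the helper names are just for this sketch) of how a 572×572 input shrinks through the classic U-Net:

```python
def conv3x3(size: int) -> int:
    """A 3x3 convolution with 'valid' padding trims 1 pixel per border."""
    return size - 2

def pool2x2(size: int) -> int:
    """A 2x2 max-pool with stride 2 halves the spatial size."""
    return size // 2

def up2x2(size: int) -> int:
    """A 2x2 upsampling doubles the spatial size."""
    return size * 2

size = 572
# Encoder: four (conv, conv, pool) stages, then the bottleneck convolutions.
encoder_sizes = []
for _ in range(4):
    size = conv3x3(conv3x3(size))
    encoder_sizes.append(size)       # size of the skip connection at this depth
    size = pool2x2(size)
size = conv3x3(conv3x3(size))        # bottleneck

# Decoder: four (upsample, concat, conv, conv) stages.
for _ in range(4):
    size = conv3x3(conv3x3(up2x2(size)))

print(encoder_sizes, size)           # [568, 280, 136, 64] 388
```

This is why the network below needs a 572×572 input and why its output is only 388×388.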

Training data:

Label images:


U-Net implementation:

import glob
import itertools

import cv2
import numpy as np
import tensorflow as tf


class UNet:
    def __init__(self, input_width, input_height, num_classes,
                 train_images, train_instances, val_images, val_instances,
                 epochs, lr, lr_decay, batch_size, save_path):
        self.input_width = input_width
        self.input_height = input_height
        self.num_classes = num_classes
        self.train_images = train_images
        self.train_instances = train_instances
        self.val_images = val_images
        self.val_instances = val_instances
        self.epochs = epochs
        self.lr = lr
        self.lr_decay = lr_decay
        self.batch_size = batch_size
        self.save_path = save_path
        # With 'valid' padding a 572x572 input produces a 388x388 output,
        # so labels must be resized to the output size, not the input size.
        self.output_width = 388
        self.output_height = 388

    def leftNetwork(self, inputs):
        # Contracting path: two 3x3 'valid' convolutions per stage, then 2x2 max-pooling.
        x = tf.keras.layers.Conv2D(64, (3, 3), padding='valid', activation='relu')(inputs)
        o_1 = tf.keras.layers.Conv2D(64, (3, 3), padding='valid', activation='relu')(x)
        x = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(o_1)
        x = tf.keras.layers.Conv2D(128, (3, 3), padding='valid', activation='relu')(x)
        o_2 = tf.keras.layers.Conv2D(128, (3, 3), padding='valid', activation='relu')(x)
        x = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(o_2)
        x = tf.keras.layers.Conv2D(256, (3, 3), padding='valid', activation='relu')(x)
        o_3 = tf.keras.layers.Conv2D(256, (3, 3), padding='valid', activation='relu')(x)
        x = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(o_3)
        x = tf.keras.layers.Conv2D(512, (3, 3), padding='valid', activation='relu')(x)
        o_4 = tf.keras.layers.Conv2D(512, (3, 3), padding='valid', activation='relu')(x)
        x = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(o_4)
        x = tf.keras.layers.Conv2D(1024, (3, 3), padding='valid', activation='relu')(x)
        o_5 = tf.keras.layers.Conv2D(1024, (3, 3), padding='valid', activation='relu')(x)
        return [o_1, o_2, o_3, o_4, o_5]

    def rightNetwork(self, inputs):
        # Expanding path: upsample, crop the skip connection to match, concatenate, convolve.
        c_1, c_2, c_3, c_4, o_5 = inputs
        x = tf.keras.layers.UpSampling2D((2, 2))(o_5)
        x = tf.keras.layers.concatenate([tf.keras.layers.Cropping2D(4)(c_4), x], axis=3)
        x = tf.keras.layers.Conv2D(512, (3, 3), padding='valid', activation='relu')(x)
        x = tf.keras.layers.Conv2D(512, (3, 3), padding='valid', activation='relu')(x)
        x = tf.keras.layers.UpSampling2D((2, 2))(x)
        x = tf.keras.layers.concatenate([tf.keras.layers.Cropping2D(16)(c_3), x], axis=3)
        x = tf.keras.layers.Conv2D(256, (3, 3), padding='valid', activation='relu')(x)
        x = tf.keras.layers.Conv2D(256, (3, 3), padding='valid', activation='relu')(x)
        x = tf.keras.layers.UpSampling2D((2, 2))(x)
        x = tf.keras.layers.concatenate([tf.keras.layers.Cropping2D(40)(c_2), x], axis=3)
        x = tf.keras.layers.Conv2D(128, (3, 3), padding='valid', activation='relu')(x)
        x = tf.keras.layers.Conv2D(128, (3, 3), padding='valid', activation='relu')(x)
        x = tf.keras.layers.UpSampling2D((2, 2))(x)
        x = tf.keras.layers.concatenate([tf.keras.layers.Cropping2D(88)(c_1), x], axis=3)
        x = tf.keras.layers.Conv2D(64, (3, 3), padding='valid', activation='relu')(x)
        x = tf.keras.layers.Conv2D(64, (3, 3), padding='valid', activation='relu')(x)
        x = tf.keras.layers.Conv2D(self.num_classes, (1, 1), padding='valid')(x)
        x = tf.keras.layers.Activation('softmax')(x)
        return x

    def build_model(self):
        inputs = tf.keras.Input(shape=[self.input_height, self.input_width, 3])
        left_output = self.leftNetwork(inputs)
        right_output = self.rightNetwork(left_output)
        return tf.keras.Model(inputs=inputs, outputs=right_output)

    def train(self):
        G_train = self.dataGenerator(mode='training')
        G_eval = self.dataGenerator(mode='validation')
        model = self.build_model()
        # To resume from a previous checkpoint instead, use:
        # model = tf.keras.models.load_model('model.h5')
        model.compile(
            # `decay` is the legacy per-step learning-rate decay argument;
            # newer TF releases expect a LearningRateSchedule instead.
            optimizer=tf.keras.optimizers.Adam(learning_rate=self.lr, decay=self.lr_decay),
            loss='categorical_crossentropy',
            metrics=['accuracy'])
        # fit_generator is deprecated; fit accepts Python generators directly.
        model.fit(G_train, steps_per_epoch=5,
                  validation_data=G_eval, validation_steps=5,
                  epochs=self.epochs)
        model.save(self.save_path)

    def dataGenerator(self, mode):
        if mode == 'training':
            image_dir, instance_dir = self.train_images, self.train_instances
        else:
            image_dir, instance_dir = self.val_images, self.val_instances
        images = sorted(glob.glob(image_dir + '*.jpg'))
        instances = sorted(glob.glob(instance_dir + '*.png'))
        zipped = itertools.cycle(zip(images, instances))
        while True:
            x_batch, y_batch = [], []
            for _ in range(self.batch_size):
                img_path, seg_path = next(zipped)
                img = cv2.resize(cv2.imread(img_path, 1),
                                 (self.input_width, self.input_height)) / 255.0
                # Nearest-neighbor resize keeps the label values discrete.
                seg = cv2.resize(cv2.imread(seg_path, 0),
                                 (self.output_width, self.output_height),
                                 interpolation=cv2.INTER_NEAREST)
                x_batch.append(img)
                y_batch.append(tf.keras.utils.to_categorical(seg, self.num_classes))
            yield np.array(x_batch), np.array(y_batch)
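The crop amounts hard-coded in rightNetwork (4, 16, 40, 88) are not arbitrary: Cropping2D(n) trims n pixels from every border, so each crop is half the size gap between a skip connection and the upsampled decoder map it joins. For a 572×572 input, the feature-map sizes (which follow from the 'valid'-padding arithmetic) give:

```python
# Encoder skip-connection sizes c_1..c_4 and the decoder sizes they meet
# after each 2x2 upsample, for a 572x572 input with 'valid' padding.
skip_sizes = [568, 280, 136, 64]
decoder_sizes = [392, 200, 104, 56]

# Per-side crop = half the total size difference.
crops = [(s - d) // 2 for s, d in zip(skip_sizes, decoder_sizes)]
print(crops)   # [88, 40, 16, 4] -- the Cropping2D arguments for c_1..c_4
```

Changing the input resolution therefore requires recomputing these four constants.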

Training script:

unet = UNet(input_width=572,
            input_height=572,
            num_classes=2,
            train_images='./datasets/train/images/',
            train_instances='./datasets/train/instances/',
            val_images='./datasets/validation/images/',
            val_instances='./datasets/validation/instances/',
            epochs=100,
            lr=0.0001,
            lr_decay=0.00001,
            batch_size=100,
            save_path='model.h5')
unet.train()
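The training loop above requests a fixed five batches per epoch; more commonly, steps per epoch is derived from the dataset size and the batch size so that each epoch sees every sample once. A small sketch (the dataset size of 1000 is hypothetical):

```python
import math

def steps_per_epoch(num_samples: int, batch_size: int) -> int:
    """Number of generator batches needed to cover every sample once per epoch."""
    return math.ceil(num_samples / batch_size)

# Hypothetical dataset of 1000 training pairs with the batch size used above:
print(steps_per_epoch(1000, 100))   # 10
```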

We segment only road versus background, a binary classification, so the network output has shape 388×388×2 (channels last). After 100 epochs of training, the model is saved and used for inference.
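The softmax head assigns each output pixel one probability per class; turning that into a road mask is a per-pixel argmax, which is exactly what the inference script does. A tiny NumPy sketch with a hypothetical 2×2 probability map in place of the real 388×388 output:

```python
import numpy as np

# Hypothetical per-pixel class probabilities, shape (H, W, num_classes):
# channel 0 = background, channel 1 = road.
probs = np.array([
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.6, 0.4], [0.3, 0.7]],
])

mask = np.argmax(probs, axis=-1)     # shape (H, W), values in {0, 1}
road_pixels = np.uint8(mask * 255)   # scale class 1 to white for saving as an image

print(mask)
```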

Inference script:

import cv2
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

model = tf.keras.models.load_model('model.h5')

img = cv2.resize(cv2.imread('17.jpg'), (572, 572)) / 255.0
img = np.expand_dims(img, 0)             # add the batch dimension

pred = model.predict(img)
pred = np.argmax(pred[0], axis=-1)       # per-pixel class index, shape (388, 388)
pred = np.uint8(pred * 255)              # cv2.imwrite expects uint8, road class -> white
cv2.imwrite('result.jpg', pred)
plt.imshow(pred)
plt.show()

Test image:

Inference result:

Overlaying the inference result on the original image:

import cv2

img = cv2.imread('17.jpg')
height, width = img.shape[:2]

result = cv2.imread('result.jpg')
# cv2.resize takes the target size as (width, height), and the interpolation
# method must be passed by keyword (the third positional argument is fx).
result = cv2.resize(result, (width, height), interpolation=cv2.INTER_LINEAR)
result = cv2.Canny(result, 0, 255)

# Paint the detected road edges red (BGR) on the original image,
# vectorized instead of a per-pixel double loop.
img[result == 255] = [0, 0, 255]

cv2.imwrite('temp.jpg', result)
cv2.imwrite('out.jpg', img)
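As an alternative to hard-replacing edge pixels with pure red, the mask can be alpha-blended so the original texture stays visible underneath. A minimal NumPy sketch using synthetic data in place of the real images (blend_overlay and its parameters are illustrative, not from the original code):

```python
import numpy as np

def blend_overlay(img, mask, color=(0, 0, 255), alpha=0.5):
    """Blend `color` (BGR, per OpenCV convention) into `img` wherever `mask` is nonzero.

    img:  uint8 array of shape (H, W, 3)
    mask: array of shape (H, W), nonzero where the overlay applies
    """
    out = img.astype(np.float32)
    color = np.asarray(color, dtype=np.float32)
    sel = mask > 0
    out[sel] = (1 - alpha) * out[sel] + alpha * color
    return out.astype(np.uint8)

# Synthetic 4x4 gray image with a 2x2 "road" region in the mask.
img = np.full((4, 4, 3), 100, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 255

out = blend_overlay(img, mask)
print(out[1, 1], out[0, 0])   # blended pixel vs. untouched pixel
```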

Summary

This article walked through building the classic valid-padding U-Net in TensorFlow, training it for binary road/background segmentation of remote sensing imagery, running inference, and overlaying the predicted road edges on the source image.

