

II. Converting txt files and image data to LMDB format for OCR training

Published: 2023/11/27

This article explains how to convert txt label files and image data into the LMDB file format when training an OCR model, and is shared here for reference.

Table of Contents

  • Preface
  • 1. Background
  • 2. The content
    • 1. Code
    • 2. File formats


Preface

As artificial intelligence keeps advancing, machine learning grows ever more important. This article covers the basics of building the LMDB files needed before OCR training.

Note: what follows is the main body of the article; the examples below are for reference.

1. Background

Example: this material was produced in the course of OCR training work.

2. The content

1. Code

The code is as follows (example):


# -*- coding:utf-8 -*-
import os
import lmdb  # pip install lmdb
import cv2
import glob  # used to collect the list of image files
import numpy as np


def checkImageIsValid(imageBin):
    if imageBin is None:
        return False
    # np.fromstring is deprecated for binary data; frombuffer is the replacement
    imageBuf = np.frombuffer(imageBin, dtype=np.uint8)
    img = cv2.imdecode(imageBuf, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return False
    imgH, imgW = img.shape[0], img.shape[1]
    if imgH * imgW == 0:
        return False
    return True


def writeCache(env, cache):
    with env.begin(write=True) as txn:
        # for k, v in cache.iteritems():  # python2
        for k, v in cache.items():  # python3
            if not isinstance(v, bytes):
                v = str(v).encode()  # image values are already bytes; only encode text
            txn.put(k.encode(), v)


def createDataset(outputPath, imagePathList, labelList, lexiconList=None, checkValid=True):
    """
    Create LMDB dataset for CRNN training.
    ARGS:
        outputPath    : LMDB output path
        imagePathList : list of image path
        labelList     : list of corresponding groundtruth texts
        lexiconList   : (optional) list of lexicon lists
        checkValid    : if true, check the validity of every image
    """
    assert (len(imagePathList) == len(labelList))
    nSamples = len(imagePathList)
    print('...................')
    # map_size is the maximum database size in bytes:
    # 8589934592 = 8 GB (1099511627776 would be 1 TB); set it to what your disk can hold
    env = lmdb.open(outputPath, map_size=8589934592)
    cache = {}
    cnt = 1
    for i in range(nSamples):
        imagePath = imagePathList[i]
        label = labelList[i]
        if not os.path.exists(imagePath):
            print('%s does not exist' % imagePath)
            continue
        with open(imagePath, 'rb') as f:  # must be 'rb': the image is binary data
            imageBin = f.read()
        if checkValid:
            if not checkImageIsValid(imageBin):
                # if this triggers for every image, check that the file was read in binary mode
                print('%s is not a valid image' % imagePath)
                continue
        imageKey = 'image-%09d' % cnt
        labelKey = 'label-%09d' % cnt
        cache[imageKey] = imageBin
        cache[labelKey] = label
        if lexiconList:
            lexiconKey = 'lexicon-%09d' % cnt
            cache[lexiconKey] = ' '.join(lexiconList[i])
        if cnt % 1000 == 0:
            writeCache(env, cache)
            cache = {}
            print('Written %d / %d' % (cnt, nSamples))
        cnt += 1
    nSamples = cnt - 1
    cache['num-samples'] = str(nSamples)
    writeCache(env, cache)
    print('Created dataset with %d samples' % nSamples)


def read_text(path):
    with open(path) as f:
        text = f.read()
    text = text.strip()
    return text


if __name__ == '__main__':
    # directory the LMDB files are written to
    outputPath = r'E:\enducate\test_paper\Train_code\train'
    # folder holding the paired images and txt label files
    path = r"E:\enducate\test_paper\Train_code\data22222\*.png"
    imagePathList = glob.glob(path)
    print('------------', len(imagePathList), '------------')
    imgLabelLists = []
    for p in imagePathList:
        try:
            # the extension replaced here must match the glob pattern above (.png, not .jpg)
            imgLabelLists.append((p, read_text(p.replace('.png', '.txt'))))
        except Exception:
            continue
    # sort by label length
    imgLabelList = sorted(imgLabelLists, key=lambda x: len(x[1]))
    imgPaths = [p[0] for p in imgLabelList]
    txtLists = [p[1] for p in imgLabelList]
    createDataset(outputPath, imgPaths, txtLists, lexiconList=None, checkValid=True)
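Whatever reads the resulting database later has to reproduce exactly the key convention this script writes: zero-padded nine-digit counters plus a `num-samples` entry. A minimal pure-Python sketch of that convention (no LMDB needed, sample labels are made up):

```python
def image_key(n):
    # key format used by createDataset, e.g. 'image-000000001'
    return 'image-%09d' % n

def label_key(n):
    return 'label-%09d' % n

# simulate what createDataset caches for two samples
cache = {}
for cnt, label in enumerate(['12.3', '45'], start=1):
    cache[image_key(cnt)] = b'<image bytes>'
    cache[label_key(cnt)] = label
cache['num-samples'] = '2'

print(image_key(1))  # image-000000001
```

A data loader that builds its keys any other way (different prefix, different padding width) will simply get `None` back from the database.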

2. File formats

The first file format:
Images and txt label files coexist (each image has a matching txt label file).

The txt files and the images live in the same folder.
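Before running the conversion on this layout, it is worth checking that every image really has a label file next to it. A small sketch of such a pre-flight check (the function name and the demo files are illustrative, not from the original script):

```python
import glob
import os
import tempfile

def collect_pairs(image_dir, img_ext='.png', label_ext='.txt'):
    """Pair each image with its same-named label file; skip images with no label."""
    pairs = []
    for img_path in sorted(glob.glob(os.path.join(image_dir, '*' + img_ext))):
        txt_path = img_path[:-len(img_ext)] + label_ext
        if os.path.exists(txt_path):
            pairs.append((img_path, txt_path))
        else:
            print('missing label for %s' % img_path)
    return pairs

# quick demo on a throwaway folder
with tempfile.TemporaryDirectory() as d:
    for name in ('a.png', 'a.txt', 'b.png'):  # b.png has no label on purpose
        open(os.path.join(d, name), 'w').close()
    pairs = collect_pairs(d)
    print(len(pairs))  # 1
```

Running this before the conversion makes missing-label problems visible up front, instead of surfacing as silently skipped samples inside the `try/except` of the main script.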


The second file format is as follows:
The dataset consists of many images plus a single txt file, where that txt file holds the labels for all the images.
Each line has the format: image path + \t + label.
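Parsing one line of such a ground-truth file is a single tab split; a minimal sketch (the sample path and label are made up):

```python
def parse_gt_line(line):
    # each line: <image path>\t<label>
    image_path, label = line.rstrip('\n').split('\t')
    return image_path, label

img, lab = parse_gt_line('images/0001.png\t123.45\n')
print(img, lab)  # images/0001.png 123.45
```

Note that `split('\t')` raises a ValueError if a line has no tab or more than one, which is a useful early warning that the gt file is malformed.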

""" a modified version of CRNN torch repository https://github.com/bgshih/crnn/blob/master/tool/create_dataset.py """

import fire
import os
import lmdb
import cv2
import numpy as np


def checkImageIsValid(imageBin):
    if imageBin is None:
        return False
    imageBuf = np.frombuffer(imageBin, dtype=np.uint8)
    img = cv2.imdecode(imageBuf, cv2.IMREAD_GRAYSCALE)
    imgH, imgW = img.shape[0], img.shape[1]
    if imgH * imgW == 0:
        return False
    return True


def writeCache(env, cache):
    with env.begin(write=True) as txn:
        for k, v in cache.items():
            txn.put(k, v)


def createDataset(inputPath, gtFile, outputPath, checkValid=True):
    """
    Create LMDB dataset for training and evaluation.
    ARGS:
        inputPath  : input folder path where starts imagePath
        outputPath : LMDB output path
        gtFile     : list of image path and label
        checkValid : if true, check the validity of every image
    """
    os.makedirs(outputPath, exist_ok=True)
    env = lmdb.open(outputPath, map_size=1099511627776)
    cache = {}
    cnt = 1
    with open(gtFile, 'r', encoding='utf-8') as data:
        datalist = data.readlines()
    nSamples = len(datalist)
    for i in range(nSamples):
        imagePath, label = datalist[i].strip('\n').split('\t')
        # imagePath = os.path.join(inputPath, imagePath)
        # # only use alphanumeric data
        # if re.search('[^a-zA-Z0-9]', label):
        #     continue
        if not os.path.exists(imagePath):
            print('%s does not exist' % imagePath)
            continue
        with open(imagePath, 'rb') as f:
            imageBin = f.read()
        if checkValid:
            try:
                if not checkImageIsValid(imageBin):
                    print('%s is not a valid image' % imagePath)
                    continue
            except:
                print('error occured', i)
                with open(outputPath + '/error_image_log.txt', 'a') as log:
                    log.write('%s-th image data occured error\n' % str(i))
                continue
        imageKey = 'image-%09d'.encode() % cnt
        labelKey = 'label-%09d'.encode() % cnt
        cache[imageKey] = imageBin
        cache[labelKey] = label.encode()
        if cnt % 1000 == 0:
            writeCache(env, cache)
            cache = {}
            print('Written %d / %d' % (cnt, nSamples))
        cnt += 1
    nSamples = cnt - 1
    cache['num-samples'.encode()] = str(nSamples).encode()
    writeCache(env, cache)
    print('Created dataset with %d samples' % nSamples)


if __name__ == '__main__':
    fire.Fire(createDataset)

# python create_lmdb_dataset.py --inputPath /data2/ --gtFile /data2/meterdataset/digital_dataset/otherdataset/1030_data/collect_val.txt --outputPath /data2/meterdataset/digital_dataset/otherdataset/1030_data/2021-0507-result/val

Notes for running the second script:

inputPath  : the input folder to operate on
gtFile     : the txt ground-truth file
outputPath : the output path for the LMDB files

python create_lmdb_dataset.py --inputPath /data2/ --gtFile /data2/enducation/paper_recog_total/train-paper-recog/Recognization/deep-text-recognition-SHENG/data/text_recog/txt4val/img_gt/gt.txt --outputPath /data2/enducation/paper_recog_total/train-paper-recog/Recognization/deep-text-recognition-SHENG/data/val

Summary

The above is the full content of "II. Converting txt files and image data to LMDB format for OCR training"; I hope it helps you solve the problems you run into.
