
[Translation] Apache HBase New Feature: MOB Support (Part 1)


Original post: http://blog.cloudera.com/blog/2015/06/inside-apache-hbases-new-support-for-mobs/

Design Background of the HBase MOB Feature

Apache HBase is a distributed, scalable, performant, consistent key value database that can store a variety of binary data types. It excels at storing many relatively small values (<10K), and providing low-latency reads and writes.
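To make that baseline concrete, here is a minimal sketch (not from the original post) of the small-value workload using the standard HBase Java client's 1.0-era API; the "traffic" table and "f" column family are made-up names assumed to already exist:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SmallValueExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("traffic"))) {
                // Write a small (<10K) value, the workload HBase excels at.
                Put put = new Put(Bytes.toBytes("car-0001"));
                put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("speed"), Bytes.toBytes("72"));
                table.put(put);

                // Low-latency point read by row key.
                Result result = table.get(new Get(Bytes.toBytes("car-0001")));
                System.out.println(Bytes.toString(
                    result.getValue(Bytes.toBytes("f"), Bytes.toBytes("speed"))));
            }
        }
    }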

However, there is a growing demand for storing documents, images, and other moderate objects (MOBs) in HBase while maintaining low latency for reads and writes. One such use case is a bank that stores signed and scanned customer documents. As another example, transport agencies may want to store snapshots of traffic and moving cars. These MOBs are generally write-once.


Unfortunately, performance can degrade in situations where many moderately sized values (100KB to 10MB) are stored due to the ever-increasing I/O pressure created by compactions. Consider the case where 1TB of photos from traffic cameras, each 1MB in size, are stored into HBase daily. Parts of the stored files are compacted multiple times via minor compactions and eventually, data is rewritten by major compactions. Along with accumulation of these MOBs, I/O created by compactions will slow down the compactions, further block memstore flushing, and eventually block updates. A big MOB store will trigger frequent region splits, reducing the availability of the affected regions.

In order to address these drawbacks, Cloudera and Intel engineers have implemented MOB support in an HBase branch (hbase-11339: HBase MOB). This branch will be merged to the master in HBase 1.1 or 1.2, and is already present and supported in CDH 5.4.x, as well.


(Translator's note: the feature did not actually ship in HBase 1.1 or 1.2; it was ultimately merged for the 2.0.0 release.)

Operations on MOBs are usually write-intensive, with rare updates or deletes and relatively infrequent reads. MOBs are usually stored together with their metadata. Metadata relating to MOBs may include, for instance, car number, speed, and color. Metadata are very small relative to the MOBs. Metadata are usually accessed for analysis, while MOBs are usually randomly accessed only when they are explicitly requested with row keys.
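As an illustration of this access pattern, here is a sketch assuming a hypothetical row layout for the traffic-camera use case, where the small metadata cells and the MOB cell share one row key so a single Put keeps them together; all names below are invented for the example:

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SnapshotWriter {
        // Stores one traffic snapshot: small metadata cells plus a ~1MB photo.
        static void writeSnapshot(Table table, byte[] photoBytes) throws IOException {
            Put put = new Put(Bytes.toBytes("cam42-20150601-120000")); // camera + timestamp key
            put.addColumn(Bytes.toBytes("meta"), Bytes.toBytes("car_number"), Bytes.toBytes("B-1234"));
            put.addColumn(Bytes.toBytes("meta"), Bytes.toBytes("speed"), Bytes.toBytes("72"));
            put.addColumn(Bytes.toBytes("meta"), Bytes.toBytes("color"), Bytes.toBytes("red"));
            put.addColumn(Bytes.toBytes("mob"), Bytes.toBytes("photo"), photoBytes);
            table.put(put); // single-row mutations are atomic in HBase
        }
    }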

Users want to read and write the MOBs in HBase with low latency using the same APIs, and want strong consistency, security, snapshots, HBase replication between clusters, and so on. To meet these goals, MOBs were moved out of the main I/O path of HBase and into a new I/O path.

In this post, you will learn about this design approach, and why it was selected.


Analysis of Possible Approaches

There were a few possible approaches to this problem. The first approach we considered was to store MOBs in HBase with a tuned split and compaction policies—a bigger desired MaxFileSize decreases the frequency of region split, and fewer or no compactions can avoid the write amplification penalty. That approach would improve write latency and throughput considerably. However, along with the increasing number of stored files, there would be too many opened readers in a single store, even more than what is allowed by the OS. As a result, a lot of memory would be consumed and read performance would degrade.

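For illustration only, a sketch of that kind of tuning with the HBase 1.x admin API; the table name, family, and thresholds below are hypothetical examples, not values recommended by the original post:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class TunedTableExample {
        // Creates a table tuned to split rarely and compact less aggressively.
        static void createTunedTable(Admin admin) throws IOException {
            HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("traffic"));
            htd.setMaxFileSize(100L * 1024 * 1024 * 1024); // 100GB regions => fewer splits
            // Require more store files before a minor compaction kicks in,
            // trading read-side file counts for less write amplification.
            htd.setConfiguration("hbase.hstore.compaction.min", "10");
            htd.addFamily(new HColumnDescriptor("f"));
            admin.createTable(htd);
        }
    }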

Another approach was to use an HBase + HDFS model to store the metadata and MOBs separately. In this model, a single file is linked by an entry in HBase. This is a client solution, and the transaction is controlled by the client—no HBase-side memories are consumed by MOBs. This approach would work for objects larger than 50MB, but for MOBs, many small files lead to inefficient HDFS usage since the default block size in HDFS is 128MB.

For example, let’s say a NameNode has 48GB of memory and each file is 100KB with three replicas. Each file takes more than 300 bytes in memory, so a NameNode with 48GB memory can hold about 160 million files, which would limit us to only storing 16TB MOB files in total.

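Spelling out that arithmetic:

    48 GB ÷ 300 B/file ≈ 1.6 × 10^8 files (160 million)
    1.6 × 10^8 files × 100 KB/file = 16 TB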

As an improvement, we could have assembled the small MOB files into bigger ones (that is, a file could have multiple MOB entries) and stored the offset and length in the HBase table for fast reading. However, maintaining data consistency and managing deleted MOBs and small MOB files in compactions are difficult. Furthermore, if we were to use this approach, we'd have to consider new security policies, lose atomicity properties of writes, and potentially lose the backup and disaster recovery provided by replication and snapshots.

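A sketch of the bookkeeping such a scheme would need, with a hypothetical "loc" family recording the container file, offset, and length. Note this shows only the easy part; the consistency, deletion, and compaction management that make the approach impractical are exactly what is missing:

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PackedMobIndex {
        // Records where one MOB lives inside a large packed HDFS file.
        static void indexMob(Table table, String rowKey, String packedFile,
                             long offset, long length) throws IOException {
            Put put = new Put(Bytes.toBytes(rowKey));
            put.addColumn(Bytes.toBytes("loc"), Bytes.toBytes("file"), Bytes.toBytes(packedFile));
            put.addColumn(Bytes.toBytes("loc"), Bytes.toBytes("offset"), Bytes.toBytes(offset));
            put.addColumn(Bytes.toBytes("loc"), Bytes.toBytes("length"), Bytes.toBytes(length));
            table.put(put);
        }
    }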

HBase MOB Architecture Design

In the end, because most of the concerns around storing MOBs in HBase involve the I/O created by compactions, the key was to move MOBs out of management by normal regions to avoid region splits and compactions there.

The HBase MOB design is similar to the HBase + HDFS approach in that we store the metadata and MOBs separately. The difference lies in the server-side design: the memstore caches MOBs before they are flushed to disk, on each flush the MOBs are written into an HFile called a "MOB file", and each MOB file holds multiple entries rather than one HDFS file per MOB. MOB files are stored in a special region. All reads and writes go through the current HBase APIs.

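For example, on a release that includes hbase-11339 (HBase 2.0, or the CDH 5.4.x backport), MOB handling is declared per column family; the names and the 100KB threshold below are illustrative assumptions:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class MobTableExample {
        // Creates a table whose "mob" family routes large cells through the MOB path.
        static void createMobTable(Admin admin) throws IOException {
            HColumnDescriptor mobFamily = new HColumnDescriptor("mob");
            mobFamily.setMobEnabled(true);          // store this family's big cells in MOB files
            mobFamily.setMobThreshold(100 * 1024L); // cells above 100KB are treated as MOBs
            HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("traffic"));
            htd.addFamily(new HColumnDescriptor("meta")); // small metadata stays on the normal path
            htd.addFamily(mobFamily);
            admin.createTable(htd);
        }
    }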


To be continued in Part 2: https://my.oschina.net/u/234661/blog/1553060

Reposted from: https://my.oschina.net/u/234661/blog/1553005
