
Canal: Syncing MySQL Binlog to HBase and Elasticsearch


Contents

  • 1. Introduction to Canal
    • How it works
      • How canal works
  • 2. Download
  • 3. Installation and Usage
    • MySQL preparation
    • Installing canal
      • Extracting canal-deployer
      • Configuration changes
      • Startup
      • Checking the server log
      • Checking the instance log
      • Stopping the service
    • Using canal-client
    • Canal Adapter
      • Syncing data to HBase
      • Syncing data to Elasticsearch

1. Introduction to Canal

In the early days, Alibaba ran dual data centers in Hangzhou and the United States and needed cross-datacenter synchronization, which was mainly implemented with business-level triggers that captured incremental changes. From 2010 onward, teams gradually switched to parsing database logs to obtain incremental changes for synchronization, which gave rise to a large number of incremental database subscription and consumption use cases.

Use cases built on log-based incremental subscription and consumption include:

  • Database mirroring
  • Real-time database backup
  • Index building and real-time maintenance (splitting heterogeneous indexes, inverted indexes, etc.)
  • Business cache refresh
  • Incremental data processing with business logic

Canal currently supports the source MySQL versions 5.1.x, 5.5.x, 5.6.x, 5.7.x and 8.0.x.

How it works

  • The MySQL master writes data changes to the binary log (the records in it are called binary log events and can be inspected with show binlog events; see the example after this list)
  • The MySQL slave copies the master's binary log events to its relay log
  • The MySQL slave replays the events in the relay log, applying the data changes to its own data
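For example, on the source MySQL you can list the available binlog files and peek at the events canal will later consume. This is just an illustration; substitute a file name reported by SHOW BINARY LOGS on your own server:

SHOW BINARY LOGS;
SHOW BINLOG EVENTS IN 'mysql-bin.000003' LIMIT 10;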

How canal works

  • canal emulates the MySQL slave interaction protocol, presenting itself as a MySQL slave and sending the dump protocol to the MySQL master
  • The MySQL master receives the dump request and starts pushing the binary log to the slave (i.e. canal)
  • canal parses the binary log objects (originally a byte stream)

GitHub : https://github.com/alibaba/canal

2. Download

Download page: https://github.com/alibaba/canal/tags
Here we use v1.1.5; click it to download.

Netdisk mirror: link: https://pan.baidu.com/s/1VjIzpb79d05CET5xEnwdEQ  extraction code: h0bk

3. Installation and Usage

MySQL preparation

  • For a self-hosted MySQL you first need to enable binlog writing and set binlog-format to ROW mode; configure my.cnf as follows:

[mysqld]
log-bin=mysql-bin   # enable binlog
binlog-format=ROW   # use ROW mode
server_id=1         # required for MySQL replication; must not clash with canal's slaveId

  • Grant the MySQL account used by canal the privileges of a MySQL slave; if the account already exists you can grant directly (a quick verification snippet follows this list):

CREATE USER canal IDENTIFIED BY 'canal';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
-- GRANT ALL PRIVILEGES ON *.* TO 'canal'@'%';
FLUSH PRIVILEGES;
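Before starting canal it is worth double-checking the prerequisites. A quick, hedged verification (the exact output will of course differ per server):

-- binlog must be enabled and in ROW format
SHOW VARIABLES LIKE 'log_bin';
SHOW VARIABLES LIKE 'binlog_format';
SHOW VARIABLES LIKE 'server_id';

-- the canal account must have replication privileges
SHOW GRANTS FOR 'canal'@'%';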

Installing canal

Extracting canal-deployer

tar -zxvf canal.deployer-1.1.5.tar.gz

After extraction the directory layout looks like this:

drwxr-xr-x 2 root root   76 Sep 18 16:58 bin
drwxr-xr-x 5 root root  123 Sep 18 16:58 conf
drwxr-xr-x 2 root root 4096 Sep 18 16:58 lib
drwxrwxrwx 2 root root    6 Apr 19 16:15 logs
drwxrwxrwx 2 root root  177 Apr 19 16:15 plugin

Configuration changes

  • Edit conf/canal.properties
#################################################
#########       common argument        #########
#################################################
# tcp bind ip
# local IP the canal server binds to; if not set, a local IP is picked automatically at startup (default: empty)
canal.ip =
# register ip to zookeeper
# host IP of the canal-server process; optional, a local IP is bound automatically
canal.register.ip =
# port canal-server listens on (TCP mode only; other modes do not listen on 11111)
canal.port = 11111
# port for canal-server metrics pull
canal.metrics.pull.port = 11112
# canal instance user/passwd
# canal.user = canal
# canal.passwd = E3619321C1A937C46A0D8BD1DAC39F93B27D4458

# canal admin config
#canal.admin.manager = 127.0.0.1:8089
canal.admin.port = 11110
canal.admin.user = admin
canal.admin.passwd = 4ACFE3202A5FF5CF467898FC58AAB1D615029441
# admin auto register
#canal.admin.register.auto = true
#canal.admin.register.cluster =
#canal.admin.register.name =

# zookeeper connection info for the canal server; required for coordination in cluster mode, optional in standalone mode
canal.zkServers =
# flush data to zk -- how often canal persists data to zookeeper, in milliseconds
canal.zookeeper.flush.period = 1000
canal.withoutNetty = false
# tcp, kafka, rocketMQ, rabbitMQ
# mode canal-server runs in: TCP connects clients directly with no middleware; kafka/rocketMQ/rabbitMQ go through a message queue
canal.serverMode = tcp
# flush meta cursor/parse position to file -- data directory
canal.file.data.dir = ${canal.conf.dir}
canal.file.flush.period = 1000
## memory store RingBuffer size, should be Math.pow(2,n)
canal.instance.memory.buffer.size = 16384
## memory store RingBuffer used memory unit size, default 1kb
## the following are system parameters (memory, network, etc.)
canal.instance.memory.buffer.memunit = 1024
## memory store gets mode, MEMSIZE or ITEMSIZE
canal.instance.memory.batch.mode = MEMSIZE
canal.instance.memory.rawEntry = true

## detecting config -- heartbeat check settings, used when running HA
canal.instance.detecting.enable = false
#canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()
canal.instance.detecting.sql = select 1
canal.instance.detecting.interval.time = 3
canal.instance.detecting.retry.threshold = 3
canal.instance.detecting.heartbeatHaEnable = false

# support maximum transaction size; transactions larger than this are split into multiple deliveries
canal.instance.transaction.size = 1024
# mysql fallback connected to new master should fallback times
canal.instance.fallbackIntervalInSeconds = 60

# network config
canal.instance.network.receiveBufferSize = 16384
canal.instance.network.sendBufferSize = 16384
canal.instance.network.soTimeout = 30

# binlog filter config -- which kinds of statements get filtered out
canal.instance.filter.druid.ddl = true
# ignore DCL query statements such as grant/create user; default false
canal.instance.filter.query.dcl = false
# ignore DML query statements such as insert/update/delete (MySQL 5.6 ROW mode may contain statement-mode query records); default false
canal.instance.filter.query.dml = false
# ignore DDL query statements such as create table/alter table/drop table/rename table/create index/drop index
# (currently only table-level DDL is supported; create database/trigger/procedure are treated as DCL for now); default false
canal.instance.filter.query.ddl = false
canal.instance.filter.table.error = false
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = false
canal.instance.filter.dml.insert = false
canal.instance.filter.dml.update = false
canal.instance.filter.dml.delete = false

# binlog format/image check -- use ROW mode; other modes will not raise errors, but no data will be synced
canal.instance.binlog.format = ROW,STATEMENT,MIXED
canal.instance.binlog.image = FULL,MINIMAL,NOBLOB

# binlog ddl isolation
canal.instance.get.ddl.isolation = false

# parallel parser config -- set this to false on a single-CPU machine
canal.instance.parser.parallel = true
## concurrent thread number, default 60% available processors, suggest not to exceed Runtime.getRuntime().availableProcessors()
#canal.instance.parser.parallelThreadSize = 16
## disruptor ringbuffer size, must be power of 2
canal.instance.parser.parallelBufferSize = 256

# table meta tsdb info
canal.instance.tsdb.enable = true
canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
canal.instance.tsdb.dbUsername = canal
canal.instance.tsdb.dbPassword = canal
# dump snapshot interval, default 24 hour
canal.instance.tsdb.snapshot.interval = 24
# purge snapshot expire, default 360 hour (15 days)
canal.instance.tsdb.snapshot.expire = 360

#################################################
#########         destinations         #########
#################################################
# instances created by canal-server; list the instance names to create here, e.g. test1,test2, comma separated
canal.destinations = example
# conf root dir
canal.conf.dir = ../conf
# auto scan instance dir add/remove and start/stop instance
canal.auto.scan = true
canal.auto.scan.interval = 5
# set this value to 'true' means that when binlog pos not found, skip to latest.
# WARN: pls keep 'false' in production env, or if you know what you want.
canal.auto.reset.latest.pos.mode = false

canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml
#canal.instance.tsdb.spring.xml = classpath:spring/tsdb/mysql-tsdb.xml

canal.instance.global.mode = spring
canal.instance.global.lazy = false
canal.instance.global.manager.address = ${canal.admin.manager}
#canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
canal.instance.global.spring.xml = classpath:spring/file-instance.xml
#canal.instance.global.spring.xml = classpath:spring/default-instance.xml

##################################################
#########         MQ Properties          #########
##################################################
# aliyun ak/sk, support rds/mq
canal.aliyun.accessKey =
canal.aliyun.secretKey =
canal.aliyun.uid =

canal.mq.flatMessage = true
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
# Set this value to "cloud", if you want open message trace feature in aliyun.
canal.mq.accessChannel = local

canal.mq.database.hash = true
canal.mq.send.thread.size = 30
canal.mq.build.thread.size = 8

##################################################
#########             Kafka              #########
##################################################
kafka.bootstrap.servers = 127.0.0.1:9092
kafka.acks = all
kafka.compression.type = none
kafka.batch.size = 16384
kafka.linger.ms = 1
kafka.max.request.size = 1048576
kafka.buffer.memory = 33554432
kafka.max.in.flight.requests.per.connection = 1
kafka.retries = 0

kafka.kerberos.enable = false
kafka.kerberos.krb5.file = "../conf/kerberos/krb5.conf"
kafka.kerberos.jaas.file = "../conf/kerberos/jaas.conf"

##################################################
#########            RocketMQ            #########
##################################################
rocketmq.producer.group = test
rocketmq.enable.message.trace = false
rocketmq.customized.trace.topic =
rocketmq.namespace =
rocketmq.namesrv.addr = 127.0.0.1:9876
rocketmq.retry.times.when.send.failed = 0
rocketmq.vip.channel.enabled = false
rocketmq.tag =

##################################################
#########            RabbitMQ            #########
##################################################
rabbitmq.host =
rabbitmq.virtual.host =
rabbitmq.exchange =
rabbitmq.username =
rabbitmq.password =
rabbitmq.deliveryMode =
  • Edit the example instance configuration
    After declaring an instance in conf/canal.properties, create a directory with that instance name alongside the root configuration (i.e. under conf/) and put an instance.properties file in it. example is the demo instance shipped with the project; a sketch for adding further instances follows the properties below.
    Its content is as follows:
#################################################
## mysql serverId , v1.0.26+ will autoGen
## since v1.0.26 the slaveId is generated automatically, so this can be left unset
# canal.instance.mysql.slaveId=0

# enable gtid use true/false
canal.instance.gtidon=false

# position info
# database address
canal.instance.master.address=127.0.0.1:3306
# binlog file name
canal.instance.master.journal.name=
# binlog offset to start from when connecting to the MySQL master
canal.instance.master.position=
# binlog timestamp to start from when connecting to the MySQL master
canal.instance.master.timestamp=
canal.instance.master.gtid=

# rds oss binlog
canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=

# table meta tsdb info
canal.instance.tsdb.enable=true
#canal.instance.tsdb.url=jdbc:mysql://127.0.0.1:3306/canal_tsdb
#canal.instance.tsdb.dbUsername=canal
#canal.instance.tsdb.dbPassword=canal

#canal.instance.standby.address =
#canal.instance.standby.journal.name =
#canal.instance.standby.position =
#canal.instance.standby.timestamp =
#canal.instance.standby.gtid=

# username/password
canal.instance.dbUsername=canal
canal.instance.dbPassword=canal
# canal.instance.connectionCharset is the database's character set expressed as a Java charset name, e.g. UTF-8, GBK, ISO-8859-1
canal.instance.connectionCharset = UTF-8
# enable druid Decrypt database password
canal.instance.enableDruid=false
#canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==

# table regex
# tables to listen to; regular expressions are supported
# tables the MySQL parser cares about, as Perl regexes; separate multiple regexes with commas, escape with double backslashes (\\)
# common examples:
# 1. all tables: .*  or  .*\\..*
# 2. all tables in the canal schema: canal\\..*
# 3. tables in the canal schema whose names start with canal: canal\\.canal.*
# 4. a single table in the canal schema: canal.test1
# 5. a combination of rules: canal\\..*,mysql.test1,mysql.test2 (comma separated)
# this is an important parameter -- the table whitelist; e.g. to capture only the user table in the test database, write test.user
canal.instance.filter.regex=.*\\..*
# table black regex
# tables to ignore; regular expressions are supported
canal.instance.filter.black.regex=mysql\\.slave_.*
# table field filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.field=test1.t_product:id/subject/keywords,test2.t_company:id/name/contact/ch
# table field black filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.black.field=test1.t_product:subject/product_image,test2.t_company:id/name/contact/ch

# mq config
canal.mq.topic=example
# dynamic topic route by schema or table regex
#canal.mq.dynamicTopic=mytest1.user,mytest2\\..*,.*\\..*
canal.mq.partition=0
# hash partition config
#canal.mq.partitionsNum=3
#canal.mq.partitionHash=test.table:id^name,.*\\..*
#canal.mq.dynamicTopicPartitionNum=test.*:4,mycanal:6
#################################################
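If you later need more than the default example instance, the usual approach is to add the new name to canal.destinations and clone the example directory; the instance name below is only an illustration:

# in conf/canal.properties:
#   canal.destinations = example,myinstance
cp -r conf/example conf/myinstance
vi conf/myinstance/instance.properties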

Startup

sh bin/startup.sh

Checking the server log

# tailf logs/canal/canal.log
2021-09-19 09:38:26.746 [main] INFO  com.alibaba.otter.canal.deployer.CanalLauncher - ## set default uncaught exception handler
2021-09-19 09:38:26.793 [main] INFO  com.alibaba.otter.canal.deployer.CanalLauncher - ## load canal configurations
2021-09-19 09:38:26.812 [main] INFO  com.alibaba.otter.canal.deployer.CanalStarter - ## start the canal server.
2021-09-19 09:38:26.874 [main] INFO  com.alibaba.otter.canal.deployer.CanalController - ## start the canal server[192.168.168.2(192.168.168.2):11111]
2021-09-19 09:38:28.240 [main] INFO  com.alibaba.otter.canal.deployer.CanalStarter - ## the canal server is running now ......

Checking the instance log

# tailf logs/example/example.log
2021-09-19 09:38:28.191 [main] INFO  c.a.otter.canal.instance.spring.CanalInstanceWithSpring - start CannalInstance for 1-example
2021-09-19 09:38:28.202 [main] WARN  c.a.o.canal.parse.inbound.mysql.dbsync.LogEventConvert - --> init table filter : ^.*\..*$
2021-09-19 09:38:28.202 [main] WARN  c.a.o.canal.parse.inbound.mysql.dbsync.LogEventConvert - --> init table black filter : ^mysql\.slave_.*$
2021-09-19 09:38:28.207 [main] INFO  c.a.otter.canal.instance.core.AbstractCanalInstance - start successful....

Stopping the service

sh bin/stop.sh

Using canal-client

  • Maven dependency
<dependency>
    <groupId>com.alibaba.otter</groupId>
    <artifactId>canal.client</artifactId>
    <version>1.1.0</version>
</dependency>
  • ClientSample.java
import java.net.InetSocketAddress;
import java.util.List;

import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.common.utils.AddressUtils;
import com.alibaba.otter.canal.protocol.Message;
import com.alibaba.otter.canal.protocol.CanalEntry.Column;
import com.alibaba.otter.canal.protocol.CanalEntry.Entry;
import com.alibaba.otter.canal.protocol.CanalEntry.EntryType;
import com.alibaba.otter.canal.protocol.CanalEntry.EventType;
import com.alibaba.otter.canal.protocol.CanalEntry.RowChange;
import com.alibaba.otter.canal.protocol.CanalEntry.RowData;

/**
 * @author Jast
 * @description
 * @date 2021-09-19 09:43
 */
public class ClientSample {

    public static void main(String[] args) {
        // create the connection
        // CanalConnector connector = CanalConnectors.newSingleConnector(
        //         new InetSocketAddress(AddressUtils.getHostIp(), 11111), "example", "", "");
        CanalConnector connector = CanalConnectors.newSingleConnector(
                new InetSocketAddress("192.168.168.2", 11111), "example", "", "");
        int batchSize = 1000;
        int emptyCount = 0;
        try {
            connector.connect();
            connector.subscribe(".*\\..*");
            connector.rollback();
            int totalEmptyCount = 120;
            while (emptyCount < totalEmptyCount) {
                Message message = connector.getWithoutAck(batchSize); // fetch up to batchSize entries
                long batchId = message.getId();
                int size = message.getEntries().size();
                if (batchId == -1 || size == 0) {
                    emptyCount++;
                    System.out.println("empty count : " + emptyCount);
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                    }
                } else {
                    emptyCount = 0;
                    // System.out.printf("message[batchId=%s,size=%s] \n", batchId, size);
                    printEntry(message.getEntries());
                }
                connector.ack(batchId); // acknowledge the batch
                // connector.rollback(batchId); // on failure, roll the batch back
            }
            System.out.println("empty too many times, exit");
        } finally {
            connector.disconnect();
        }
    }

    private static void printEntry(List<Entry> entrys) {
        for (Entry entry : entrys) {
            if (entry.getEntryType() == EntryType.TRANSACTIONBEGIN || entry.getEntryType() == EntryType.TRANSACTIONEND) {
                continue;
            }
            RowChange rowChange = null;
            try {
                rowChange = RowChange.parseFrom(entry.getStoreValue());
            } catch (Exception e) {
                throw new RuntimeException("ERROR ## parser of eromanga-event has an error , data:" + entry.toString(), e);
            }
            EventType eventType = rowChange.getEventType();
            System.out.println(String.format("================> binlog[%s:%s] , name[%s,%s] , eventType : %s",
                    entry.getHeader().getLogfileName(), entry.getHeader().getLogfileOffset(),
                    entry.getHeader().getSchemaName(), entry.getHeader().getTableName(),
                    eventType));
            for (RowData rowData : rowChange.getRowDatasList()) {
                if (eventType == EventType.DELETE) {
                    printColumn(rowData.getBeforeColumnsList());
                } else if (eventType == EventType.INSERT) {
                    printColumn(rowData.getAfterColumnsList());
                } else {
                    System.out.println("-------> before");
                    printColumn(rowData.getBeforeColumnsList());
                    System.out.println("-------> after");
                    printColumn(rowData.getAfterColumnsList());
                }
            }
        }
    }

    private static void printColumn(List<Column> columns) {
        for (Column column : columns) {
            System.out.println(column.getName() + " : " + column.getValue() + "    update=" + column.getUpdated());
        }
    }
}

Database operations will now be printed to the console:

================> binlog[mysql-bin.000003:834] , name[mysql,test] , eventType : CREATE

Canal Adapter

  • Extract the archive
mkdir canal-adapter && tar -zxvf canal.adapter-1.1.5.tar.gz -C canal-adapter

Syncing data to HBase

  • 1. Edit the launcher configuration: {canal-adapter}/conf/application.yml
server:
  port: 8081
logging:
  level:
    com.alibaba.otter.canal.client.adapter: DEBUG
    com.alibaba.otter.canal.client.adapter.hbase: DEBUG
spring:
  jackson:
    date-format: yyyy-MM-dd HH:mm:ss
    time-zone: GMT+8
    default-property-inclusion: non_null

canal.conf:
  # tcp kafka rocketMQ rabbitMQ -- mode the adapter consumes in; tcp connects to canal-server directly with no middleware, kafka/mq consume from a message queue
  mode: tcp
  # flatMessage: true
  zookeeperHosts:
  syncBatchSize: 1
  retries: 0
  timeout: 1000
  accessKey:
  secretKey:
  consumerProperties:
    # canal tcp consumer -- address and port of the canal-server
    canal.tcp.server.host: 127.0.0.1:11111
    canal.tcp.zookeeper.hosts: 127.0.0.1:2181
    canal.tcp.batch.size: 1
    canal.tcp.username:
    canal.tcp.password:
  srcDataSources:            # data source configuration -- where the data is read from
    defaultDS:               # a name of your choice; referenced by the ES/HBase mapping files, must be unique
      url: jdbc:mysql://127.0.0.1:3306/test2?useUnicode=true
      username: root
      password: *****
  canalAdapters:
  - instance: example        # canal instance name or MQ topic name -- the instance configured on canal-server
    groups:
    - groupId: g1
      outerAdapters:
      - name: logger
#      - name: rdb
#        key: mysql1
#        properties:
#          jdbc.driverClassName: com.mysql.jdbc.Driver
#          jdbc.url: jdbc:mysql://127.0.0.1:3306/mytest2?useUnicode=true
#          jdbc.username: root
#          jdbc.password: 121212
#      - name: rdb
#        key: oracle1
#        properties:
#          jdbc.driverClassName: oracle.jdbc.OracleDriver
#          jdbc.url: jdbc:oracle:thin:@localhost:49161:XE
#          jdbc.username: mytest
#          jdbc.password: m121212
#      - name: rdb
#        key: postgres1
#        properties:
#          jdbc.driverClassName: org.postgresql.Driver
#          jdbc.url: jdbc:postgresql://localhost:5432/postgres
#          jdbc.username: postgres
#          jdbc.password: 121212
#        threads: 1
#        commitSize: 3000
      - name: hbase          # name of the sub-directory under the config directory
        properties:
          hbase.zookeeper.quorum: sangfor.abdi.node3,sangfor.abdi.node2,sangfor.abdi.node1
          hbase.zookeeper.property.clientPort: 2181
          zookeeper.znode.parent: /hbase-unsecure   # HBase's metadata root znode in ZooKeeper
#      - name: es7
#        hosts: 127.0.0.1:9300          # 127.0.0.1:9200 for rest mode
#        properties:
#          mode: transport # or rest
#          # security.auth: test:123456 # only used for rest mode
#          cluster.name: my_application
#      - name: kudu
#        key: kudu
#        properties:
#          kudu.master.address: 127.0.0.1 # ',' split multi address

Note: the adapter automatically loads every configuration file ending in .yml under conf/hbase.

  • 2. HBase table mapping file (a note on creating the HBase table follows the mapping below)
    Edit conf/hbase/mytest_person.yml:
dataSourceKey: defaultDS    # matches an entry under srcDataSources in application.yml
destination: example        # the canal instance (tcp mode) or the topic (MQ mode)
groupId:                    # !!! note: leave groupId empty when syncing to HBase; in MQ mode only data for the matching groupId is synced
hbaseMapping:               # single-table mapping from MySQL to HBase
  mode: STRING              # storage type in HBase, everything stored as String by default; options: #PHOENIX #NATIVE #STRING
                            # NATIVE: mostly Java types; PHOENIX: converts values to the corresponding Phoenix types
  destination: example      # corresponding canal destination / MQ topic name
  database: mytest          # database/schema name
  table: person             # table name
  hbaseTable: MYTEST.PERSON # HBase table name
  family: CF                # default column family name
  uppercaseQualifier: true  # uppercase the field names, default true
  commitBatch: 3000         # batch commit size, used during ETL
  #rowKey: id,type          # a composite rowKey cannot coexist with a rowKey defined in columns
                            # composite rowKey values are joined with '|'
  columns:                  # field mapping; if omitted, all fields are mapped automatically
                            # and the first field becomes the rowKey; HBase field names follow the MySQL field names
    id: ROWKE
    name: CF:NAME
    email: EMAIL            # if the column family is the default CF, it can be omitted
    type:                   # if the HBase field name equals the MySQL field name, it can be omitted
    c_time:
    birthday:
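If the target table does not already exist in HBase, you can create it manually in the HBase shell. A minimal sketch using the table and column family names from the mapping above (adjust to your own mapping):

# hbase shell
create 'MYTEST.PERSON', 'CF'
# later, to verify synced rows:
scan 'MYTEST.PERSON', {LIMIT => 5}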

Note: if type conversion is needed, it can be written in the following form:

...
  columns:
    id: ROWKE$STRING
    ...
    type: TYPE$BYTE
    ...

Type conversion covers two kinds of types, Java types and Phoenix types, defined as follows:

# Java type conversion, for mode: NATIVE
$DEFAULT
$STRING
$INTEGER
$LONG
$SHORT
$BOOLEAN
$FLOAT
$DOUBLE
$BIGDECIMAL
$DATE
$BYTE
$BYTES

# Phoenix type conversion, for mode: PHOENIX
$DEFAULT              maps to Phoenix VARCHAR
$UNSIGNED_INT         maps to Phoenix UNSIGNED_INT, 4 bytes
$UNSIGNED_LONG        maps to Phoenix UNSIGNED_LONG, 8 bytes
$UNSIGNED_TINYINT     maps to Phoenix UNSIGNED_TINYINT, 1 byte
$UNSIGNED_SMALLINT    maps to Phoenix UNSIGNED_SMALLINT, 2 bytes
$UNSIGNED_FLOAT       maps to Phoenix UNSIGNED_FLOAT, 4 bytes
$UNSIGNED_DOUBLE      maps to Phoenix UNSIGNED_DOUBLE, 8 bytes
$INTEGER              maps to Phoenix INTEGER, 4 bytes
$BIGINT               maps to Phoenix BIGINT, 8 bytes
$TINYINT              maps to Phoenix TINYINT, 1 byte
$SMALLINT             maps to Phoenix SMALLINT, 2 bytes
$FLOAT                maps to Phoenix FLOAT, 4 bytes
$DOUBLE               maps to Phoenix DOUBLE, 8 bytes
$BOOLEAN              maps to Phoenix BOOLEAN, 1 byte
$TIME                 maps to Phoenix TIME, 8 bytes
$DATE                 maps to Phoenix DATE, 8 bytes
$TIMESTAMP            maps to Phoenix TIMESTAMP, 12 bytes
$UNSIGNED_TIME        maps to Phoenix UNSIGNED_TIME, 8 bytes
$UNSIGNED_DATE        maps to Phoenix UNSIGNED_DATE, 8 bytes
$UNSIGNED_TIMESTAMP   maps to Phoenix UNSIGNED_TIMESTAMP, 12 bytes
$VARCHAR              maps to Phoenix VARCHAR, variable length
$VARBINARY            maps to Phoenix VARBINARY, variable length
$DECIMAL              maps to Phoenix DECIMAL, variable length

If no conversion is configured, values are mapped using the native Java object types by default.

  • 3. Start the service
Start:   bin/startup.sh
Stop:    bin/stop.sh
Restart: bin/restart.sh
Log:     logs/adapter/adapter.log
  • 4. Verify the service
    Insert some rows into MySQL (the assumed testsync table definition is sketched after the statements below):
INSERT INTO testsync(id,name,age,insert_time) values(UUID(),UUID(),2,now());
INSERT INTO testsync(id,name,age,insert_time) values(UUID(),UUID(),2,now());
INSERT INTO testsync(id,name,age,insert_time) values(UUID(),UUID(),2,now());
INSERT INTO testsync(id,name,age,insert_time) values(UUID(),UUID(),2,now());
INSERT INTO testsync(id,name,age,insert_time) values(UUID(),UUID(),2,now());
INSERT INTO testsync(id,name,age,insert_time) values(UUID(),UUID(),2,now());
INSERT INTO testsync(id,name,age,insert_time) values(UUID(),UUID(),2,now());
INSERT INTO testsync(id,name,age,insert_time) values(UUID(),UUID(),2,now());
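For reference, these statements assume a test table roughly like the following. The original article does not show the DDL; this is reconstructed from the INSERT statements and the adapter log further down (age_2 and message appear there as nullable columns), so treat the exact types as assumptions:

CREATE TABLE testsync (
    id          VARCHAR(36) PRIMARY KEY,   -- UUID()
    name        VARCHAR(64),
    age         INT,
    age_2       INT NULL,
    message     VARCHAR(255) NULL,
    insert_time DATETIME
);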

From the log output you can see that the rows we inserted have been picked up:

2021-09-20 12:35:09.682 [pool-1-thread-1] INFO  c.a.o.canal.client.adapter.logger.LoggerAdapterExample - DML: {"data":[{"id":"2286ed67-19cc-11ec-bbe0-708cb6f5eaa6","name":"2286ed83-19cc-11ec-bbe0-708cb6f5eaa6","age":2,"age_2":null,"message":null,"insert_time":1632112508000}],"database":"test2","destination":"example","es":1632112508000,"groupId":"g1","isDdl":false,"old":null,"pkNames":["id"],"sql":"","table":"testsync","ts":1632112509680,"type":"INSERT"}
2021-09-20 12:35:09.689 [pool-1-thread-1] DEBUG c.a.o.c.client.adapter.hbase.service.HbaseSyncService - DML: {"data":[{"id":"2286ed67-19cc-11ec-bbe0-708cb6f5eaa6","name":"2286ed83-19cc-11ec-bbe0-708cb6f5eaa6","age":2,"age_2":null,"message":null,"insert_time":1632112508000}],"database":"test2","destination":"example","es":1632112508000,"groupId":"g1","isDdl":false,"old":null,"pkNames":["id"],"sql":"","table":"testsync","ts":1632112509680,"type":"INSERT"}

Checking the data in the HBase table confirms that the writes succeeded:

hbase(main):036:0> scan 'testsync',{LIMIT=>1}
ROW                                     COLUMN+CELL
 226ba6e8-19cc-11ec-bbe0-708cb6f5eaa6   column=CF:AGE, timestamp=2021-09-20T12:35:08.548, value=2
 226ba6e8-19cc-11ec-bbe0-708cb6f5eaa6   column=CF:INSERT_TIME, timestamp=2021-09-20T12:35:08.548, value=2021-09-20 12:35:08.0
 226ba6e8-19cc-11ec-bbe0-708cb6f5eaa6   column=CF:NAME, timestamp=2021-09-20T12:35:08.548, value=226ba718-19cc-11ec-bbe0-708cb6f5eaa6
1 row(s)
Took 0.0347 seconds

PS: I was stuck on one problem at this step for quite a while: the log printed the data, but the rows simply would not land in HBase. For the fix, see: https://blog.csdn.net/zhangshenghang/article/details/120411341

Syncing data to Elasticsearch

Building on the HBase configuration above, we now modify the configuration so that data is also synced to Elasticsearch at the same time.

  • 1. Edit the launcher configuration {canal-adapter}/conf/application.yml
server:
  port: 8081
logging:
  level:
    com.alibaba.otter.canal.client.adapter: DEBUG
    com.alibaba.otter.canal.client.adapter.hbase: DEBUG
spring:
  jackson:
    date-format: yyyy-MM-dd HH:mm:ss
    time-zone: GMT+8
    default-property-inclusion: non_null

canal.conf:
  # tcp kafka rocketMQ rabbitMQ -- mode the adapter consumes in; tcp connects to canal-server directly, kafka/mq consume from a message queue
  mode: tcp
  # flatMessage: true
  zookeeperHosts:
  syncBatchSize: 1
  retries: 0
  timeout: 1000
  accessKey:
  secretKey:
  consumerProperties:
    # canal tcp consumer -- address and port of the canal-server
    canal.tcp.server.host: 127.0.0.1:11111
    canal.tcp.zookeeper.hosts: 127.0.0.1:2181
    canal.tcp.batch.size: 1
    canal.tcp.username:
    canal.tcp.password:
  srcDataSources:            # data source configuration -- where the data is read from
    defaultDS:               # a name of your choice; referenced by the ES configuration, must be unique
      url: jdbc:mysql://127.0.0.1:3306/test2?useUnicode=true
      username: root
      password: *****
  canalAdapters:
  - instance: example        # canal instance name or MQ topic name -- the instance configured on canal-server
    groups:
    - groupId: g1
      outerAdapters:
      - name: logger
#      (commented-out rdb examples omitted here -- identical to the HBase configuration shown above)
      - name: hbase
        properties:
          hbase.zookeeper.quorum: sangfor.abdi.node3,sangfor.abdi.node2,sangfor.abdi.node1
          hbase.zookeeper.property.clientPort: 2181
          zookeeper.znode.parent: /hbase-unsecure
      - name: es7            # name of the sub-directory under the config directory
        hosts: 192.168.168.2:9300   # 127.0.0.1:9200 for rest mode
        properties:
          mode: transport # or rest
          # security.auth: test:123456 # only used for rest mode
          cluster.name: my_application
#      - name: kudu
#        key: kudu
#        properties:
#          kudu.master.address: 127.0.0.1 # ',' split multi address
  • 2. Elasticsearch table mapping file (note: the adapter does not create the index for you; see the index-creation sketch after step 3)
# data source key; must match one of the srcDataSources entries in the adapter's application.yml
dataSourceKey: defaultDS
# name of an instance configured on canal-server; different instances serve different business flows
destination: example
# group ID; leave this empty in tcp mode, otherwise you may receive no data
groupId:
# the ES mapping
esMapping:
  # ES index name
  _index: testsync2
  # unique document identifier in ES, usually the table's primary key
  _id: _id
  # upsert: true
  # pk: id
  # maps each table field to a concrete name in the index; names must not repeat
  sql: "select a.id as _id, a.name,a.age,a.age_2,a.message,a.insert_time from testsync as a"
  # objFields:
  #   _labels: array:;
  # etlCondition: "where a.c_time>={}"
  commitBatch: 10
  • 3. Restart the service
bin/restart.sh
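The es7 adapter writes into an existing index; testsync2 is not created automatically, so it should exist before the restart above. A hedged sketch of creating it over the REST API -- the host, REST port (9200, as opposed to the 9300 transport port configured above) and field types are assumptions based on the table used here, so adjust them to your cluster:

curl -X PUT 'http://192.168.168.2:9200/testsync2' -H 'Content-Type: application/json' -d '{
  "mappings": {
    "properties": {
      "name":        { "type": "keyword" },
      "age":         { "type": "integer" },
      "age_2":       { "type": "integer" },
      "message":     { "type": "text" },
      "insert_time": { "type": "date", "format": "yyyy-MM-dd HH:mm:ss||strict_date_optional_time||epoch_millis" }
    }
  }
}'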

Insert data:

INSERT INTO testsync(id,name,age,insert_time) values(UUID(),UUID(),2,now());
INSERT INTO testsync(id,name,age,insert_time) values(UUID(),UUID(),2,now());
INSERT INTO testsync(id,name,age,insert_time) values(UUID(),UUID(),2,now());
INSERT INTO testsync(id,name,age,insert_time) values(UUID(),UUID(),2,now());
INSERT INTO testsync(id,name,age,insert_time) values(UUID(),UUID(),2,now());
INSERT INTO testsync(id,name,age,insert_time) values(UUID(),UUID(),2,now());
INSERT INTO testsync(id,name,age,insert_time) values(UUID(),UUID(),2,now());
INSERT INTO testsync(id,name,age,insert_time) values(UUID(),UUID(),2,now());

Check the adapter log:

2021-09-20 13:53:07.279 [pool-1-thread-1] INFO  c.a.o.canal.client.adapter.logger.LoggerAdapterExample - DML: {"data":[{"id":"05fabf89-19d7-11ec-bbe0-708cb6f5eaa6","name":"05fabfb4-19d7-11ec-bbe0-708cb6f5eaa6","age":2,"age_2":null,"message":null,"insert_time":1632117185000}],"database":"test2","destination":"example","es":1632117185000,"groupId":"g1","isDdl":false,"old":null,"pkNames":["id"],"sql":"","table":"testsync","ts":1632117187278,"type":"INSERT"}
2021-09-20 13:53:07.286 [pool-1-thread-1] DEBUG c.a.o.c.client.adapter.hbase.service.HbaseSyncService - DML: {"data":[{"id":"05fabf89-19d7-11ec-bbe0-708cb6f5eaa6","name":"05fabfb4-19d7-11ec-bbe0-708cb6f5eaa6","age":2,"age_2":null,"message":null,"insert_time":1632117185000}],"database":"test2","destination":"example","es":1632117185000,"groupId":"g1","isDdl":false,"old":null,"pkNames":["id"],"sql":"","table":"testsync","ts":1632117187278,"type":"INSERT"}
2021-09-20 13:53:07.287 [pool-1-thread-1] DEBUG c.a.o.canal.client.adapter.es.core.service.ESSyncService - DML: {"data":[{"id":"05fabf89-19d7-11ec-bbe0-708cb6f5eaa6","name":"05fabfb4-19d7-11ec-bbe0-708cb6f5eaa6","age":2,"age_2":null,"message":null,"insert_time":1632117185000}],"database":"test2","destination":"example","es":1632117185000,"groupId":"g1","isDdl":false,"old":null,"pkNames":["id"],"sql":"","table":"testsync","ts":1632117187278,"type":"INSERT"}
Affected indexes: testsync2

Check the data in Elasticsearch:
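The original post shows a screenshot here; an equivalent way to check is to query the index over the REST API (REST port 9200 assumed, as opposed to the 9300 transport port configured above):

curl 'http://192.168.168.2:9200/testsync2/_search?pretty&size=2'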

At this point, writes to both Elasticsearch and HBase are working.
