A look at Flink's logback configuration


This article takes a look at Flink's logback configuration.

Client-side pom configuration

<dependencies>
    <!-- Add the two required logback dependencies -->
    <dependency>
        <groupId>ch.qos.logback</groupId>
        <artifactId>logback-core</artifactId>
        <version>1.2.3</version>
    </dependency>
    <dependency>
        <groupId>ch.qos.logback</groupId>
        <artifactId>logback-classic</artifactId>
        <version>1.2.3</version>
    </dependency>

    <!-- Add the log4j -> slf4j (-> logback) bridge into the classpath
         Hadoop is logging to log4j! -->
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>log4j-over-slf4j</artifactId>
        <version>1.7.15</version>
    </dependency>

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-java</artifactId>
        <version>1.7.1</version>
        <exclusions>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>*</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_2.11</artifactId>
        <version>1.7.1</version>
        <exclusions>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>*</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients_2.11</artifactId>
        <version>1.7.1</version>
        <exclusions>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>*</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
</dependencies>
  • Add the logback-core, logback-classic and log4j-over-slf4j dependencies, then add exclusions for log4j and slf4j-log4j12 to flink-java, flink-streaming-java_2.11, flink-clients_2.11 and the other Flink artifacts; finally, run mvn dependency:tree to confirm that no log4j/slf4j-log4j12 artifacts remain (a quick check is sketched below)
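
A quick way to inspect the result of mvn dependency:tree (a sketch only; it assumes a Unix-like shell and a Maven-built project):

# Print the dependency tree and keep only the log4j-related lines
mvn dependency:tree | grep -iE "log4j|slf4j-log4j12"
# If the exclusions are complete, the only remaining match should be
# org.slf4j:log4j-over-slf4j, the bridge added on purpose above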

Server-side configuration

  • Add logback-classic.jar, logback-core.jar and log4j-over-slf4j.jar to Flink's lib directory (e.g. /opt/flink/lib)
  • Remove the log4j and slf4j-log4j12 jars from that lib directory (e.g. log4j-1.2.17.jar and slf4j-log4j12-1.7.15.jar)
  • To customize the logback configuration, override logback.xml, logback-console.xml or logback-yarn.xml in Flink's conf directory (a minimal example follows this list)
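
For reference, a minimal logback.xml override might look like the sketch below. The appender name, pattern and root level are illustrative choices rather than Flink's shipped defaults; ${log.file} is the system property that Flink's startup scripts pass in via -Dlog.file (see the log_setting lines in the scripts below):

<configuration>
    <appender name="file" class="ch.qos.logback.core.FileAppender">
        <!-- ${log.file} is supplied by the startup scripts via -Dlog.file=... -->
        <file>${log.file}</file>
        <append>false</append>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{60} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="file"/>
    </root>
</configuration>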

flink-daemon.sh

flink-release-1.7.1/flink-dist/src/main/flink-bin/bin/flink-daemon.sh

#!/usr/bin/env bash
################################################################################
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

# Start/stop a Flink daemon.
USAGE="Usage: flink-daemon.sh (start|stop|stop-all) (taskexecutor|zookeeper|historyserver|standalonesession|standalonejob) [args]"

STARTSTOP=$1
DAEMON=$2
ARGS=("${@:3}") # get remaining arguments as array

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

. "$bin"/config.sh

case $DAEMON in
    (taskexecutor)
        CLASS_TO_RUN=org.apache.flink.runtime.taskexecutor.TaskManagerRunner
    ;;

    (zookeeper)
        CLASS_TO_RUN=org.apache.flink.runtime.zookeeper.FlinkZooKeeperQuorumPeer
    ;;

    (historyserver)
        CLASS_TO_RUN=org.apache.flink.runtime.webmonitor.history.HistoryServer
    ;;

    (standalonesession)
        CLASS_TO_RUN=org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint
    ;;

    (standalonejob)
        CLASS_TO_RUN=org.apache.flink.container.entrypoint.StandaloneJobClusterEntryPoint
    ;;

    (*)
        echo "Unknown daemon '${DAEMON}'. $USAGE."
        exit 1
    ;;
esac

if [ "$FLINK_IDENT_STRING" = "" ]; then
    FLINK_IDENT_STRING="$USER"
fi

FLINK_TM_CLASSPATH=`constructFlinkClassPath`

pid=$FLINK_PID_DIR/flink-$FLINK_IDENT_STRING-$DAEMON.pid

mkdir -p "$FLINK_PID_DIR"

# Log files for daemons are indexed from the process ID's position in the PID
# file. The following lock prevents a race condition during daemon startup
# when multiple daemons read, index, and write to the PID file concurrently.
# The lock is created on the PID directory since a lock file cannot be safely
# removed. The daemon is started with the lock closed and the lock remains
# active in this script until the script exits.
command -v flock >/dev/null 2>&1
if [[ $? -eq 0 ]]; then
    exec 200<"$FLINK_PID_DIR"
    flock 200
fi

# Ascending ID depending on number of lines in pid file.
# This allows us to start multiple daemon of each type.
id=$([ -f "$pid" ] && echo $(wc -l < "$pid") || echo "0")

FLINK_LOG_PREFIX="${FLINK_LOG_DIR}/flink-${FLINK_IDENT_STRING}-${DAEMON}-${id}-${HOSTNAME}"
log="${FLINK_LOG_PREFIX}.log"
out="${FLINK_LOG_PREFIX}.out"

log_setting=("-Dlog.file=${log}" "-Dlog4j.configuration=file:${FLINK_CONF_DIR}/log4j.properties" "-Dlogback.configurationFile=file:${FLINK_CONF_DIR}/logback.xml")

JAVA_VERSION=$(${JAVA_RUN} -version 2>&1 | sed 's/.*version "\(.*\)\.\(.*\)\..*"/\1\2/; 1q')

# Only set JVM 8 arguments if we have correctly extracted the version
if [[ ${JAVA_VERSION} =~ ${IS_NUMBER} ]]; then
    if [ "$JAVA_VERSION" -lt 18 ]; then
        JVM_ARGS="$JVM_ARGS -XX:MaxPermSize=256m"
    fi
fi

case $STARTSTOP in

    (start)
        # Rotate log files
        rotateLogFilesWithPrefix "$FLINK_LOG_DIR" "$FLINK_LOG_PREFIX"

        # Print a warning if daemons are already running on host
        if [ -f "$pid" ]; then
          active=()
          while IFS='' read -r p || [[ -n "$p" ]]; do
            kill -0 $p >/dev/null 2>&1
            if [ $? -eq 0 ]; then
              active+=($p)
            fi
          done < "${pid}"

          count="${#active[@]}"

          if [ ${count} -gt 0 ]; then
            echo "[INFO] $count instance(s) of $DAEMON are already running on $HOSTNAME."
          fi
        fi

        # Evaluate user options for local variable expansion
        FLINK_ENV_JAVA_OPTS=$(eval echo ${FLINK_ENV_JAVA_OPTS})

        echo "Starting $DAEMON daemon on host $HOSTNAME."
        $JAVA_RUN $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" -classpath "`manglePathList "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN} "${ARGS[@]}" > "$out" 200<&- 2>&1 < /dev/null &

        mypid=$!

        # Add to pid file if successful start
        if [[ ${mypid} =~ ${IS_NUMBER} ]] && kill -0 $mypid > /dev/null 2>&1 ; then
            echo $mypid >> "$pid"
        else
            echo "Error starting $DAEMON daemon."
            exit 1
        fi
    ;;

    (stop)
        if [ -f "$pid" ]; then
            # Remove last in pid file
            to_stop=$(tail -n 1 "$pid")

            if [ -z $to_stop ]; then
                rm "$pid" # If all stopped, clean up pid file
                echo "No $DAEMON daemon to stop on host $HOSTNAME."
            else
                sed \$d "$pid" > "$pid.tmp" # all but last line

                # If all stopped, clean up pid file
                [ $(wc -l < "$pid.tmp") -eq 0 ] && rm "$pid" "$pid.tmp" || mv "$pid.tmp" "$pid"

                if kill -0 $to_stop > /dev/null 2>&1; then
                    echo "Stopping $DAEMON daemon (pid: $to_stop) on host $HOSTNAME."
                    kill $to_stop
                else
                    echo "No $DAEMON daemon (pid: $to_stop) is running anymore on $HOSTNAME."
                fi
            fi
        else
            echo "No $DAEMON daemon to stop on host $HOSTNAME."
        fi
    ;;

    (stop-all)
        if [ -f "$pid" ]; then
            mv "$pid" "${pid}.tmp"

            while read to_stop; do
                if kill -0 $to_stop > /dev/null 2>&1; then
                    echo "Stopping $DAEMON daemon (pid: $to_stop) on host $HOSTNAME."
                    kill $to_stop
                else
                    echo "Skipping $DAEMON daemon (pid: $to_stop), because it is not running anymore on $HOSTNAME."
                fi
            done < "${pid}.tmp"
            rm "${pid}.tmp"
        fi
    ;;

    (*)
        echo "Unexpected argument '$STARTSTOP'. $USAGE."
        exit 1
    ;;

esac
  • A Flink process started via flink-daemon.sh uses logback.xml as its logback configuration file
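
flink-daemon.sh is normally invoked indirectly (for example by start-cluster.sh through jobmanager.sh/taskmanager.sh), but per the USAGE string above a daemon can also be started directly; the invocation below is a hypothetical example for illustration:

./bin/flink-daemon.sh start taskexecutor
# The daemon writes to ${FLINK_LOG_DIR}/flink-<user>-taskexecutor-<id>-<host>.log,
# formatted according to conf/logback.xml (see the log_setting line above)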

flink-console.sh

flink-release-1.7.1/flink-dist/src/main/flink-bin/bin/flink-console.sh

#!/usr/bin/env bash
################################################################################
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

# Start a Flink service as a console application. Must be stopped with Ctrl-C
# or with SIGTERM by kill or the controlling process.
USAGE="Usage: flink-console.sh (taskexecutor|zookeeper|historyserver|standalonesession|standalonejob) [args]"

SERVICE=$1
ARGS=("${@:2}") # get remaining arguments as array

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

. "$bin"/config.sh

case $SERVICE in
    (taskexecutor)
        CLASS_TO_RUN=org.apache.flink.runtime.taskexecutor.TaskManagerRunner
    ;;

    (historyserver)
        CLASS_TO_RUN=org.apache.flink.runtime.webmonitor.history.HistoryServer
    ;;

    (zookeeper)
        CLASS_TO_RUN=org.apache.flink.runtime.zookeeper.FlinkZooKeeperQuorumPeer
    ;;

    (standalonesession)
        CLASS_TO_RUN=org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint
    ;;

    (standalonejob)
        CLASS_TO_RUN=org.apache.flink.container.entrypoint.StandaloneJobClusterEntryPoint
    ;;

    (*)
        echo "Unknown service '${SERVICE}'. $USAGE."
        exit 1
    ;;
esac

FLINK_TM_CLASSPATH=`constructFlinkClassPath`

log_setting=("-Dlog4j.configuration=file:${FLINK_CONF_DIR}/log4j-console.properties" "-Dlogback.configurationFile=file:${FLINK_CONF_DIR}/logback-console.xml")

JAVA_VERSION=$(${JAVA_RUN} -version 2>&1 | sed 's/.*version "\(.*\)\.\(.*\)\..*"/\1\2/; 1q')

# Only set JVM 8 arguments if we have correctly extracted the version
if [[ ${JAVA_VERSION} =~ ${IS_NUMBER} ]]; then
    if [ "$JAVA_VERSION" -lt 18 ]; then
        JVM_ARGS="$JVM_ARGS -XX:MaxPermSize=256m"
    fi
fi

echo "Starting $SERVICE as a console application on host $HOSTNAME."
exec $JAVA_RUN $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" -classpath "`manglePathList "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN} "${ARGS[@]}"
  • A Flink process started via flink-console.sh uses logback-console.xml as its logback configuration file
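
For example, following the USAGE string above, a TaskManager can be run in the foreground, in which case it picks up conf/logback-console.xml (which in the stock configuration writes to the console, handy for container setups):

./bin/flink-console.sh taskexecutor
# Runs in the foreground; stop with Ctrl-C or SIGTERM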

yarn-session.sh

flink-release-1.7.1/flink-dist/src/main/flink-bin/yarn-bin/yarn-session.sh

#!/usr/bin/env bash
################################################################################
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

# get Flink config
. "$bin"/config.sh

if [ "$FLINK_IDENT_STRING" = "" ]; then
    FLINK_IDENT_STRING="$USER"
fi

JVM_ARGS="$JVM_ARGS -Xmx512m"

CC_CLASSPATH=`manglePathList $(constructFlinkClassPath):$INTERNAL_HADOOP_CLASSPATHS`

log=$FLINK_LOG_DIR/flink-$FLINK_IDENT_STRING-yarn-session-$HOSTNAME.log
log_setting="-Dlog.file="$log" -Dlog4j.configuration=file:"$FLINK_CONF_DIR"/log4j-yarn-session.properties -Dlogback.configurationFile=file:"$FLINK_CONF_DIR"/logback-yarn.xml"

export FLINK_CONF_DIR

$JAVA_RUN $JVM_ARGS -classpath "$CC_CLASSPATH" $log_setting org.apache.flink.yarn.cli.FlinkYarnSessionCli -j "$FLINK_LIB_DIR"/flink-dist*.jar "$@"
  • A Flink session started via yarn-session.sh uses logback-yarn.xml as its logback configuration file

Summary

  • To use logback on the client side, add the logback-core, logback-classic and log4j-over-slf4j dependencies to the pom, add exclusions for log4j and slf4j-log4j12 to flink-java, flink-streaming-java_2.11, flink-clients_2.11 and the other Flink artifacts, and run mvn dependency:tree to confirm that log4j/slf4j-log4j12 have been fully excluded
  • To use logback on the server side, add logback-classic.jar, logback-core.jar and log4j-over-slf4j.jar to Flink's lib directory (e.g. /opt/flink/lib), remove the log4j and slf4j-log4j12 jars from that directory (e.g. log4j-1.2.17.jar and slf4j-log4j12-1.7.15.jar), and override logback.xml, logback-console.xml or logback-yarn.xml in Flink's conf directory if a custom logback configuration is needed
  • Flink started via flink-daemon.sh uses logback.xml as its logback configuration file, Flink started via flink-console.sh uses logback-console.xml, and Flink started via yarn-session.sh uses logback-yarn.xml

doc

  • Using Logback instead of Log4j
