Modifying /etc/hosts in the Spark 3.1.1 Docker Image
This post shows how to add custom entries to /etc/hosts inside the Spark 3.1.1 Docker image used for Spark on Kubernetes. Two small changes are needed: one in the Dockerfile and one in entrypoint.sh.
Dockerfile
Comment out spark_uid in kubernetes/dockerfiles/spark/Dockerfile so the container runs as root; the default non-root user (UID 185) cannot write to /etc/hosts. The two lines become:
- #ARG spark_uid=185
- #USER ${spark_uid}
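After editing the Dockerfile, the image can be rebuilt with the docker-image-tool.sh script that ships with the Spark distribution. A minimal sketch, assuming the distribution is unpacked as in the transcript below; the registry prefix myrepo and the tag v3.1.1-hosts are placeholders:

cd spark-3.1.1-bin-hadoop2.7
# -r sets the image repository prefix, -t the tag; "build" produces myrepo/spark:v3.1.1-hosts
./bin/docker-image-tool.sh -r myrepo -t v3.1.1-hosts build
# optionally push to the registry so Kubernetes nodes can pull it
./bin/docker-image-tool.sh -r myrepo -t v3.1.1-hosts push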
entrypoint.sh

In kubernetes/dockerfiles/spark/entrypoint.sh, append the contents of /tmp/hosts to /etc/hosts just before the container command is exec'd (the "# modify hosts" block near the end of the file):
[root@localhost spark-3.1.1-bin-hadoop2.7]# cat kubernetes/dockerfiles/spark/entrypoint.sh
#!/bin/bash
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# echo commands to the terminal output
set -ex

# Check whether there is a passwd entry for the container UID
myuid=$(id -u)
mygid=$(id -g)
# turn off -e for getent because it will return error code in anonymous uid case
set +e
uidentry=$(getent passwd $myuid)
set -e

# If there is no passwd entry for the container UID, attempt to create one
if [ -z "$uidentry" ] ; then
  if [ -w /etc/passwd ] ; then
    echo "$myuid:x:$myuid:$mygid:${SPARK_USER_NAME:-anonymous uid}:$SPARK_HOME:/bin/false" >> /etc/passwd
  else
    echo "Container ENTRYPOINT failed to add passwd entry for anonymous UID"
  fi
fi

SPARK_CLASSPATH="$SPARK_CLASSPATH:${SPARK_HOME}/jars/*"
env | grep SPARK_JAVA_OPT_ | sort -t_ -k4 -n | sed 's/[^=]*=\(.*\)/\1/g' > /tmp/java_opts.txt
readarray -t SPARK_EXECUTOR_JAVA_OPTS < /tmp/java_opts.txt

if [ -n "$SPARK_EXTRA_CLASSPATH" ]; then
  SPARK_CLASSPATH="$SPARK_CLASSPATH:$SPARK_EXTRA_CLASSPATH"
fi

if ! [ -z ${PYSPARK_PYTHON+x} ]; then
  export PYSPARK_PYTHON
fi
if ! [ -z ${PYSPARK_DRIVER_PYTHON+x} ]; then
  export PYSPARK_DRIVER_PYTHON
fi

# If HADOOP_HOME is set and SPARK_DIST_CLASSPATH is not set, set it here so Hadoop jars are available to the executor.
# It does not set SPARK_DIST_CLASSPATH if already set, to avoid overriding customizations of this value from elsewhere e.g. Docker/K8s.
if [ -n "${HADOOP_HOME}" ] && [ -z "${SPARK_DIST_CLASSPATH}" ]; then
  export SPARK_DIST_CLASSPATH="$($HADOOP_HOME/bin/hadoop classpath)"
fi

if ! [ -z ${HADOOP_CONF_DIR+x} ]; then
  SPARK_CLASSPATH="$HADOOP_CONF_DIR:$SPARK_CLASSPATH";
fi

if ! [ -z ${SPARK_CONF_DIR+x} ]; then
  SPARK_CLASSPATH="$SPARK_CONF_DIR:$SPARK_CLASSPATH";
elif ! [ -z ${SPARK_HOME+x} ]; then
  SPARK_CLASSPATH="$SPARK_HOME/conf:$SPARK_CLASSPATH";
fi

case "$1" in
  driver)
    shift 1
    CMD=(
      "$SPARK_HOME/bin/spark-submit"
      --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS"
      --deploy-mode client
      "$@"
    )
    ;;
  executor)
    shift 1
    CMD=(
      ${JAVA_HOME}/bin/java
      "${SPARK_EXECUTOR_JAVA_OPTS[@]}"
      -Xms$SPARK_EXECUTOR_MEMORY
      -Xmx$SPARK_EXECUTOR_MEMORY
      -cp "$SPARK_CLASSPATH:$SPARK_DIST_CLASSPATH"
      org.apache.spark.executor.CoarseGrainedExecutorBackend
      --driver-url $SPARK_DRIVER_URL
      --executor-id $SPARK_EXECUTOR_ID
      --cores $SPARK_EXECUTOR_CORES
      --app-id $SPARK_APPLICATION_ID
      --hostname $SPARK_EXECUTOR_POD_IP
      --resourceProfileId $SPARK_RESOURCE_PROFILE_ID
    )
    ;;

  *)
    echo "Non-spark-on-k8s command provided, proceeding in pass-through mode..."
    CMD=("$@")
    ;;
esac

# modify hosts
cat /tmp/hosts >> /etc/hosts

# Execute the container CMD under tini for better hygiene
exec /usr/bin/tini -s -- "${CMD[@]}"
[root@localhost spark-3.1.1-bin-hadoop2.7]#
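The appended block assumes /tmp/hosts already exists inside the container; the post does not show where it comes from. One simple option, purely an assumption on my part, is to bake the file into the image from the Docker build context:

# Hypothetical addition to kubernetes/dockerfiles/spark/Dockerfile:
# "hosts" is a file of extra host entries placed next to the Dockerfile.
COPY hosts /tmp/hosts

Note that entrypoint.sh runs under set -e, so if /tmp/hosts is absent the bare cat fails and the container exits immediately; guarding the append with if [ -f /tmp/hosts ]; then ... fi would let containers without the file start normally.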
Summary

With these two changes (running the container as root, and appending /tmp/hosts in entrypoint.sh), every Spark driver and executor container gets the extra host entries written into its /etc/hosts at startup.
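To spot-check the rebuilt image (the image name below is the placeholder from the build sketch above), run it in pass-through mode; the entrypoint still appends /tmp/hosts before executing the given command:

docker run --rm myrepo/spark:v3.1.1-hosts cat /etc/hosts
# the output should end with the entries from /tmp/hosts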