
Spark3.1.1 Docker镜像中修改/etc/hosts

This post shows how to modify /etc/hosts inside the Spark 3.1.1 Docker image used for Spark on Kubernetes: the extra hosts entries are copied into the image at build time and appended to /etc/hosts by entrypoint.sh when each container starts.

Dockerfile

Comment out spark_uid (a sed sketch for making this edit follows the list below):

  • #ARG spark_uid=185
  • #USER ${spark_uid}
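The stock Dockerfile drops privileges to the non-root spark_uid (185) with the USER instruction at the end; a non-root entrypoint cannot append to /etc/hosts, which is presumably why both lines are commented out so the containers keep running as root. As a convenience, the edit could be scripted roughly as below; this is a sketch only, and the path assumes you are in the top-level spark-3.1.1-bin-hadoop2.7 directory shown in the listings.

# Sketch: comment out the spark_uid lines in the stock Dockerfile before building.
sed -i \
  -e 's|^ARG spark_uid=185|#ARG spark_uid=185|' \
  -e 's|^USER ${spark_uid}|#USER ${spark_uid}|' \
  kubernetes/dockerfiles/spark/Dockerfile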
[root@localhost spark-3.1.1-bin-hadoop2.7]# cat kubernetes/dockerfiles/spark/Dockerfile
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

ARG java_image_tag=11-jre-slim

FROM openjdk:${java_image_tag}

#ARG spark_uid=185

# Before building the docker image, first build and make a Spark distribution following
# the instructions in http://spark.apache.org/docs/latest/building-spark.html.
# If this docker file is being used in the context of building your images from a Spark
# distribution, the docker build command should be invoked from the top level directory
# of the Spark distribution. E.g.:
# docker build -t spark:latest -f kubernetes/dockerfiles/spark/Dockerfile .

RUN set -ex && \
    sed -i 's/http:\/\/deb.\(.*\)/https:\/\/deb.\1/g' /etc/apt/sources.list && \
    apt-get update && \
    ln -s /lib /lib64 && \
    apt install -y bash tini libc6 libpam-modules krb5-user libnss3 procps sudo && \
    mkdir -p /opt/spark && \
    mkdir -p /opt/spark/examples && \
    mkdir -p /opt/spark/work-dir && \
    touch /opt/spark/RELEASE && \
    rm /bin/sh && \
    ln -sv /bin/bash /bin/sh && \
    echo "auth required pam_wheel.so use_uid" >> /etc/pam.d/su && \
    chgrp root /etc/passwd && chmod ug+rw /etc/passwd && \
    rm -rf /var/cache/apt/*

COPY jars /opt/spark/jars
COPY bin /opt/spark/bin
COPY sbin /opt/spark/sbin
COPY kubernetes/dockerfiles/spark/entrypoint.sh /opt/
COPY kubernetes/dockerfiles/spark/decom.sh /opt/
COPY examples /opt/spark/examples
COPY kubernetes/tests /opt/spark/tests
COPY data /opt/spark/data

COPY jdk1.8.0_201-amd64 /usr/java/default
COPY hadoop-2.9.2 /opt/bigdata/hadoop-2.9.2/
#COPY hbase-2.2.6 /opt/bigdata/hbase-2.2.6
ENV JAVA_HOME=/usr/java/default
ENV HADOOP_HOME /opt/bigdata/hadoop-2.9.2/
ENV SPARK_HOME /opt/spark
#ENV HBASE_HOME /opt/bigdata/hbase-2.2.6
#ENV SPARK_EXTRA_CLASSPATH hdfs://dmgeo/spark/dmgeo-geospark/dmgeo-geospark-tiler/external_jars
ADD hosts /tmp
#it does not work
#CMD cat /tmp/hosts >> /etc/hosts

WORKDIR /opt/spark/work-dir
RUN chmod g+w /opt/spark/work-dir
RUN chmod a+x /opt/decom.sh

ENTRYPOINT [ "/opt/entrypoint.sh" ]

# Specify the User that the actual main process will run as
#USER ${spark_uid}
[root@localhost spark-3.1.1-bin-hadoop2.7]#
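Two details in this Dockerfile are worth spelling out. ADD hosts /tmp copies a plain hosts file that must sit in the top-level directory of the Spark distribution (the docker build context) into the image. The commented-out CMD cat /tmp/hosts >> /etc/hosts does not work because the image defines an ENTRYPOINT: a CMD is only passed to entrypoint.sh as arguments, and the Spark-on-Kubernetes driver and executor pods set their own container args anyway, so the append has to happen inside entrypoint.sh itself, as shown in the next section. Below is a minimal sketch of the extra hosts file and the build command; the IP addresses, hostnames, and image tag are placeholders, not values from the original post.

# Sketch: create a hosts file at the top of the Spark distribution, next to the build context root.
# Replace the example entries with the hosts your Spark jobs actually need to resolve.
cat > hosts <<'EOF'
192.168.1.101   hadoop-namenode
192.168.1.102   hadoop-datanode1
192.168.1.103   hadoop-datanode2
EOF

# Build from the top level of the Spark distribution, as the Dockerfile comment describes.
docker build -t spark:3.1.1-hosts -f kubernetes/dockerfiles/spark/Dockerfile .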

entrypoint.sh

[root@localhost spark-3.1.1-bin-hadoop2.7]# cat kubernetes/dockerfiles/spark/entrypoint.sh
#!/bin/bash
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# echo commands to the terminal output
set -ex

# Check whether there is a passwd entry for the container UID
myuid=$(id -u)
mygid=$(id -g)
# turn off -e for getent because it will return error code in anonymous uid case
set +e
uidentry=$(getent passwd $myuid)
set -e

# If there is no passwd entry for the container UID, attempt to create one
if [ -z "$uidentry" ] ; then
    if [ -w /etc/passwd ] ; then
        echo "$myuid:x:$myuid:$mygid:${SPARK_USER_NAME:-anonymous uid}:$SPARK_HOME:/bin/false" >> /etc/passwd
    else
        echo "Container ENTRYPOINT failed to add passwd entry for anonymous UID"
    fi
fi

SPARK_CLASSPATH="$SPARK_CLASSPATH:${SPARK_HOME}/jars/*"
env | grep SPARK_JAVA_OPT_ | sort -t_ -k4 -n | sed 's/[^=]*=\(.*\)/\1/g' > /tmp/java_opts.txt
readarray -t SPARK_EXECUTOR_JAVA_OPTS < /tmp/java_opts.txt

if [ -n "$SPARK_EXTRA_CLASSPATH" ]; then
    SPARK_CLASSPATH="$SPARK_CLASSPATH:$SPARK_EXTRA_CLASSPATH"
fi

if ! [ -z ${PYSPARK_PYTHON+x} ]; then
    export PYSPARK_PYTHON
fi
if ! [ -z ${PYSPARK_DRIVER_PYTHON+x} ]; then
    export PYSPARK_DRIVER_PYTHON
fi

# If HADOOP_HOME is set and SPARK_DIST_CLASSPATH is not set, set it here so Hadoop jars are available to the executor.
# It does not set SPARK_DIST_CLASSPATH if already set, to avoid overriding customizations of this value from elsewhere e.g. Docker/K8s.
if [ -n "${HADOOP_HOME}" ] && [ -z "${SPARK_DIST_CLASSPATH}" ]; then
    export SPARK_DIST_CLASSPATH="$($HADOOP_HOME/bin/hadoop classpath)"
fi

if ! [ -z ${HADOOP_CONF_DIR+x} ]; then
    SPARK_CLASSPATH="$HADOOP_CONF_DIR:$SPARK_CLASSPATH";
fi

if ! [ -z ${SPARK_CONF_DIR+x} ]; then
    SPARK_CLASSPATH="$SPARK_CONF_DIR:$SPARK_CLASSPATH";
elif ! [ -z ${SPARK_HOME+x} ]; then
    SPARK_CLASSPATH="$SPARK_HOME/conf:$SPARK_CLASSPATH";
fi

case "$1" in
  driver)
    shift 1
    CMD=(
      "$SPARK_HOME/bin/spark-submit"
      --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS"
      --deploy-mode client
      "$@"
    )
    ;;
  executor)
    shift 1
    CMD=(
      ${JAVA_HOME}/bin/java
      "${SPARK_EXECUTOR_JAVA_OPTS[@]}"
      -Xms$SPARK_EXECUTOR_MEMORY
      -Xmx$SPARK_EXECUTOR_MEMORY
      -cp "$SPARK_CLASSPATH:$SPARK_DIST_CLASSPATH"
      org.apache.spark.executor.CoarseGrainedExecutorBackend
      --driver-url $SPARK_DRIVER_URL
      --executor-id $SPARK_EXECUTOR_ID
      --cores $SPARK_EXECUTOR_CORES
      --app-id $SPARK_APPLICATION_ID
      --hostname $SPARK_EXECUTOR_POD_IP
      --resourceProfileId $SPARK_RESOURCE_PROFILE_ID
    )
    ;;

  *)
    echo "Non-spark-on-k8s command provided, proceeding in pass-through mode..."
    CMD=("$@")
    ;;
esac

# modify hosts
cat /tmp/hosts >> /etc/hosts

# Execute the container CMD under tini for better hygiene
exec /usr/bin/tini -s -- "${CMD[@]}"
[root@localhost spark-3.1.1-bin-hadoop2.7]#
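The only change relative to the stock script is the cat /tmp/hosts >> /etc/hosts line just before the final exec, so the extra entries are appended in every driver and executor container before Spark starts. Doing it at container start also matters on Kubernetes, where the kubelet mounts a generated hosts file over /etc/hosts, so entries baked in at image build time would not survive at runtime. Below is a hedged example of submitting a job with the rebuilt image; the API server address, namespace, service account, and image tag are placeholders for your own cluster values.

# Sketch: run the SparkPi example on Kubernetes with the customized image.
bin/spark-submit \
  --master k8s://https://k8s-apiserver:6443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.kubernetes.namespace=default \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.kubernetes.container.image=spark:3.1.1-hosts \
  local:///opt/spark/examples/jars/spark-examples_2.12-3.1.1.jar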

Summary

To add custom /etc/hosts entries to a Spark 3.1.1 image for Kubernetes: comment out spark_uid in the Dockerfile so the containers keep running as root, ADD the hosts file into the image at build time, and append it to /etc/hosts from entrypoint.sh; a CMD instruction in the Dockerfile does not work for this.