
docker in all


docker vs hyper-v, vmware, xen, kvm

docker host, docker container, docker engine, docker image

image = a stopped container

container = a running image

Docker operation diagram (figure omitted)

Workflow (figure omitted)

Getting started with Docker (using Windows as the example)

PS G:\dockerdata> docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

PS G:\dockerdata> docker images
REPOSITORY                 TAG      IMAGE ID       CREATED        SIZE
hello-world                latest   fce289e99eb9   2 months ago   1.84kB
docker4w/nsenter-dockerd   latest   2f1c802f322f   4 months ago   187kB


The docker run hello-world command above essentially goes through the following steps:

1. The docker client contacts the docker daemon (engine) and asks it to run a hello-world container.

2. The docker daemon (engine) first checks whether the hello-world image exists locally; if not, it looks it up in the registry and pulls it down.

3. The docker daemon instantiates a container from that image and runs the executable defined by the image, which produces output.

4. The docker daemon streams that output to the docker client, and that is how we see the "Hello from Docker" message.

What exactly does a Docker image contain?

Strongly recommended reading: https://www.csdn.net/article/2015-08-21/2825511

A Linux system consists of a kernel plus a distribution: on top of the same kernel (say 3.8 or later) we can run different distributions such as Debian, Ubuntu, or CentOS. Similarly, a Docker image is analogous to an "Ubuntu distribution" and can run on any Linux kernel that meets its requirements. At the simplest level there are "Debian distribution" Docker images and "Ubuntu distribution" Docker images; if we install MySQL 5.6 into a Debian image we can name the result mysql:5.6, and if we install Golang 1.3 into a Debian image we can name it golang:1.3. By extension, you can produce any image you want from whatever software you install.

Changing the default storage location for pulled images

On Windows the docker engine actually runs inside a Hyper-V virtual machine; every command typed at the docker client is executed in that VM, and pulled images are stored there as well. So to change where images are saved, all we really need to do is move the vhdx file that backs the Hyper-V MobyLinuxVM.

http://www.cnblogs.com/show668/p/5341283.html

docker ps / docker images

PS G:\dockerdata> docker images
REPOSITORY                 TAG      IMAGE ID       CREATED        SIZE
hello-world                latest   fce289e99eb9   2 months ago   1.84kB
docker4w/nsenter-dockerd   latest   2f1c802f322f   4 months ago   187kB
PS G:\dockerdata> docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
PS G:\dockerdata> docker ps -a
CONTAINER ID   IMAGE         COMMAND    CREATED          STATUS                      PORTS   NAMES
135da1372a06   hello-world   "/hello"   24 minutes ago   Exited (0) 24 minutes ago           modest_spence

Pulling a specific version of an image

docker pull ubuntu:14.04

Repository: the storage definition for a Docker image (the named unit under which its tags are kept).

Configuring a Docker Hub mirror: using the Alibaba Cloud accelerator
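A hedged sketch of how such a mirror is usually configured: the accelerator address below is a placeholder (each Alibaba Cloud account gets its own URL from the console), and on Docker for Windows the same JSON goes into Settings -> Daemon instead of /etc/docker/daemon.json.

# write the registry-mirrors setting into the daemon config (placeholder URL, replace with your own accelerator address)
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["https://your-id.mirror.aliyuncs.com"]
}
EOF
sudo systemctl restart docker    # restart the daemon so the mirror takes effect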

Deleting a local image

PS G:\dockerdata> docker images
REPOSITORY                 TAG      IMAGE ID       CREATED        SIZE
ubuntu                     latest   47b19964fb50   3 weeks ago    88.1MB
alpine                     latest   caf27325b298   4 weeks ago    5.53MB
hello-world                latest   fce289e99eb9   2 months ago   1.84kB
docker4w/nsenter-dockerd   latest   2f1c802f322f   4 months ago   187kB
PS G:\dockerdata> docker rmi ubuntu
Untagged: ubuntu:latest
Untagged: ubuntu@sha256:7a47ccc3bbe8a451b500d2b53104868b46d60ee8f5b35a24b41a86077c650210
Deleted: sha256:47b19964fb500f3158ae57f20d16d8784cc4af37c52c49d3b4f5bc5eede49541
Deleted: sha256:d4c69838355b876cd3eb0d92b4ef27b1839f5b094a4eb1ad2a1d747dd5d6088f
Deleted: sha256:1c29a32189d8f2738d0d99378dc0912c9f9d289b52fb698bdd6c1c8cd7a33727
Deleted: sha256:d801a12f6af7beff367268f99607376584d8b2da656dcd8656973b7ad9779ab4
Deleted: sha256:bebe7ce6215aee349bee5d67222abeb5c5a834bbeaa2f2f5d05363d9fd68db41

Starting a web service with docker run in detached mode

PS G:\dockerdata> docker run -d --name web -p 9090:8080 nigelpoulton/pluralsight-docker-ci
Unable to find image 'nigelpoulton/pluralsight-docker-ci:latest' locally
latest: Pulling from nigelpoulton/pluralsight-docker-ci
a3ed95caeb02: Pull complete
3b231ed5aa2f: Pull complete
7e4f9cd54d46: Pull complete
929432235e51: Pull complete
6899ef41c594: Pull complete
0b38fccd0dab: Pull complete
Digest: sha256:7a6b0125fe7893e70dc63b2c42ad779e5866c6d2779ceb9b12a28e2c38bd8d3d
Status: Downloaded newer image for nigelpoulton/pluralsight-docker-ci:latest
27b4bc07a3e299e738ea8fc05bb6de9fa160c192a5ab71886b84e432d5422aea   # this is the container id on the docker host
PS G:\dockerdata> docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
27b4bc07a3e2 nigelpoulton/pluralsight-docker-ci "/bin/sh -c 'cd /src…" 4 minutes ago Up 4 minutes 0.0.0.0:9090->8080/tcp web

After the command above runs, a web server is started on the docker host, and the container's service can be reached directly at http://localhost:9090.

Starting a container and running bash inside it

PS G:\dockerdata> docker run -it --name temp ubuntu:latest /bin/bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
6cf436f81810: Pull complete
987088a85b96: Pull complete
b4624b3efe06: Pull complete
d42beb8ded59: Pull complete
Digest: sha256:7a47ccc3bbe8a451b500d2b53104868b46d60ee8f5b35a24b41a86077c650210
Status: Downloaded newer image for ubuntu:latest
root@9b4970dcb02a:/# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

Simple batch maintenance commands:

PS G:\dockerdata> docker ps -aq
9b4970dcb02a
27b4bc07a3e2
135da1372a06
PS G:\dockerdata> docker stop $(docker ps -aq)
9b4970dcb02a
27b4bc07a3e2
135da1372a06


swarm:

A group of Docker engines joined into a cluster is called a swarm: a cluster = a swarm.

The engines inside a swarm run in swarm mode.

Manager nodes maintain the swarm.

Worker nodes execute the tasks that manager nodes dispatch to them.

services: declarative/scalable

tasks: assigned to worker nodes; currently a task roughly corresponds to a container

docker swarm init --advertise-addr xxx:2377 --listen-addr xxx:2377 # engine port 2375, secure engine port: 2376, swarm port: 2377

docker service create --name web-fe --replicas 5 ...
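Putting those pieces together, a minimal end-to-end sketch of the swarm workflow might look like this; the IP address, join token, and image are placeholders rather than values taken from this article.

# on the first manager node (placeholder address 192.168.1.10)
docker swarm init --advertise-addr 192.168.1.10:2377 --listen-addr 192.168.1.10:2377

# docker swarm init prints a join command with a token; run it on each worker (token shortened here)
docker swarm join --token SWMTKN-1-xxxx 192.168.1.10:2377

# back on a manager: create a declarative, scalable service, then scale it and inspect the cluster
docker service create --name web-fe --replicas 5 -p 8080:80 nginx:latest
docker service scale web-fe=10
docker service ls
docker node ls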


Container

A container is an isolated area of an OS with resource usage limits applied.

It is an isolated runtime environment built from namespaces and control groups (cgroups limit CPU, RAM, network throughput, and I/O throughput).
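Those cgroup limits are what the resource flags of docker run map onto; a small hedged example (the image and the limits are arbitrary):

# cap the container at 1 CPU and 512 MB of RAM
docker run -d --name limited --cpus 1 --memory 512m nginx:latest

# check the limits docker recorded for the container
docker inspect --format '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' limited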

engine

The engine accepts commands through an external API; it hides the OS namespace and cgroup details and creates the corresponding container to run in the host environment.

Running a container is the result of several modules working together.

Once a container is up and running, containerd no longer needs a live relationship with it; the connection can be re-established later through a discovery process.

image

An image contains everything the app needs to run:

1. OS files, libraries, objects;

2. app files

3. a manifest that defines how these files are organized to work together

An image is a layered, stacked filesystem.

docker image pull redis works in two steps: first it fetches the manifest file from the registry, then it pulls the layers.


docker history redis # 羅列出所有能夠創建redis這個image的命令列表 $ docker image inspect redis [{"Id": "sha256:0f55cf3661e92cc44014f9d93e6f7cbd2a59b7220a26edcdb0828289cf6a361f","RepoTags": ["redis:latest"],"RepoDigests": ["redis@sha256:dd5b84ce536dffdcab79024f4df5485d010affa09e6c399b215e199a0dca38c4"],"Parent": "","Comment": "","Created": "2019-02-06T09:02:43.375297494Z","Container": "1abd8103d4a4423fa8339aabdb3442026bf6b8e9dca21c4ed44973e73ffd90cf","ContainerConfig": {"Hostname": "1abd8103d4a4","Domainname": "","User": "","AttachStdin": false,"AttachStdout": false,"AttachStderr": false,"ExposedPorts": {"6379/tcp": {}},"Tty": false,"OpenStdin": false,"StdinOnce": false,"Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","GOSU_VERSION=1.10","REDIS_VERSION=5.0.3","REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-5.0.3.tar.gz","REDIS_DOWNLOAD_SHA=e290b4ddf817b26254a74d5d564095b11f9cd20d8f165459efa53eb63cd93e02"],"Cmd": ["/bin/sh","-c","#(nop) ","CMD [\"redis-server\"]"],"ArgsEscaped": true,"Image": "sha256:68d73e8c5e2090bf28a588569b92595ab2d60e38eb92ba968be552b496eb6ed3","Volumes": {"/data": {}},"WorkingDir": "/data","Entrypoint": ["docker-entrypoint.sh"],"OnBuild": null,"Labels": {}},"DockerVersion": "18.06.1-ce","Author": "","Config": {"Hostname": "","Domainname": "","User": "","AttachStdin": false,"AttachStdout": false,"AttachStderr": false,"ExposedPorts": {"6379/tcp": {}},"Tty": false,"OpenStdin": false,"StdinOnce": false,"Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","GOSU_VERSION=1.10","REDIS_VERSION=5.0.3","REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-5.0.3.tar.gz","REDIS_DOWNLOAD_SHA=e290b4ddf817b26254a74d5d564095b11f9cd20d8f165459efa53eb63cd93e02"],"Cmd": ["redis-server"],"ArgsEscaped": true,"Image": "sha256:68d73e8c5e2090bf28a588569b92595ab2d60e38eb92ba968be552b496eb6ed3","Volumes": {"/data": {}},"WorkingDir": "/data","Entrypoint": ["docker-entrypoint.sh"],"OnBuild": null,"Labels": null},"Architecture": "amd64","Os": "linux","Size": 94993858,"VirtualSize": 94993858,"GraphDriver": {"Data": {"LowerDir": "/var/lib/docker/overlay2/1aeb385f6b9def8e0c2048213c6a68446b233f4d44c9230657859257505dace5/diff:/var/lib/docker/overlay2/5e8dc35e2ed45cee79a8b5108cc74bfe7000311e75db45bd83d254f21e1892e7/diff:/var/lib/docker/overlay2/bfb61b0335946076ea36f25716da9e43d133dd6e8cf0211e7abadb6a23c001f3/diff:/var/lib/docker/overlay2/591b4074f127d18d3b7d84078891e464eb9c808439bd70f78f653ece9fa1101e/diff:/var/lib/docker/overlay2/30c283b2c4910e51dc162b23d6344575697e9fb478aeccf330edcef05c90aeae/diff","MergedDir": "/var/lib/docker/overlay2/358068125c47e5995e7b1308b71a7ba11dd1509a9a69b36c1495e5c23a5c71f0/merged","UpperDir": "/var/lib/docker/overlay2/358068125c47e5995e7b1308b71a7ba11dd1509a9a69b36c1495e5c23a5c71f0/diff","WorkDir": "/var/lib/docker/overlay2/358068125c47e5995e7b1308b71a7ba11dd1509a9a69b36c1495e5c23a5c71f0/work"},"Name": "overlay2"},"RootFS": {"Type": "layers","Layers": ["sha256:0a07e81f5da36e4cd6c89d9bc3af643345e56bb2ed74cc8772e42ec0d393aee3","sha256:943fb767d8100f2c44a54abbdde4bf2c0f6340da71125f4ef73ad2db7007841d","sha256:16d37f04beb4896e44557df69c060fc93e1486391c4c3dedf3c6ebd773098d90","sha256:5e1afad325f9c970c66dcc5db47d19f034691f29492bf2fe83b7fec680a9d122","sha256:d98df0140af1ee738e8987862268e80074503ab33212f6ebe253195b0f461a43","sha256:b437bb5668d3cd5424015d7b7aefc99332c4af3530b17367e6d9d067ce9bb6d5"]},"Metadata": {"LastTagTime": "0001-01-01T00:00:00Z"}} ]

Network modes supported by Docker

bridge mode: --net=bridge

This is the default network. As soon as the docker engine starts, it creates a bridge named docker0 on the host (think of it as a switch). Containers created with the defaults are attached to this bridge's subnet; you can picture them as plugged into different ports of the same switch, with the docker0 IP (172.17.0.1) as their gateway.
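A quick, hedged way to see this on a Linux host (the exact values will differ per machine):

# show the default bridge network, its subnet and its gateway
docker network inspect bridge

# from a throwaway container on the default bridge, the default route points at docker0 (typically 172.17.0.1)
docker run --rm busybox ip route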

host mode: --net=host

The container does not get its own network namespace; it shares the host's. That means the container has no network interfaces of its own and uses the host's instead. Apart from networking, host-mode containers remain isolated from each other in every other respect.

none mode: --net=none

The container gets its own network namespace, but no network configuration is done for it; we have to configure it by hand ourselves.

container mode: --net=container:NAME_or_ID

A container created in this mode shares the network namespace of the specified container and therefore has exactly the same network configuration; apart from networking, the two containers remain isolated from each other.

User-defined network mode:

It works on the same principle as the default bridge, but a user-defined network has built-in DNS discovery, so containers on it can talk to each other by container name or hostname.
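A minimal, hedged sketch of that DNS discovery; the network and container names are arbitrary:

# create a user-defined bridge network and attach two containers to it
docker network create --driver bridge mynet
docker run -d --name c1 --network mynet busybox sleep 3600
docker run -d --name c2 --network mynet busybox sleep 3600

# name-based resolution works on the user-defined network; on the default docker0 bridge this lookup would fail
docker exec c2 ping -c 2 c1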

Using docker logs to locate and debug problems from container logs

By default, docker logs and docker service logs show the command's output just as it would appear if you ran the program interactively in a terminal. Unix and Linux programs typically open three I/O streams: STDIN, STDOUT, and STDERR. stdin is the command's input stream, which may contain keyboard input or output piped from another command; stdout is the application's normal output; stderr is used for error messages. By default, docker logs shows the command's stdout and stderr. Given that, there are several scenarios in which docker logs cannot provide useful logs:

1. If you use a logging driver (the mechanism Docker provides for collecting useful information from running containers or services) that sends logs to a file, an external host, a database, or some other logging back-end, then docker logs will not show anything useful;

https://docs.docker.com/config/containers/logging/configure/

The docker daemon has a default logging driver, and every container you start uses it unless you configure a different one.

For example, we can configure the docker daemon to collect logs via syslog; it will then ship the running containers' stdout and stderr to a remote server through syslog in real time. In that case we effectively cannot use docker logs to inspect runtime state and must get the information from the syslog server instead (a hedged configuration sketch appears after point 2 below).

2. If our image runs a non-interactive process, such as a web server or a database, that process may write its output directly to log files rather than to stdout or stderr.

In that case we can either go into the container and read the log files of nginx, mysql and the like, or rely on the workaround the official images ship with: the nginx image is built with symlinks pointing /var/log/nginx/access.log to /dev/stdout and /var/log/nginx/error.log to /dev/stderr, while the httpd image writes its normal output to /proc/self/fd/1 (stdout) and its errors to /proc/self/fd/2 (stderr) by default.

That way we can still follow the logs in real time with docker logs --tail 8 -f; both approaches are sketched below.
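Two hedged sketches tying the points above together. First, the per-container logging-driver setup from point 1 (the syslog address is a placeholder; a daemon-wide default would instead go into /etc/docker/daemon.json via "log-driver" and "log-opts"):

# ship this container's stdout/stderr to a remote syslog server instead of the default json-file driver
docker run -d --name web \
  --log-driver syslog \
  --log-opt syslog-address=udp://192.168.0.42:514 \
  nginx:latest
# with this driver, docker logs web will refuse to read logs locally

Second, the symlink workaround from point 2, shown as an illustrative Dockerfile fragment (not the official nginx Dockerfile):

FROM nginx:latest
# point the app's file-based logs at the container's stdout/stderr so docker logs can see them
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log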

docker networking

https://success.docker.com/article/networking

User-defined networks are recommended. The default docker0 bridge supports the --link flag, but --link is legacy and will eventually be removed.

$ brctl show
bridge name       bridge id           STP enabled   interfaces
docker0           8000.0242bd712cd8   no
br-9694b511a9af   8000.0242e7c72a3d   no
br-81195db0babc   8000.0242d6feb257   no            veth375600f
                                                    vethbc86c59
br-c301fa0c30d5   8000.024241d93a8e   no            veth73040a3
                                                    veth72eebce
                                                    vethd5af9cd
                                                    veth12d8ab4
                                                    veth6d89a9d

Let's analyze the network topology after running laradock with docker-compose up -d nginx mysql:

$ brctl show bridge name bridge id STP enabled interfaces docker0 8000.0242bd712cd8 no br-9694b511a9af 8000.0242e7c72a3d no br-81195db0babc 8000.0242d6feb257 no veth375600fvethbc86c59 br-c301fa0c30d5 8000.024241d93a8e no veth73040a3veth72eebcevethd5af9cdveth12d8ab4veth6d89a9d [node1] (local) root@192.168.0.13 ~/apiato/laradock $ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 25dd9253f860 laradock_nginx "/bin/bash /opt/star…" 2 hours ago Up 2 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp laradock_nginx_1 a2070a01035c laradock_php-fpm "docker-php-entrypoi…" 2 hours ago Up 2 hours 9000/tcp laradock_php-fpm_1 d1f9327cb61c laradock_workspace "/sbin/my_init" 2 hours ago Up 2 hours 0.0.0.0:2222->22/tcp laradock_workspace_1 a70f2b180a0d laradock_mysql "docker-entrypoint.s…" 2 hours ago Up 2 hours 0.0.0.0:3306->3306/tcp, 33060/tcp laradock_mysql_1 01f438a6efa9 docker:dind "dockerd-entrypoint.…" 2 hours ago Up 2 hours 2375/tcp laradock_docker-in-docker_1 [node1] (local) root@192.168.0.13 ~/apiato/laradock $ [node1] (local) root@192.168.0.13 ~/apiato/laradock $ [node1] (local) root@192.168.0.13 ~/apiato/laradock $ [node1] (local) root@192.168.0.13 ~/apiato/laradock $ [node1] (local) root@192.168.0.13 ~/apiato/laradock $ docker network ls NETWORK ID NAME DRIVER SCOPE 60e8d0d3dd8c bridge bridge local 5130e0e1e134 host host local c301fa0c30d5 laradock_backend bridge local 9694b511a9af laradock_default bridge local 81195db0babc laradock_frontend bridge local cb098f68c7be none null local [node1] (local) root@192.168.0.13 ~/apiato/laradock $ brctl show bridge name bridge id STP enabled interfaces docker0 8000.0242bd712cd8 no br-9694b511a9af 8000.0242e7c72a3d no br-81195db0babc 8000.0242d6feb257 no veth375600fvethbc86c59 br-c301fa0c30d5 8000.024241d93a8e no veth73040a3veth72eebcevethd5af9cdveth12d8ab4veth6d89a9d [node1] (local) root@192.168.0.13 ~/apiato/laradock $ docker network inspect c301 [{"Name": "laradock_backend","Id": "c301fa0c30d5f44e8daab0ffecf8166012f63edee764ce2abeaf3e884ce54446","Created": "2019-03-13T12:25:42.645372888Z","Scope": "local","Driver": "bridge","EnableIPv6": false,"IPAM": {"Driver": "default","Options": null,"Config": [{"Subnet": "172.21.0.0/16","Gateway": "172.21.0.1"}]},"Internal": false,"Attachable": true,"Ingress": false,"ConfigFrom": {"Network": ""},"ConfigOnly": false,"Containers": {"01f438a6efa996b4e5c8df8f36b742ae468bf09762a1e6eabdefd66f5c920e11": {"Name": "laradock_docker-in-docker_1","EndpointID": "d01c244fc579cd288bf8b1e79a6e936486b348f3167db3e7034044e08beae44c","MacAddress": "02:42:ac:15:00:02","IPv4Address": "172.21.0.2/16","IPv6Address": ""},"25dd9253f860588321b1ff05ae4b43226ae6c22f83044973b86c0c57871ed924": {"Name": "laradock_nginx_1","EndpointID": "24b527973345960c10bf2f97a11612c33562a5146732e9c4049625fc99cadca8","MacAddress": "02:42:ac:15:00:06","IPv4Address": "172.21.0.6/16","IPv6Address": ""},"a2070a01035cbd8c15005c074e9e19ea18f795cdf6a2bc48863d86cc638b35b5": {"Name": "laradock_php-fpm_1","EndpointID": "b3071a2d3d019a6e10b0b778ce0b4f99efbaff28898d295d3829d41e840aa15c","MacAddress": "02:42:ac:15:00:05","IPv4Address": "172.21.0.5/16","IPv6Address": ""},"a70f2b180a0dfcc18c26e4991897946b9389b678ce4ea2cd6527859c301bb78e": {"Name": "laradock_mysql_1","EndpointID": "815e801431b16f4a245b0a243e08cc9642482b3933b09480928ae40fadd56b14","MacAddress": "02:42:ac:15:00:03","IPv4Address": "172.21.0.3/16","IPv6Address": ""},"d1f9327cb61cbd26f43c55911cbffa1cd3f53b912f783725bbf73e0c6edad5ef": {"Name": "laradock_workspace_1","EndpointID": 
"5bbe5ceae7d15ff3eb65236ab0243619591d69474f3a0a13df07e507d2e25a22","MacAddress": "02:42:ac:15:00:04","IPv4Address": "172.21.0.4/16","IPv6Address": ""}},"Options": {},"Labels": {"com.docker.compose.network": "backend","com.docker.compose.project": "laradock","com.docker.compose.version": "1.23.2"}} ] [node1] (local) root@192.168.0.13 ~/apiato/laradock $ docker network inspect 8119 [{"Name": "laradock_frontend","Id": "81195db0babc4aff1b4ae09b2ad078038b74643c798b396409a46f2948ff89c8","Created": "2019-03-13T12:25:42.057604176Z","Scope": "local","Driver": "bridge","EnableIPv6": false,"IPAM": {"Driver": "default","Options": null,"Config": [{"Subnet": "172.20.0.0/16","Gateway": "172.20.0.1"}]},"Internal": false,"Attachable": true,"Ingress": false,"ConfigFrom": {"Network": ""},"ConfigOnly": false,"Containers": {"25dd9253f860588321b1ff05ae4b43226ae6c22f83044973b86c0c57871ed924": {"Name": "laradock_nginx_1","EndpointID": "e1ad08b19608cc3884a9da04e509a71566ca4847245db12310d77463bcb80814","MacAddress": "02:42:ac:14:00:03","IPv4Address": "172.20.0.3/16","IPv6Address": ""},"d1f9327cb61cbd26f43c55911cbffa1cd3f53b912f783725bbf73e0c6edad5ef": {"Name": "laradock_workspace_1","EndpointID": "64d65215f6e0d6135bb7dbf5f341bd858972bc8e869cd8a177991d27d5652491","MacAddress": "02:42:ac:14:00:02","IPv4Address": "172.20.0.2/16","IPv6Address": ""}},"Options": {},"Labels": {"com.docker.compose.network": "frontend","com.docker.compose.project": "laradock","com.docker.compose.version": "1.23.2"}} ] [node1] (local) root@192.168.0.13 ~/apiato/laradock $ docker network inspect 9694 [{"Name": "laradock_default","Id": "9694b511a9afac9a43d3b45ae4296976bf193633148465141f5e0cd787b12082","Created": "2019-03-13T12:25:41.924774946Z","Scope": "local","Driver": "bridge","EnableIPv6": false,"IPAM": {"Driver": "default","Options": null,"Config": [{"Subnet": "172.19.0.0/16","Gateway": "172.19.0.1"}]},"Internal": false,"Attachable": true,"Ingress": false,"ConfigFrom": {"Network": ""},"ConfigOnly": false,"Containers": {},"Options": {},"Labels": {"com.docker.compose.network": "default","com.docker.compose.project": "laradock","com.docker.compose.version": "1.23.2"}} ] [node1] (local) root@192.168.0.13 ~/apiato/laradock $ docker network inspect 5130 [{"Name": "host","Id": "5130e0e1e1340fb58d5704528257cfb0f7dc98e9f718055c3e32f96705355597","Created": "2019-03-13T12:23:30.472608001Z","Scope": "local","Driver": "host","EnableIPv6": false,"IPAM": {"Driver": "default","Options": null,"Config": []},"Internal": false,"Attachable": false,"Ingress": false,"ConfigFrom": {"Network": ""},"ConfigOnly": false,"Containers": {},"Options": {},"Labels": {}} ] [node1] (local) root@192.168.0.13 ~/apiato/laradock $ docker network inspect 60e8 [{"Name": "bridge","Id": "60e8d0d3dd8c376a31a802f9965227301dc06a74910852895f9b010d07fd4417","Created": "2019-03-13T12:23:30.540268336Z","Scope": "local","Driver": "bridge","EnableIPv6": false,"IPAM": {"Driver": "default","Options": null,"Config": [{"Subnet": "172.17.0.0/16"}]},"Internal": false,"Attachable": false,"Ingress": false,"ConfigFrom": {"Network": ""},"ConfigOnly": false,"Containers": {},"Options": {"com.docker.network.bridge.default_bridge": "true","com.docker.network.bridge.enable_icc": "true","com.docker.network.bridge.enable_ip_masquerade": "true","com.docker.network.bridge.host_binding_ipv4": "0.0.0.0","com.docker.network.bridge.name": "docker0","com.docker.network.driver.mtu": "1500"},"Labels": {}} ]

About environment variables (env)

https://vsupalov.com/docker-arg-env-variable-guide/

About volumes

https://docs.docker.com/storage/volumes/

If we don't need permanent persistence but do need to keep some state at runtime, consider a tmpfs mount, which mounts straight into memory and is therefore fast.
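A hedged one-liner for that; the container name and mount point are arbitrary:

# mount an in-memory tmpfs at /app inside the container; its contents vanish when the container stops
docker run -d --name tmptest --mount type=tmpfs,destination=/app nginx:latest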


Two ways a container's process can be started: shell form and exec form

Every process started inside a docker container is an independent process on the host; whether a given process is the container's main process (PID 1) depends on how the Dockerfile is written.

ENTRYPOINT and CMD each support two different execution forms: shell and exec.

1. In shell form, CMD/ENTRYPOINT is written as:

CMD executable param1 param2

In this case PID 1 is /bin/sh -c "executable param1 param2", and the real worker process (executable) is its child.

2. In exec form, CMD/ENTRYPOINT is written as:

CMD ["executable", "param1","param2"]

In this case PID 1 is the worker process executable param1 param2 itself.

The two forms also differ in how processes exit; used incorrectly, this can leave zombie processes behind.

Docker provides two commands, docker stop and docker kill, that send signals to PID 1. When you run docker stop, docker first sends SIGTERM to PID 1; if the container has not exited after receiving it, the docker daemon waits 10 seconds and then sends SIGKILL, killing the container process (PID 1) and putting the container into the exited state.

PID 1 must handle SIGTERM correctly and tell all of its child processes to exit. If the container is started via a shell script, PID 1 is the shell, and the shell has no handling logic for SIGTERM, so it simply ignores the signal and the container cannot shut down gracefully (for example, persisting its data first). Docker's official recommendation is therefore: run only one process per container and start it in exec form. Alternatively, if you start via a customized shell script, that script must be able to receive SIGTERM and forward it to all child processes, or it should exec the worker process so that the worker itself handles SIGTERM and forwards it to its children.

The docker daemon only monitors PID 1.
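A hedged sketch of that recommendation: a tiny custom entrypoint started in exec form, which hands PID 1 over to the real worker process via exec (the file names and base image are illustrative, not taken from this article).

# Dockerfile
FROM nginx:latest
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]

# entrypoint.sh
#!/bin/sh
# do any pre-start setup here, then replace the shell with the real process
# so it becomes PID 1 and receives SIGTERM directly
exec "$@"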

The runtime model of a Docker container

In Linux, a parent process creates a child with fork and then calls exec to run the child's program; every process has a PID. Besides ordinary application processes, the operating system also has the following special processes.

1. PID 0 is the scheduler; it is part of the kernel and never executes any program from disk.

2. PID 1 is the init process; it normally reads the system initialization files: /etc/rc*, /etc/inittab, and the files under /etc/init.d/.

3. PID 2 is the page daemon, which supports the paging operations of the virtual memory system.

When Docker starts a container, it forks a child from the docker-containerd process and then execs the container's own program. Once the container process has been forked, its namespaces are created, and a series of initialization steps follow in three stages: dockerinit initializes the network stack, ENTRYPOINT completes the user-level configuration, and CMD starts the entry point. After startup, the docker container and the docker daemon communicate over IPC via a sock file descriptor.

docker volumes vs bind mounts

Docker recommends two (or really three) ways to persist data:

1. bind mounts;

2. named volumes

3. volumes in dockerfile

A bind mount maps a local directory on the host into the container:

docker run -v /hostdir:/containerdir IMAGE_NAME
docker run --mount type=bind,source=/hostdir,target=/containerdir IMAGE_NAME

Named volumes are created manually with docker volume create volume_name; they all live under /var/lib/docker/volumes and can be referenced by name alone. For example, if we created a volume called mysql_data, we could reference it with docker run -v mysql_data:/containerdata IMAGE_NAME.
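A hedged end-to-end example of that (the image, password, and paths are arbitrary placeholders):

# create the named volume, then mount it by name into a MySQL container
docker volume create mysql_data
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v mysql_data:/var/lib/mysql \
  mysql:5.7

# the data lives under /var/lib/docker/volumes/mysql_data/_data on the host
docker volume inspect mysql_data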

Volumes defined in a Dockerfile are created with the VOLUME instruction. They are also stored under /var/lib/docker/volumes, but they have no user-chosen name (a hash is used instead). The argument of VOLUME is actually a path inside the container; if the image has already populated data at that path, the container will automatically copy it into the host directory Docker creates for the volume when it starts (but if you specify a host path explicitly, the host's data is not overwritten!).

https://stackoverflow.com/questions/41935435/understanding-volume-instruction-in-dockerfile

docker from development to production

Generally, during development we want a volume that binds the host's source code into the container so that edits take effect immediately, while in production we usually copy the code straight into the image: the container is the code, and it can conveniently run anywhere, independent of the host.

A good strategy is to define docker-compose.yml like this:

version: '2'
services:
  app:
    build: .
    image: app:1.0.0-test
    volumes:
      - ./host_src:/usr/share/nginx/html
    ports:
      - "8080:80"

The Dockerfile used to build the nginx app can be as simple as:

FROM nginx
COPY host_src /usr/share/nginx/html

The nginx app first COPYs host_src into the corresponding directory in the container; then, in the dev compose yml, we additionally mount a volume mapping host_src onto that same directory in the nginx app so the code can be edited and tested in real time.

Later, when the nginx app moves to production, we can create a docker-compose-production.yml like this:

version: '2'
services:
  app:
    build: .
    image: app:1.0.0-production
    ports:
      - "80:80"

Compared with the dev yml, we have simply removed the volume binding and run directly from the code that was COPY'd into the image.
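A hedged sketch of how the two files might be driven; the file names follow the ones above and the flags are standard docker-compose options:

# development: source code is bind-mounted, so edits show up immediately
docker-compose up -d --build

# production: use the production file, which has no source bind mount
docker-compose -f docker-compose-production.yml build
docker-compose -f docker-compose-production.yml up -d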

Can you modify volume data inherited from a parent image?

For example, suppose image A's Dockerfile is:

FROM bash
RUN mkdir "/data" && echo "FOO" > "/data/test"
VOLUME "/data"

We then define image B, which inherits from A, and in its Dockerfile we try to change the "default" data from image A:

FROM A
RUN echo "BAR" > "/data/test"

In this test, /data/test in image B is actually still FOO, not BAR.

This is simply how Docker behaves. How can we work around it?

1. Modify the parent Dockerfile directly. Searching Google for something like

docker <image-name:version> source

will usually turn up the parent image's Dockerfile, and we can fix the problem by removing its VOLUME instruction.

VOLUMEs themselves are not part of the image, so we have to seed the data to meet this requirement. When the image is run somewhere else, the volume starts out empty after startup; therefore, if you want to ship data together with the image, don't use a volume; use COPY instead.
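A hedged illustration of that COPY-based alternative (the seed/ directory name is made up); the seeded files then travel with the image wherever it runs:

# ship the seed data inside the image instead of declaring VOLUME "/data"
FROM bash
COPY seed/ /data/
# at run time you can still bind-mount a host path or attach a named volume over /data if you need persistence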

If you really do need to rebuild a new image, delete that volume first.

https://stackoverflow.com/questions/46227454/modifying-volume-data-inherited-from-parent-image

docker volume create  vs  docker run -v /host_path:/container_path  vs  VOLUME in Dockerfile

Using volumes is Docker's recommended way to persist data, but there are several ways to use them; what exactly is the difference?

To answer that, you first have to understand the fact that "a volume is a directory that persists data and lives under /var/lib/docker/volumes/...". Given that, you can:

1. Declare a volume in the Dockerfile. This means that every time a container is run from the image the volume is created, but it is empty, even if you did not pass a -v argument to docker run.

2. Specify the volume to mount at run time:

docker run -v [host-dir:]container-dir
docker run -d \
  --name devtest \
  --mount source=myvol2,target=/app \
  nginx:latest
# -v and --mount have the same effect: if myvol2 does not exist yet, a volume is created under /var/lib/docker/volumes and then mounted into the container
docker run -d \
  --name devtest \
  -v myvol2:/app \
  nginx:latest

This mode combines the advantages of VOLUME in a Dockerfile and docker run -v: the data is mounted into the container from a volume that is persisted and stored under /var/lib/docker/volumes/...

3. docker volume create creates a named volume that other containers can quickly mount.

https://stackoverflow.com/questions/34809646/what-is-the-purpose-of-volume-in-dockerfile

docker run -d \
  -it \
  --name devtest \
  --mount type=bind,source="$(pwd)"/target,target=/app \
  nginx:latest
# equivalent to the following command: bind-mount the host directory into the container
docker run -d \
  -it \
  --name devtest \
  -v "$(pwd)"/target:/app \
  nginx:latest


Dockerfile execution order and the build cache

1 FROM ubuntu:16.04
2 RUN apt-get update
3 RUN apt-get install nginx
4 RUN apt-get install php5

If we have already built the Dockerfile above and then swap nginx for apache and rebuild, lines 1 and 2 will not run again because their results are in the cache, but lines 3 and 4 will both be re-executed: line 3 changed, and line 4 depends on line 3, so both are rebuilt to produce the final image.
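For illustration, the changed Dockerfile would look like the sketch below (apache2 is the assumed package name); on rebuild, Docker reports "Using cache" for steps 1 and 2 and actually executes steps 3 and 4.

1 FROM ubuntu:16.04
2 RUN apt-get update
3 RUN apt-get install apache2
4 RUN apt-get install php5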

How Docker AUFS works


A backup strategy using Docker data containers

In day-to-day operation of a website a lot of data accumulates: the databases themselves plus many configuration files, Dockerfiles, docker-compose files and so on, and backing it all up is a challenge. Backing up via the data-volume snapshots offered by the cloud host provider works, but it tends to back up a lot of unnecessary data, and the extra space costs extra money. Meanwhile, many container service providers now offer free private container image storage, which can save us that expense.

My suggested approach: start from the busybox base image, COPY the data to be backed up into the image, tag it, and push it to a private registry for storage.

FROM busybox:1.30
COPY ./datainhost /dataincontainer

Note that the ./datainhost directory is a path relative to the Dockerfile-databackup file.

If you need to COPY a directory that is outside the build context into the image, you can do this:

  • go to your build path
  • mkdir -p some_name
  • sudo mount --bind src_dir ./some_name

Then, in the Dockerfile's COPY instruction, simply reference the external folder via some_name and the copy will work.


Then run the following shell command on the host (in the directory that contains the Dockerfile):

docker build -f Dockerfile-databackup -t registry-internal.cn-shanghai.aliyuncs.com/namespace/reponame:$(date +"%F") .

That command tags the built image as something like registry-internal.../reponame:2019-03-20.

Then just docker push it.
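A hedged sketch of the push, plus one way to get the data back out later; the registry path and date tag follow the build command above:

# push the dated backup image to the private registry
docker push registry-internal.cn-shanghai.aliyuncs.com/namespace/reponame:2019-03-20

# to restore: create (but do not start) a container from the backup image and copy the data out of it
docker create --name restore registry-internal.cn-shanghai.aliyuncs.com/namespace/reponame:2019-03-20
docker cp restore:/dataincontainer ./restored_data
docker rm restore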

Note that for Alibaba Cloud hosts the registry above is reached over the internal-network IP, so it does not consume public bandwidth and is fast and convenient.


Reposted from: https://www.cnblogs.com/kidsitcn/p/10466022.html
