profile
viewpoint
天马行空 tanjunchen Beijing Senior Developer @cncf, @kubernetes @istio Interested in Go, Kubernetes, Python, Java , ML , BigData

tanjunchen/ParticipateCommunity 16

如何参加 CNCF 等开源社区呢?如何向 Kubernetes 等开源仓库提交贡献呢?这里提供一些信息

tanjunchen/k8s-edge-knowlege 3

Kubernetes 学习旅程 网络 存储 开发 社区

tanjunchen/grpc-test-demo 2

grpc + go + submodule example Java 通过 Grpc 调用 Go 服务简单示例, 通过 submodule 管理共同的 proto 文件, proto 文件可以 import 其他的 proto 文件. k8s watch list 使用 grpc 双向通信的案例。

tanjunchen/Java-Go-Grpc-Demo 1

grpc + java + go + submodule example . Java 通过 Grpc 调用 Go 服务简单示例, 通过 submodule 管理共同的 proto 文件, proto 文件可以 import 其他的 proto 文件.

tanjunchen/k8s-hpa-demo 1

Kubernetes HPA Prometheus Metrics-server 基于 prometheus 的 Kubernetes HPA 的案例

tanjunchen/kubernetes 1

Production-Grade Container Scheduling and Management

tanjunchen/Memcached-Operator 1

k8s CRD Operator Example

tanjunchen/admiral 0

Admiral provides automatic configuration generation, syncing and service discovery for multicluster Istio service mesh

tanjunchen/chinese-independent-developer 0

👩🏿‍💻👨🏾‍💻👩🏼‍💻👨🏽‍💻👩🏻‍💻中国独立开发者项目列表 -- 分享大家都在做什么

push eventtanjunchen/k8s-edge-knowlege

tanjunchen

commit sha ae293672df44bdbbf655d15a17fa840ffc0f0635

add java tcp example

view details

push time in a day

push eventtanjunchen/TanjunchenEchoServer

tanjunchen

commit sha 7e6536566e63843e74bd52a25d0db197a9a5ec23

update jar

view details

push time in a day

push eventtanjunchen/TanjunchenEchoClient

tanjunchen

commit sha ef6d93f142563e5c3f758b9d975bf3558c553a57

update jar

view details

push time in a day

push eventtanjunchen/TanjunchenEchoClient

tanjunchen

commit sha b4c4c62a394feb64b5804f5f312a02c3371719c3

x

view details

push time in a day

push eventtanjunchen/TanjunchenEchoClient

tanjunchen

commit sha a3530e810b51aae428d3455e2b3353b7ac79b7fd

fix bug

view details

push time in a day

push eventtanjunchen/TanjunchenEchoServer

tanjunchen

commit sha fc9e7004965c4315e0ab943673ba11ae81c10d03

feat:add java server example

view details

push time in a day

push eventtanjunchen/TanjunchenEchoClient

tanjunchen

commit sha 6bbadf5e47a3dbe6e9f1d0635fab18f579c9518a

feat:add java tcp client exmaple

view details

push time in a day

create barnchtanjunchen/TanjunchenEchoClient

branch : main

created branch time in a day

created repositorytanjunchen/TanjunchenEchoClient

created time in a day

create barnchtanjunchen/TanjunchenEchoServer

branch : main

created branch time in a day

created repositorytanjunchen/TanjunchenEchoServer

created time in a day

push eventtanjunchen/k8s-edge-knowlege

tanjunchen

commit sha 4ada2afb8aefdada7371ae2b42ae64f4969b3f92

add java tcp example

view details

push time in a day

delete branch tanjunchen/istio

delete branch : optimize-operator-code-0925

delete time in 2 days

push eventtanjunchen/k8s-edge-knowlege

tanjunchen

commit sha a87ec74f31fd23e6651ab9573b04bba80c01ef86

add dockerfile

view details

push time in 2 days

push eventtanjunchen/k8s-edge-knowlege

tanjunchen

commit sha ce629be8db7a944148b7d86a1c75ced8834179c6

add sleep image and doc

view details

push time in 2 days

push eventtanjunchen/k8s-edge-knowlege

tanjunchen

commit sha 02efa733220c8138ed29adf144170fb1da2ac24f

add yaml and readme.md

view details

push time in 2 days

push eventtanjunchen/k8s-edge-knowlege

tanjunchen

commit sha 7ee462580079cc51730a312a81b25efbd927a8a7

add tcp

view details

push time in 3 days

push eventtanjunchen/k8s-edge-knowlege

tanjunchen

commit sha 11ac00661f14f0e6c17c9b75ca7ce2819121bf7a

add tcp dockerfile

view details

push time in 3 days

push eventtanjunchen/k8s-edge-knowlege

tanjunchen

commit sha ba47e434265185815506a8c44d864d5e1338a96e

update the istio tcp example

view details

push time in 3 days

startedQihoo360/wayne

started time in 3 days

push eventtanjunchen/Java-Go-Grpc-Demo

tanjunchen

commit sha 5d59e04d33615d606235ae50bd07bc019a8f623f

update proto

view details

tanjunchen

commit sha faaeb2c15e451ba680fc8b87b901afb44b7b3e27

update proto

view details

push time in 4 days

push eventtanjunchen/go-grpc-proto

tanjunchen

commit sha 487be511d512069bb949d9a22a41af07cd3c1297

add proto

view details

push time in 4 days

PullRequestReviewEvent

Pull request review commentkubernetes/website

[zh] translate /docs/reference/setup-tools/kubeadm/implementation-detail

+---
+title: 实现细节
+content_type: concept
+weight: 100
+---
+<!--  
+---
+reviewers:
+- luxas
+- jbeda
+title: Implementation details
+content_type: concept
+weight: 100
+---
+-->
+<!-- overview -->
+
+{{< feature-state for_k8s_version="v1.10" state="stable" >}}
+
+<!--  
+`kubeadm init` and `kubeadm join` together provides a nice user experience for creating a best-practice but bare Kubernetes cluster from scratch.
+However, it might not be obvious _how_ kubeadm does that.
+-->
+`kubeadm init` 和 `kubeadm join` 结合在一起提供了良好的用户体验,因为从头开始创建实践最佳而配置最基本的 Kubernetes 集群。
+但是,kubeadm _如何_ 做到这一点可能并不明显。
+
+<!-- 
+This document provides additional details on what happen under the hood, 
+with the aim of sharing knowledge on Kubernetes cluster best practices. 
+-->
+本文档提供了更多幕后的详细信息,旨在分享有关 Kubernetes 集群最佳实践的知识。
+
+<!-- body -->
+<!-- ## Core design principles -->
+## 核心设计原则    {#core-design-principles}
+
+<!-- The cluster that `kubeadm init` and `kubeadm join` set up should be: -->
+`kubeadm init` 和 `kubeadm join` 设置的集群应为:
+
+<!-- 
+ - **Secure**: It should adopt latest best-practices like:
+   - enforcing RBAC
+   - using the Node Authorizer
+   - using secure communication between the control plane components
+   - using secure communication between the API server and the kubelets
+   - lock-down the kubelet API
+   - locking down access to the API for system components like the kube-proxy and CoreDNS
+   - locking down what a Bootstrap Token can access
+ - **Easy to use**: The user should not have to run anything more than a couple of commands:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **Extendable**:
+   - It should _not_ favor any particular network provider. Configuring the cluster network is out-of-scope
+   - It should provide the possibility to use a config file for customizing various parameters
+ -->
+ - **安全**:它应采用最新的最佳实践,例如:
+   - 应用 RBAC
+   - 使用节点鉴权机制(Node Authorizer)
+   - 在控制平面组件之间使用安全通信
+   - 在 API 服务器和 kubelet 之间使用安全通信
+   - 锁定 kubelet API
+   - 锁定对系统组件(例如 kube-proxy 和 CoreDNS)的 API 的访问
+   - 锁定启动引导令牌(Bootstrap Token)可以访问的内容
+ - **易用**:用户只需要运行几个命令即可:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **可扩展**:
+   - _不_ 应偏向任何特定的网络提供商。不涉及配置集群网络
+   - 应该可以使用配置文件来自定义各种参数
+
+<!-- ## Constants and well-known values and paths -->
+## 常量以及众所周知的值和路径  {#constants-and-well-known-values-and-paths}
+
+<!-- 
+In order to reduce complexity and to simplify development of higher level tools that build on top of kubeadm, it uses a
+limited set of constant values for well-known paths and file names.
+-->
+为了降低复杂性并简化基于 kubeadm 的高级工具的开发,对于众所周知的路径和文件名,它使用了一组有限的常量值。
+
+<!--  
+The Kubernetes directory `/etc/kubernetes` is a constant in the application, since it is clearly the given path
+in a majority of cases, and the most intuitive location; other constants paths and file names are:
+-->
+Kubernetes 目录 `/etc/kubernetes` 在应用程序中是一个常量,因为在大多数情况下它显然是给定的路径,并且是最直观的位置;
+其他路径常量和文件名有:
+
+<!--  
+- `/etc/kubernetes/manifests` as the path where kubelet should look for static Pod manifests. Names of static Pod manifests are:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` as the path where kubeconfig files with identities for control plane components are stored. Names of kubeconfig files are:
+    - `kubelet.conf` (`bootstrap-kubelet.conf` during TLS bootstrap)
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` for the cluster admin and kubeadm itself
+- Names of certificates and key files :
+    - `ca.crt`, `ca.key` for the Kubernetes certificate authority
+    - `apiserver.crt`, `apiserver.key` for the API server certificate
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` for the client certificate used by the API server to connect to the kubelets securely
+    - `sa.pub`, `sa.key` for the key used by the controller manager when signing ServiceAccount
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` for the front proxy certificate authority
+    - `front-proxy-client.crt`, `front-proxy-client.key` for the front proxy client
+-->
+- `/etc/kubernetes/manifests` 作为 kubelet 查找静态 Pod 清单的路径。静态 Pod 清单的名称为:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` 作为带有控制平面组件身份标识的 kubeconfig 文件的路径。kubeconfig 文件的名称为:
+    - `kubelet.conf` (在 TLS 引导时名称为 `bootstrap-kubelet.conf` )
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` 用于集群管理员和 kubeadm 本身
+- 证书和密钥文件的名称:
+    - `ca.crt`, `ca.key` 用于 Kubernetes 证书颁发机构
+    - `apiserver.crt`, `apiserver.key` 用于 API 服务器证书
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` 用于 API 服务器安全地连接到 kubelet 的客户端证书
+    - `sa.pub`, `sa.key` 用于签署 ServiceAccount 时 控制器管理器使用的密钥
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` 用于前端代理证书颁发机构
+    - `front-proxy-client.crt`, `front-proxy-client.key` 用于前端代理客户端
+
+<!-- ## kubeadm init workflow internal design -->
+## kubeadm init 工作流程内部设计  {#kubeadm-init-workflow-internal-design}
+
+<!--  
+The `kubeadm init` [internal workflow](/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow) consists of a sequence of atomic work tasks to perform,
+as described in `kubeadm init`.
+-->
+`kubeadm init` [内部工作流程](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow)包含一系列要执行的原子工作任务,
+如 `kubeadm init` 中所述。
+
+<!--  
+The [`kubeadm init phase`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) command allows users to invoke each task individually, and ultimately offers a reusable and composable API/toolbox that can be used by other Kubernetes bootstrap tools, by any IT automation tool or by an advanced user for creating custom clusters.
+-->
+[`kubeadm init phase`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) 命令允许用户分别调用每个任务,
+并最终提供可重用且可组合的 API 或工具箱,其他 Kubernetes 引导工具、任何 IT 自动化工具和高级用户都可以使用它用来创建的自定义集群。
+
+<!-- ### Preflight checks -->
+### 预检  {#preflight-checks}
+
+<!-- 
+Kubeadm executes a set of preflight checks before starting the init, with the aim to verify preconditions and avoid common cluster startup problems.
+The user can skip specific preflight checks or all of them with the `--ignore-preflight-errors` option. 
+-->
+Kubeadm 在启动 init 之前执行一组预检,目的是验证先决条件并避免常见的集群启动问题。
+用户可以使用 `--ignore-preflight-errors` 选项跳过特定的预检查或全部检查。
+
+<!--  
+- [warning] If the Kubernetes version to use (specified with the `--kubernetes-version` flag) is at least one minor version higher than the kubeadm CLI version.
+- Kubernetes system requirements:
+  - if running on linux:
+    - [error] if Kernel is older than the minimum required version
+    - [error] if required cgroups subsystem aren't in set up
+  - if using docker:
+    - [warning/error] if Docker service does not exist, if it is disabled, if it is not active.
+    - [error] if Docker endpoint does not exist or does not work
+    - [warning] if docker version is not in the list of validated docker versions
+  - If using other cri engine:
+    - [error] if crictl socket does not answer
+-->
+- [警告] 如果要使用的 Kubernetes 版本(由 `--kubernetes-version` 标志指定)比 kubeadm CLI 版本至少高一个小版本。
+- Kubernetes 系统要求:
+  - 如果在 linux上运行:
+    - [错误] 如果内核早于最低要求的版本
+    - [错误] 如果未设置所需的 cgroups 子系统
+  - 如果使用 docker:
+    - [警告/错误] 如果 Docker 服务不存在、被禁用或未激活。
+    - [错误] 如果 Docker 端点不存在或不起作用
+    - [警告] 如果 docker 版本不在经过验证的 docker 版本列表中
+  - 如果使用其他 cri 引擎:
+    - [错误] 如果 crictl 套接字未应答
+<!--  
+- [error] if user is not root
+- [error] if the machine hostname is not a valid DNS subdomain
+- [warning] if the host name cannot be reached via network lookup
+- [error] if kubelet version is lower that the minimum kubelet version supported by kubeadm (current minor -1)
+- [error] if kubelet version is at least one minor higher than the required controlplane version (unsupported version skew)
+- [warning] if kubelet service does not exist or if it is disabled
+- [warning] if firewalld is active
+- [error] if API server bindPort or ports 10250/10251/10252 are used
+- [Error] if `/etc/kubernetes/manifest` folder already exists and it is not empty
+- [Error] if `/proc/sys/net/bridge/bridge-nf-call-iptables` file does not exist/does not contain 1
+- [Error] if advertise address is ipv6 and `/proc/sys/net/bridge/bridge-nf-call-ip6tables` does not exist/does not contain 1.
+- [Error] if swap is on
+- [Error] if `conntrack`, `ip`, `iptables`,  `mount`, `nsenter` commands are not present in the command path
+- [warning] if `ebtables`, `ethtool`, `socat`, `tc`, `touch`, `crictl` commands are not present in the command path
+- [warning] if extra arg flags for API server, controller manager,  scheduler contains some invalid options
+- [warning] if connection to https://API.AdvertiseAddress:API.BindPort goes through proxy
+- [warning] if connection to services subnet goes through proxy (only first address checked)
+- [warning] if connection to Pods subnet goes through proxy (only first address checked)
+-->
+- [错误] 如果用户不是 root 用户
+- [错误] 如果机器主机名不是有效的 DNS 子域
+- [警告] 如果通过网络查找无法访问主机名
+- [错误] 如果 kubelet 版本低于 kubeadm 支持的最低 kubelet 版本(当前小版本 -1)
+- [错误] 如果 kubelet 版本比所需的控制平面板版本至少高一个小(不支持的版本偏斜)
+- [警告] 如果 kubelet 服务不存在或已被禁用
+- [警告] 如果 firewalld 处于活动状态
+- [错误] 如果使用 API ​​服务器绑定的端口或 10250/10251/10252 端口
+- [错误] 如果 `/etc/kubernetes/manifest` 文件夹已经存在并且不为空
+- [错误] 如果 `/proc/sys/net/bridge/bridge-nf-call-iptables` 文件不存在或不包含 1
+- [错误] 如果建议地址是 ipv6,并且 `/proc/sys/net/bridge/bridge-nf-call-ip6tables` 不存在或不包含 1
+- [错误] 如果启用了交换分区
+- [错误] 如果命令路径中没有 `conntrack`、`ip`、`iptables`、`mount`、`nsenter` 命令
+- [警告] 如果命令路径中没有 `ebtables`、`ethtool`、`socat`、`tc`、`touch`、`crictl` 命令
+- [警告] 如果 API 服务器、控制器管理器、调度程序的其他参数标志包含一些无效选项
+- [警告] 如果与 https://API.AdvertiseAddress:API.BindPort 的连接通过代理
+- [警告] 如果服务子网的连接通过代理(仅检查第一个地址)
+- [警告] 如果 Pod 子网的连接通过代理(仅检查第一个地址)
+<!-- 
+- If external etcd is provided:
+  - [Error] if etcd version is older than the minimum required version
+  - [Error] if etcd certificates or keys are specified, but not provided
+- If external etcd is NOT provided (and thus local etcd will be installed):
+  - [Error] if ports 2379 is used
+  - [Error] if Etcd.DataDir folder already exists and it is not empty
+- If authorization mode is ABAC:
+  - [Error] if abac_policy.json does not exist
+- If authorization mode is WebHook
+  - [Error] if webhook_authz.conf does not exist
+-->
+- 如果提供了外部 etcd:
+  - [错误] 如果 etcd 版本早于最低要求版本
+  - [错误] 如果指定了 etcd 证书或密钥,但无法找到
+- 如果未提供外部 etcd(因此将安装本地 etcd):
+  - [错误] 如果端口 2379 已被占用
+  - [错误] 如果 Etcd.DataDir 文件夹已经存在并且不为空
+- 如果授权模式为 ABAC:
+  - [错误] 如果 abac_policy.json 不存在
+- 如果授权方式为 WebHook
+  - [错误] 如果 webhook_authz.conf 不存在
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. Preflight checks can be invoked individually with the [`kubeadm init phase preflight`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-preflight) command
+-->
+1. 可以使用 [`kubeadm init phase preflight`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-preflight) 命令单独触发预检。
+
+
+<!-- ### Generate the necessary certificates -->
+### 生成必要的证书  {#generate-the-necessary-certificate}
+
+<!-- Kubeadm generates certificate and private key pairs for different purposes: -->
+Kubeadm 生成用于不同目的的证书和私钥对:
+
+ <!-- 
+ - A self signed certificate authority for the Kubernetes cluster saved into `ca.crt` file and `ca.key` private key file 
+ - A serving certificate for the API server, generated using `ca.crt` as the CA, and saved into `apiserver.crt` file with
+   its private key `apiserver.key`. This certificate should contain following alternative names:
+     - The Kubernetes service's internal clusterIP (the first address in the services CIDR, e.g. `10.96.0.1` if service subnet is `10.96.0.0/12`)
+     - Kubernetes DNS names, e.g.  `kubernetes.default.svc.cluster.local` if `--service-dns-domain` flag value is `cluster.local`, plus default DNS names `kubernetes.default.svc`, `kubernetes.default`, `kubernetes`
+     - The node-name
+     - The `--apiserver-advertise-address`
+     - Additional alternative names specified by the user
+ - A client certificate for the API server to connect to the kubelets securely, generated using `ca.crt` as the CA and saved into
+   `apiserver-kubelet-client.crt` file with its private key `apiserver-kubelet-client.key`.
+   This certificate should be in the `system:masters` organization
+ - A private key for signing ServiceAccount Tokens saved into `sa.key` file along with its public key `sa.pub`
+ - A certificate authority for the front proxy saved into `front-proxy-ca.crt` file with its key `front-proxy-ca.key`
+ - A client cert for the front proxy client, generated using `front-proxy-ca.crt` as the CA and saved into `front-proxy-client.crt` file
+   with its private key`front-proxy-client.key`
+-->
+ - Kubernetes 集群的自签名证书颁发机构已保存到 `ca.crt` 文件和 `ca.key` 私钥文件中
+ - 用于 API 服务器的服务证书,使用 `ca.crt` 作为 CA 生成,并将证书保存到 `apiserver.crt` 文件中,私钥保存到 `apiserver.key` 文件中
+   该证书应包含以下备用名称:
+    - Kubernetes 服务的内部 clusterIP(服务 CIDR 的第一个地址,例如:如果服务的子网是 `10.96.0.0/12`,则为 `10.96.0.1`)
+    - Kubernetes DNS 名称,例如:如果 `--service-dns-domain` 标志值是 `cluster.local`,则为 `kubernetes.default.svc.cluster.local`;
+      加上默认的 DNS 名称 `kubernetes.default.svc`、`kubernetes.default` 和 `kubernetes`,
+    - 节点名称
+    - `--apiserver-advertise-address`
+    - 用户指定的其他备用名称 
+  - API 服务器用于安全连接到 kubelet 的客户端证书,使用 `ca.crt` 作为 CA 生成,并保存到 `apiserver-kubelet-client.key`, 
+    私钥保存到 `apiserver-kubelet-client.crt` 文件中。该证书应该在 `system:masters` 组织中
+  - 用于签名 ServiceAccount 令牌的私钥保存到 `sa.key` 文件中,公钥保存到 `sa.pub` 文件中
+  - 用于前端代理的证书颁发机构保存到 `front-proxy-ca.crt` 文件中,私钥保存到 `front-proxy-ca.key` 文件中
+  - 前端代理客户端的客户端证书,使用 `front-proxy-ca.crt` 作为 CA 生成,并保存到 `front-proxy-client.crt` 文件中,
+    私钥保存到 `front-proxy-client.key` 文件中
+
+<!-- 
+Certificates are stored by default in `/etc/kubernetes/pki`, but this directory is configurable using the `--cert-dir` flag. 
+-->
+证书默认情况下存储在 `/etc/kubernetes/pki` 中,但是该目录可以使用 `--cert-dir` 标志进行配置。
+
+ <!-- Please note that: -->
+ 请注意:
+
+<!-- 
+1. If a given certificate and private key pair both exist, and its content is evaluated compliant with the above specs, the existing files will
+   be used and the generation phase for the given certificate skipped. This means the user can, for example, copy an existing CA to
+   `/etc/kubernetes/pki/ca.{crt,key}`, and then kubeadm will use those files for signing the rest of the certs.
+   See also [using custom certificates](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#custom-certificates)
+2. Only for the CA, it is possible to provide the `ca.crt` file but not the `ca.key` file, if all other certificates and kubeconfig files
+   already are in place kubeadm recognize this condition and activates the ExternalCA , which also implies the `csrsigner`controller in
+   controller-manager won't be started
+3. If kubeadm is running in [external CA mode](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode);
+   all the certificates must be provided by the user, because kubeadm cannot generate them by itself
+4. In case of kubeadm is executed in the `--dry-run` mode, certificates files are written in a temporary folder
+5. Certificate generation can be invoked individually with the [`kubeadm init phase certs all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) command
+-->
+1. 如果证书和私钥对都存在,并且其内容经过评估符合上述规范,将使用现有文件,并且跳过给定证书的生成阶段。
+  这意味着用户可以将现有的 CA 复制到 `/etc/kubernetes/pki/ca.{crt,key}`,kubeadm 将使用这些文件对其余证书进行签名。
+  请参阅[使用自定义证书](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#custom-certificates)
+2. 仅对 CA 来说,如果所有其他证书和 kubeconfig 文件都已就位,则可以只提供 `ca.crt` 文件,而不提供 `ca.key` 文件。
+   kubeadm 已经识别出这种情况并启用 ExternalCA,这也意味着了控制器管理器中的 `csrsigner` 控制器将不会启动
+3. 如果 kubeadm 在[外部 CA 模式](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode)下运行;
+   所有证书必须由用户提供,因为 kubeadm 无法自行生成它们
+4. 如果在 `--dry-run` 模式下执行 kubeadm,证书文件将写入一个临时文件夹中
+5. 可以使用 [`kubeadm init phase certs all`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) 
+   命令单独生成证书。
+
+<!-- ### Generate kubeconfig files for control plane components -->
+### 为控制平面组件生成 kubeconfig 文件  {#generate-kubeconfig-files-for-control-plane-components}
+
+<!-- 
+Kubeadm generates kubeconfig files with identities for control plane components:
+-->
+Kubeadm 生成具有用于控制平面组件身份标识的 kubeconfig 文件:
+
+<!--  
+- A kubeconfig file for the kubelet to use during TLS bootstrap - /etc/kubernetes/bootstrap-kubelet.conf. Inside this file there is a bootstrap-token or embedded client certificates for authenticating this node with the cluster.
+  This client cert should:
+    - Be in the `system:nodes` organization, as required by the [Node Authorization](/docs/reference/access-authn-authz/node/) module
+    - Have the Common Name (CN) `system:node:<hostname-lowercased>`
+- A kubeconfig file for controller-manager, `/etc/kubernetes/controller-manager.conf`; inside this file is embedded a client
+  certificate with controller-manager identity. This client cert should have the CN `system:kube-controller-manager`, as defined
+by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
+- A kubeconfig file for scheduler, `/etc/kubernetes/scheduler.conf`; inside this file is embedded a client certificate with scheduler identity.
+  This client cert should have the CN `system:kube-scheduler`, as defined by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
+-->
+- 供 kubelet 在 TLS 引导期间使用的 kubeconfig 文件——`/etc/kubernetes/bootstrap-kubelet.conf`。在此文件中,
+  有一个引导令牌或内嵌的客户端证书,向集群表明此节点身份。
+  此客户端证书应:
+    - 根据[节点鉴权](/zh/docs/reference/access-authn-authz/node/)模块的要求,属于 `system:nodes` 组织
+    - 具有通用名称(CN):`system:node:<hostname-lowercased>`
+- 控制器管理器的 kubeconfig 文件——`/etc/kubernetes/controller-manager.conf`;
+  在此文件中嵌入了一个具有控制器管理器身份标识的客户端证书。
+  此客户端证书应具有 CN:`system:kube-controller-manager`,
+  这是由 [RBAC 核心组件角色](/zh/docs/reference/access-authn-authz/rbac/#core-component-roles)默认定义的。
+- 调度器的 kubeconfig 文件——`/etc/kubernetes/scheduler.conf`;在此文件中嵌入了具有调度器身份标识的客户端证书。
+  此客户端证书应具有 CN:`system:kube-scheduler`,
+  这是由 [RBAC 核心组件角色](/zh/docs/reference/access-authn-authz/rbac/#core-component-roles)默认定义的。
+
+<!-- 
+Additionally, a kubeconfig file for kubeadm itself and the admin is generated and saved into the `/etc/kubernetes/admin.conf` file.
+The "admin" here is defined as the actual person(s) that is administering the cluster and wants to have full control (**root**) over the cluster.
+The embedded client certificate for admin should be in the `system:masters` organization, as defined by default
+[RBAC user facing role bindings](/docs/reference/access-authn-authz/rbac/#user-facing-roles). It should also include a
+CN. Kubeadm uses the `kubernetes-admin` CN.
+-->
+另外,一个用于 kubeadm 本身和 admin 的 kubeconfig 文件也被生成并保存到 `/etc/kubernetes/admin.conf` 文件中。
+此处的 admin 定义为正在管理集群并希望完全控制集群(**root**)的实际人员。
+内嵌的 admin 客户端证书应s  `system:masters` 组织的成员,
+这是由默认的 [RBAC 面向用户的角色绑定](/zh/docs/reference/access-authn-authz/rbac/#user-facing-roles)定义的。 
+它还应包括一个 CN。 Kubeadm 使用 `kubernetes-admin` CN。
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. `ca.crt` certificate is embedded in all the kubeconfig files.
+2. If a given kubeconfig file exists, and its content is evaluated compliant with the above specs, the existing file will be used and the generation phase for the given kubeconfig skipped
+3. If kubeadm is running in [ExternalCA mode](/docs/reference/setup-tools/kubeadm/kubeadm-init/#external-ca-mode), all the required kubeconfig must be provided by the user as well, because kubeadm cannot generate any of them by itself
+4. In case of kubeadm is executed in the `--dry-run` mode, kubeconfig files are written in a temporary folder
+5. Kubeconfig files generation can be invoked individually with the [`kubeadm init phase kubeconfig all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-kubeconfig) command
+-->
+1. `ca.crt` 证书内嵌在所有 kubeconfig 文件中。
+2. 如果给定的 kubeconfig 文件存在且其内容经过评估符合上述规范,则 kubeadm 将使用现有文件,并跳过给定 kubeconfig 的生成阶段
+3. 如果 kubeadm 以 [ExternalCA 模式](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#external-ca-mode)运行,
+   则所有必需的 kubeconfig 也必须由用户提供,因为 kubeadm 不能自己生成
+4. 如果在 `--dry-run` 模式下执行 kubeadm,则 kubeconfig 文件将写入一个临时文件夹中
+5. 可以使用 [`kubeadm init phase kubeconfig all`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-kubeconfig)
+   命令分别生成 Kubeconfig 文件。
+
+<!-- ### Generate static Pod manifests for control plane components -->
+### 为控制平面组件生成静态 Pod 清单  {#generate-static-pod-manifests-for-control-plane-components}
+
+<!--  
+Kubeadm writes static Pod manifest files for control plane components to `/etc/kubernetes/manifests`. The kubelet watches this directory for Pods to create on startup.
+-->
+Kubeadm 将用于控制平面组件的静态 Pod 清单文件写入 `/etc/kubernetes/manifests` 目录。
+Kubelet 启动后会监视这个目录以便创建 Pod。
+
+<!-- Static Pod manifest share a set of common properties: -->
+静态 Pod 清单有一些共同的属性:
+
+<!--  
+- All static Pods are deployed on `kube-system` namespace
+- All static Pods get `tier:control-plane` and `component:{component-name}` labels
+- All static Pods use the `system-node-critical` priority class
+- `hostNetwork: true` is set on all static Pods to allow control plane startup before a network is configured; as a consequence:
+  * The `address` that the controller-manager and the scheduler use to refer the API server is `127.0.0.1`
+  * If using a local etcd server, `etcd-servers` address will be set to `127.0.0.1:2379`
+- Leader election is enabled for both the controller-manager and the scheduler
+- Controller-manager and the scheduler will reference kubeconfig files with their respective, unique identities
+- All static Pods get any extra flags specified by the user as described in [passing custom arguments to control plane components](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/)
+- All static Pods get any extra Volumes specified by the user (Host path)
+-->
+- 所有静态 Pod 都部署在 `kube-system` 命名空间
+- 所有静态 Pod 都获得 `tier:ontrol-plane` 和 `component:{component-name}` 标签
+- 所有静态 Pod 均使用 `system-node-critical` 优先级
+- 所有静态 Pod 都设置了 `hostNetwork:true`,使得控制平面在配置网络之前启动;结果导致:
+   * 控制器管理器和调度器用来调用 API 服务器的地址为 127.0.0.1。
+   * 如果使用本地 etcd 服务器,则 `etcd-servers` 地址将设置为 `127.0.0.1:2379`
+- 同时为控制器管理器和调度器启用了领导者选举
+- 控制器管理器和调度器将引用 kubeconfig 文件及其各自的唯一标识
+- 如[将自定义参数传递给控制平面组件](/zh/docs/setup/production-environment/tools/kubeadm/control-plane-flags/)中所述,
+  所有静态 Pod 都会获得用户指定的额外标志
+- 所有静态 Pod 都会获得用户指定的额外卷(主机路径)
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. All images will be pulled from k8s.gcr.io by default. See [using custom images](/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images) for customizing the image repository
+2. In case of kubeadm is executed in the `--dry-run` mode, static Pods files are written in a temporary folder
+3. Static Pod manifest generation for master components can be invoked individually with the [`kubeadm init phase control-plane all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-control-plane) command
+-->
+1. 所有镜像默认从 k8s.gcr.io 拉取。 
+   关于自定义镜像仓库,请参阅[使用自定义镜像](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images)
+2. 如果在 `--dry-run` 模式下执行 kubeadm,则静态 Pod 文件写入一个临时文件夹中
+3. 可以使用 [`kubeadm init phase control-plane all`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-control-plane) 
+   命令分别生成主控组件的静态 Pod 清单。
+
+<!-- #### API server -->
+#### API 服务器  {#api-server}
+
+<!-- 
+The static Pod manifest for the API server is affected by following parameters provided by the users: 
+-->
+API 服务器的静态 Pod 清单会受到用户提供的以下参数的影响:
+
+<!--  
+ - The `apiserver-advertise-address` and `apiserver-bind-port` to bind to; if not provided, those value defaults to the IP address of
+   the default network interface on the machine and port 6443
+ - The `service-cluster-ip-range` to use for services
+ - If an external etcd server is specified, the `etcd-servers` address and related TLS settings (`etcd-cafile`, `etcd-certfile`, `etcd-keyfile`);
+   if an external etcd server is not be provided, a local etcd will be used (via host network)
+ - If a cloud provider is specified, the corresponding `--cloud-provider` is configured, together with the  `--cloud-config` path
+   if such file exists (this is experimental, alpha and will be removed in a future version)
+-->
+- 要绑定的 `apiserver-advertise-address` 和 `apiserver-bind-port`;如果未提供,则这些值默认为机器上默认网络接口的 IP 地址和 6443 端口。
+  - `service-cluster-ip-range` 给 service 使用
+  - 如果指定了外部 etcd 服务器,则应指定 `etcd-servers` 地址和相关的 TLS 设置(`etcd-cafile`,`etcd-certfile`,`etcd-keyfile`);
+    如果未提供外部 etcd 服务器,则将使用本地 etcd(通过主机网络)
+  - 如果指定了云提供商,则配置相应的 `--cloud-provider`,如果该路径存在,则配置 `--cloud-config`
+    (这是实验性的,是 Alpha 版本,将在以后的版本中删除)
+
+<!-- Other API server flags that are set unconditionally are: -->
+无条件设置的其他 API 服务器标志有:
+
+<!--  
+ - `--insecure-port=0` to avoid insecure connections to the api server
+ - `--enable-bootstrap-token-auth=true` to enable the `BootstrapTokenAuthenticator` authentication module.
+   See [TLS Bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for more details
+ - `--allow-privileged` to `true` (required e.g. by kube proxy)
+ - `--requestheader-client-ca-file` to `front-proxy-ca.crt`
+ - `--enable-admission-plugins` to:
+    - [`NamespaceLifecycle`](/docs/reference/access-authn-authz/admission-controllers/#namespacelifecycle) e.g. to avoid deletion of
+      system reserved namespaces
+    - [`LimitRanger`](/docs/reference/access-authn-authz/admission-controllers/#limitranger) and [`ResourceQuota`](/docs/reference/access-authn-authz/admission-controllers/#resourcequota) to enforce limits on namespaces
+    - [`ServiceAccount`](/docs/reference/access-authn-authz/admission-controllers/#serviceaccount) to enforce service account automation
+    - [`PersistentVolumeLabel`](/docs/reference/access-authn-authz/admission-controllers/#persistentvolumelabel) attaches region or zone labels to
+      PersistentVolumes as defined by the cloud provider (This admission controller is deprecated and will be removed in a future version.
+      It is not deployed by kubeadm by default with v1.9 onwards when not explicitly opting into using `gce` or `aws` as cloud providers)
+    - [`DefaultStorageClass`](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass) to enforce default storage class on `PersistentVolumeClaim` objects
+    - [`DefaultTolerationSeconds`](/docs/reference/access-authn-authz/admission-controllers/#defaulttolerationseconds)
+    - [`NodeRestriction`](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) to limit what a kubelet can modify
+      (e.g. only pods on this node)
+ - `--kubelet-preferred-address-types` to `InternalIP,ExternalIP,Hostname;` this makes `kubectl logs` and other API server-kubelet
+   communication work in environments where the hostnames of the nodes aren't resolvable
+ - Flags for using certificates generated in previous steps:
+    - `--client-ca-file` to `ca.crt`
+    - `--tls-cert-file` to `apiserver.crt`
+    - `--tls-private-key-file` to `apiserver.key`
+    - `--kubelet-client-certificate` to `apiserver-kubelet-client.crt`
+    - `--kubelet-client-key` to `apiserver-kubelet-client.key`
+    - `--service-account-key-file` to `sa.pub`
+    - `--requestheader-client-ca-file` to`front-proxy-ca.crt`
+    - `--proxy-client-cert-file` to `front-proxy-client.crt`
+    - `--proxy-client-key-file` to `front-proxy-client.key`
+ - Other flags for securing the front proxy ([API Aggregation](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/aggregated-api-servers.md)) communications:
+    - `--requestheader-username-headers=X-Remote-User`
+    - `--requestheader-group-headers=X-Remote-Group`
+    - `--requestheader-extra-headers-prefix=X-Remote-Extra-`
+    - `--requestheader-allowed-names=front-proxy-client`
+-->
+ - `--insecure-port=0` 禁止到 API 服务器不安全的连接
+ - `--enable-bootstrap-token-auth=true` 启用 `BootstrapTokenAuthenticator` 身份验证模块
+   更多细节请参见 [TLS 引导](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
+ - `--allow-privileged` 设为 `true`(必要,例如kube-proxy)
+ - `--requestheader-client-ca-file` 设为 `front-proxy-ca.crt`
+ - `--enable-admission-plugins` 设为:
+    - [`NamespaceLifecycle`](/zh/docs/reference/access-authn-authz/admission-controllers/#namespacelifecycle) 
+      例如,避免删除系统保留的命名空间
+    - [`LimitRanger`](/zh/docs/reference/access-authn-authz/admission-controllers/#limitranger) 和
+      [`ResourceQuota`](/zh/docs/reference/access-authn-authz/admission-controllers/#resourcequota) 对命名空间实施限制
+    - [`ServiceAccount`](/zh/docs/reference/access-authn-authz/admission-controllers/#serviceaccount) 实施服务账户自动化
+    - [`PersistentVolumeLabel`](/zh/docs/reference/access-authn-authz/admission-controllers/#persistentvolumelabel) 
+      将区域(Region)或区(Zone)标签附加到由云提供商定义的 PersistentVolumes(此准入控制器已被弃用并将在以后的版本中删除)。
+      如果未明确选择使用 `gce` 或 `aws` 作为云提供商,则默认情况下,v1.9 以后的版本 kubeadm 都不会部署。
+    - [`DefaultStorageClass`](/zh/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass) 
+      在 `PersistentVolumeClaim` 对象上强制使用默认存储类型
+    - [`DefaultTolerationSeconds`](/zh/docs/reference/access-authn-authz/admission-controllers/#defaulttolerationseconds)
+    - [`NodeRestriction`](/zh/docs/reference/access-authn-authz/admission-controllers/#noderestriction) 
+      限制 kubelet 可以修改的内容(例如,仅此节点上的 pod)
+ - `--kubelet-preferred-address-types` 设为 `InternalIP,ExternalIP,Hostname;` 
+   这使得在节点的主机名无法解析的环境中,`kubectl log` 和 API 服务器与 kubelet 的其他通信可以工作
+ - 使用在前面步骤中生成的证书的标志:
+    - `--client-ca-file` 设为 `ca.crt`
+    - `--tls-cert-file` 设为 `apiserver.crt`
+    - `--tls-private-key-file` 设为 `apiserver.key`
+    - `--kubelet-client-certificate` 设为 `apiserver-kubelet-client.crt`
+    - `--kubelet-client-key` 设为 `apiserver-kubelet-client.key`
+    - `--service-account-key-file` 设为 `sa.pub`
+    - `--requestheader-client-ca-file` 设为 `front-proxy-ca.crt`
+    - `--proxy-client-cert-file` 设为 `front-proxy-client.crt`
+    - `--proxy-client-key-file` 设为 `front-proxy-client.key`
+ - 其他用于保护前端代理([API 聚合](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/aggregated-api-servers.md))通信的标志:
+    - `--requestheader-username-headers=X-Remote-User`
+    - `--requestheader-group-headers=X-Remote-Group`
+    - `--requestheader-extra-headers-prefix=X-Remote-Extra-`
+    - `--requestheader-allowed-names=front-proxy-client`
+
+<!-- #### Controller manager -->
+#### 控制器管理器  {#controller-manager}
+
+<!-- 
+The static Pod manifest for the controller-manager is affected by following parameters provided by the users: 
+-->
+控制器管理器的静态 Pod 清单受用户提供的以下参数的影响:
+
+<!-- 
+- If kubeadm is invoked specifying a `--pod-network-cidr`, the subnet manager feature required for some CNI network plugins is enabled by
+   setting:
+   - `--allocate-node-cidrs=true`
+   - `--cluster-cidr` and `--node-cidr-mask-size` flags according to the given CIDR
+ - If a cloud provider is specified, the corresponding `--cloud-provider` is specified, together with the  `--cloud-config` path
+   if such configuration file exists (this is experimental, alpha and will be removed in a future version)
+-->
+- 如果调用 kubeadm 时指定了 `--pod-network-cidr` 参数,则可以通过以下方式启用某些 CNI 网络插件所需的子网管理器功能:
+    - 设置 `--allocate-node-cidrs=true`
+    - 根据给定 CIDR 设置 `--cluster-cidr` 和 `--node-cidr-mask-size` 标志
+  - 如果指定了云提供商,则指定相应的 `--cloud-provider`,如果存在这样的配置文件,则指定 `--cloud-config` 路径
+    (这是试验性的,是Alpha 版本,将在以后的版本中删除)
+
+<!-- Other flags that are set unconditionally are: -->
+其他无条件设置的标志包括:
+
+<!--  
+ - `--controllers` enabling all the default controllers plus `BootstrapSigner` and `TokenCleaner` controllers for TLS bootstrap.
+   See [TLS Bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for more details
+ - `--use-service-account-credentials` to `true`
+ - Flags for using certificates generated in previous steps:
+    - `--root-ca-file` to `ca.crt`
+    - `--cluster-signing-cert-file` to `ca.crt`, if External CA mode is disabled, otherwise to `""`
+    - `--cluster-signing-key-file` to `ca.key`, if External CA mode is disabled, otherwise to `""`
+    - `--service-account-private-key-file` to `sa.key`
+-->
+- `--controllers` 为 TLS 引导程序启用所有默认控制器以及 `BootstrapSigner` 和 `TokenCleaner` 控制器。
+    详细信息请参阅 [TLS 引导](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
+  - `--use-service-account-credentials` 设为 `true`
+  - 使用先前步骤中生成的证书的标志:
+     -`--root-ca-file` 设为 `ca.crt`
+     - 如果禁用了 External CA 模式,则 `--cluster-signing-cert-file` 设为 `ca.crt`,否则设为 `""`
+     - 如果禁用了 External CA 模式,则 `--cluster-signing-key-file` 设为 `ca.key`,否则设为 `""`
+     - `--service-account-private-key-file` 设为 `sa.key`
+
+<!-- #### Scheduler -->
+#### 调度器  {#scheduler}
+
+<!-- 
+The static Pod manifest for the scheduler is not affected by parameters provided by the users. 
+-->
+调度器的静态 Pod 清单不受用户提供的参数的影响。
+
+<!-- ### Generate static Pod manifest for local etcd -->
+### 为本地 etcd 生成静态 Pod 清单  {#generate-static-pod-manifest-for-local-etcd}
+
+<!--  
+If the user specified an external etcd this step will be skipped, otherwise kubeadm generates a static Pod manifest file for creating
+a local etcd instance running in a Pod with following attributes:
+-->
+如果用户指定了外部 etcd,则将跳过此步骤,否则 kubeadm 会生成静态 Pod 清单文件,以创建在 Pod 中运行的具有以下属性的本地 etcd 实例:
+
+<!--  
+- listen on `localhost:2379` and use `HostNetwork=true`
+- make a `hostPath` mount out from the `dataDir` to the host's filesystem
+- Any extra flags specified by the user
+-->
+- 在 `localhost:2379` 上监听并使用 `HostNetwork=true`
+- 将 `hostPath` 从 `dataDir` 挂载到主机的文件系统
+- 用户指定的任何其他标志
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. The etcd image will be pulled from `k8s.gcr.io` by default. See [using custom images](/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images) for customizing the image repository
+2. in case of kubeadm is executed in the `--dry-run` mode, the etcd static Pod manifest is written in a temporary folder
+3. Static Pod manifest generation for local etcd can be invoked individually with the [`kubeadm init phase etcd local`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-etcd) command
+-->
+1. etcd 镜像默认从 `k8s.gcr.io` 拉取。有关自定义镜像仓库,请参阅[使用自定义镜像](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images)
+2. 如果 kubeadm 以 `--dry-run` 模式执行,etcd 静态 Pod 清单将写入一个临时文件夹
+3. 可以使用 ['kubeadm init phase etcd local'](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-etcd) 命令
+   单独为本地 etcd 生成静态 Pod 清单
+
+<!-- ### Optional Dynamic Kubelet Configuration -->
+### 可选的动态 Kubelet 配置  {#optional-dynamic-kubelet-configuration}
+
+<!--  
+To use this functionality call `kubeadm alpha kubelet config enable-dynamic`. It writes the kubelet init configuration
+into `/var/lib/kubelet/config/init/kubelet` file.
+-->
+要使用这个功能,请调用 `kubeadm alpha kubelet config enable-dynamic`。
+它将 kubelet 的 init 配置写入 `/var/lib/kubelet/config/init/kubelet` 文件。
+
+<!--  
+The init configuration is used for starting the kubelet on this specific node, providing an alternative for the kubelet drop-in file;
+such configuration will be replaced by the kubelet base configuration as described in following steps.
+See [set Kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file) for additional info.
+-->
+init 配置用于在这个特定节点上启动 kubelet,从而为 kubelet 插件文件提供了一种替代方法。
+如以下步骤中所述,这种配置将由 kubelet 基本配置所替代。
+请参阅[通过配置文件设置 Kubelet 参数](/zh/docs/tasks/administer-cluster/kubelet-config-file)了解更多信息。
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. To make dynamic kubelet configuration work, flag `--dynamic-config-dir=/var/lib/kubelet/config/dynamic` should be specified
+   in `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`
+2. The kubelet configuration can be changed by passing a `KubeletConfiguration` object to `kubeadm init` or `kubeadm join` by using
+   a configuration file `--config some-file.yaml`. The `KubeletConfiguration` object can be separated from other objects such
+   as `InitConfiguration` using the `---` separator. For more details have a look at the `kubeadm config print-default` command.
+-->
+1. 要使动态 kubelet 配置生效,应在 `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`
+   中指定 `--dynamic-config-dir=/var/lib/kubelet/config/dynamic` 标志
+2. 通过使用配置文件 `--config some-file.yaml` 将 `KubeletConfiguration` 对象传递给 `kubeadm init` 或 `kubeadm join`
+   来更改 kubelet 配置。可以使用 `---` 分隔符将 `KubeletConfiguration` 对象与其他对象(例如 `InitConfiguration`)分开。
+   有关更多详细信息,请查看 `kubeadm config print-default` 命令。
+
+<!-- ### Wait for the control plane to come up -->
+### 等待控制平面启动  {#wait-for-the-control-plane-to-come-up}
+
+<!--  
+kubeadm waits (upto 4m0s) until `localhost:6443/healthz` (kube-apiserver liveness) returns `ok`. However in order to detect
+deadlock conditions, kubeadm fails fast if `localhost:10255/healthz` (kubelet liveness) or
+`localhost:10255/healthz/syncloop` (kubelet readiness) don't return `ok` within 40s and 60s respectively.
+-->
+kubeadm 等待(最多 4m0s),直到 `localhost:6443/healthz`(kube-apiserver 存活)返回 `ok`。 
+但是为了检测死锁条件,如果 `localhost:10255/healthz`(kubelet 存活)或
+`localhost:10255/healthz/syncloop`(kubelet 就绪)未能在 40s 和 60s 内未返回 `ok`,则 kubeadm 会快速失败。
+
+<!--  
+kubeadm relies on the kubelet to pull the control plane images and run them properly as static Pods.
+After the control plane is up, kubeadm completes the tasks described in following paragraphs.
+-->
+kubeadm 依靠 kubelet 拉取控制平面镜像并将其作为静态 Pod 正确运行。
+控制平面启动后,kubeadm 将完成以下段落中描述的任务。
+
+<!-- ### (optional) Write base kubelet configuration -->
+### (可选)编写基本 kubelet 配置  {#write-base-kubelet-configuration}
+
+{{< feature-state for_k8s_version="v1.9" state="alpha" >}}
+
+<!-- If kubeadm is invoked with `--feature-gates=DynamicKubeletConfig`: -->
+如果带 `--feature-gates=DynamicKubeletConfig` 参数调用 kubeadm:
+
+<!--  
+1. Write the kubelet base configuration into the `kubelet-base-config-v1.9` ConfigMap in the `kube-system` namespace
+2. Creates RBAC rules for granting read access to that ConfigMap to all bootstrap tokens and all kubelet instances
+   (that is `system:bootstrappers:kubeadm:default-node-token` and `system:nodes` groups)
+3. Enable the dynamic kubelet configuration feature for the initial control-plane node by pointing `Node.spec.configSource` to the newly-created ConfigMap
+-->
+1. 将 kubelet 基本配置写入 `kube-system` 命名空间的 `kubelet-base-config-v1.9` ConfigMap 中。
+2. 创建 RBAC 规则,以授予对所有引导令牌和所有 kubelet 实例对该 ConfigMap 的读取访问权限
+  (即 `system:bootstrappers:kubeadm:default-node-token` 组和 `system:nodes` 组)
+3. 通过将 `Node.spec.configSource` 指向新创建的 ConfigMap,为初始控制平面节点启用动态 kubelet 配置功能。
+
+<!-- ### Save the kubeadm ClusterConfiguration in a ConfigMap for later reference -->
+### 将 kubeadm ClusterConfiguration 保存在 ConfigMap 中以供以后参考  {#save-the-kubeadm-clusterConfiguration-in-a-configMap-for-later-reference}
+
+<!-- 
+kubeadm saves the configuration passed to `kubeadm init` in a ConfigMap named `kubeadm-config` under `kube-system` namespace. 
+-->
+kubeadm 将传递给 `kubeadm init` 的配置保存在 `kube-system` 命名空间下名为 `kubeadm-config` 的 ConfigMap 中。
+
+<!--  
+This will ensure that kubeadm actions executed in future (e.g `kubeadm upgrade`) will be able to determine the actual/current cluster
+state and make new decisions based on that data.
+-->
+这将确保将来执行的 kubeadm 操作(例如 `kubeadm upgrade`)将能够确定实际/当前集群状态,并根据该数据做出新的决策。
+
+<!-- Please note that: -->
+请注意:
+
+<!-- 
+1. Before saving the ClusterConfiguration, sensitive information like the token is stripped from the configuration
+2. Upload of master configuration can be invoked individually with the [`kubeadm init phase upload-config`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-upload-config) command
+-->
+1. 在保存 ClusterConfiguration 之前,从配置中删除令牌等敏感信息。
+2. 可以使用 [`kubeadm init phase upload-config`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-upload-config) 
+   命令单独上传主控节点配置。
+
+<!-- ### Mark the node as control-plane -->
+### 将节点标记为控制平面  {#mark-the-node-as-control-plane}
+
+<!-- As soon as the control plane is available, kubeadm executes following actions: -->
+一旦控制平面可用,kubeadm 将执行以下操作:
+
+<!-- 
+- Labels the node as control-plane with `node-role.kubernetes.io/master=""`
+- Taints the node with `node-role.kubernetes.io/master:NoSchedule`
+-->
+- 给节点打上 `node-role.kubernetes.io/master=""` 标签,标记为控制平面
+- 给节点打上 `node-role.kubernetes.io/master:NoSchedule` 污点
+
+<!-- Please note that: -->
+请注意:
+
+<!-- 
+1. Mark control-plane phase can be invoked individually with the [`kubeadm init phase mark-control-plane`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-mark-master) command
+-->
+1. 可以使用 [`kubeadm init phase mark-control-plane`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-mark-master) 
+  命令单独触发控制平面标记
+
+<!-- ### Configure TLS-Bootstrapping for node joining -->c
+### 为即将加入的节点加入 TLS 启动引导  {#configure-tls-bootstrapping-for-node-joining}
+
+<!--
+Kubeadm uses [Authenticating with Bootstrap Tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) for joining new nodes to an
+existing cluster; for more details see also [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md).
+-->
+
+Kubeadm 使用[引导令牌认证](/zh/docs/reference/access-authn-authz/bootstrap-tokens/)将新节点连接到现有集群;
+有关更多详细信息,请参见[设计方案](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md)。
+
+<!-- 
+`kubeadm init` ensures that everything is properly configured for this process, and this includes following steps as well as
+setting API server and controller flags as already described in previous paragraphs.
+-->
+`kubeadm init` 确保为该过程正确配置了所有内容,这包括以下步骤以及设置 API 服务器和控制器标志,如前几段所述。
+
+<!-- Please note that: -->
+请注意:
+
+<!-- 
+1. TLS bootstrapping for nodes can be configured with the [`kubeadm init phase bootstrap-token`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-bootstrap-token)
+   command, executing all the configuration steps described in following paragraphs; alternatively, each step can be invoked individually
+-->
+1. 可以使用 [`kubeadm init phase bootstrap-token`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-bootstrap-token) 
+   命令配置节点的 TLS 引导,执行以下段落中描述的所有配置步骤;或者每个步骤都ukey单独触发。
+
+<!-- #### Create a bootstrap token -->
+#### 创建引导令牌  {#create-a-bootstrap-token}
+
+<!--  
+`kubeadm init` create a first bootstrap token, either generated automatically or provided by the user with the `--token` flag; as documented
+in bootstrap token specification, token should be saved as secrets with name `bootstrap-token-<token-id>` under `kube-system` namespace.
+-->
+`kubeadm init` 创建第一个引导令牌,该令牌是自动生成的或由用户提供的 `--token` 标志的值;如引导令牌规范中记录的那样,
+令牌应保存在 `kube-system` 命名空间下名为 `bootstrap-token-<token-id>` 的 secret。
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. The default token created by `kubeadm init` will be used to validate temporary user during TLS bootstrap process; those users will
+   be member of  `system:bootstrappers:kubeadm:default-node-token` group
+2. The token has a limited validity, default 24 hours (the interval may be changed with the `—token-ttl` flag)
+3. Additional tokens can be created with the [`kubeadm token`](/docs/reference/setup-tools/kubeadm/kubeadm-token/) command, that provide as well other useful functions
+   for token management
+-->
+1. 由 `kubeadm init` 创建的默认令牌将用于在 TLS 引导过程中验证临时用户;
+   这些用户会成为 `system:bootstrappers:kubeadm:default-node-token` 组的成员
+2. 令牌的有效期有限,默认为 24 小时(间隔可以通过 `-token-ttl` 标志进行更改)
+3. 可以使用 [`kubeadm token`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-token/) 命令创建其他令牌,
+   这些令牌还提供其他有用的令牌管理功能
+
+<!-- #### Allow joining nodes to call CSR API -->
+#### 允许加入的节点调用 CSR API  {#allow-joining-nodes-to-call-csr-api}
+
+<!-- Kubeadm ensures that users in  `system:bootstrappers:kubeadm:default-node-token` group are able to access the certificate signing API. -->
+Kubeadm 确保 `system:bootstrappers:kubeadm:default-node-token` 组中的用户能够访问证书签名 API。
+
+<!-- 
+This is implemented by creating a ClusterRoleBinding named `kubeadm:kubelet-bootstrap` between the group above and the default
+RBAC role `system:node-bootstrapper`.
+-->
+这是通过在上述组与默认 RBAC 角色 `system:node-bootstrapper` 之间创建名为 `kubeadm:kubelet-bootstrap` 的 ClusterRoleBinding 来实现的。
+
+
+<!-- #### Setup auto approval for new bootstrap tokens -->
+#### 为新的引导令牌设置自动批准  {#setup-auto-approval-for-new-bootstrap-tokens}
+
+<!-- Kubeadm ensures that the Bootstrap Token will get its CSR request automatically approved by the csrapprover controller.-->
+Kubeadm 确保 csrapprover 控制器自动批准引导令牌的 CSR 请求。
+
+<!-- 
+This is implemented by creating ClusterRoleBinding named `kubeadm:node-autoapprove-bootstrap` between
+the  `system:bootstrappers:kubeadm:default-node-token` group and the default role `system:certificates.k8s.io:certificatesigningrequests:nodeclient`.
+-->
+这是通过在 `system:bootstrappers:kubeadm:default-node-token` 组和 `system:certificates.k8s.io:certificatesigningrequests:nodeclient` 默认角色之间
+创建名为 `kubeadm:node-autoapprove-bootstrap` 的 ClusterRoleBinding 来实现的。
+
+<!-- 
+The role `system:certificates.k8s.io:certificatesigningrequests:nodeclient` should be created as well, granting
+POST permission to `/apis/certificates.k8s.io/certificatesigningrequests/nodeclient`.
+-->
+还应创建 `system:certificates.k8s.io:certificatesigningrequests:nodeclient` 角色,
+对 `/apis/certificates.k8s.io/certificatesigningrequests/nodeclient` 授予的 POST 权限。
+
+<!-- #### Setup nodes certificate rotation with auto approval -->
+#### 通过自动批准设置节点证书轮换 {#setup-nodes-certificate-rotation-with-auto-approval} 
+
+<!-- 
+Kubeadm ensures that certificate rotation is enabled for nodes, and that new certificate request for nodes will get its CSR request
+automatically approved by the csrapprover controller. 
+-->
+Kubeadm 确保节点启用了证书轮换,csrapprover 控制器将自动批准节点的新证书的 CSR 请求。
+
+<!-- 
+This is implemented by creating ClusterRoleBinding named `kubeadm:node-autoapprove-certificate-rotation` between the  `system:nodes` group
+and the default role `system:certificates.k8s.io:certificatesigningrequests:selfnodeclient`.
+-->
+这是通过在 `system:nodes` 组和 `system:certificates.k8s.io:certificatesigningrequests:selfnodeclient` 默认角色之间创建名叫
+`kubeadm:node-autoapprove-certificate-rotation` 的 ClusterRoleBinding 实现的。
+
+<!-- #### Create the public cluster-info ConfigMap -->
+#### 创建公共 cluster-info ConfigMap
+
+<!-- This phase creates the `cluster-info` ConfigMap in the `kube-public` namespace. -->
+本步骤在 `kube-public` 命名空间中创建名为 `cluster-info` 的 ConfigMap。
+
+<!--  
+Additionally it creates a Role and a RoleBinding granting access to the ConfigMap for unauthenticated users
+(i.e. users in RBAC group `system:unauthenticated`).
+-->
+另外,它创建一个 Role 和一个 RoleBinding,为未经身份验证的用户授予对 ConfigMap 的访问权限
+(即 RBAC 组 `system:unauthenticated` 中的用户)。
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. The access to the `cluster-info` ConfigMap _is not_ rate-limited. This may or may not be a problem if you expose your master
+to the internet; worst-case scenario here is a DoS attack where an attacker uses all the in-flight requests the kube-apiserver
+can handle to serving the `cluster-info` ConfigMap.
+-->
+1. 对 `cluster-info` ConfigMap 的访问 _不受_ 速率限制。如果你把主控节点暴露到外网,这可能是一个问题,也可能不是;
+这里最坏的情况是 DoS 攻击,攻击者使用 kube-apiserver 能够处理的所有动态请求来为 `cluster-info` ConfigMap 提供服务。
+
+<!-- ### Install addons -->
+### 安装插件  {##install-addons}
+
+<!-- Kubeadm installs the internal DNS server and the kube-proxy addon components via the API server. -->
+Kubeadm 通过 API 服务器安装内部 DNS 服务器和 kube-proxy 插件。
+
+<!-- Please note that: -->
+请注意:
+
+<!-- 
+1. This phase can be invoked individually with the [`kubeadm init phase addon all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-addon) command. 
+-->
+
+1. 此步骤可以调用 ['kubeadm init phase addon all'](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-addon) 命令单独调用。
+
+<!-- #### proxy -->
+#### 代理  {#proxy}
+
+<!-- 
+A ServiceAccount for `kube-proxy` is created in the `kube-system` namespace; then kube-proxy is deployed as a DaemonSet: 
+-->
+在 `kube-system` 命名空间中创建一个用于 `kube-proxy` 的 ServiceAccount;然后将 kube-proxy 部署为 DaemonSet:
在 `kube-system` 命名空间中创建一个用于 `kube-proxy` 的 ServiceAccount;然后以 DaemonSet 的方式部署 kube-proxy :

howieyuen

comment created time in 5 days

Pull request review commentkubernetes/website

[zh] translate /docs/reference/setup-tools/kubeadm/implementation-detail

+---
+title: 实现细节
+content_type: concept
+weight: 100
+---
+<!--  
+---
+reviewers:
+- luxas
+- jbeda
+title: Implementation details
+content_type: concept
+weight: 100
+---
+-->
+<!-- overview -->
+
+{{< feature-state for_k8s_version="v1.10" state="stable" >}}
+
+<!--  
+`kubeadm init` and `kubeadm join` together provides a nice user experience for creating a best-practice but bare Kubernetes cluster from scratch.
+However, it might not be obvious _how_ kubeadm does that.
+-->
+`kubeadm init` 和 `kubeadm join` 结合在一起提供了良好的用户体验,因为从头开始创建实践最佳而配置最基本的 Kubernetes 集群。
+但是,kubeadm _如何_ 做到这一点可能并不明显。
+
+<!-- 
+This document provides additional details on what happen under the hood, 
+with the aim of sharing knowledge on Kubernetes cluster best practices. 
+-->
+本文档提供了更多幕后的详细信息,旨在分享有关 Kubernetes 集群最佳实践的知识。
+
+<!-- body -->
+<!-- ## Core design principles -->
+## 核心设计原则    {#core-design-principles}
+
+<!-- The cluster that `kubeadm init` and `kubeadm join` set up should be: -->
+`kubeadm init` 和 `kubeadm join` 设置的集群应为:
+
+<!-- 
+ - **Secure**: It should adopt latest best-practices like:
+   - enforcing RBAC
+   - using the Node Authorizer
+   - using secure communication between the control plane components
+   - using secure communication between the API server and the kubelets
+   - lock-down the kubelet API
+   - locking down access to the API for system components like the kube-proxy and CoreDNS
+   - locking down what a Bootstrap Token can access
+ - **Easy to use**: The user should not have to run anything more than a couple of commands:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **Extendable**:
+   - It should _not_ favor any particular network provider. Configuring the cluster network is out-of-scope
+   - It should provide the possibility to use a config file for customizing various parameters
+ -->
+ - **安全**:它应采用最新的最佳实践,例如:
+   - 应用 RBAC
+   - 使用节点鉴权机制(Node Authorizer)
+   - 在控制平面组件之间使用安全通信
+   - 在 API 服务器和 kubelet 之间使用安全通信
+   - 锁定 kubelet API
+   - 锁定对系统组件(例如 kube-proxy 和 CoreDNS)的 API 的访问
+   - 锁定启动引导令牌(Bootstrap Token)可以访问的内容
+ - **易用**:用户只需要运行几个命令即可:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **可扩展**:
+   - _不_ 应偏向任何特定的网络提供商。不涉及配置集群网络
+   - 应该可以使用配置文件来自定义各种参数
+
+<!-- ## Constants and well-known values and paths -->
+## 常量以及众所周知的值和路径  {#constants-and-well-known-values-and-paths}
+
+<!-- 
+In order to reduce complexity and to simplify development of higher level tools that build on top of kubeadm, it uses a
+limited set of constant values for well-known paths and file names.
+-->
+为了降低复杂性并简化基于 kubeadm 的高级工具的开发,对于众所周知的路径和文件名,它使用了一组有限的常量值。
+
+<!--  
+The Kubernetes directory `/etc/kubernetes` is a constant in the application, since it is clearly the given path
+in a majority of cases, and the most intuitive location; other constants paths and file names are:
+-->
+Kubernetes 目录 `/etc/kubernetes` 在应用程序中是一个常量,因为在大多数情况下它显然是给定的路径,并且是最直观的位置;
+其他路径常量和文件名有:
+
+<!--  
+- `/etc/kubernetes/manifests` as the path where kubelet should look for static Pod manifests. Names of static Pod manifests are:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` as the path where kubeconfig files with identities for control plane components are stored. Names of kubeconfig files are:
+    - `kubelet.conf` (`bootstrap-kubelet.conf` during TLS bootstrap)
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` for the cluster admin and kubeadm itself
+- Names of certificates and key files :
+    - `ca.crt`, `ca.key` for the Kubernetes certificate authority
+    - `apiserver.crt`, `apiserver.key` for the API server certificate
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` for the client certificate used by the API server to connect to the kubelets securely
+    - `sa.pub`, `sa.key` for the key used by the controller manager when signing ServiceAccount
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` for the front proxy certificate authority
+    - `front-proxy-client.crt`, `front-proxy-client.key` for the front proxy client
+-->
+- `/etc/kubernetes/manifests` 作为 kubelet 查找静态 Pod 清单的路径。静态 Pod 清单的名称为:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` 作为带有控制平面组件身份标识的 kubeconfig 文件的路径。kubeconfig 文件的名称为:
+    - `kubelet.conf` (在 TLS 引导时名称为 `bootstrap-kubelet.conf` )
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` 用于集群管理员和 kubeadm 本身
+- 证书和密钥文件的名称:
+    - `ca.crt`, `ca.key` 用于 Kubernetes 证书颁发机构
+    - `apiserver.crt`, `apiserver.key` 用于 API 服务器证书
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` 用于 API 服务器安全地连接到 kubelet 的客户端证书
+    - `sa.pub`, `sa.key` 用于签署 ServiceAccount 时 控制器管理器使用的密钥
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` 用于前端代理证书颁发机构
+    - `front-proxy-client.crt`, `front-proxy-client.key` 用于前端代理客户端
+
+<!-- ## kubeadm init workflow internal design -->
+## kubeadm init 工作流程内部设计  {#kubeadm-init-workflow-internal-design}
+
+<!--  
+The `kubeadm init` [internal workflow](/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow) consists of a sequence of atomic work tasks to perform,
+as described in `kubeadm init`.
+-->
+`kubeadm init` [内部工作流程](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow)包含一系列要执行的原子工作任务,
+如 `kubeadm init` 中所述。
+
+<!--  
+The [`kubeadm init phase`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) command allows users to invoke each task individually, and ultimately offers a reusable and composable API/toolbox that can be used by other Kubernetes bootstrap tools, by any IT automation tool or by an advanced user for creating custom clusters.
+-->
+[`kubeadm init phase`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) 命令允许用户分别调用每个任务,
+并最终提供可重用且可组合的 API 或工具箱,其他 Kubernetes 引导工具、任何 IT 自动化工具和高级用户都可以使用它用来创建的自定义集群。
+
+<!-- ### Preflight checks -->
+### 预检  {#preflight-checks}
+
+<!-- 
+Kubeadm executes a set of preflight checks before starting the init, with the aim to verify preconditions and avoid common cluster startup problems.
+The user can skip specific preflight checks or all of them with the `--ignore-preflight-errors` option. 
+-->
+Kubeadm 在启动 init 之前执行一组预检,目的是验证先决条件并避免常见的集群启动问题。
+用户可以使用 `--ignore-preflight-errors` 选项跳过特定的预检查或全部检查。
+
+<!--  
+- [warning] If the Kubernetes version to use (specified with the `--kubernetes-version` flag) is at least one minor version higher than the kubeadm CLI version.
+- Kubernetes system requirements:
+  - if running on linux:
+    - [error] if Kernel is older than the minimum required version
+    - [error] if required cgroups subsystem aren't in set up
+  - if using docker:
+    - [warning/error] if Docker service does not exist, if it is disabled, if it is not active.
+    - [error] if Docker endpoint does not exist or does not work
+    - [warning] if docker version is not in the list of validated docker versions
+  - If using other cri engine:
+    - [error] if crictl socket does not answer
+-->
+- [警告] 如果要使用的 Kubernetes 版本(由 `--kubernetes-version` 标志指定)比 kubeadm CLI 版本至少高一个小版本。
+- Kubernetes 系统要求:
+  - 如果在 linux上运行:
+    - [错误] 如果内核早于最低要求的版本
+    - [错误] 如果未设置所需的 cgroups 子系统
+  - 如果使用 docker:
+    - [警告/错误] 如果 Docker 服务不存在、被禁用或未激活。
+    - [错误] 如果 Docker 端点不存在或不起作用
+    - [警告] 如果 docker 版本不在经过验证的 docker 版本列表中
+  - 如果使用其他 cri 引擎:
+    - [错误] 如果 crictl 套接字未应答
+<!--  
+- [error] if user is not root
+- [error] if the machine hostname is not a valid DNS subdomain
+- [warning] if the host name cannot be reached via network lookup
+- [error] if kubelet version is lower that the minimum kubelet version supported by kubeadm (current minor -1)
+- [error] if kubelet version is at least one minor higher than the required controlplane version (unsupported version skew)
+- [warning] if kubelet service does not exist or if it is disabled
+- [warning] if firewalld is active
+- [error] if API server bindPort or ports 10250/10251/10252 are used
+- [Error] if `/etc/kubernetes/manifest` folder already exists and it is not empty
+- [Error] if `/proc/sys/net/bridge/bridge-nf-call-iptables` file does not exist/does not contain 1
+- [Error] if advertise address is ipv6 and `/proc/sys/net/bridge/bridge-nf-call-ip6tables` does not exist/does not contain 1.
+- [Error] if swap is on
+- [Error] if `conntrack`, `ip`, `iptables`,  `mount`, `nsenter` commands are not present in the command path
+- [warning] if `ebtables`, `ethtool`, `socat`, `tc`, `touch`, `crictl` commands are not present in the command path
+- [warning] if extra arg flags for API server, controller manager,  scheduler contains some invalid options
+- [warning] if connection to https://API.AdvertiseAddress:API.BindPort goes through proxy
+- [warning] if connection to services subnet goes through proxy (only first address checked)
+- [warning] if connection to Pods subnet goes through proxy (only first address checked)
+-->
+- [错误] 如果用户不是 root 用户
+- [错误] 如果机器主机名不是有效的 DNS 子域
+- [警告] 如果通过网络查找无法访问主机名
+- [错误] 如果 kubelet 版本低于 kubeadm 支持的最低 kubelet 版本(当前小版本 -1)
+- [错误] 如果 kubelet 版本比所需的控制平面板版本至少高一个小(不支持的版本偏斜)
+- [警告] 如果 kubelet 服务不存在或已被禁用
+- [警告] 如果 firewalld 处于活动状态
+- [错误] 如果使用 API ​​服务器绑定的端口或 10250/10251/10252 端口
+- [错误] 如果 `/etc/kubernetes/manifest` 文件夹已经存在并且不为空
+- [错误] 如果 `/proc/sys/net/bridge/bridge-nf-call-iptables` 文件不存在或不包含 1
+- [错误] 如果建议地址是 ipv6,并且 `/proc/sys/net/bridge/bridge-nf-call-ip6tables` 不存在或不包含 1
+- [错误] 如果启用了交换分区
+- [错误] 如果命令路径中没有 `conntrack`、`ip`、`iptables`、`mount`、`nsenter` 命令
+- [警告] 如果命令路径中没有 `ebtables`、`ethtool`、`socat`、`tc`、`touch`、`crictl` 命令
+- [警告] 如果 API 服务器、控制器管理器、调度程序的其他参数标志包含一些无效选项
+- [警告] 如果与 https://API.AdvertiseAddress:API.BindPort 的连接通过代理
+- [警告] 如果服务子网的连接通过代理(仅检查第一个地址)
+- [警告] 如果 Pod 子网的连接通过代理(仅检查第一个地址)
+<!-- 
+- If external etcd is provided:
+  - [Error] if etcd version is older than the minimum required version
+  - [Error] if etcd certificates or keys are specified, but not provided
+- If external etcd is NOT provided (and thus local etcd will be installed):
+  - [Error] if ports 2379 is used
+  - [Error] if Etcd.DataDir folder already exists and it is not empty
+- If authorization mode is ABAC:
+  - [Error] if abac_policy.json does not exist
+- If authorization mode is WebHook
+  - [Error] if webhook_authz.conf does not exist
+-->
+- 如果提供了外部 etcd:
+  - [错误] 如果 etcd 版本早于最低要求版本
+  - [错误] 如果指定了 etcd 证书或密钥,但无法找到
+- 如果未提供外部 etcd(因此将安装本地 etcd):
+  - [错误] 如果端口 2379 已被占用
+  - [错误] 如果 Etcd.DataDir 文件夹已经存在并且不为空
+- 如果授权模式为 ABAC:
+  - [错误] 如果 abac_policy.json 不存在
+- 如果授权方式为 WebHook
+  - [错误] 如果 webhook_authz.conf 不存在
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. Preflight checks can be invoked individually with the [`kubeadm init phase preflight`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-preflight) command
+-->
+1. 可以使用 [`kubeadm init phase preflight`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-preflight) 命令单独触发预检。
+
+
+<!-- ### Generate the necessary certificates -->
+### 生成必要的证书  {#generate-the-necessary-certificate}
+
+<!-- Kubeadm generates certificate and private key pairs for different purposes: -->
+Kubeadm 生成用于不同目的的证书和私钥对:
+
+ <!-- 
+ - A self signed certificate authority for the Kubernetes cluster saved into `ca.crt` file and `ca.key` private key file 
+ - A serving certificate for the API server, generated using `ca.crt` as the CA, and saved into `apiserver.crt` file with
+   its private key `apiserver.key`. This certificate should contain following alternative names:
+     - The Kubernetes service's internal clusterIP (the first address in the services CIDR, e.g. `10.96.0.1` if service subnet is `10.96.0.0/12`)
+     - Kubernetes DNS names, e.g.  `kubernetes.default.svc.cluster.local` if `--service-dns-domain` flag value is `cluster.local`, plus default DNS names `kubernetes.default.svc`, `kubernetes.default`, `kubernetes`
+     - The node-name
+     - The `--apiserver-advertise-address`
+     - Additional alternative names specified by the user
+ - A client certificate for the API server to connect to the kubelets securely, generated using `ca.crt` as the CA and saved into
+   `apiserver-kubelet-client.crt` file with its private key `apiserver-kubelet-client.key`.
+   This certificate should be in the `system:masters` organization
+ - A private key for signing ServiceAccount Tokens saved into `sa.key` file along with its public key `sa.pub`
+ - A certificate authority for the front proxy saved into `front-proxy-ca.crt` file with its key `front-proxy-ca.key`
+ - A client cert for the front proxy client, generated using `front-proxy-ca.crt` as the CA and saved into `front-proxy-client.crt` file
+   with its private key`front-proxy-client.key`
+-->
+ - Kubernetes 集群的自签名证书颁发机构已保存到 `ca.crt` 文件和 `ca.key` 私钥文件中
+ - 用于 API 服务器的服务证书,使用 `ca.crt` 作为 CA 生成,并将证书保存到 `apiserver.crt` 文件中,私钥保存到 `apiserver.key` 文件中
+   该证书应包含以下备用名称:
+    - Kubernetes 服务的内部 clusterIP(服务 CIDR 的第一个地址,例如:如果服务的子网是 `10.96.0.0/12`,则为 `10.96.0.1`)
+    - Kubernetes DNS 名称,例如:如果 `--service-dns-domain` 标志值是 `cluster.local`,则为 `kubernetes.default.svc.cluster.local`;
+      加上默认的 DNS 名称 `kubernetes.default.svc`、`kubernetes.default` 和 `kubernetes`,
+    - 节点名称
+    - `--apiserver-advertise-address`
+    - 用户指定的其他备用名称 
+  - API 服务器用于安全连接到 kubelet 的客户端证书,使用 `ca.crt` 作为 CA 生成,并保存到 `apiserver-kubelet-client.key`, 
+    私钥保存到 `apiserver-kubelet-client.crt` 文件中。该证书应该在 `system:masters` 组织中
+  - 用于签名 ServiceAccount 令牌的私钥保存到 `sa.key` 文件中,公钥保存到 `sa.pub` 文件中
+  - 用于前端代理的证书颁发机构保存到 `front-proxy-ca.crt` 文件中,私钥保存到 `front-proxy-ca.key` 文件中
+  - 前端代理客户端的客户端证书,使用 `front-proxy-ca.crt` 作为 CA 生成,并保存到 `front-proxy-client.crt` 文件中,
+    私钥保存到 `front-proxy-client.key` 文件中
+
+<!-- 
+Certificates are stored by default in `/etc/kubernetes/pki`, but this directory is configurable using the `--cert-dir` flag. 
+-->
+证书默认情况下存储在 `/etc/kubernetes/pki` 中,但是该目录可以使用 `--cert-dir` 标志进行配置。
+
+ <!-- Please note that: -->
+ 请注意:
+
+<!-- 
+1. If a given certificate and private key pair both exist, and its content is evaluated compliant with the above specs, the existing files will
+   be used and the generation phase for the given certificate skipped. This means the user can, for example, copy an existing CA to
+   `/etc/kubernetes/pki/ca.{crt,key}`, and then kubeadm will use those files for signing the rest of the certs.
+   See also [using custom certificates](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#custom-certificates)
+2. Only for the CA, it is possible to provide the `ca.crt` file but not the `ca.key` file, if all other certificates and kubeconfig files
+   already are in place kubeadm recognize this condition and activates the ExternalCA , which also implies the `csrsigner`controller in
+   controller-manager won't be started
+3. If kubeadm is running in [external CA mode](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode);
+   all the certificates must be provided by the user, because kubeadm cannot generate them by itself
+4. In case of kubeadm is executed in the `--dry-run` mode, certificates files are written in a temporary folder
+5. Certificate generation can be invoked individually with the [`kubeadm init phase certs all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) command
+-->
+1. 如果证书和私钥对都存在,并且其内容经过评估符合上述规范,将使用现有文件,并且跳过给定证书的生成阶段。
+  这意味着用户可以将现有的 CA 复制到 `/etc/kubernetes/pki/ca.{crt,key}`,kubeadm 将使用这些文件对其余证书进行签名。
+  请参阅[使用自定义证书](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#custom-certificates)
+2. 仅对 CA 来说,如果所有其他证书和 kubeconfig 文件都已就位,则可以只提供 `ca.crt` 文件,而不提供 `ca.key` 文件。
+   kubeadm 已经识别出这种情况并启用 ExternalCA,这也意味着了控制器管理器中的 `csrsigner` 控制器将不会启动
+3. 如果 kubeadm 在[外部 CA 模式](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode)下运行;
+   所有证书必须由用户提供,因为 kubeadm 无法自行生成它们
+4. 如果在 `--dry-run` 模式下执行 kubeadm,证书文件将写入一个临时文件夹中
+5. 可以使用 [`kubeadm init phase certs all`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) 
+   命令单独生成证书。
+
+<!-- ### Generate kubeconfig files for control plane components -->
+### 为控制平面组件生成 kubeconfig 文件  {#generate-kubeconfig-files-for-control-plane-components}
+
+<!-- 
+Kubeadm generates kubeconfig files with identities for control plane components:
+-->
+Kubeadm 生成具有用于控制平面组件身份标识的 kubeconfig 文件:
+
+<!--  
+- A kubeconfig file for the kubelet to use during TLS bootstrap - /etc/kubernetes/bootstrap-kubelet.conf. Inside this file there is a bootstrap-token or embedded client certificates for authenticating this node with the cluster.
+  This client cert should:
+    - Be in the `system:nodes` organization, as required by the [Node Authorization](/docs/reference/access-authn-authz/node/) module
+    - Have the Common Name (CN) `system:node:<hostname-lowercased>`
+- A kubeconfig file for controller-manager, `/etc/kubernetes/controller-manager.conf`; inside this file is embedded a client
+  certificate with controller-manager identity. This client cert should have the CN `system:kube-controller-manager`, as defined
+by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
+- A kubeconfig file for scheduler, `/etc/kubernetes/scheduler.conf`; inside this file is embedded a client certificate with scheduler identity.
+  This client cert should have the CN `system:kube-scheduler`, as defined by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
+-->
+- 供 kubelet 在 TLS 引导期间使用的 kubeconfig 文件——`/etc/kubernetes/bootstrap-kubelet.conf`。在此文件中,
+  有一个引导令牌或内嵌的客户端证书,向集群表明此节点身份。
+  此客户端证书应:
+    - 根据[节点鉴权](/zh/docs/reference/access-authn-authz/node/)模块的要求,属于 `system:nodes` 组织
+    - 具有通用名称(CN):`system:node:<hostname-lowercased>`
+- 控制器管理器的 kubeconfig 文件——`/etc/kubernetes/controller-manager.conf`;
+  在此文件中嵌入了一个具有控制器管理器身份标识的客户端证书。
+  此客户端证书应具有 CN:`system:kube-controller-manager`,
+  这是由 [RBAC 核心组件角色](/zh/docs/reference/access-authn-authz/rbac/#core-component-roles)默认定义的。
+- 调度器的 kubeconfig 文件——`/etc/kubernetes/scheduler.conf`;在此文件中嵌入了具有调度器身份标识的客户端证书。
+  此客户端证书应具有 CN:`system:kube-scheduler`,
+  这是由 [RBAC 核心组件角色](/zh/docs/reference/access-authn-authz/rbac/#core-component-roles)默认定义的。
+
+<!-- 
+Additionally, a kubeconfig file for kubeadm itself and the admin is generated and saved into the `/etc/kubernetes/admin.conf` file.
+The "admin" here is defined as the actual person(s) that is administering the cluster and wants to have full control (**root**) over the cluster.
+The embedded client certificate for admin should be in the `system:masters` organization, as defined by default
+[RBAC user facing role bindings](/docs/reference/access-authn-authz/rbac/#user-facing-roles). It should also include a
+CN. Kubeadm uses the `kubernetes-admin` CN.
+-->
+另外,一个用于 kubeadm 本身和 admin 的 kubeconfig 文件也被生成并保存到 `/etc/kubernetes/admin.conf` 文件中。
+此处的 admin 定义为正在管理集群并希望完全控制集群(**root**)的实际人员。
+内嵌的 admin 客户端证书应s  `system:masters` 组织的成员,
+这是由默认的 [RBAC 面向用户的角色绑定](/zh/docs/reference/access-authn-authz/rbac/#user-facing-roles)定义的。 
+它还应包括一个 CN。 Kubeadm 使用 `kubernetes-admin` CN。
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. `ca.crt` certificate is embedded in all the kubeconfig files.
+2. If a given kubeconfig file exists, and its content is evaluated compliant with the above specs, the existing file will be used and the generation phase for the given kubeconfig skipped
+3. If kubeadm is running in [ExternalCA mode](/docs/reference/setup-tools/kubeadm/kubeadm-init/#external-ca-mode), all the required kubeconfig must be provided by the user as well, because kubeadm cannot generate any of them by itself
+4. In case of kubeadm is executed in the `--dry-run` mode, kubeconfig files are written in a temporary folder
+5. Kubeconfig files generation can be invoked individually with the [`kubeadm init phase kubeconfig all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-kubeconfig) command
+-->
+1. `ca.crt` 证书内嵌在所有 kubeconfig 文件中。
+2. 如果给定的 kubeconfig 文件存在且其内容经过评估符合上述规范,则 kubeadm 将使用现有文件,并跳过给定 kubeconfig 的生成阶段
+3. 如果 kubeadm 以 [ExternalCA 模式](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#external-ca-mode)运行,
+   则所有必需的 kubeconfig 也必须由用户提供,因为 kubeadm 不能自己生成
+4. 如果在 `--dry-run` 模式下执行 kubeadm,则 kubeconfig 文件将写入一个临时文件夹中
+5. 可以使用 [`kubeadm init phase kubeconfig all`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-kubeconfig)
+   命令分别生成 Kubeconfig 文件。
+
+<!-- ### Generate static Pod manifests for control plane components -->
+### 为控制平面组件生成静态 Pod 清单  {#generate-static-pod-manifests-for-control-plane-components}
+
+<!--  
+Kubeadm writes static Pod manifest files for control plane components to `/etc/kubernetes/manifests`. The kubelet watches this directory for Pods to create on startup.
+-->
+Kubeadm 将用于控制平面组件的静态 Pod 清单文件写入 `/etc/kubernetes/manifests` 目录。
+Kubelet 启动后会监视这个目录以便创建 Pod。
+
+<!-- Static Pod manifest share a set of common properties: -->
+静态 Pod 清单有一些共同的属性:
+
+<!--  
+- All static Pods are deployed on `kube-system` namespace
+- All static Pods get `tier:control-plane` and `component:{component-name}` labels
+- All static Pods use the `system-node-critical` priority class
+- `hostNetwork: true` is set on all static Pods to allow control plane startup before a network is configured; as a consequence:
+  * The `address` that the controller-manager and the scheduler use to refer the API server is `127.0.0.1`
+  * If using a local etcd server, `etcd-servers` address will be set to `127.0.0.1:2379`
+- Leader election is enabled for both the controller-manager and the scheduler
+- Controller-manager and the scheduler will reference kubeconfig files with their respective, unique identities
+- All static Pods get any extra flags specified by the user as described in [passing custom arguments to control plane components](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/)
+- All static Pods get any extra Volumes specified by the user (Host path)
+-->
+- 所有静态 Pod 都部署在 `kube-system` 命名空间
+- 所有静态 Pod 都获得 `tier:ontrol-plane` 和 `component:{component-name}` 标签
+- 所有静态 Pod 均使用 `system-node-critical` 优先级
+- 所有静态 Pod 都设置了 `hostNetwork:true`,使得控制平面在配置网络之前启动;结果导致:
+   * 控制器管理器和调度器用来调用 API 服务器的地址为 127.0.0.1。
+   * 如果使用本地 etcd 服务器,则 `etcd-servers` 地址将设置为 `127.0.0.1:2379`
+- 同时为控制器管理器和调度器启用了领导者选举
+- 控制器管理器和调度器将引用 kubeconfig 文件及其各自的唯一标识
+- 如[将自定义参数传递给控制平面组件](/zh/docs/setup/production-environment/tools/kubeadm/control-plane-flags/)中所述,
+  所有静态 Pod 都会获得用户指定的额外标志
+- 所有静态 Pod 都会获得用户指定的额外卷(主机路径)
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. All images will be pulled from k8s.gcr.io by default. See [using custom images](/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images) for customizing the image repository
+2. In case of kubeadm is executed in the `--dry-run` mode, static Pods files are written in a temporary folder
+3. Static Pod manifest generation for master components can be invoked individually with the [`kubeadm init phase control-plane all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-control-plane) command
+-->
+1. 所有镜像默认从 k8s.gcr.io 拉取。 
+   关于自定义镜像仓库,请参阅[使用自定义镜像](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images)
+2. 如果在 `--dry-run` 模式下执行 kubeadm,则静态 Pod 文件写入一个临时文件夹中
+3. 可以使用 [`kubeadm init phase control-plane all`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-control-plane) 
+   命令分别生成主控组件的静态 Pod 清单。
+
+<!-- #### API server -->
+#### API 服务器  {#api-server}
+
+<!-- 
+The static Pod manifest for the API server is affected by following parameters provided by the users: 
+-->
+API 服务器的静态 Pod 清单会受到用户提供的以下参数的影响:
+
+<!--  
+ - The `apiserver-advertise-address` and `apiserver-bind-port` to bind to; if not provided, those value defaults to the IP address of
+   the default network interface on the machine and port 6443
+ - The `service-cluster-ip-range` to use for services
+ - If an external etcd server is specified, the `etcd-servers` address and related TLS settings (`etcd-cafile`, `etcd-certfile`, `etcd-keyfile`);
+   if an external etcd server is not be provided, a local etcd will be used (via host network)
+ - If a cloud provider is specified, the corresponding `--cloud-provider` is configured, together with the  `--cloud-config` path
+   if such file exists (this is experimental, alpha and will be removed in a future version)
+-->
+- 要绑定的 `apiserver-advertise-address` 和 `apiserver-bind-port`;如果未提供,则这些值默认为机器上默认网络接口的 IP 地址和 6443 端口。
+  - `service-cluster-ip-range` 给 service 使用
+  - 如果指定了外部 etcd 服务器,则应指定 `etcd-servers` 地址和相关的 TLS 设置(`etcd-cafile`,`etcd-certfile`,`etcd-keyfile`);
+    如果未提供外部 etcd 服务器,则将使用本地 etcd(通过主机网络)
+  - 如果指定了云提供商,则配置相应的 `--cloud-provider`,如果该路径存在,则配置 `--cloud-config`
+    (这是实验性的,是 Alpha 版本,将在以后的版本中删除)
+
+<!-- Other API server flags that are set unconditionally are: -->
+无条件设置的其他 API 服务器标志有:
+
+<!--  
+ - `--insecure-port=0` to avoid insecure connections to the api server
+ - `--enable-bootstrap-token-auth=true` to enable the `BootstrapTokenAuthenticator` authentication module.
+   See [TLS Bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for more details
+ - `--allow-privileged` to `true` (required e.g. by kube proxy)
+ - `--requestheader-client-ca-file` to `front-proxy-ca.crt`
+ - `--enable-admission-plugins` to:
+    - [`NamespaceLifecycle`](/docs/reference/access-authn-authz/admission-controllers/#namespacelifecycle) e.g. to avoid deletion of
+      system reserved namespaces
+    - [`LimitRanger`](/docs/reference/access-authn-authz/admission-controllers/#limitranger) and [`ResourceQuota`](/docs/reference/access-authn-authz/admission-controllers/#resourcequota) to enforce limits on namespaces
+    - [`ServiceAccount`](/docs/reference/access-authn-authz/admission-controllers/#serviceaccount) to enforce service account automation
+    - [`PersistentVolumeLabel`](/docs/reference/access-authn-authz/admission-controllers/#persistentvolumelabel) attaches region or zone labels to
+      PersistentVolumes as defined by the cloud provider (This admission controller is deprecated and will be removed in a future version.
+      It is not deployed by kubeadm by default with v1.9 onwards when not explicitly opting into using `gce` or `aws` as cloud providers)
+    - [`DefaultStorageClass`](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass) to enforce default storage class on `PersistentVolumeClaim` objects
+    - [`DefaultTolerationSeconds`](/docs/reference/access-authn-authz/admission-controllers/#defaulttolerationseconds)
+    - [`NodeRestriction`](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) to limit what a kubelet can modify
+      (e.g. only pods on this node)
+ - `--kubelet-preferred-address-types` to `InternalIP,ExternalIP,Hostname;` this makes `kubectl logs` and other API server-kubelet
+   communication work in environments where the hostnames of the nodes aren't resolvable
+ - Flags for using certificates generated in previous steps:
+    - `--client-ca-file` to `ca.crt`
+    - `--tls-cert-file` to `apiserver.crt`
+    - `--tls-private-key-file` to `apiserver.key`
+    - `--kubelet-client-certificate` to `apiserver-kubelet-client.crt`
+    - `--kubelet-client-key` to `apiserver-kubelet-client.key`
+    - `--service-account-key-file` to `sa.pub`
+    - `--requestheader-client-ca-file` to`front-proxy-ca.crt`
+    - `--proxy-client-cert-file` to `front-proxy-client.crt`
+    - `--proxy-client-key-file` to `front-proxy-client.key`
+ - Other flags for securing the front proxy ([API Aggregation](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/aggregated-api-servers.md)) communications:
+    - `--requestheader-username-headers=X-Remote-User`
+    - `--requestheader-group-headers=X-Remote-Group`
+    - `--requestheader-extra-headers-prefix=X-Remote-Extra-`
+    - `--requestheader-allowed-names=front-proxy-client`
+-->
+ - `--insecure-port=0` 禁止到 API 服务器不安全的连接
+ - `--enable-bootstrap-token-auth=true` 启用 `BootstrapTokenAuthenticator` 身份验证模块
+   更多细节请参见 [TLS 引导](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
+ - `--allow-privileged` 设为 `true`(必要,例如kube-proxy)
+ - `--requestheader-client-ca-file` 设为 `front-proxy-ca.crt`
+ - `--enable-admission-plugins` 设为:
+    - [`NamespaceLifecycle`](/zh/docs/reference/access-authn-authz/admission-controllers/#namespacelifecycle) 
+      例如,避免删除系统保留的命名空间
+    - [`LimitRanger`](/zh/docs/reference/access-authn-authz/admission-controllers/#limitranger) 和
+      [`ResourceQuota`](/zh/docs/reference/access-authn-authz/admission-controllers/#resourcequota) 对命名空间实施限制
+    - [`ServiceAccount`](/zh/docs/reference/access-authn-authz/admission-controllers/#serviceaccount) 实施服务账户自动化
+    - [`PersistentVolumeLabel`](/zh/docs/reference/access-authn-authz/admission-controllers/#persistentvolumelabel) 
+      将区域(Region)或区(Zone)标签附加到由云提供商定义的 PersistentVolumes(此准入控制器已被弃用并将在以后的版本中删除)。
+      如果未明确选择使用 `gce` 或 `aws` 作为云提供商,则默认情况下,v1.9 以后的版本 kubeadm 都不会部署。
+    - [`DefaultStorageClass`](/zh/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass) 
+      在 `PersistentVolumeClaim` 对象上强制使用默认存储类型
+    - [`DefaultTolerationSeconds`](/zh/docs/reference/access-authn-authz/admission-controllers/#defaulttolerationseconds)
+    - [`NodeRestriction`](/zh/docs/reference/access-authn-authz/admission-controllers/#noderestriction) 
+      限制 kubelet 可以修改的内容(例如,仅此节点上的 pod)
+ - `--kubelet-preferred-address-types` 设为 `InternalIP,ExternalIP,Hostname;` 
+   这使得在节点的主机名无法解析的环境中,`kubectl log` 和 API 服务器与 kubelet 的其他通信可以工作
+ - 使用在前面步骤中生成的证书的标志:
+    - `--client-ca-file` 设为 `ca.crt`
+    - `--tls-cert-file` 设为 `apiserver.crt`
+    - `--tls-private-key-file` 设为 `apiserver.key`
+    - `--kubelet-client-certificate` 设为 `apiserver-kubelet-client.crt`
+    - `--kubelet-client-key` 设为 `apiserver-kubelet-client.key`
+    - `--service-account-key-file` 设为 `sa.pub`
+    - `--requestheader-client-ca-file` 设为 `front-proxy-ca.crt`
+    - `--proxy-client-cert-file` 设为 `front-proxy-client.crt`
+    - `--proxy-client-key-file` 设为 `front-proxy-client.key`
+ - 其他用于保护前端代理([API 聚合](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/aggregated-api-servers.md))通信的标志:
+    - `--requestheader-username-headers=X-Remote-User`
+    - `--requestheader-group-headers=X-Remote-Group`
+    - `--requestheader-extra-headers-prefix=X-Remote-Extra-`
+    - `--requestheader-allowed-names=front-proxy-client`
+
+<!-- #### Controller manager -->
+#### 控制器管理器  {#controller-manager}
+
+<!-- 
+The static Pod manifest for the controller-manager is affected by following parameters provided by the users: 
+-->
+控制器管理器的静态 Pod 清单受用户提供的以下参数的影响:
+
+<!-- 
+- If kubeadm is invoked specifying a `--pod-network-cidr`, the subnet manager feature required for some CNI network plugins is enabled by
+   setting:
+   - `--allocate-node-cidrs=true`
+   - `--cluster-cidr` and `--node-cidr-mask-size` flags according to the given CIDR
+ - If a cloud provider is specified, the corresponding `--cloud-provider` is specified, together with the  `--cloud-config` path
+   if such configuration file exists (this is experimental, alpha and will be removed in a future version)
+-->
+- 如果调用 kubeadm 时指定了 `--pod-network-cidr` 参数,则可以通过以下方式启用某些 CNI 网络插件所需的子网管理器功能:
+    - 设置 `--allocate-node-cidrs=true`
+    - 根据给定 CIDR 设置 `--cluster-cidr` 和 `--node-cidr-mask-size` 标志
+  - 如果指定了云提供商,则指定相应的 `--cloud-provider`,如果存在这样的配置文件,则指定 `--cloud-config` 路径
+    (这是试验性的,是Alpha 版本,将在以后的版本中删除)
+
+<!-- Other flags that are set unconditionally are: -->
+其他无条件设置的标志包括:
+
+<!--  
+ - `--controllers` enabling all the default controllers plus `BootstrapSigner` and `TokenCleaner` controllers for TLS bootstrap.
+   See [TLS Bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for more details
+ - `--use-service-account-credentials` to `true`
+ - Flags for using certificates generated in previous steps:
+    - `--root-ca-file` to `ca.crt`
+    - `--cluster-signing-cert-file` to `ca.crt`, if External CA mode is disabled, otherwise to `""`
+    - `--cluster-signing-key-file` to `ca.key`, if External CA mode is disabled, otherwise to `""`
+    - `--service-account-private-key-file` to `sa.key`
+-->
+- `--controllers` 为 TLS 引导程序启用所有默认控制器以及 `BootstrapSigner` 和 `TokenCleaner` 控制器。
+    详细信息请参阅 [TLS 引导](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
+  - `--use-service-account-credentials` 设为 `true`
+  - 使用先前步骤中生成的证书的标志:
+     -`--root-ca-file` 设为 `ca.crt`
+     - 如果禁用了 External CA 模式,则 `--cluster-signing-cert-file` 设为 `ca.crt`,否则设为 `""`
+     - 如果禁用了 External CA 模式,则 `--cluster-signing-key-file` 设为 `ca.key`,否则设为 `""`
+     - `--service-account-private-key-file` 设为 `sa.key`
+
+<!-- #### Scheduler -->
+#### 调度器  {#scheduler}
+
+<!-- 
+The static Pod manifest for the scheduler is not affected by parameters provided by the users. 
+-->
+调度器的静态 Pod 清单不受用户提供的参数的影响。
+
+<!-- ### Generate static Pod manifest for local etcd -->
+### 为本地 etcd 生成静态 Pod 清单  {#generate-static-pod-manifest-for-local-etcd}
+
+<!--  
+If the user specified an external etcd this step will be skipped, otherwise kubeadm generates a static Pod manifest file for creating
+a local etcd instance running in a Pod with following attributes:
+-->
+如果用户指定了外部 etcd,则将跳过此步骤,否则 kubeadm 会生成静态 Pod 清单文件,以创建在 Pod 中运行的具有以下属性的本地 etcd 实例:
+
+<!--  
+- listen on `localhost:2379` and use `HostNetwork=true`
+- make a `hostPath` mount out from the `dataDir` to the host's filesystem
+- Any extra flags specified by the user
+-->
+- 在 `localhost:2379` 上监听并使用 `HostNetwork=true`
+- 将 `hostPath` 从 `dataDir` 挂载到主机的文件系统
+- 用户指定的任何其他标志
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. The etcd image will be pulled from `k8s.gcr.io` by default. See [using custom images](/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images) for customizing the image repository
+2. in case of kubeadm is executed in the `--dry-run` mode, the etcd static Pod manifest is written in a temporary folder
+3. Static Pod manifest generation for local etcd can be invoked individually with the [`kubeadm init phase etcd local`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-etcd) command
+-->
+1. etcd 镜像默认从 `k8s.gcr.io` 拉取。有关自定义镜像仓库,请参阅[使用自定义镜像](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images)
+2. 如果 kubeadm 以 `--dry-run` 模式执行,etcd 静态 Pod 清单将写入一个临时文件夹
+3. 可以使用 ['kubeadm init phase etcd local'](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-etcd) 命令
+   单独为本地 etcd 生成静态 Pod 清单
+
+<!-- ### Optional Dynamic Kubelet Configuration -->
+### 可选的动态 Kubelet 配置  {#optional-dynamic-kubelet-configuration}
+
+<!--  
+To use this functionality call `kubeadm alpha kubelet config enable-dynamic`. It writes the kubelet init configuration
+into `/var/lib/kubelet/config/init/kubelet` file.
+-->
+要使用这个功能,请调用 `kubeadm alpha kubelet config enable-dynamic`。
+它将 kubelet 的 init 配置写入 `/var/lib/kubelet/config/init/kubelet` 文件。
+
+<!--  
+The init configuration is used for starting the kubelet on this specific node, providing an alternative for the kubelet drop-in file;
+such configuration will be replaced by the kubelet base configuration as described in following steps.
+See [set Kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file) for additional info.
+-->
+init 配置用于在这个特定节点上启动 kubelet,从而为 kubelet 插件文件提供了一种替代方法。
+如以下步骤中所述,这种配置将由 kubelet 基本配置所替代。
+请参阅[通过配置文件设置 Kubelet 参数](/zh/docs/tasks/administer-cluster/kubelet-config-file)了解更多信息。
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. To make dynamic kubelet configuration work, flag `--dynamic-config-dir=/var/lib/kubelet/config/dynamic` should be specified
+   in `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`
+2. The kubelet configuration can be changed by passing a `KubeletConfiguration` object to `kubeadm init` or `kubeadm join` by using
+   a configuration file `--config some-file.yaml`. The `KubeletConfiguration` object can be separated from other objects such
+   as `InitConfiguration` using the `---` separator. For more details have a look at the `kubeadm config print-default` command.
+-->
+1. 要使动态 kubelet 配置生效,应在 `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`
+   中指定 `--dynamic-config-dir=/var/lib/kubelet/config/dynamic` 标志
+2. 通过使用配置文件 `--config some-file.yaml` 将 `KubeletConfiguration` 对象传递给 `kubeadm init` 或 `kubeadm join`
+   来更改 kubelet 配置。可以使用 `---` 分隔符将 `KubeletConfiguration` 对象与其他对象(例如 `InitConfiguration`)分开。
+   有关更多详细信息,请查看 `kubeadm config print-default` 命令。
+
+<!-- ### Wait for the control plane to come up -->
+### 等待控制平面启动  {#wait-for-the-control-plane-to-come-up}
+
+<!--  
+kubeadm waits (upto 4m0s) until `localhost:6443/healthz` (kube-apiserver liveness) returns `ok`. However in order to detect
+deadlock conditions, kubeadm fails fast if `localhost:10255/healthz` (kubelet liveness) or
+`localhost:10255/healthz/syncloop` (kubelet readiness) don't return `ok` within 40s and 60s respectively.
+-->
+kubeadm 等待(最多 4m0s),直到 `localhost:6443/healthz`(kube-apiserver 存活)返回 `ok`。 
+但是为了检测死锁条件,如果 `localhost:10255/healthz`(kubelet 存活)或
+`localhost:10255/healthz/syncloop`(kubelet 就绪)未能在 40s 和 60s 内未返回 `ok`,则 kubeadm 会快速失败。
+
+<!--  
+kubeadm relies on the kubelet to pull the control plane images and run them properly as static Pods.
+After the control plane is up, kubeadm completes the tasks described in following paragraphs.
+-->
+kubeadm 依靠 kubelet 拉取控制平面镜像并将其作为静态 Pod 正确运行。
+控制平面启动后,kubeadm 将完成以下段落中描述的任务。
+
+<!-- ### (optional) Write base kubelet configuration -->
+### (可选)编写基本 kubelet 配置  {#write-base-kubelet-configuration}
+
+{{< feature-state for_k8s_version="v1.9" state="alpha" >}}
+
+<!-- If kubeadm is invoked with `--feature-gates=DynamicKubeletConfig`: -->
+如果带 `--feature-gates=DynamicKubeletConfig` 参数调用 kubeadm:
+
+<!--  
+1. Write the kubelet base configuration into the `kubelet-base-config-v1.9` ConfigMap in the `kube-system` namespace
+2. Creates RBAC rules for granting read access to that ConfigMap to all bootstrap tokens and all kubelet instances
+   (that is `system:bootstrappers:kubeadm:default-node-token` and `system:nodes` groups)
+3. Enable the dynamic kubelet configuration feature for the initial control-plane node by pointing `Node.spec.configSource` to the newly-created ConfigMap
+-->
+1. 将 kubelet 基本配置写入 `kube-system` 命名空间的 `kubelet-base-config-v1.9` ConfigMap 中。
+2. 创建 RBAC 规则,以授予对所有引导令牌和所有 kubelet 实例对该 ConfigMap 的读取访问权限
+  (即 `system:bootstrappers:kubeadm:default-node-token` 组和 `system:nodes` 组)
+3. 通过将 `Node.spec.configSource` 指向新创建的 ConfigMap,为初始控制平面节点启用动态 kubelet 配置功能。
+
+<!-- ### Save the kubeadm ClusterConfiguration in a ConfigMap for later reference -->
+### 将 kubeadm ClusterConfiguration 保存在 ConfigMap 中以供以后参考  {#save-the-kubeadm-clusterConfiguration-in-a-configMap-for-later-reference}
+
+<!-- 
+kubeadm saves the configuration passed to `kubeadm init` in a ConfigMap named `kubeadm-config` under `kube-system` namespace. 
+-->
+kubeadm 将传递给 `kubeadm init` 的配置保存在 `kube-system` 命名空间下名为 `kubeadm-config` 的 ConfigMap 中。
+
+<!--  
+This will ensure that kubeadm actions executed in future (e.g `kubeadm upgrade`) will be able to determine the actual/current cluster
+state and make new decisions based on that data.
+-->
+这将确保将来执行的 kubeadm 操作(例如 `kubeadm upgrade`)将能够确定实际/当前集群状态,并根据该数据做出新的决策。
+
+<!-- Please note that: -->
+请注意:
+
+<!-- 
+1. Before saving the ClusterConfiguration, sensitive information like the token is stripped from the configuration
+2. Upload of master configuration can be invoked individually with the [`kubeadm init phase upload-config`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-upload-config) command
+-->
+1. 在保存 ClusterConfiguration 之前,从配置中删除令牌等敏感信息。
+2. 可以使用 [`kubeadm init phase upload-config`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-upload-config) 
+   命令单独上传主控节点配置。
+
+<!-- ### Mark the node as control-plane -->
+### 将节点标记为控制平面  {#mark-the-node-as-control-plane}
+
+<!-- As soon as the control plane is available, kubeadm executes following actions: -->
+一旦控制平面可用,kubeadm 将执行以下操作:
+
+<!-- 
+- Labels the node as control-plane with `node-role.kubernetes.io/master=""`
+- Taints the node with `node-role.kubernetes.io/master:NoSchedule`
+-->
+- 给节点打上 `node-role.kubernetes.io/master=""` 标签,标记为控制平面
+- 给节点打上 `node-role.kubernetes.io/master:NoSchedule` 污点
+
+<!-- Please note that: -->
+请注意:
+
+<!-- 
+1. Mark control-plane phase can be invoked individually with the [`kubeadm init phase mark-control-plane`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-mark-master) command
+-->
+1. 可以使用 [`kubeadm init phase mark-control-plane`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-mark-master) 
+  命令单独触发控制平面标记
+
+<!-- ### Configure TLS-Bootstrapping for node joining -->c
+### 为即将加入的节点加入 TLS 启动引导  {#configure-tls-bootstrapping-for-node-joining}
+
+<!--
+Kubeadm uses [Authenticating with Bootstrap Tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) for joining new nodes to an
+existing cluster; for more details see also [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md).
+-->
+
+Kubeadm 使用[引导令牌认证](/zh/docs/reference/access-authn-authz/bootstrap-tokens/)将新节点连接到现有集群;
+有关更多详细信息,请参见[设计方案](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md)。
+
+<!-- 
+`kubeadm init` ensures that everything is properly configured for this process, and this includes following steps as well as
+setting API server and controller flags as already described in previous paragraphs.
+-->
+`kubeadm init` 确保为该过程正确配置了所有内容,这包括以下步骤以及设置 API 服务器和控制器标志,如前几段所述。
+
+<!-- Please note that: -->
+请注意:
+
+<!-- 
+1. TLS bootstrapping for nodes can be configured with the [`kubeadm init phase bootstrap-token`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-bootstrap-token)
+   command, executing all the configuration steps described in following paragraphs; alternatively, each step can be invoked individually
+-->
+1. 可以使用 [`kubeadm init phase bootstrap-token`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-bootstrap-token) 
+   命令配置节点的 TLS 引导,执行以下段落中描述的所有配置步骤;或者每个步骤都ukey单独触发。
+
+<!-- #### Create a bootstrap token -->
+#### 创建引导令牌  {#create-a-bootstrap-token}
+
+<!--  
+`kubeadm init` create a first bootstrap token, either generated automatically or provided by the user with the `--token` flag; as documented
+in bootstrap token specification, token should be saved as secrets with name `bootstrap-token-<token-id>` under `kube-system` namespace.
+-->
+`kubeadm init` 创建第一个引导令牌,该令牌是自动生成的或由用户提供的 `--token` 标志的值;如引导令牌规范中记录的那样,
+令牌应保存在 `kube-system` 命名空间下名为 `bootstrap-token-<token-id>` 的 secret。
令牌应保存在 `kube-system` 命名空间下名为 `bootstrap-token-<token-id>` 的 secret 中。

howieyuen

comment created time in 5 days

Pull request review commentkubernetes/website

[zh] translate /docs/reference/setup-tools/kubeadm/implementation-detail

+---
+title: 实现细节
+content_type: concept
+weight: 100
+---
+<!--  
+---
+reviewers:
+- luxas
+- jbeda
+title: Implementation details
+content_type: concept
+weight: 100
+---
+-->
+<!-- overview -->
+
+{{< feature-state for_k8s_version="v1.10" state="stable" >}}
+
+<!--  
+`kubeadm init` and `kubeadm join` together provides a nice user experience for creating a best-practice but bare Kubernetes cluster from scratch.
+However, it might not be obvious _how_ kubeadm does that.
+-->
+`kubeadm init` 和 `kubeadm join` 结合在一起提供了良好的用户体验,因为从头开始创建实践最佳而配置最基本的 Kubernetes 集群。
+但是,kubeadm _如何_ 做到这一点可能并不明显。
+
+<!-- 
+This document provides additional details on what happen under the hood, 
+with the aim of sharing knowledge on Kubernetes cluster best practices. 
+-->
+本文档提供了更多幕后的详细信息,旨在分享有关 Kubernetes 集群最佳实践的知识。
+
+<!-- body -->
+<!-- ## Core design principles -->
+## 核心设计原则    {#core-design-principles}
+
+<!-- The cluster that `kubeadm init` and `kubeadm join` set up should be: -->
+`kubeadm init` 和 `kubeadm join` 设置的集群应为:
+
+<!-- 
+ - **Secure**: It should adopt latest best-practices like:
+   - enforcing RBAC
+   - using the Node Authorizer
+   - using secure communication between the control plane components
+   - using secure communication between the API server and the kubelets
+   - lock-down the kubelet API
+   - locking down access to the API for system components like the kube-proxy and CoreDNS
+   - locking down what a Bootstrap Token can access
+ - **Easy to use**: The user should not have to run anything more than a couple of commands:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **Extendable**:
+   - It should _not_ favor any particular network provider. Configuring the cluster network is out-of-scope
+   - It should provide the possibility to use a config file for customizing various parameters
+ -->
+ - **安全**:它应采用最新的最佳实践,例如:
+   - 应用 RBAC
+   - 使用节点鉴权机制(Node Authorizer)
+   - 在控制平面组件之间使用安全通信
+   - 在 API 服务器和 kubelet 之间使用安全通信
+   - 锁定 kubelet API
+   - 锁定对系统组件(例如 kube-proxy 和 CoreDNS)的 API 的访问
+   - 锁定启动引导令牌(Bootstrap Token)可以访问的内容
+ - **易用**:用户只需要运行几个命令即可:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **可扩展**:
+   - _不_ 应偏向任何特定的网络提供商。不涉及配置集群网络
+   - 应该可以使用配置文件来自定义各种参数
+
+<!-- ## Constants and well-known values and paths -->
+## 常量以及众所周知的值和路径  {#constants-and-well-known-values-and-paths}
+
+<!-- 
+In order to reduce complexity and to simplify development of higher level tools that build on top of kubeadm, it uses a
+limited set of constant values for well-known paths and file names.
+-->
+为了降低复杂性并简化基于 kubeadm 的高级工具的开发,对于众所周知的路径和文件名,它使用了一组有限的常量值。
+
+<!--  
+The Kubernetes directory `/etc/kubernetes` is a constant in the application, since it is clearly the given path
+in a majority of cases, and the most intuitive location; other constants paths and file names are:
+-->
+Kubernetes 目录 `/etc/kubernetes` 在应用程序中是一个常量,因为在大多数情况下它显然是给定的路径,并且是最直观的位置;
+其他路径常量和文件名有:
+
+<!--  
+- `/etc/kubernetes/manifests` as the path where kubelet should look for static Pod manifests. Names of static Pod manifests are:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` as the path where kubeconfig files with identities for control plane components are stored. Names of kubeconfig files are:
+    - `kubelet.conf` (`bootstrap-kubelet.conf` during TLS bootstrap)
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` for the cluster admin and kubeadm itself
+- Names of certificates and key files :
+    - `ca.crt`, `ca.key` for the Kubernetes certificate authority
+    - `apiserver.crt`, `apiserver.key` for the API server certificate
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` for the client certificate used by the API server to connect to the kubelets securely
+    - `sa.pub`, `sa.key` for the key used by the controller manager when signing ServiceAccount
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` for the front proxy certificate authority
+    - `front-proxy-client.crt`, `front-proxy-client.key` for the front proxy client
+-->
+- `/etc/kubernetes/manifests` 作为 kubelet 查找静态 Pod 清单的路径。静态 Pod 清单的名称为:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` 作为带有控制平面组件身份标识的 kubeconfig 文件的路径。kubeconfig 文件的名称为:
+    - `kubelet.conf` (在 TLS 引导时名称为 `bootstrap-kubelet.conf` )
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` 用于集群管理员和 kubeadm 本身
+- 证书和密钥文件的名称:
+    - `ca.crt`, `ca.key` 用于 Kubernetes 证书颁发机构
+    - `apiserver.crt`, `apiserver.key` 用于 API 服务器证书
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` 用于 API 服务器安全地连接到 kubelet 的客户端证书
+    - `sa.pub`, `sa.key` 用于签署 ServiceAccount 时 控制器管理器使用的密钥
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` 用于前端代理证书颁发机构
+    - `front-proxy-client.crt`, `front-proxy-client.key` 用于前端代理客户端
+
+<!-- ## kubeadm init workflow internal design -->
+## kubeadm init 工作流程内部设计  {#kubeadm-init-workflow-internal-design}
+
+<!--  
+The `kubeadm init` [internal workflow](/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow) consists of a sequence of atomic work tasks to perform,
+as described in `kubeadm init`.
+-->
+`kubeadm init` [内部工作流程](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow)包含一系列要执行的原子工作任务,
+如 `kubeadm init` 中所述。
+
+<!--  
+The [`kubeadm init phase`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) command allows users to invoke each task individually, and ultimately offers a reusable and composable API/toolbox that can be used by other Kubernetes bootstrap tools, by any IT automation tool or by an advanced user for creating custom clusters.
+-->
+[`kubeadm init phase`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) 命令允许用户分别调用每个任务,
+并最终提供可重用且可组合的 API 或工具箱,其他 Kubernetes 引导工具、任何 IT 自动化工具和高级用户都可以使用它用来创建的自定义集群。
+
+<!-- ### Preflight checks -->
+### 预检  {#preflight-checks}
+
+<!-- 
+Kubeadm executes a set of preflight checks before starting the init, with the aim to verify preconditions and avoid common cluster startup problems.
+The user can skip specific preflight checks or all of them with the `--ignore-preflight-errors` option. 
+-->
+Kubeadm 在启动 init 之前执行一组预检,目的是验证先决条件并避免常见的集群启动问题。
+用户可以使用 `--ignore-preflight-errors` 选项跳过特定的预检查或全部检查。
+
+<!--  
+- [warning] If the Kubernetes version to use (specified with the `--kubernetes-version` flag) is at least one minor version higher than the kubeadm CLI version.
+- Kubernetes system requirements:
+  - if running on linux:
+    - [error] if Kernel is older than the minimum required version
+    - [error] if required cgroups subsystem aren't in set up
+  - if using docker:
+    - [warning/error] if Docker service does not exist, if it is disabled, if it is not active.
+    - [error] if Docker endpoint does not exist or does not work
+    - [warning] if docker version is not in the list of validated docker versions
+  - If using other cri engine:
+    - [error] if crictl socket does not answer
+-->
+- [警告] 如果要使用的 Kubernetes 版本(由 `--kubernetes-version` 标志指定)比 kubeadm CLI 版本至少高一个小版本。
+- Kubernetes 系统要求:
+  - 如果在 linux上运行:
+    - [错误] 如果内核早于最低要求的版本
+    - [错误] 如果未设置所需的 cgroups 子系统
+  - 如果使用 docker:
+    - [警告/错误] 如果 Docker 服务不存在、被禁用或未激活。
+    - [错误] 如果 Docker 端点不存在或不起作用
+    - [警告] 如果 docker 版本不在经过验证的 docker 版本列表中
+  - 如果使用其他 cri 引擎:
+    - [错误] 如果 crictl 套接字未应答
+<!--  
+- [error] if user is not root
+- [error] if the machine hostname is not a valid DNS subdomain
+- [warning] if the host name cannot be reached via network lookup
+- [error] if kubelet version is lower that the minimum kubelet version supported by kubeadm (current minor -1)
+- [error] if kubelet version is at least one minor higher than the required controlplane version (unsupported version skew)
+- [warning] if kubelet service does not exist or if it is disabled
+- [warning] if firewalld is active
+- [error] if API server bindPort or ports 10250/10251/10252 are used
+- [Error] if `/etc/kubernetes/manifest` folder already exists and it is not empty
+- [Error] if `/proc/sys/net/bridge/bridge-nf-call-iptables` file does not exist/does not contain 1
+- [Error] if advertise address is ipv6 and `/proc/sys/net/bridge/bridge-nf-call-ip6tables` does not exist/does not contain 1.
+- [Error] if swap is on
+- [Error] if `conntrack`, `ip`, `iptables`,  `mount`, `nsenter` commands are not present in the command path
+- [warning] if `ebtables`, `ethtool`, `socat`, `tc`, `touch`, `crictl` commands are not present in the command path
+- [warning] if extra arg flags for API server, controller manager,  scheduler contains some invalid options
+- [warning] if connection to https://API.AdvertiseAddress:API.BindPort goes through proxy
+- [warning] if connection to services subnet goes through proxy (only first address checked)
+- [warning] if connection to Pods subnet goes through proxy (only first address checked)
+-->
+- [错误] 如果用户不是 root 用户
+- [错误] 如果机器主机名不是有效的 DNS 子域
+- [警告] 如果通过网络查找无法访问主机名
+- [错误] 如果 kubelet 版本低于 kubeadm 支持的最低 kubelet 版本(当前小版本 -1)
+- [错误] 如果 kubelet 版本比所需的控制平面板版本至少高一个小(不支持的版本偏斜)
+- [警告] 如果 kubelet 服务不存在或已被禁用
+- [警告] 如果 firewalld 处于活动状态
+- [错误] 如果使用 API ​​服务器绑定的端口或 10250/10251/10252 端口
+- [错误] 如果 `/etc/kubernetes/manifest` 文件夹已经存在并且不为空
+- [错误] 如果 `/proc/sys/net/bridge/bridge-nf-call-iptables` 文件不存在或不包含 1
+- [错误] 如果建议地址是 ipv6,并且 `/proc/sys/net/bridge/bridge-nf-call-ip6tables` 不存在或不包含 1
+- [错误] 如果启用了交换分区
+- [错误] 如果命令路径中没有 `conntrack`、`ip`、`iptables`、`mount`、`nsenter` 命令
+- [警告] 如果命令路径中没有 `ebtables`、`ethtool`、`socat`、`tc`、`touch`、`crictl` 命令
+- [警告] 如果 API 服务器、控制器管理器、调度程序的其他参数标志包含一些无效选项
+- [警告] 如果与 https://API.AdvertiseAddress:API.BindPort 的连接通过代理
+- [警告] 如果服务子网的连接通过代理(仅检查第一个地址)
+- [警告] 如果 Pod 子网的连接通过代理(仅检查第一个地址)
+<!-- 
+- If external etcd is provided:
+  - [Error] if etcd version is older than the minimum required version
+  - [Error] if etcd certificates or keys are specified, but not provided
+- If external etcd is NOT provided (and thus local etcd will be installed):
+  - [Error] if ports 2379 is used
+  - [Error] if Etcd.DataDir folder already exists and it is not empty
+- If authorization mode is ABAC:
+  - [Error] if abac_policy.json does not exist
+- If authorization mode is WebHook
+  - [Error] if webhook_authz.conf does not exist
+-->
+- 如果提供了外部 etcd:
+  - [错误] 如果 etcd 版本早于最低要求版本
+  - [错误] 如果指定了 etcd 证书或密钥,但无法找到
+- 如果未提供外部 etcd(因此将安装本地 etcd):
+  - [错误] 如果端口 2379 已被占用
+  - [错误] 如果 Etcd.DataDir 文件夹已经存在并且不为空
+- 如果授权模式为 ABAC:
+  - [错误] 如果 abac_policy.json 不存在
+- 如果授权方式为 WebHook
+  - [错误] 如果 webhook_authz.conf 不存在
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. Preflight checks can be invoked individually with the [`kubeadm init phase preflight`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-preflight) command
+-->
+1. 可以使用 [`kubeadm init phase preflight`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-preflight) 命令单独触发预检。
+
+
+<!-- ### Generate the necessary certificates -->
+### 生成必要的证书  {#generate-the-necessary-certificate}
+
+<!-- Kubeadm generates certificate and private key pairs for different purposes: -->
+Kubeadm 生成用于不同目的的证书和私钥对:
+
+ <!-- 
+ - A self signed certificate authority for the Kubernetes cluster saved into `ca.crt` file and `ca.key` private key file 
+ - A serving certificate for the API server, generated using `ca.crt` as the CA, and saved into `apiserver.crt` file with
+   its private key `apiserver.key`. This certificate should contain following alternative names:
+     - The Kubernetes service's internal clusterIP (the first address in the services CIDR, e.g. `10.96.0.1` if service subnet is `10.96.0.0/12`)
+     - Kubernetes DNS names, e.g.  `kubernetes.default.svc.cluster.local` if `--service-dns-domain` flag value is `cluster.local`, plus default DNS names `kubernetes.default.svc`, `kubernetes.default`, `kubernetes`
+     - The node-name
+     - The `--apiserver-advertise-address`
+     - Additional alternative names specified by the user
+ - A client certificate for the API server to connect to the kubelets securely, generated using `ca.crt` as the CA and saved into
+   `apiserver-kubelet-client.crt` file with its private key `apiserver-kubelet-client.key`.
+   This certificate should be in the `system:masters` organization
+ - A private key for signing ServiceAccount Tokens saved into `sa.key` file along with its public key `sa.pub`
+ - A certificate authority for the front proxy saved into `front-proxy-ca.crt` file with its key `front-proxy-ca.key`
+ - A client cert for the front proxy client, generated using `front-proxy-ca.crt` as the CA and saved into `front-proxy-client.crt` file
+   with its private key`front-proxy-client.key`
+-->
+ - Kubernetes 集群的自签名证书颁发机构已保存到 `ca.crt` 文件和 `ca.key` 私钥文件中
+ - 用于 API 服务器的服务证书,使用 `ca.crt` 作为 CA 生成,并将证书保存到 `apiserver.crt` 文件中,私钥保存到 `apiserver.key` 文件中
+   该证书应包含以下备用名称:
+    - Kubernetes 服务的内部 clusterIP(服务 CIDR 的第一个地址,例如:如果服务的子网是 `10.96.0.0/12`,则为 `10.96.0.1`)
+    - Kubernetes DNS 名称,例如:如果 `--service-dns-domain` 标志值是 `cluster.local`,则为 `kubernetes.default.svc.cluster.local`;
+      加上默认的 DNS 名称 `kubernetes.default.svc`、`kubernetes.default` 和 `kubernetes`,
+    - 节点名称
+    - `--apiserver-advertise-address`
+    - 用户指定的其他备用名称 
+  - API 服务器用于安全连接到 kubelet 的客户端证书,使用 `ca.crt` 作为 CA 生成,并保存到 `apiserver-kubelet-client.key`, 
+    私钥保存到 `apiserver-kubelet-client.crt` 文件中。该证书应该在 `system:masters` 组织中
+  - 用于签名 ServiceAccount 令牌的私钥保存到 `sa.key` 文件中,公钥保存到 `sa.pub` 文件中
+  - 用于前端代理的证书颁发机构保存到 `front-proxy-ca.crt` 文件中,私钥保存到 `front-proxy-ca.key` 文件中
+  - 前端代理客户端的客户端证书,使用 `front-proxy-ca.crt` 作为 CA 生成,并保存到 `front-proxy-client.crt` 文件中,
+    私钥保存到 `front-proxy-client.key` 文件中
+
+<!-- 
+Certificates are stored by default in `/etc/kubernetes/pki`, but this directory is configurable using the `--cert-dir` flag. 
+-->
+证书默认情况下存储在 `/etc/kubernetes/pki` 中,但是该目录可以使用 `--cert-dir` 标志进行配置。
+
+ <!-- Please note that: -->
+ 请注意:
+
+<!-- 
+1. If a given certificate and private key pair both exist, and its content is evaluated compliant with the above specs, the existing files will
+   be used and the generation phase for the given certificate skipped. This means the user can, for example, copy an existing CA to
+   `/etc/kubernetes/pki/ca.{crt,key}`, and then kubeadm will use those files for signing the rest of the certs.
+   See also [using custom certificates](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#custom-certificates)
+2. Only for the CA, it is possible to provide the `ca.crt` file but not the `ca.key` file, if all other certificates and kubeconfig files
+   already are in place kubeadm recognize this condition and activates the ExternalCA , which also implies the `csrsigner`controller in
+   controller-manager won't be started
+3. If kubeadm is running in [external CA mode](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode);
+   all the certificates must be provided by the user, because kubeadm cannot generate them by itself
+4. In case of kubeadm is executed in the `--dry-run` mode, certificates files are written in a temporary folder
+5. Certificate generation can be invoked individually with the [`kubeadm init phase certs all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) command
+-->
+1. 如果证书和私钥对都存在,并且其内容经过评估符合上述规范,将使用现有文件,并且跳过给定证书的生成阶段。
+  这意味着用户可以将现有的 CA 复制到 `/etc/kubernetes/pki/ca.{crt,key}`,kubeadm 将使用这些文件对其余证书进行签名。
+  请参阅[使用自定义证书](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#custom-certificates)
+2. 仅对 CA 来说,如果所有其他证书和 kubeconfig 文件都已就位,则可以只提供 `ca.crt` 文件,而不提供 `ca.key` 文件。
+   kubeadm 已经识别出这种情况并启用 ExternalCA,这也意味着了控制器管理器中的 `csrsigner` 控制器将不会启动
+3. 如果 kubeadm 在[外部 CA 模式](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode)下运行;
+   所有证书必须由用户提供,因为 kubeadm 无法自行生成它们
+4. 如果在 `--dry-run` 模式下执行 kubeadm,证书文件将写入一个临时文件夹中
+5. 可以使用 [`kubeadm init phase certs all`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) 
+   命令单独生成证书。
+
+<!-- ### Generate kubeconfig files for control plane components -->
+### 为控制平面组件生成 kubeconfig 文件  {#generate-kubeconfig-files-for-control-plane-components}
+
+<!-- 
+Kubeadm generates kubeconfig files with identities for control plane components:
+-->
+Kubeadm 生成具有用于控制平面组件身份标识的 kubeconfig 文件:
+
+<!--  
+- A kubeconfig file for the kubelet to use during TLS bootstrap - /etc/kubernetes/bootstrap-kubelet.conf. Inside this file there is a bootstrap-token or embedded client certificates for authenticating this node with the cluster.
+  This client cert should:
+    - Be in the `system:nodes` organization, as required by the [Node Authorization](/docs/reference/access-authn-authz/node/) module
+    - Have the Common Name (CN) `system:node:<hostname-lowercased>`
+- A kubeconfig file for controller-manager, `/etc/kubernetes/controller-manager.conf`; inside this file is embedded a client
+  certificate with controller-manager identity. This client cert should have the CN `system:kube-controller-manager`, as defined
+by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
+- A kubeconfig file for scheduler, `/etc/kubernetes/scheduler.conf`; inside this file is embedded a client certificate with scheduler identity.
+  This client cert should have the CN `system:kube-scheduler`, as defined by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
+-->
+- 供 kubelet 在 TLS 引导期间使用的 kubeconfig 文件——`/etc/kubernetes/bootstrap-kubelet.conf`。在此文件中,
+  有一个引导令牌或内嵌的客户端证书,向集群表明此节点身份。
+  此客户端证书应:
+    - 根据[节点鉴权](/zh/docs/reference/access-authn-authz/node/)模块的要求,属于 `system:nodes` 组织
+    - 具有通用名称(CN):`system:node:<hostname-lowercased>`
+- 控制器管理器的 kubeconfig 文件——`/etc/kubernetes/controller-manager.conf`;
+  在此文件中嵌入了一个具有控制器管理器身份标识的客户端证书。
+  此客户端证书应具有 CN:`system:kube-controller-manager`,
+  这是由 [RBAC 核心组件角色](/zh/docs/reference/access-authn-authz/rbac/#core-component-roles)默认定义的。
+- 调度器的 kubeconfig 文件——`/etc/kubernetes/scheduler.conf`;在此文件中嵌入了具有调度器身份标识的客户端证书。
+  此客户端证书应具有 CN:`system:kube-scheduler`,
+  这是由 [RBAC 核心组件角色](/zh/docs/reference/access-authn-authz/rbac/#core-component-roles)默认定义的。
+
+<!-- 
+Additionally, a kubeconfig file for kubeadm itself and the admin is generated and saved into the `/etc/kubernetes/admin.conf` file.
+The "admin" here is defined as the actual person(s) that is administering the cluster and wants to have full control (**root**) over the cluster.
+The embedded client certificate for admin should be in the `system:masters` organization, as defined by default
+[RBAC user facing role bindings](/docs/reference/access-authn-authz/rbac/#user-facing-roles). It should also include a
+CN. Kubeadm uses the `kubernetes-admin` CN.
+-->
+另外,一个用于 kubeadm 本身和 admin 的 kubeconfig 文件也被生成并保存到 `/etc/kubernetes/admin.conf` 文件中。
+此处的 admin 定义为正在管理集群并希望完全控制集群(**root**)的实际人员。
+内嵌的 admin 客户端证书应s  `system:masters` 组织的成员,
+这是由默认的 [RBAC 面向用户的角色绑定](/zh/docs/reference/access-authn-authz/rbac/#user-facing-roles)定义的。 
+它还应包括一个 CN。 Kubeadm 使用 `kubernetes-admin` CN。
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. `ca.crt` certificate is embedded in all the kubeconfig files.
+2. If a given kubeconfig file exists, and its content is evaluated compliant with the above specs, the existing file will be used and the generation phase for the given kubeconfig skipped
+3. If kubeadm is running in [ExternalCA mode](/docs/reference/setup-tools/kubeadm/kubeadm-init/#external-ca-mode), all the required kubeconfig must be provided by the user as well, because kubeadm cannot generate any of them by itself
+4. In case of kubeadm is executed in the `--dry-run` mode, kubeconfig files are written in a temporary folder
+5. Kubeconfig files generation can be invoked individually with the [`kubeadm init phase kubeconfig all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-kubeconfig) command
+-->
+1. `ca.crt` 证书内嵌在所有 kubeconfig 文件中。
+2. 如果给定的 kubeconfig 文件存在且其内容经过评估符合上述规范,则 kubeadm 将使用现有文件,并跳过给定 kubeconfig 的生成阶段
+3. 如果 kubeadm 以 [ExternalCA 模式](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#external-ca-mode)运行,
+   则所有必需的 kubeconfig 也必须由用户提供,因为 kubeadm 不能自己生成
+4. 如果在 `--dry-run` 模式下执行 kubeadm,则 kubeconfig 文件将写入一个临时文件夹中
+5. 可以使用 [`kubeadm init phase kubeconfig all`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-kubeconfig)
+   命令分别生成 Kubeconfig 文件。
+
+<!-- ### Generate static Pod manifests for control plane components -->
+### 为控制平面组件生成静态 Pod 清单  {#generate-static-pod-manifests-for-control-plane-components}
+
+<!--  
+Kubeadm writes static Pod manifest files for control plane components to `/etc/kubernetes/manifests`. The kubelet watches this directory for Pods to create on startup.
+-->
+Kubeadm 将用于控制平面组件的静态 Pod 清单文件写入 `/etc/kubernetes/manifests` 目录。
+Kubelet 启动后会监视这个目录以便创建 Pod。
+
+<!-- Static Pod manifest share a set of common properties: -->
+静态 Pod 清单有一些共同的属性:
+
+<!--  
+- All static Pods are deployed on `kube-system` namespace
+- All static Pods get `tier:control-plane` and `component:{component-name}` labels
+- All static Pods use the `system-node-critical` priority class
+- `hostNetwork: true` is set on all static Pods to allow control plane startup before a network is configured; as a consequence:
+  * The `address` that the controller-manager and the scheduler use to refer the API server is `127.0.0.1`
+  * If using a local etcd server, `etcd-servers` address will be set to `127.0.0.1:2379`
+- Leader election is enabled for both the controller-manager and the scheduler
+- Controller-manager and the scheduler will reference kubeconfig files with their respective, unique identities
+- All static Pods get any extra flags specified by the user as described in [passing custom arguments to control plane components](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/)
+- All static Pods get any extra Volumes specified by the user (Host path)
+-->
+- 所有静态 Pod 都部署在 `kube-system` 命名空间
+- 所有静态 Pod 都获得 `tier:ontrol-plane` 和 `component:{component-name}` 标签
+- 所有静态 Pod 均使用 `system-node-critical` 优先级
+- 所有静态 Pod 都设置了 `hostNetwork:true`,使得控制平面在配置网络之前启动;结果导致:
+   * 控制器管理器和调度器用来调用 API 服务器的地址为 127.0.0.1。
+   * 如果使用本地 etcd 服务器,则 `etcd-servers` 地址将设置为 `127.0.0.1:2379`
+- 同时为控制器管理器和调度器启用了领导者选举
+- 控制器管理器和调度器将引用 kubeconfig 文件及其各自的唯一标识
+- 如[将自定义参数传递给控制平面组件](/zh/docs/setup/production-environment/tools/kubeadm/control-plane-flags/)中所述,
+  所有静态 Pod 都会获得用户指定的额外标志
+- 所有静态 Pod 都会获得用户指定的额外卷(主机路径)
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. All images will be pulled from k8s.gcr.io by default. See [using custom images](/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images) for customizing the image repository
+2. In case of kubeadm is executed in the `--dry-run` mode, static Pods files are written in a temporary folder
+3. Static Pod manifest generation for master components can be invoked individually with the [`kubeadm init phase control-plane all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-control-plane) command
+-->
+1. 所有镜像默认从 k8s.gcr.io 拉取。 
+   关于自定义镜像仓库,请参阅[使用自定义镜像](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images)
+2. 如果在 `--dry-run` 模式下执行 kubeadm,则静态 Pod 文件写入一个临时文件夹中
+3. 可以使用 [`kubeadm init phase control-plane all`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-control-plane) 
+   命令分别生成主控组件的静态 Pod 清单。
+
+<!-- #### API server -->
+#### API 服务器  {#api-server}
+
+<!-- 
+The static Pod manifest for the API server is affected by following parameters provided by the users: 
+-->
+API 服务器的静态 Pod 清单会受到用户提供的以下参数的影响:
+
+<!--  
+ - The `apiserver-advertise-address` and `apiserver-bind-port` to bind to; if not provided, those value defaults to the IP address of
+   the default network interface on the machine and port 6443
+ - The `service-cluster-ip-range` to use for services
+ - If an external etcd server is specified, the `etcd-servers` address and related TLS settings (`etcd-cafile`, `etcd-certfile`, `etcd-keyfile`);
+   if an external etcd server is not be provided, a local etcd will be used (via host network)
+ - If a cloud provider is specified, the corresponding `--cloud-provider` is configured, together with the  `--cloud-config` path
+   if such file exists (this is experimental, alpha and will be removed in a future version)
+-->
+- 要绑定的 `apiserver-advertise-address` 和 `apiserver-bind-port`;如果未提供,则这些值默认为机器上默认网络接口的 IP 地址和 6443 端口。
+  - `service-cluster-ip-range` 给 service 使用
+  - 如果指定了外部 etcd 服务器,则应指定 `etcd-servers` 地址和相关的 TLS 设置(`etcd-cafile`,`etcd-certfile`,`etcd-keyfile`);
+    如果未提供外部 etcd 服务器,则将使用本地 etcd(通过主机网络)
+  - 如果指定了云提供商,则配置相应的 `--cloud-provider`,如果该路径存在,则配置 `--cloud-config`
+    (这是实验性的,是 Alpha 版本,将在以后的版本中删除)
+
+<!-- Other API server flags that are set unconditionally are: -->
+无条件设置的其他 API 服务器标志有:
+
+<!--  
+ - `--insecure-port=0` to avoid insecure connections to the api server
+ - `--enable-bootstrap-token-auth=true` to enable the `BootstrapTokenAuthenticator` authentication module.
+   See [TLS Bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for more details
+ - `--allow-privileged` to `true` (required e.g. by kube proxy)
+ - `--requestheader-client-ca-file` to `front-proxy-ca.crt`
+ - `--enable-admission-plugins` to:
+    - [`NamespaceLifecycle`](/docs/reference/access-authn-authz/admission-controllers/#namespacelifecycle) e.g. to avoid deletion of
+      system reserved namespaces
+    - [`LimitRanger`](/docs/reference/access-authn-authz/admission-controllers/#limitranger) and [`ResourceQuota`](/docs/reference/access-authn-authz/admission-controllers/#resourcequota) to enforce limits on namespaces
+    - [`ServiceAccount`](/docs/reference/access-authn-authz/admission-controllers/#serviceaccount) to enforce service account automation
+    - [`PersistentVolumeLabel`](/docs/reference/access-authn-authz/admission-controllers/#persistentvolumelabel) attaches region or zone labels to
+      PersistentVolumes as defined by the cloud provider (This admission controller is deprecated and will be removed in a future version.
+      It is not deployed by kubeadm by default with v1.9 onwards when not explicitly opting into using `gce` or `aws` as cloud providers)
+    - [`DefaultStorageClass`](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass) to enforce default storage class on `PersistentVolumeClaim` objects
+    - [`DefaultTolerationSeconds`](/docs/reference/access-authn-authz/admission-controllers/#defaulttolerationseconds)
+    - [`NodeRestriction`](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) to limit what a kubelet can modify
+      (e.g. only pods on this node)
+ - `--kubelet-preferred-address-types` to `InternalIP,ExternalIP,Hostname;` this makes `kubectl logs` and other API server-kubelet
+   communication work in environments where the hostnames of the nodes aren't resolvable
+ - Flags for using certificates generated in previous steps:
+    - `--client-ca-file` to `ca.crt`
+    - `--tls-cert-file` to `apiserver.crt`
+    - `--tls-private-key-file` to `apiserver.key`
+    - `--kubelet-client-certificate` to `apiserver-kubelet-client.crt`
+    - `--kubelet-client-key` to `apiserver-kubelet-client.key`
+    - `--service-account-key-file` to `sa.pub`
+    - `--requestheader-client-ca-file` to`front-proxy-ca.crt`
+    - `--proxy-client-cert-file` to `front-proxy-client.crt`
+    - `--proxy-client-key-file` to `front-proxy-client.key`
+ - Other flags for securing the front proxy ([API Aggregation](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/aggregated-api-servers.md)) communications:
+    - `--requestheader-username-headers=X-Remote-User`
+    - `--requestheader-group-headers=X-Remote-Group`
+    - `--requestheader-extra-headers-prefix=X-Remote-Extra-`
+    - `--requestheader-allowed-names=front-proxy-client`
+-->
+ - `--insecure-port=0` 禁止到 API 服务器不安全的连接
+ - `--enable-bootstrap-token-auth=true` 启用 `BootstrapTokenAuthenticator` 身份验证模块
+   更多细节请参见 [TLS 引导](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
+ - `--allow-privileged` 设为 `true`(必要,例如kube-proxy)
+ - `--requestheader-client-ca-file` 设为 `front-proxy-ca.crt`
+ - `--enable-admission-plugins` 设为:
+    - [`NamespaceLifecycle`](/zh/docs/reference/access-authn-authz/admission-controllers/#namespacelifecycle) 
+      例如,避免删除系统保留的命名空间
+    - [`LimitRanger`](/zh/docs/reference/access-authn-authz/admission-controllers/#limitranger) 和
+      [`ResourceQuota`](/zh/docs/reference/access-authn-authz/admission-controllers/#resourcequota) 对命名空间实施限制
+    - [`ServiceAccount`](/zh/docs/reference/access-authn-authz/admission-controllers/#serviceaccount) 实施服务账户自动化
+    - [`PersistentVolumeLabel`](/zh/docs/reference/access-authn-authz/admission-controllers/#persistentvolumelabel) 
+      将区域(Region)或区(Zone)标签附加到由云提供商定义的 PersistentVolumes(此准入控制器已被弃用并将在以后的版本中删除)。
+      如果未明确选择使用 `gce` 或 `aws` 作为云提供商,则默认情况下,v1.9 以后的版本 kubeadm 都不会部署。
+    - [`DefaultStorageClass`](/zh/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass) 
+      在 `PersistentVolumeClaim` 对象上强制使用默认存储类型
+    - [`DefaultTolerationSeconds`](/zh/docs/reference/access-authn-authz/admission-controllers/#defaulttolerationseconds)
+    - [`NodeRestriction`](/zh/docs/reference/access-authn-authz/admission-controllers/#noderestriction) 
+      限制 kubelet 可以修改的内容(例如,仅此节点上的 pod)
+ - `--kubelet-preferred-address-types` 设为 `InternalIP,ExternalIP,Hostname;` 
+   这使得在节点的主机名无法解析的环境中,`kubectl log` 和 API 服务器与 kubelet 的其他通信可以工作
+ - 使用在前面步骤中生成的证书的标志:
+    - `--client-ca-file` 设为 `ca.crt`
+    - `--tls-cert-file` 设为 `apiserver.crt`
+    - `--tls-private-key-file` 设为 `apiserver.key`
+    - `--kubelet-client-certificate` 设为 `apiserver-kubelet-client.crt`
+    - `--kubelet-client-key` 设为 `apiserver-kubelet-client.key`
+    - `--service-account-key-file` 设为 `sa.pub`
+    - `--requestheader-client-ca-file` 设为 `front-proxy-ca.crt`
+    - `--proxy-client-cert-file` 设为 `front-proxy-client.crt`
+    - `--proxy-client-key-file` 设为 `front-proxy-client.key`
+ - 其他用于保护前端代理([API 聚合](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/aggregated-api-servers.md))通信的标志:
+    - `--requestheader-username-headers=X-Remote-User`
+    - `--requestheader-group-headers=X-Remote-Group`
+    - `--requestheader-extra-headers-prefix=X-Remote-Extra-`
+    - `--requestheader-allowed-names=front-proxy-client`
+
+<!-- #### Controller manager -->
+#### 控制器管理器  {#controller-manager}
+
+<!-- 
+The static Pod manifest for the controller-manager is affected by following parameters provided by the users: 
+-->
+控制器管理器的静态 Pod 清单受用户提供的以下参数的影响:
+
+<!-- 
+- If kubeadm is invoked specifying a `--pod-network-cidr`, the subnet manager feature required for some CNI network plugins is enabled by
+   setting:
+   - `--allocate-node-cidrs=true`
+   - `--cluster-cidr` and `--node-cidr-mask-size` flags according to the given CIDR
+ - If a cloud provider is specified, the corresponding `--cloud-provider` is specified, together with the  `--cloud-config` path
+   if such configuration file exists (this is experimental, alpha and will be removed in a future version)
+-->
+- 如果调用 kubeadm 时指定了 `--pod-network-cidr` 参数,则可以通过以下方式启用某些 CNI 网络插件所需的子网管理器功能:
+    - 设置 `--allocate-node-cidrs=true`
+    - 根据给定 CIDR 设置 `--cluster-cidr` 和 `--node-cidr-mask-size` 标志
+  - 如果指定了云提供商,则指定相应的 `--cloud-provider`,如果存在这样的配置文件,则指定 `--cloud-config` 路径
+    (这是试验性的,是Alpha 版本,将在以后的版本中删除)
+
+<!-- Other flags that are set unconditionally are: -->
+其他无条件设置的标志包括:
+
+<!--  
+ - `--controllers` enabling all the default controllers plus `BootstrapSigner` and `TokenCleaner` controllers for TLS bootstrap.
+   See [TLS Bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for more details
+ - `--use-service-account-credentials` to `true`
+ - Flags for using certificates generated in previous steps:
+    - `--root-ca-file` to `ca.crt`
+    - `--cluster-signing-cert-file` to `ca.crt`, if External CA mode is disabled, otherwise to `""`
+    - `--cluster-signing-key-file` to `ca.key`, if External CA mode is disabled, otherwise to `""`
+    - `--service-account-private-key-file` to `sa.key`
+-->
+- `--controllers` 为 TLS 引导程序启用所有默认控制器以及 `BootstrapSigner` 和 `TokenCleaner` 控制器。
+    详细信息请参阅 [TLS 引导](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
+  - `--use-service-account-credentials` 设为 `true`
+  - 使用先前步骤中生成的证书的标志:
+     -`--root-ca-file` 设为 `ca.crt`
+     - 如果禁用了 External CA 模式,则 `--cluster-signing-cert-file` 设为 `ca.crt`,否则设为 `""`
+     - 如果禁用了 External CA 模式,则 `--cluster-signing-key-file` 设为 `ca.key`,否则设为 `""`
+     - `--service-account-private-key-file` 设为 `sa.key`
+
+<!-- #### Scheduler -->
+#### 调度器  {#scheduler}
+
+<!-- 
+The static Pod manifest for the scheduler is not affected by parameters provided by the users. 
+-->
+调度器的静态 Pod 清单不受用户提供的参数的影响。
+
+<!-- ### Generate static Pod manifest for local etcd -->
+### 为本地 etcd 生成静态 Pod 清单  {#generate-static-pod-manifest-for-local-etcd}
+
+<!--  
+If the user specified an external etcd this step will be skipped, otherwise kubeadm generates a static Pod manifest file for creating
+a local etcd instance running in a Pod with following attributes:
+-->
+如果用户指定了外部 etcd,则将跳过此步骤,否则 kubeadm 会生成静态 Pod 清单文件,以创建在 Pod 中运行的具有以下属性的本地 etcd 实例:
+
+<!--  
+- listen on `localhost:2379` and use `HostNetwork=true`
+- make a `hostPath` mount out from the `dataDir` to the host's filesystem
+- Any extra flags specified by the user
+-->
+- 在 `localhost:2379` 上监听并使用 `HostNetwork=true`
+- 将 `hostPath` 从 `dataDir` 挂载到主机的文件系统
+- 用户指定的任何其他标志
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. The etcd image will be pulled from `k8s.gcr.io` by default. See [using custom images](/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images) for customizing the image repository
+2. in case of kubeadm is executed in the `--dry-run` mode, the etcd static Pod manifest is written in a temporary folder
+3. Static Pod manifest generation for local etcd can be invoked individually with the [`kubeadm init phase etcd local`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-etcd) command
+-->
+1. etcd 镜像默认从 `k8s.gcr.io` 拉取。有关自定义镜像仓库,请参阅[使用自定义镜像](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images)
+2. 如果 kubeadm 以 `--dry-run` 模式执行,etcd 静态 Pod 清单将写入一个临时文件夹
+3. 可以使用 ['kubeadm init phase etcd local'](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-etcd) 命令
+   单独为本地 etcd 生成静态 Pod 清单
+
+<!-- ### Optional Dynamic Kubelet Configuration -->
+### 可选的动态 Kubelet 配置  {#optional-dynamic-kubelet-configuration}
+
+<!--  
+To use this functionality call `kubeadm alpha kubelet config enable-dynamic`. It writes the kubelet init configuration
+into `/var/lib/kubelet/config/init/kubelet` file.
+-->
+要使用这个功能,请调用 `kubeadm alpha kubelet config enable-dynamic`。
+它将 kubelet 的 init 配置写入 `/var/lib/kubelet/config/init/kubelet` 文件。
+
+<!--  
+The init configuration is used for starting the kubelet on this specific node, providing an alternative for the kubelet drop-in file;
+such configuration will be replaced by the kubelet base configuration as described in following steps.
+See [set Kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file) for additional info.
+-->
+init 配置用于在这个特定节点上启动 kubelet,从而为 kubelet 插件文件提供了一种替代方法。
+如以下步骤中所述,这种配置将由 kubelet 基本配置所替代。
+请参阅[通过配置文件设置 Kubelet 参数](/zh/docs/tasks/administer-cluster/kubelet-config-file)了解更多信息。
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. To make dynamic kubelet configuration work, flag `--dynamic-config-dir=/var/lib/kubelet/config/dynamic` should be specified
+   in `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`
+2. The kubelet configuration can be changed by passing a `KubeletConfiguration` object to `kubeadm init` or `kubeadm join` by using
+   a configuration file `--config some-file.yaml`. The `KubeletConfiguration` object can be separated from other objects such
+   as `InitConfiguration` using the `---` separator. For more details have a look at the `kubeadm config print-default` command.
+-->
+1. 要使动态 kubelet 配置生效,应在 `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`
+   中指定 `--dynamic-config-dir=/var/lib/kubelet/config/dynamic` 标志
+2. 通过使用配置文件 `--config some-file.yaml` 将 `KubeletConfiguration` 对象传递给 `kubeadm init` 或 `kubeadm join`
+   来更改 kubelet 配置。可以使用 `---` 分隔符将 `KubeletConfiguration` 对象与其他对象(例如 `InitConfiguration`)分开。
+   有关更多详细信息,请查看 `kubeadm config print-default` 命令。
+
+<!-- ### Wait for the control plane to come up -->
+### 等待控制平面启动  {#wait-for-the-control-plane-to-come-up}
+
+<!--  
+kubeadm waits (upto 4m0s) until `localhost:6443/healthz` (kube-apiserver liveness) returns `ok`. However in order to detect
+deadlock conditions, kubeadm fails fast if `localhost:10255/healthz` (kubelet liveness) or
+`localhost:10255/healthz/syncloop` (kubelet readiness) don't return `ok` within 40s and 60s respectively.
+-->
+kubeadm 等待(最多 4m0s),直到 `localhost:6443/healthz`(kube-apiserver 存活)返回 `ok`。 
+但是为了检测死锁条件,如果 `localhost:10255/healthz`(kubelet 存活)或
+`localhost:10255/healthz/syncloop`(kubelet 就绪)未能在 40s 和 60s 内未返回 `ok`,则 kubeadm 会快速失败。
+
+<!--  
+kubeadm relies on the kubelet to pull the control plane images and run them properly as static Pods.
+After the control plane is up, kubeadm completes the tasks described in following paragraphs.
+-->
+kubeadm 依靠 kubelet 拉取控制平面镜像并将其作为静态 Pod 正确运行。
+控制平面启动后,kubeadm 将完成以下段落中描述的任务。
+
+<!-- ### (optional) Write base kubelet configuration -->
+### (可选)编写基本 kubelet 配置  {#write-base-kubelet-configuration}
+
+{{< feature-state for_k8s_version="v1.9" state="alpha" >}}
+
+<!-- If kubeadm is invoked with `--feature-gates=DynamicKubeletConfig`: -->
+如果带 `--feature-gates=DynamicKubeletConfig` 参数调用 kubeadm:
+
+<!--  
+1. Write the kubelet base configuration into the `kubelet-base-config-v1.9` ConfigMap in the `kube-system` namespace
+2. Creates RBAC rules for granting read access to that ConfigMap to all bootstrap tokens and all kubelet instances
+   (that is `system:bootstrappers:kubeadm:default-node-token` and `system:nodes` groups)
+3. Enable the dynamic kubelet configuration feature for the initial control-plane node by pointing `Node.spec.configSource` to the newly-created ConfigMap
+-->
+1. 将 kubelet 基本配置写入 `kube-system` 命名空间的 `kubelet-base-config-v1.9` ConfigMap 中。
+2. 创建 RBAC 规则,以授予对所有引导令牌和所有 kubelet 实例对该 ConfigMap 的读取访问权限
+  (即 `system:bootstrappers:kubeadm:default-node-token` 组和 `system:nodes` 组)
+3. 通过将 `Node.spec.configSource` 指向新创建的 ConfigMap,为初始控制平面节点启用动态 kubelet 配置功能。
+
+<!-- ### Save the kubeadm ClusterConfiguration in a ConfigMap for later reference -->
+### 将 kubeadm ClusterConfiguration 保存在 ConfigMap 中以供以后参考  {#save-the-kubeadm-clusterConfiguration-in-a-configMap-for-later-reference}
+
+<!-- 
+kubeadm saves the configuration passed to `kubeadm init` in a ConfigMap named `kubeadm-config` under `kube-system` namespace. 
+-->
+kubeadm 将传递给 `kubeadm init` 的配置保存在 `kube-system` 命名空间下名为 `kubeadm-config` 的 ConfigMap 中。
+
+<!--  
+This will ensure that kubeadm actions executed in future (e.g `kubeadm upgrade`) will be able to determine the actual/current cluster
+state and make new decisions based on that data.
+-->
+这将确保将来执行的 kubeadm 操作(例如 `kubeadm upgrade`)将能够确定实际/当前集群状态,并根据该数据做出新的决策。
+
+<!-- Please note that: -->
+请注意:
+
+<!-- 
+1. Before saving the ClusterConfiguration, sensitive information like the token is stripped from the configuration
+2. Upload of master configuration can be invoked individually with the [`kubeadm init phase upload-config`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-upload-config) command
+-->
+1. 在保存 ClusterConfiguration 之前,从配置中删除令牌等敏感信息。
+2. 可以使用 [`kubeadm init phase upload-config`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-upload-config) 
+   命令单独上传主控节点配置。
+
+<!-- ### Mark the node as control-plane -->
+### 将节点标记为控制平面  {#mark-the-node-as-control-plane}
+
+<!-- As soon as the control plane is available, kubeadm executes following actions: -->
+一旦控制平面可用,kubeadm 将执行以下操作:
+
+<!-- 
+- Labels the node as control-plane with `node-role.kubernetes.io/master=""`
+- Taints the node with `node-role.kubernetes.io/master:NoSchedule`
+-->
+- 给节点打上 `node-role.kubernetes.io/master=""` 标签,标记为控制平面
+- 给节点打上 `node-role.kubernetes.io/master:NoSchedule` 污点
+
+<!-- Please note that: -->
+请注意:
+
+<!-- 
+1. Mark control-plane phase can be invoked individually with the [`kubeadm init phase mark-control-plane`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-mark-master) command
+-->
+1. 可以使用 [`kubeadm init phase mark-control-plane`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-mark-master) 
+  命令单独触发控制平面标记
+
+<!-- ### Configure TLS-Bootstrapping for node joining -->c
+### 为即将加入的节点加入 TLS 启动引导  {#configure-tls-bootstrapping-for-node-joining}
+
+<!--
+Kubeadm uses [Authenticating with Bootstrap Tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) for joining new nodes to an
+existing cluster; for more details see also [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md).
+-->
+
+Kubeadm 使用[引导令牌认证](/zh/docs/reference/access-authn-authz/bootstrap-tokens/)将新节点连接到现有集群;
+有关更多详细信息,请参见[设计方案](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md)。
+
+<!-- 
+`kubeadm init` ensures that everything is properly configured for this process, and this includes following steps as well as
+setting API server and controller flags as already described in previous paragraphs.
+-->
+`kubeadm init` 确保为该过程正确配置了所有内容,这包括以下步骤以及设置 API 服务器和控制器标志,如前几段所述。
+
+<!-- Please note that: -->
+请注意:
+
+<!-- 
+1. TLS bootstrapping for nodes can be configured with the [`kubeadm init phase bootstrap-token`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-bootstrap-token)
+   command, executing all the configuration steps described in following paragraphs; alternatively, each step can be invoked individually
+-->
+1. 可以使用 [`kubeadm init phase bootstrap-token`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-bootstrap-token) 
+   命令配置节点的 TLS 引导,执行以下段落中描述的所有配置步骤;或者每个步骤都ukey单独触发。
   命令配置节点的 TLS 引导,执行以下段落中描述的所有配置步骤;或者每个步骤都单独触发。

howieyuen

comment created time in 5 days

Pull request review commentkubernetes/website

[zh] translate /docs/reference/setup-tools/kubeadm/implementation-detail

+---
+title: 实现细节
+content_type: concept
+weight: 100
+---
+<!--  
+---
+reviewers:
+- luxas
+- jbeda
+title: Implementation details
+content_type: concept
+weight: 100
+---
+-->
+<!-- overview -->
+
+{{< feature-state for_k8s_version="v1.10" state="stable" >}}
+
+<!--  
+`kubeadm init` and `kubeadm join` together provides a nice user experience for creating a best-practice but bare Kubernetes cluster from scratch.
+However, it might not be obvious _how_ kubeadm does that.
+-->
+`kubeadm init` 和 `kubeadm join` 结合在一起提供了良好的用户体验,因为从头开始创建实践最佳而配置最基本的 Kubernetes 集群。
+但是,kubeadm _如何_ 做到这一点可能并不明显。
+
+<!-- 
+This document provides additional details on what happen under the hood, 
+with the aim of sharing knowledge on Kubernetes cluster best practices. 
+-->
+本文档提供了更多幕后的详细信息,旨在分享有关 Kubernetes 集群最佳实践的知识。
+
+<!-- body -->
+<!-- ## Core design principles -->
+## 核心设计原则    {#core-design-principles}
+
+<!-- The cluster that `kubeadm init` and `kubeadm join` set up should be: -->
+`kubeadm init` 和 `kubeadm join` 设置的集群应为:
+
+<!-- 
+ - **Secure**: It should adopt latest best-practices like:
+   - enforcing RBAC
+   - using the Node Authorizer
+   - using secure communication between the control plane components
+   - using secure communication between the API server and the kubelets
+   - lock-down the kubelet API
+   - locking down access to the API for system components like the kube-proxy and CoreDNS
+   - locking down what a Bootstrap Token can access
+ - **Easy to use**: The user should not have to run anything more than a couple of commands:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **Extendable**:
+   - It should _not_ favor any particular network provider. Configuring the cluster network is out-of-scope
+   - It should provide the possibility to use a config file for customizing various parameters
+ -->
+ - **安全**:它应采用最新的最佳实践,例如:
+   - 应用 RBAC
+   - 使用节点鉴权机制(Node Authorizer)
+   - 在控制平面组件之间使用安全通信
+   - 在 API 服务器和 kubelet 之间使用安全通信
+   - 锁定 kubelet API
+   - 锁定对系统组件(例如 kube-proxy 和 CoreDNS)的 API 的访问
+   - 锁定启动引导令牌(Bootstrap Token)可以访问的内容
+ - **易用**:用户只需要运行几个命令即可:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **可扩展**:
+   - _不_ 应偏向任何特定的网络提供商。不涉及配置集群网络
+   - 应该可以使用配置文件来自定义各种参数
+
+<!-- ## Constants and well-known values and paths -->
+## 常量以及众所周知的值和路径  {#constants-and-well-known-values-and-paths}
+
+<!-- 
+In order to reduce complexity and to simplify development of higher level tools that build on top of kubeadm, it uses a
+limited set of constant values for well-known paths and file names.
+-->
+为了降低复杂性并简化基于 kubeadm 的高级工具的开发,对于众所周知的路径和文件名,它使用了一组有限的常量值。
+
+<!--  
+The Kubernetes directory `/etc/kubernetes` is a constant in the application, since it is clearly the given path
+in a majority of cases, and the most intuitive location; other constants paths and file names are:
+-->
+Kubernetes 目录 `/etc/kubernetes` 在应用程序中是一个常量,因为在大多数情况下它显然是给定的路径,并且是最直观的位置;
+其他路径常量和文件名有:
+
+<!--  
+- `/etc/kubernetes/manifests` as the path where kubelet should look for static Pod manifests. Names of static Pod manifests are:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` as the path where kubeconfig files with identities for control plane components are stored. Names of kubeconfig files are:
+    - `kubelet.conf` (`bootstrap-kubelet.conf` during TLS bootstrap)
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` for the cluster admin and kubeadm itself
+- Names of certificates and key files :
+    - `ca.crt`, `ca.key` for the Kubernetes certificate authority
+    - `apiserver.crt`, `apiserver.key` for the API server certificate
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` for the client certificate used by the API server to connect to the kubelets securely
+    - `sa.pub`, `sa.key` for the key used by the controller manager when signing ServiceAccount
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` for the front proxy certificate authority
+    - `front-proxy-client.crt`, `front-proxy-client.key` for the front proxy client
+-->
+- `/etc/kubernetes/manifests` 作为 kubelet 查找静态 Pod 清单的路径。静态 Pod 清单的名称为:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` 作为带有控制平面组件身份标识的 kubeconfig 文件的路径。kubeconfig 文件的名称为:
+    - `kubelet.conf` (在 TLS 引导时名称为 `bootstrap-kubelet.conf` )
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` 用于集群管理员和 kubeadm 本身
+- 证书和密钥文件的名称:
+    - `ca.crt`, `ca.key` 用于 Kubernetes 证书颁发机构
+    - `apiserver.crt`, `apiserver.key` 用于 API 服务器证书
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` 用于 API 服务器安全地连接到 kubelet 的客户端证书
+    - `sa.pub`, `sa.key` 用于签署 ServiceAccount 时 控制器管理器使用的密钥
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` 用于前端代理证书颁发机构
+    - `front-proxy-client.crt`, `front-proxy-client.key` 用于前端代理客户端
+
+<!-- ## kubeadm init workflow internal design -->
+## kubeadm init 工作流程内部设计  {#kubeadm-init-workflow-internal-design}
+
+<!--  
+The `kubeadm init` [internal workflow](/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow) consists of a sequence of atomic work tasks to perform,
+as described in `kubeadm init`.
+-->
+`kubeadm init` [内部工作流程](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow)包含一系列要执行的原子工作任务,
+如 `kubeadm init` 中所述。
+
+<!--  
+The [`kubeadm init phase`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) command allows users to invoke each task individually, and ultimately offers a reusable and composable API/toolbox that can be used by other Kubernetes bootstrap tools, by any IT automation tool or by an advanced user for creating custom clusters.
+-->
+[`kubeadm init phase`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) 命令允许用户分别调用每个任务,
+并最终提供可重用且可组合的 API 或工具箱,其他 Kubernetes 引导工具、任何 IT 自动化工具和高级用户都可以使用它用来创建的自定义集群。
+
+<!-- ### Preflight checks -->
+### 预检  {#preflight-checks}
+
+<!-- 
+Kubeadm executes a set of preflight checks before starting the init, with the aim to verify preconditions and avoid common cluster startup problems.
+The user can skip specific preflight checks or all of them with the `--ignore-preflight-errors` option. 
+-->
+Kubeadm 在启动 init 之前执行一组预检,目的是验证先决条件并避免常见的集群启动问题。
+用户可以使用 `--ignore-preflight-errors` 选项跳过特定的预检查或全部检查。
+
+<!--  
+- [warning] If the Kubernetes version to use (specified with the `--kubernetes-version` flag) is at least one minor version higher than the kubeadm CLI version.
+- Kubernetes system requirements:
+  - if running on linux:
+    - [error] if Kernel is older than the minimum required version
+    - [error] if required cgroups subsystem aren't in set up
+  - if using docker:
+    - [warning/error] if Docker service does not exist, if it is disabled, if it is not active.
+    - [error] if Docker endpoint does not exist or does not work
+    - [warning] if docker version is not in the list of validated docker versions
+  - If using other cri engine:
+    - [error] if crictl socket does not answer
+-->
+- [警告] 如果要使用的 Kubernetes 版本(由 `--kubernetes-version` 标志指定)比 kubeadm CLI 版本至少高一个小版本。
+- Kubernetes 系统要求:
+  - 如果在 linux上运行:
+    - [错误] 如果内核早于最低要求的版本
+    - [错误] 如果未设置所需的 cgroups 子系统
+  - 如果使用 docker:
+    - [警告/错误] 如果 Docker 服务不存在、被禁用或未激活。
+    - [错误] 如果 Docker 端点不存在或不起作用
+    - [警告] 如果 docker 版本不在经过验证的 docker 版本列表中
+  - 如果使用其他 cri 引擎:
+    - [错误] 如果 crictl 套接字未应答
+<!--  
+- [error] if user is not root
+- [error] if the machine hostname is not a valid DNS subdomain
+- [warning] if the host name cannot be reached via network lookup
+- [error] if kubelet version is lower that the minimum kubelet version supported by kubeadm (current minor -1)
+- [error] if kubelet version is at least one minor higher than the required controlplane version (unsupported version skew)
+- [warning] if kubelet service does not exist or if it is disabled
+- [warning] if firewalld is active
+- [error] if API server bindPort or ports 10250/10251/10252 are used
+- [Error] if `/etc/kubernetes/manifest` folder already exists and it is not empty
+- [Error] if `/proc/sys/net/bridge/bridge-nf-call-iptables` file does not exist/does not contain 1
+- [Error] if advertise address is ipv6 and `/proc/sys/net/bridge/bridge-nf-call-ip6tables` does not exist/does not contain 1.
+- [Error] if swap is on
+- [Error] if `conntrack`, `ip`, `iptables`,  `mount`, `nsenter` commands are not present in the command path
+- [warning] if `ebtables`, `ethtool`, `socat`, `tc`, `touch`, `crictl` commands are not present in the command path
+- [warning] if extra arg flags for API server, controller manager,  scheduler contains some invalid options
+- [warning] if connection to https://API.AdvertiseAddress:API.BindPort goes through proxy
+- [warning] if connection to services subnet goes through proxy (only first address checked)
+- [warning] if connection to Pods subnet goes through proxy (only first address checked)
+-->
+- [错误] 如果用户不是 root 用户
+- [错误] 如果机器主机名不是有效的 DNS 子域
+- [警告] 如果通过网络查找无法访问主机名
+- [错误] 如果 kubelet 版本低于 kubeadm 支持的最低 kubelet 版本(当前小版本 -1)
+- [错误] 如果 kubelet 版本比所需的控制平面板版本至少高一个小(不支持的版本偏斜)
+- [警告] 如果 kubelet 服务不存在或已被禁用
+- [警告] 如果 firewalld 处于活动状态
+- [错误] 如果使用 API ​​服务器绑定的端口或 10250/10251/10252 端口
+- [错误] 如果 `/etc/kubernetes/manifest` 文件夹已经存在并且不为空
+- [错误] 如果 `/proc/sys/net/bridge/bridge-nf-call-iptables` 文件不存在或不包含 1
+- [错误] 如果建议地址是 ipv6,并且 `/proc/sys/net/bridge/bridge-nf-call-ip6tables` 不存在或不包含 1
+- [错误] 如果启用了交换分区
+- [错误] 如果命令路径中没有 `conntrack`、`ip`、`iptables`、`mount`、`nsenter` 命令
+- [警告] 如果命令路径中没有 `ebtables`、`ethtool`、`socat`、`tc`、`touch`、`crictl` 命令
+- [警告] 如果 API 服务器、控制器管理器、调度程序的其他参数标志包含一些无效选项
+- [警告] 如果与 https://API.AdvertiseAddress:API.BindPort 的连接通过代理
+- [警告] 如果服务子网的连接通过代理(仅检查第一个地址)
+- [警告] 如果 Pod 子网的连接通过代理(仅检查第一个地址)
+<!-- 
+- If external etcd is provided:
+  - [Error] if etcd version is older than the minimum required version
+  - [Error] if etcd certificates or keys are specified, but not provided
+- If external etcd is NOT provided (and thus local etcd will be installed):
+  - [Error] if ports 2379 is used
+  - [Error] if Etcd.DataDir folder already exists and it is not empty
+- If authorization mode is ABAC:
+  - [Error] if abac_policy.json does not exist
+- If authorization mode is WebHook
+  - [Error] if webhook_authz.conf does not exist
+-->
+- 如果提供了外部 etcd:
+  - [错误] 如果 etcd 版本早于最低要求版本
+  - [错误] 如果指定了 etcd 证书或密钥,但无法找到
+- 如果未提供外部 etcd(因此将安装本地 etcd):
+  - [错误] 如果端口 2379 已被占用
+  - [错误] 如果 Etcd.DataDir 文件夹已经存在并且不为空
+- 如果授权模式为 ABAC:
+  - [错误] 如果 abac_policy.json 不存在
+- 如果授权方式为 WebHook
+  - [错误] 如果 webhook_authz.conf 不存在
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. Preflight checks can be invoked individually with the [`kubeadm init phase preflight`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-preflight) command
+-->
+1. 可以使用 [`kubeadm init phase preflight`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-preflight) 命令单独触发预检。
+
+
+<!-- ### Generate the necessary certificates -->
+### 生成必要的证书  {#generate-the-necessary-certificate}
+
+<!-- Kubeadm generates certificate and private key pairs for different purposes: -->
+Kubeadm 生成用于不同目的的证书和私钥对:
+
+ <!-- 
+ - A self signed certificate authority for the Kubernetes cluster saved into `ca.crt` file and `ca.key` private key file 
+ - A serving certificate for the API server, generated using `ca.crt` as the CA, and saved into `apiserver.crt` file with
+   its private key `apiserver.key`. This certificate should contain following alternative names:
+     - The Kubernetes service's internal clusterIP (the first address in the services CIDR, e.g. `10.96.0.1` if service subnet is `10.96.0.0/12`)
+     - Kubernetes DNS names, e.g.  `kubernetes.default.svc.cluster.local` if `--service-dns-domain` flag value is `cluster.local`, plus default DNS names `kubernetes.default.svc`, `kubernetes.default`, `kubernetes`
+     - The node-name
+     - The `--apiserver-advertise-address`
+     - Additional alternative names specified by the user
+ - A client certificate for the API server to connect to the kubelets securely, generated using `ca.crt` as the CA and saved into
+   `apiserver-kubelet-client.crt` file with its private key `apiserver-kubelet-client.key`.
+   This certificate should be in the `system:masters` organization
+ - A private key for signing ServiceAccount Tokens saved into `sa.key` file along with its public key `sa.pub`
+ - A certificate authority for the front proxy saved into `front-proxy-ca.crt` file with its key `front-proxy-ca.key`
+ - A client cert for the front proxy client, generated using `front-proxy-ca.crt` as the CA and saved into `front-proxy-client.crt` file
+   with its private key`front-proxy-client.key`
+-->
+ - Kubernetes 集群的自签名证书颁发机构已保存到 `ca.crt` 文件和 `ca.key` 私钥文件中
+ - 用于 API 服务器的服务证书,使用 `ca.crt` 作为 CA 生成,并将证书保存到 `apiserver.crt` 文件中,私钥保存到 `apiserver.key` 文件中
+   该证书应包含以下备用名称:
+    - Kubernetes 服务的内部 clusterIP(服务 CIDR 的第一个地址,例如:如果服务的子网是 `10.96.0.0/12`,则为 `10.96.0.1`)
+    - Kubernetes DNS 名称,例如:如果 `--service-dns-domain` 标志值是 `cluster.local`,则为 `kubernetes.default.svc.cluster.local`;
+      加上默认的 DNS 名称 `kubernetes.default.svc`、`kubernetes.default` 和 `kubernetes`,
+    - 节点名称
+    - `--apiserver-advertise-address`
+    - 用户指定的其他备用名称 
+  - API 服务器用于安全连接到 kubelet 的客户端证书,使用 `ca.crt` 作为 CA 生成,并保存到 `apiserver-kubelet-client.key`, 
+    私钥保存到 `apiserver-kubelet-client.crt` 文件中。该证书应该在 `system:masters` 组织中
+  - 用于签名 ServiceAccount 令牌的私钥保存到 `sa.key` 文件中,公钥保存到 `sa.pub` 文件中
+  - 用于前端代理的证书颁发机构保存到 `front-proxy-ca.crt` 文件中,私钥保存到 `front-proxy-ca.key` 文件中
+  - 前端代理客户端的客户端证书,使用 `front-proxy-ca.crt` 作为 CA 生成,并保存到 `front-proxy-client.crt` 文件中,
+    私钥保存到 `front-proxy-client.key` 文件中
+
+<!-- 
+Certificates are stored by default in `/etc/kubernetes/pki`, but this directory is configurable using the `--cert-dir` flag. 
+-->
+证书默认情况下存储在 `/etc/kubernetes/pki` 中,但是该目录可以使用 `--cert-dir` 标志进行配置。
+
+ <!-- Please note that: -->
+ 请注意:
+
+<!-- 
+1. If a given certificate and private key pair both exist, and its content is evaluated compliant with the above specs, the existing files will
+   be used and the generation phase for the given certificate skipped. This means the user can, for example, copy an existing CA to
+   `/etc/kubernetes/pki/ca.{crt,key}`, and then kubeadm will use those files for signing the rest of the certs.
+   See also [using custom certificates](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#custom-certificates)
+2. Only for the CA, it is possible to provide the `ca.crt` file but not the `ca.key` file, if all other certificates and kubeconfig files
+   already are in place kubeadm recognize this condition and activates the ExternalCA , which also implies the `csrsigner`controller in
+   controller-manager won't be started
+3. If kubeadm is running in [external CA mode](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode);
+   all the certificates must be provided by the user, because kubeadm cannot generate them by itself
+4. In case of kubeadm is executed in the `--dry-run` mode, certificates files are written in a temporary folder
+5. Certificate generation can be invoked individually with the [`kubeadm init phase certs all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) command
+-->
+1. 如果证书和私钥对都存在,并且其内容经过评估符合上述规范,将使用现有文件,并且跳过给定证书的生成阶段。
+  这意味着用户可以将现有的 CA 复制到 `/etc/kubernetes/pki/ca.{crt,key}`,kubeadm 将使用这些文件对其余证书进行签名。
+  请参阅[使用自定义证书](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#custom-certificates)
+2. 仅对 CA 来说,如果所有其他证书和 kubeconfig 文件都已就位,则可以只提供 `ca.crt` 文件,而不提供 `ca.key` 文件。
+   kubeadm 已经识别出这种情况并启用 ExternalCA,这也意味着了控制器管理器中的 `csrsigner` 控制器将不会启动
+3. 如果 kubeadm 在[外部 CA 模式](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode)下运行;
+   所有证书必须由用户提供,因为 kubeadm 无法自行生成它们
+4. 如果在 `--dry-run` 模式下执行 kubeadm,证书文件将写入一个临时文件夹中
+5. 可以使用 [`kubeadm init phase certs all`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) 
+   命令单独生成证书。
+
+<!-- ### Generate kubeconfig files for control plane components -->
+### 为控制平面组件生成 kubeconfig 文件  {#generate-kubeconfig-files-for-control-plane-components}
+
+<!-- 
+Kubeadm generates kubeconfig files with identities for control plane components:
+-->
+Kubeadm 生成具有用于控制平面组件身份标识的 kubeconfig 文件:
+
+<!--  
+- A kubeconfig file for the kubelet to use during TLS bootstrap - /etc/kubernetes/bootstrap-kubelet.conf. Inside this file there is a bootstrap-token or embedded client certificates for authenticating this node with the cluster.
+  This client cert should:
+    - Be in the `system:nodes` organization, as required by the [Node Authorization](/docs/reference/access-authn-authz/node/) module
+    - Have the Common Name (CN) `system:node:<hostname-lowercased>`
+- A kubeconfig file for controller-manager, `/etc/kubernetes/controller-manager.conf`; inside this file is embedded a client
+  certificate with controller-manager identity. This client cert should have the CN `system:kube-controller-manager`, as defined
+by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
+- A kubeconfig file for scheduler, `/etc/kubernetes/scheduler.conf`; inside this file is embedded a client certificate with scheduler identity.
+  This client cert should have the CN `system:kube-scheduler`, as defined by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
+-->
+- 供 kubelet 在 TLS 引导期间使用的 kubeconfig 文件——`/etc/kubernetes/bootstrap-kubelet.conf`。在此文件中,
+  有一个引导令牌或内嵌的客户端证书,向集群表明此节点身份。
+  此客户端证书应:
+    - 根据[节点鉴权](/zh/docs/reference/access-authn-authz/node/)模块的要求,属于 `system:nodes` 组织
+    - 具有通用名称(CN):`system:node:<hostname-lowercased>`
+- 控制器管理器的 kubeconfig 文件——`/etc/kubernetes/controller-manager.conf`;
+  在此文件中嵌入了一个具有控制器管理器身份标识的客户端证书。
+  此客户端证书应具有 CN:`system:kube-controller-manager`,
+  这是由 [RBAC 核心组件角色](/zh/docs/reference/access-authn-authz/rbac/#core-component-roles)默认定义的。
+- 调度器的 kubeconfig 文件——`/etc/kubernetes/scheduler.conf`;在此文件中嵌入了具有调度器身份标识的客户端证书。
+  此客户端证书应具有 CN:`system:kube-scheduler`,
+  这是由 [RBAC 核心组件角色](/zh/docs/reference/access-authn-authz/rbac/#core-component-roles)默认定义的。
+
+<!-- 
+Additionally, a kubeconfig file for kubeadm itself and the admin is generated and saved into the `/etc/kubernetes/admin.conf` file.
+The "admin" here is defined as the actual person(s) that is administering the cluster and wants to have full control (**root**) over the cluster.
+The embedded client certificate for admin should be in the `system:masters` organization, as defined by default
+[RBAC user facing role bindings](/docs/reference/access-authn-authz/rbac/#user-facing-roles). It should also include a
+CN. Kubeadm uses the `kubernetes-admin` CN.
+-->
+另外,一个用于 kubeadm 本身和 admin 的 kubeconfig 文件也被生成并保存到 `/etc/kubernetes/admin.conf` 文件中。
+此处的 admin 定义为正在管理集群并希望完全控制集群(**root**)的实际人员。
+内嵌的 admin 客户端证书应s  `system:masters` 组织的成员,
+这是由默认的 [RBAC 面向用户的角色绑定](/zh/docs/reference/access-authn-authz/rbac/#user-facing-roles)定义的。 
+它还应包括一个 CN。 Kubeadm 使用 `kubernetes-admin` CN。
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. `ca.crt` certificate is embedded in all the kubeconfig files.
+2. If a given kubeconfig file exists, and its content is evaluated compliant with the above specs, the existing file will be used and the generation phase for the given kubeconfig skipped
+3. If kubeadm is running in [ExternalCA mode](/docs/reference/setup-tools/kubeadm/kubeadm-init/#external-ca-mode), all the required kubeconfig must be provided by the user as well, because kubeadm cannot generate any of them by itself
+4. In case of kubeadm is executed in the `--dry-run` mode, kubeconfig files are written in a temporary folder
+5. Kubeconfig files generation can be invoked individually with the [`kubeadm init phase kubeconfig all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-kubeconfig) command
+-->
+1. `ca.crt` 证书内嵌在所有 kubeconfig 文件中。
+2. 如果给定的 kubeconfig 文件存在且其内容经过评估符合上述规范,则 kubeadm 将使用现有文件,并跳过给定 kubeconfig 的生成阶段
+3. 如果 kubeadm 以 [ExternalCA 模式](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#external-ca-mode)运行,
+   则所有必需的 kubeconfig 也必须由用户提供,因为 kubeadm 不能自己生成
+4. 如果在 `--dry-run` 模式下执行 kubeadm,则 kubeconfig 文件将写入一个临时文件夹中
+5. 可以使用 [`kubeadm init phase kubeconfig all`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-kubeconfig)
+   命令分别生成 Kubeconfig 文件。
+
+<!-- ### Generate static Pod manifests for control plane components -->
+### 为控制平面组件生成静态 Pod 清单  {#generate-static-pod-manifests-for-control-plane-components}
+
+<!--  
+Kubeadm writes static Pod manifest files for control plane components to `/etc/kubernetes/manifests`. The kubelet watches this directory for Pods to create on startup.
+-->
+Kubeadm 将用于控制平面组件的静态 Pod 清单文件写入 `/etc/kubernetes/manifests` 目录。
+Kubelet 启动后会监视这个目录以便创建 Pod。
+
+<!-- Static Pod manifest share a set of common properties: -->
+静态 Pod 清单有一些共同的属性:
+
+<!--  
+- All static Pods are deployed on `kube-system` namespace
+- All static Pods get `tier:control-plane` and `component:{component-name}` labels
+- All static Pods use the `system-node-critical` priority class
+- `hostNetwork: true` is set on all static Pods to allow control plane startup before a network is configured; as a consequence:
+  * The `address` that the controller-manager and the scheduler use to refer the API server is `127.0.0.1`
+  * If using a local etcd server, `etcd-servers` address will be set to `127.0.0.1:2379`
+- Leader election is enabled for both the controller-manager and the scheduler
+- Controller-manager and the scheduler will reference kubeconfig files with their respective, unique identities
+- All static Pods get any extra flags specified by the user as described in [passing custom arguments to control plane components](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/)
+- All static Pods get any extra Volumes specified by the user (Host path)
+-->
+- 所有静态 Pod 都部署在 `kube-system` 命名空间
+- 所有静态 Pod 都获得 `tier:ontrol-plane` 和 `component:{component-name}` 标签
- 所有静态 Pod 都打上 `tier:control-plane` 和 `component:{component-name}` 标签

howieyuen

comment created time in 5 days

Pull request review commentkubernetes/website

[zh] translate /docs/reference/setup-tools/kubeadm/implementation-detail

+---
+title: 实现细节
+content_type: concept
+weight: 100
+---
+<!--  
+---
+reviewers:
+- luxas
+- jbeda
+title: Implementation details
+content_type: concept
+weight: 100
+---
+-->
+<!-- overview -->
+
+{{< feature-state for_k8s_version="v1.10" state="stable" >}}
+
+<!--  
+`kubeadm init` and `kubeadm join` together provides a nice user experience for creating a best-practice but bare Kubernetes cluster from scratch.
+However, it might not be obvious _how_ kubeadm does that.
+-->
+`kubeadm init` 和 `kubeadm join` 结合在一起提供了良好的用户体验,因为从头开始创建实践最佳而配置最基本的 Kubernetes 集群。
+但是,kubeadm _如何_ 做到这一点可能并不明显。
+
+<!-- 
+This document provides additional details on what happen under the hood, 
+with the aim of sharing knowledge on Kubernetes cluster best practices. 
+-->
+本文档提供了更多幕后的详细信息,旨在分享有关 Kubernetes 集群最佳实践的知识。
+
+<!-- body -->
+<!-- ## Core design principles -->
+## 核心设计原则    {#core-design-principles}
+
+<!-- The cluster that `kubeadm init` and `kubeadm join` set up should be: -->
+`kubeadm init` 和 `kubeadm join` 设置的集群应为:
+
+<!-- 
+ - **Secure**: It should adopt latest best-practices like:
+   - enforcing RBAC
+   - using the Node Authorizer
+   - using secure communication between the control plane components
+   - using secure communication between the API server and the kubelets
+   - lock-down the kubelet API
+   - locking down access to the API for system components like the kube-proxy and CoreDNS
+   - locking down what a Bootstrap Token can access
+ - **Easy to use**: The user should not have to run anything more than a couple of commands:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **Extendable**:
+   - It should _not_ favor any particular network provider. Configuring the cluster network is out-of-scope
+   - It should provide the possibility to use a config file for customizing various parameters
+ -->
+ - **安全**:它应采用最新的最佳实践,例如:
+   - 应用 RBAC
+   - 使用节点鉴权机制(Node Authorizer)
+   - 在控制平面组件之间使用安全通信
+   - 在 API 服务器和 kubelet 之间使用安全通信
+   - 锁定 kubelet API
+   - 锁定对系统组件(例如 kube-proxy 和 CoreDNS)的 API 的访问
+   - 锁定启动引导令牌(Bootstrap Token)可以访问的内容
+ - **易用**:用户只需要运行几个命令即可:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **可扩展**:
+   - _不_ 应偏向任何特定的网络提供商。不涉及配置集群网络
+   - 应该可以使用配置文件来自定义各种参数
+
+<!-- ## Constants and well-known values and paths -->
+## 常量以及众所周知的值和路径  {#constants-and-well-known-values-and-paths}
+
+<!-- 
+In order to reduce complexity and to simplify development of higher level tools that build on top of kubeadm, it uses a
+limited set of constant values for well-known paths and file names.
+-->
+为了降低复杂性并简化基于 kubeadm 的高级工具的开发,对于众所周知的路径和文件名,它使用了一组有限的常量值。
+
+<!--  
+The Kubernetes directory `/etc/kubernetes` is a constant in the application, since it is clearly the given path
+in a majority of cases, and the most intuitive location; other constants paths and file names are:
+-->
+Kubernetes 目录 `/etc/kubernetes` 在应用程序中是一个常量,因为在大多数情况下它显然是给定的路径,并且是最直观的位置;
+其他路径常量和文件名有:
+
+<!--  
+- `/etc/kubernetes/manifests` as the path where kubelet should look for static Pod manifests. Names of static Pod manifests are:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` as the path where kubeconfig files with identities for control plane components are stored. Names of kubeconfig files are:
+    - `kubelet.conf` (`bootstrap-kubelet.conf` during TLS bootstrap)
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` for the cluster admin and kubeadm itself
+- Names of certificates and key files :
+    - `ca.crt`, `ca.key` for the Kubernetes certificate authority
+    - `apiserver.crt`, `apiserver.key` for the API server certificate
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` for the client certificate used by the API server to connect to the kubelets securely
+    - `sa.pub`, `sa.key` for the key used by the controller manager when signing ServiceAccount
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` for the front proxy certificate authority
+    - `front-proxy-client.crt`, `front-proxy-client.key` for the front proxy client
+-->
+- `/etc/kubernetes/manifests` 作为 kubelet 查找静态 Pod 清单的路径。静态 Pod 清单的名称为:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` 作为带有控制平面组件身份标识的 kubeconfig 文件的路径。kubeconfig 文件的名称为:
+    - `kubelet.conf` (在 TLS 引导时名称为 `bootstrap-kubelet.conf` )
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` 用于集群管理员和 kubeadm 本身
+- 证书和密钥文件的名称:
+    - `ca.crt`, `ca.key` 用于 Kubernetes 证书颁发机构
+    - `apiserver.crt`, `apiserver.key` 用于 API 服务器证书
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` 用于 API 服务器安全地连接到 kubelet 的客户端证书
+    - `sa.pub`, `sa.key` 用于签署 ServiceAccount 时 控制器管理器使用的密钥
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` 用于前端代理证书颁发机构
+    - `front-proxy-client.crt`, `front-proxy-client.key` 用于前端代理客户端
+
+<!-- ## kubeadm init workflow internal design -->
+## kubeadm init 工作流程内部设计  {#kubeadm-init-workflow-internal-design}
+
+<!--  
+The `kubeadm init` [internal workflow](/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow) consists of a sequence of atomic work tasks to perform,
+as described in `kubeadm init`.
+-->
+`kubeadm init` [内部工作流程](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow)包含一系列要执行的原子工作任务,
+如 `kubeadm init` 中所述。
+
+<!--  
+The [`kubeadm init phase`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) command allows users to invoke each task individually, and ultimately offers a reusable and composable API/toolbox that can be used by other Kubernetes bootstrap tools, by any IT automation tool or by an advanced user for creating custom clusters.
+-->
+[`kubeadm init phase`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) 命令允许用户分别调用每个任务,
+并最终提供可重用且可组合的 API 或工具箱,其他 Kubernetes 引导工具、任何 IT 自动化工具和高级用户都可以使用它用来创建的自定义集群。
+
+<!-- ### Preflight checks -->
+### 预检  {#preflight-checks}
+
+<!-- 
+Kubeadm executes a set of preflight checks before starting the init, with the aim to verify preconditions and avoid common cluster startup problems.
+The user can skip specific preflight checks or all of them with the `--ignore-preflight-errors` option. 
+-->
+Kubeadm 在启动 init 之前执行一组预检,目的是验证先决条件并避免常见的集群启动问题。
+用户可以使用 `--ignore-preflight-errors` 选项跳过特定的预检查或全部检查。
+
+<!--  
+- [warning] If the Kubernetes version to use (specified with the `--kubernetes-version` flag) is at least one minor version higher than the kubeadm CLI version.
+- Kubernetes system requirements:
+  - if running on linux:
+    - [error] if Kernel is older than the minimum required version
+    - [error] if required cgroups subsystem aren't in set up
+  - if using docker:
+    - [warning/error] if Docker service does not exist, if it is disabled, if it is not active.
+    - [error] if Docker endpoint does not exist or does not work
+    - [warning] if docker version is not in the list of validated docker versions
+  - If using other cri engine:
+    - [error] if crictl socket does not answer
+-->
+- [警告] 如果要使用的 Kubernetes 版本(由 `--kubernetes-version` 标志指定)比 kubeadm CLI 版本至少高一个小版本。
+- Kubernetes 系统要求:
+  - 如果在 linux上运行:
+    - [错误] 如果内核早于最低要求的版本
+    - [错误] 如果未设置所需的 cgroups 子系统
+  - 如果使用 docker:
+    - [警告/错误] 如果 Docker 服务不存在、被禁用或未激活。
+    - [错误] 如果 Docker 端点不存在或不起作用
+    - [警告] 如果 docker 版本不在经过验证的 docker 版本列表中
+  - 如果使用其他 cri 引擎:
+    - [错误] 如果 crictl 套接字未应答
+<!--  
+- [error] if user is not root
+- [error] if the machine hostname is not a valid DNS subdomain
+- [warning] if the host name cannot be reached via network lookup
+- [error] if kubelet version is lower that the minimum kubelet version supported by kubeadm (current minor -1)
+- [error] if kubelet version is at least one minor higher than the required controlplane version (unsupported version skew)
+- [warning] if kubelet service does not exist or if it is disabled
+- [warning] if firewalld is active
+- [error] if API server bindPort or ports 10250/10251/10252 are used
+- [Error] if `/etc/kubernetes/manifest` folder already exists and it is not empty
+- [Error] if `/proc/sys/net/bridge/bridge-nf-call-iptables` file does not exist/does not contain 1
+- [Error] if advertise address is ipv6 and `/proc/sys/net/bridge/bridge-nf-call-ip6tables` does not exist/does not contain 1.
+- [Error] if swap is on
+- [Error] if `conntrack`, `ip`, `iptables`,  `mount`, `nsenter` commands are not present in the command path
+- [warning] if `ebtables`, `ethtool`, `socat`, `tc`, `touch`, `crictl` commands are not present in the command path
+- [warning] if extra arg flags for API server, controller manager,  scheduler contains some invalid options
+- [warning] if connection to https://API.AdvertiseAddress:API.BindPort goes through proxy
+- [warning] if connection to services subnet goes through proxy (only first address checked)
+- [warning] if connection to Pods subnet goes through proxy (only first address checked)
+-->
+- [错误] 如果用户不是 root 用户
+- [错误] 如果机器主机名不是有效的 DNS 子域
+- [警告] 如果通过网络查找无法访问主机名
+- [错误] 如果 kubelet 版本低于 kubeadm 支持的最低 kubelet 版本(当前小版本 -1)
+- [错误] 如果 kubelet 版本比所需的控制平面板版本至少高一个小(不支持的版本偏斜)
+- [警告] 如果 kubelet 服务不存在或已被禁用
+- [警告] 如果 firewalld 处于活动状态
+- [错误] 如果使用 API ​​服务器绑定的端口或 10250/10251/10252 端口
+- [错误] 如果 `/etc/kubernetes/manifest` 文件夹已经存在并且不为空
+- [错误] 如果 `/proc/sys/net/bridge/bridge-nf-call-iptables` 文件不存在或不包含 1
+- [错误] 如果建议地址是 ipv6,并且 `/proc/sys/net/bridge/bridge-nf-call-ip6tables` 不存在或不包含 1
+- [错误] 如果启用了交换分区
+- [错误] 如果命令路径中没有 `conntrack`、`ip`、`iptables`、`mount`、`nsenter` 命令
+- [警告] 如果命令路径中没有 `ebtables`、`ethtool`、`socat`、`tc`、`touch`、`crictl` 命令
+- [警告] 如果 API 服务器、控制器管理器、调度程序的其他参数标志包含一些无效选项
+- [警告] 如果与 https://API.AdvertiseAddress:API.BindPort 的连接通过代理
+- [警告] 如果服务子网的连接通过代理(仅检查第一个地址)
+- [警告] 如果 Pod 子网的连接通过代理(仅检查第一个地址)
+<!-- 
+- If external etcd is provided:
+  - [Error] if etcd version is older than the minimum required version
+  - [Error] if etcd certificates or keys are specified, but not provided
+- If external etcd is NOT provided (and thus local etcd will be installed):
+  - [Error] if ports 2379 is used
+  - [Error] if Etcd.DataDir folder already exists and it is not empty
+- If authorization mode is ABAC:
+  - [Error] if abac_policy.json does not exist
+- If authorization mode is WebHook
+  - [Error] if webhook_authz.conf does not exist
+-->
+- 如果提供了外部 etcd:
+  - [错误] 如果 etcd 版本早于最低要求版本
+  - [错误] 如果指定了 etcd 证书或密钥,但无法找到
+- 如果未提供外部 etcd(因此将安装本地 etcd):
+  - [错误] 如果端口 2379 已被占用
+  - [错误] 如果 Etcd.DataDir 文件夹已经存在并且不为空
+- 如果授权模式为 ABAC:
+  - [错误] 如果 abac_policy.json 不存在
+- 如果授权方式为 WebHook
+  - [错误] 如果 webhook_authz.conf 不存在
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. Preflight checks can be invoked individually with the [`kubeadm init phase preflight`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-preflight) command
+-->
+1. 可以使用 [`kubeadm init phase preflight`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-preflight) 命令单独触发预检。
+
+
+<!-- ### Generate the necessary certificates -->
+### 生成必要的证书  {#generate-the-necessary-certificate}
+
+<!-- Kubeadm generates certificate and private key pairs for different purposes: -->
+Kubeadm 生成用于不同目的的证书和私钥对:
+
+ <!-- 
+ - A self signed certificate authority for the Kubernetes cluster saved into `ca.crt` file and `ca.key` private key file 
+ - A serving certificate for the API server, generated using `ca.crt` as the CA, and saved into `apiserver.crt` file with
+   its private key `apiserver.key`. This certificate should contain following alternative names:
+     - The Kubernetes service's internal clusterIP (the first address in the services CIDR, e.g. `10.96.0.1` if service subnet is `10.96.0.0/12`)
+     - Kubernetes DNS names, e.g.  `kubernetes.default.svc.cluster.local` if `--service-dns-domain` flag value is `cluster.local`, plus default DNS names `kubernetes.default.svc`, `kubernetes.default`, `kubernetes`
+     - The node-name
+     - The `--apiserver-advertise-address`
+     - Additional alternative names specified by the user
+ - A client certificate for the API server to connect to the kubelets securely, generated using `ca.crt` as the CA and saved into
+   `apiserver-kubelet-client.crt` file with its private key `apiserver-kubelet-client.key`.
+   This certificate should be in the `system:masters` organization
+ - A private key for signing ServiceAccount Tokens saved into `sa.key` file along with its public key `sa.pub`
+ - A certificate authority for the front proxy saved into `front-proxy-ca.crt` file with its key `front-proxy-ca.key`
+ - A client cert for the front proxy client, generated using `front-proxy-ca.crt` as the CA and saved into `front-proxy-client.crt` file
+   with its private key`front-proxy-client.key`
+-->
+ - Kubernetes 集群的自签名证书颁发机构已保存到 `ca.crt` 文件和 `ca.key` 私钥文件中
+ - 用于 API 服务器的服务证书,使用 `ca.crt` 作为 CA 生成,并将证书保存到 `apiserver.crt` 文件中,私钥保存到 `apiserver.key` 文件中
+   该证书应包含以下备用名称:
+    - Kubernetes 服务的内部 clusterIP(服务 CIDR 的第一个地址,例如:如果服务的子网是 `10.96.0.0/12`,则为 `10.96.0.1`)
+    - Kubernetes DNS 名称,例如:如果 `--service-dns-domain` 标志值是 `cluster.local`,则为 `kubernetes.default.svc.cluster.local`;
+      加上默认的 DNS 名称 `kubernetes.default.svc`、`kubernetes.default` 和 `kubernetes`,
+    - 节点名称
+    - `--apiserver-advertise-address`
+    - 用户指定的其他备用名称 
+  - API 服务器用于安全连接到 kubelet 的客户端证书,使用 `ca.crt` 作为 CA 生成,并保存到 `apiserver-kubelet-client.key`, 
+    私钥保存到 `apiserver-kubelet-client.crt` 文件中。该证书应该在 `system:masters` 组织中
+  - 用于签名 ServiceAccount 令牌的私钥保存到 `sa.key` 文件中,公钥保存到 `sa.pub` 文件中
+  - 用于前端代理的证书颁发机构保存到 `front-proxy-ca.crt` 文件中,私钥保存到 `front-proxy-ca.key` 文件中
+  - 前端代理客户端的客户端证书,使用 `front-proxy-ca.crt` 作为 CA 生成,并保存到 `front-proxy-client.crt` 文件中,
+    私钥保存到 `front-proxy-client.key` 文件中
+
+<!-- 
+Certificates are stored by default in `/etc/kubernetes/pki`, but this directory is configurable using the `--cert-dir` flag. 
+-->
+证书默认情况下存储在 `/etc/kubernetes/pki` 中,但是该目录可以使用 `--cert-dir` 标志进行配置。
+
+ <!-- Please note that: -->
+ 请注意:
+
+<!-- 
+1. If a given certificate and private key pair both exist, and its content is evaluated compliant with the above specs, the existing files will
+   be used and the generation phase for the given certificate skipped. This means the user can, for example, copy an existing CA to
+   `/etc/kubernetes/pki/ca.{crt,key}`, and then kubeadm will use those files for signing the rest of the certs.
+   See also [using custom certificates](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#custom-certificates)
+2. Only for the CA, it is possible to provide the `ca.crt` file but not the `ca.key` file, if all other certificates and kubeconfig files
+   already are in place kubeadm recognize this condition and activates the ExternalCA , which also implies the `csrsigner`controller in
+   controller-manager won't be started
+3. If kubeadm is running in [external CA mode](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode);
+   all the certificates must be provided by the user, because kubeadm cannot generate them by itself
+4. In case of kubeadm is executed in the `--dry-run` mode, certificates files are written in a temporary folder
+5. Certificate generation can be invoked individually with the [`kubeadm init phase certs all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) command
+-->
+1. 如果证书和私钥对都存在,并且其内容经过评估符合上述规范,将使用现有文件,并且跳过给定证书的生成阶段。
+  这意味着用户可以将现有的 CA 复制到 `/etc/kubernetes/pki/ca.{crt,key}`,kubeadm 将使用这些文件对其余证书进行签名。
+  请参阅[使用自定义证书](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#custom-certificates)
+2. 仅对 CA 来说,如果所有其他证书和 kubeconfig 文件都已就位,则可以只提供 `ca.crt` 文件,而不提供 `ca.key` 文件。
+   kubeadm 已经识别出这种情况并启用 ExternalCA,这也意味着了控制器管理器中的 `csrsigner` 控制器将不会启动
+3. 如果 kubeadm 在[外部 CA 模式](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode)下运行;
+   所有证书必须由用户提供,因为 kubeadm 无法自行生成它们
+4. 如果在 `--dry-run` 模式下执行 kubeadm,证书文件将写入一个临时文件夹中
+5. 可以使用 [`kubeadm init phase certs all`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) 
+   命令单独生成证书。
+
+<!-- ### Generate kubeconfig files for control plane components -->
+### 为控制平面组件生成 kubeconfig 文件  {#generate-kubeconfig-files-for-control-plane-components}
+
+<!-- 
+Kubeadm generates kubeconfig files with identities for control plane components:
+-->
+Kubeadm 生成具有用于控制平面组件身份标识的 kubeconfig 文件:
+
+<!--  
+- A kubeconfig file for the kubelet to use during TLS bootstrap - /etc/kubernetes/bootstrap-kubelet.conf. Inside this file there is a bootstrap-token or embedded client certificates for authenticating this node with the cluster.
+  This client cert should:
+    - Be in the `system:nodes` organization, as required by the [Node Authorization](/docs/reference/access-authn-authz/node/) module
+    - Have the Common Name (CN) `system:node:<hostname-lowercased>`
+- A kubeconfig file for controller-manager, `/etc/kubernetes/controller-manager.conf`; inside this file is embedded a client
+  certificate with controller-manager identity. This client cert should have the CN `system:kube-controller-manager`, as defined
+by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
+- A kubeconfig file for scheduler, `/etc/kubernetes/scheduler.conf`; inside this file is embedded a client certificate with scheduler identity.
+  This client cert should have the CN `system:kube-scheduler`, as defined by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
+-->
+- 供 kubelet 在 TLS 引导期间使用的 kubeconfig 文件——`/etc/kubernetes/bootstrap-kubelet.conf`。在此文件中,
+  有一个引导令牌或内嵌的客户端证书,向集群表明此节点身份。
+  此客户端证书应:
+    - 根据[节点鉴权](/zh/docs/reference/access-authn-authz/node/)模块的要求,属于 `system:nodes` 组织
+    - 具有通用名称(CN):`system:node:<hostname-lowercased>`
+- 控制器管理器的 kubeconfig 文件——`/etc/kubernetes/controller-manager.conf`;
+  在此文件中嵌入了一个具有控制器管理器身份标识的客户端证书。
+  此客户端证书应具有 CN:`system:kube-controller-manager`,
+  这是由 [RBAC 核心组件角色](/zh/docs/reference/access-authn-authz/rbac/#core-component-roles)默认定义的。
+- 调度器的 kubeconfig 文件——`/etc/kubernetes/scheduler.conf`;在此文件中嵌入了具有调度器身份标识的客户端证书。
+  此客户端证书应具有 CN:`system:kube-scheduler`,
+  这是由 [RBAC 核心组件角色](/zh/docs/reference/access-authn-authz/rbac/#core-component-roles)默认定义的。
+
+<!-- 
+Additionally, a kubeconfig file for kubeadm itself and the admin is generated and saved into the `/etc/kubernetes/admin.conf` file.
+The "admin" here is defined as the actual person(s) that is administering the cluster and wants to have full control (**root**) over the cluster.
+The embedded client certificate for admin should be in the `system:masters` organization, as defined by default
+[RBAC user facing role bindings](/docs/reference/access-authn-authz/rbac/#user-facing-roles). It should also include a
+CN. Kubeadm uses the `kubernetes-admin` CN.
+-->
+另外,一个用于 kubeadm 本身和 admin 的 kubeconfig 文件也被生成并保存到 `/etc/kubernetes/admin.conf` 文件中。
+此处的 admin 定义为正在管理集群并希望完全控制集群(**root**)的实际人员。
+内嵌的 admin 客户端证书应s  `system:masters` 组织的成员,
+这是由默认的 [RBAC 面向用户的角色绑定](/zh/docs/reference/access-authn-authz/rbac/#user-facing-roles)定义的。 
+它还应包括一个 CN。 Kubeadm 使用 `kubernetes-admin` CN。
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. `ca.crt` certificate is embedded in all the kubeconfig files.
+2. If a given kubeconfig file exists, and its content is evaluated compliant with the above specs, the existing file will be used and the generation phase for the given kubeconfig skipped
+3. If kubeadm is running in [ExternalCA mode](/docs/reference/setup-tools/kubeadm/kubeadm-init/#external-ca-mode), all the required kubeconfig must be provided by the user as well, because kubeadm cannot generate any of them by itself
+4. In case of kubeadm is executed in the `--dry-run` mode, kubeconfig files are written in a temporary folder
+5. Kubeconfig files generation can be invoked individually with the [`kubeadm init phase kubeconfig all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-kubeconfig) command
+-->
+1. `ca.crt` 证书内嵌在所有 kubeconfig 文件中。
+2. 如果给定的 kubeconfig 文件存在且其内容经过评估符合上述规范,则 kubeadm 将使用现有文件,并跳过给定 kubeconfig 的生成阶段
+3. 如果 kubeadm 以 [ExternalCA 模式](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#external-ca-mode)运行,
+   则所有必需的 kubeconfig 也必须由用户提供,因为 kubeadm 不能自己生成
+4. 如果在 `--dry-run` 模式下执行 kubeadm,则 kubeconfig 文件将写入一个临时文件夹中
+5. 可以使用 [`kubeadm init phase kubeconfig all`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-kubeconfig)
+   命令分别生成 Kubeconfig 文件。
+
+<!-- ### Generate static Pod manifests for control plane components -->
+### 为控制平面组件生成静态 Pod 清单  {#generate-static-pod-manifests-for-control-plane-components}
+
+<!--  
+Kubeadm writes static Pod manifest files for control plane components to `/etc/kubernetes/manifests`. The kubelet watches this directory for Pods to create on startup.
+-->
+Kubeadm 将用于控制平面组件的静态 Pod 清单文件写入 `/etc/kubernetes/manifests` 目录。
+Kubelet 启动后会监视这个目录以便创建 Pod。
+
+<!-- Static Pod manifest share a set of common properties: -->
+静态 Pod 清单有一些共同的属性:
+
+<!--  
+- All static Pods are deployed on `kube-system` namespace
+- All static Pods get `tier:control-plane` and `component:{component-name}` labels
+- All static Pods use the `system-node-critical` priority class
+- `hostNetwork: true` is set on all static Pods to allow control plane startup before a network is configured; as a consequence:
+  * The `address` that the controller-manager and the scheduler use to refer the API server is `127.0.0.1`
+  * If using a local etcd server, `etcd-servers` address will be set to `127.0.0.1:2379`
+- Leader election is enabled for both the controller-manager and the scheduler
+- Controller-manager and the scheduler will reference kubeconfig files with their respective, unique identities
+- All static Pods get any extra flags specified by the user as described in [passing custom arguments to control plane components](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/)
+- All static Pods get any extra Volumes specified by the user (Host path)
+-->
+- 所有静态 Pod 都部署在 `kube-system` 命名空间
+- 所有静态 Pod 都获得 `tier:ontrol-plane` 和 `component:{component-name}` 标签
+- 所有静态 Pod 均使用 `system-node-critical` 优先级
+- 所有静态 Pod 都设置了 `hostNetwork:true`,使得控制平面在配置网络之前启动;结果导致:
+   * 控制器管理器和调度器用来调用 API 服务器的地址为 127.0.0.1。
+   * 如果使用本地 etcd 服务器,则 `etcd-servers` 地址将设置为 `127.0.0.1:2379`
+- 同时为控制器管理器和调度器启用了领导者选举
+- 控制器管理器和调度器将引用 kubeconfig 文件及其各自的唯一标识
+- 如[将自定义参数传递给控制平面组件](/zh/docs/setup/production-environment/tools/kubeadm/control-plane-flags/)中所述,
+  所有静态 Pod 都会获得用户指定的额外标志
+- 所有静态 Pod 都会获得用户指定的额外卷(主机路径)
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. All images will be pulled from k8s.gcr.io by default. See [using custom images](/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images) for customizing the image repository
+2. In case of kubeadm is executed in the `--dry-run` mode, static Pods files are written in a temporary folder
+3. Static Pod manifest generation for master components can be invoked individually with the [`kubeadm init phase control-plane all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-control-plane) command
+-->
+1. 所有镜像默认从 k8s.gcr.io 拉取。 
+   关于自定义镜像仓库,请参阅[使用自定义镜像](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images)
+2. 如果在 `--dry-run` 模式下执行 kubeadm,则静态 Pod 文件写入一个临时文件夹中
+3. 可以使用 [`kubeadm init phase control-plane all`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-control-plane) 
+   命令分别生成主控组件的静态 Pod 清单。
+
+<!-- #### API server -->
+#### API 服务器  {#api-server}
+
+<!-- 
+The static Pod manifest for the API server is affected by following parameters provided by the users: 
+-->
+API 服务器的静态 Pod 清单会受到用户提供的以下参数的影响:
+
+<!--  
+ - The `apiserver-advertise-address` and `apiserver-bind-port` to bind to; if not provided, those value defaults to the IP address of
+   the default network interface on the machine and port 6443
+ - The `service-cluster-ip-range` to use for services
+ - If an external etcd server is specified, the `etcd-servers` address and related TLS settings (`etcd-cafile`, `etcd-certfile`, `etcd-keyfile`);
+   if an external etcd server is not be provided, a local etcd will be used (via host network)
+ - If a cloud provider is specified, the corresponding `--cloud-provider` is configured, together with the  `--cloud-config` path
+   if such file exists (this is experimental, alpha and will be removed in a future version)
+-->
+- 要绑定的 `apiserver-advertise-address` 和 `apiserver-bind-port`;如果未提供,则这些值默认为机器上默认网络接口的 IP 地址和 6443 端口。
+  - `service-cluster-ip-range` 给 service 使用
+  - 如果指定了外部 etcd 服务器,则应指定 `etcd-servers` 地址和相关的 TLS 设置(`etcd-cafile`,`etcd-certfile`,`etcd-keyfile`);
+    如果未提供外部 etcd 服务器,则将使用本地 etcd(通过主机网络)
+  - 如果指定了云提供商,则配置相应的 `--cloud-provider`,如果该路径存在,则配置 `--cloud-config`
+    (这是实验性的,是 Alpha 版本,将在以后的版本中删除)
+
+<!-- Other API server flags that are set unconditionally are: -->
+无条件设置的其他 API 服务器标志有:
+
+<!--  
+ - `--insecure-port=0` to avoid insecure connections to the api server
+ - `--enable-bootstrap-token-auth=true` to enable the `BootstrapTokenAuthenticator` authentication module.
+   See [TLS Bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for more details
+ - `--allow-privileged` to `true` (required e.g. by kube proxy)
+ - `--requestheader-client-ca-file` to `front-proxy-ca.crt`
+ - `--enable-admission-plugins` to:
+    - [`NamespaceLifecycle`](/docs/reference/access-authn-authz/admission-controllers/#namespacelifecycle) e.g. to avoid deletion of
+      system reserved namespaces
+    - [`LimitRanger`](/docs/reference/access-authn-authz/admission-controllers/#limitranger) and [`ResourceQuota`](/docs/reference/access-authn-authz/admission-controllers/#resourcequota) to enforce limits on namespaces
+    - [`ServiceAccount`](/docs/reference/access-authn-authz/admission-controllers/#serviceaccount) to enforce service account automation
+    - [`PersistentVolumeLabel`](/docs/reference/access-authn-authz/admission-controllers/#persistentvolumelabel) attaches region or zone labels to
+      PersistentVolumes as defined by the cloud provider (This admission controller is deprecated and will be removed in a future version.
+      It is not deployed by kubeadm by default with v1.9 onwards when not explicitly opting into using `gce` or `aws` as cloud providers)
+    - [`DefaultStorageClass`](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass) to enforce default storage class on `PersistentVolumeClaim` objects
+    - [`DefaultTolerationSeconds`](/docs/reference/access-authn-authz/admission-controllers/#defaulttolerationseconds)
+    - [`NodeRestriction`](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) to limit what a kubelet can modify
+      (e.g. only pods on this node)
+ - `--kubelet-preferred-address-types` to `InternalIP,ExternalIP,Hostname;` this makes `kubectl logs` and other API server-kubelet
+   communication work in environments where the hostnames of the nodes aren't resolvable
+ - Flags for using certificates generated in previous steps:
+    - `--client-ca-file` to `ca.crt`
+    - `--tls-cert-file` to `apiserver.crt`
+    - `--tls-private-key-file` to `apiserver.key`
+    - `--kubelet-client-certificate` to `apiserver-kubelet-client.crt`
+    - `--kubelet-client-key` to `apiserver-kubelet-client.key`
+    - `--service-account-key-file` to `sa.pub`
+    - `--requestheader-client-ca-file` to`front-proxy-ca.crt`
+    - `--proxy-client-cert-file` to `front-proxy-client.crt`
+    - `--proxy-client-key-file` to `front-proxy-client.key`
+ - Other flags for securing the front proxy ([API Aggregation](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/aggregated-api-servers.md)) communications:
+    - `--requestheader-username-headers=X-Remote-User`
+    - `--requestheader-group-headers=X-Remote-Group`
+    - `--requestheader-extra-headers-prefix=X-Remote-Extra-`
+    - `--requestheader-allowed-names=front-proxy-client`
+-->
+ - `--insecure-port=0` 禁止到 API 服务器不安全的连接
+ - `--enable-bootstrap-token-auth=true` 启用 `BootstrapTokenAuthenticator` 身份验证模块
+   更多细节请参见 [TLS 引导](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
+ - `--allow-privileged` 设为 `true`(必要,例如kube-proxy)
 - `--allow-privileged` 设为 `true`(必要,例如 kube-proxy)

howieyuen

comment created time in 5 days

Pull request review commentkubernetes/website

[zh] translate /docs/reference/setup-tools/kubeadm/implementation-detail

+---
+title: 实现细节
+content_type: concept
+weight: 100
+---
+<!--  
+---
+reviewers:
+- luxas
+- jbeda
+title: Implementation details
+content_type: concept
+weight: 100
+---
+-->
+<!-- overview -->
+
+{{< feature-state for_k8s_version="v1.10" state="stable" >}}
+
+<!--  
+`kubeadm init` and `kubeadm join` together provides a nice user experience for creating a best-practice but bare Kubernetes cluster from scratch.
+However, it might not be obvious _how_ kubeadm does that.
+-->
+`kubeadm init` 和 `kubeadm join` 结合在一起提供了良好的用户体验,因为从头开始创建实践最佳而配置最基本的 Kubernetes 集群。
+但是,kubeadm _如何_ 做到这一点可能并不明显。
+
+<!-- 
+This document provides additional details on what happen under the hood, 
+with the aim of sharing knowledge on Kubernetes cluster best practices. 
+-->
+本文档提供了更多幕后的详细信息,旨在分享有关 Kubernetes 集群最佳实践的知识。
+
+<!-- body -->
+<!-- ## Core design principles -->
+## 核心设计原则    {#core-design-principles}
+
+<!-- The cluster that `kubeadm init` and `kubeadm join` set up should be: -->
+`kubeadm init` 和 `kubeadm join` 设置的集群应为:
+
+<!-- 
+ - **Secure**: It should adopt latest best-practices like:
+   - enforcing RBAC
+   - using the Node Authorizer
+   - using secure communication between the control plane components
+   - using secure communication between the API server and the kubelets
+   - lock-down the kubelet API
+   - locking down access to the API for system components like the kube-proxy and CoreDNS
+   - locking down what a Bootstrap Token can access
+ - **Easy to use**: The user should not have to run anything more than a couple of commands:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **Extendable**:
+   - It should _not_ favor any particular network provider. Configuring the cluster network is out-of-scope
+   - It should provide the possibility to use a config file for customizing various parameters
+ -->
+ - **安全**:它应采用最新的最佳实践,例如:
+   - 应用 RBAC
+   - 使用节点鉴权机制(Node Authorizer)
+   - 在控制平面组件之间使用安全通信
+   - 在 API 服务器和 kubelet 之间使用安全通信
+   - 锁定 kubelet API
+   - 锁定对系统组件(例如 kube-proxy 和 CoreDNS)的 API 的访问
+   - 锁定启动引导令牌(Bootstrap Token)可以访问的内容
+ - **易用**:用户只需要运行几个命令即可:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **可扩展**:
+   - _不_ 应偏向任何特定的网络提供商。不涉及配置集群网络
+   - 应该可以使用配置文件来自定义各种参数
+
+<!-- ## Constants and well-known values and paths -->
+## 常量以及众所周知的值和路径  {#constants-and-well-known-values-and-paths}
+
+<!-- 
+In order to reduce complexity and to simplify development of higher level tools that build on top of kubeadm, it uses a
+limited set of constant values for well-known paths and file names.
+-->
+为了降低复杂性并简化基于 kubeadm 的高级工具的开发,对于众所周知的路径和文件名,它使用了一组有限的常量值。
+
+<!--  
+The Kubernetes directory `/etc/kubernetes` is a constant in the application, since it is clearly the given path
+in a majority of cases, and the most intuitive location; other constants paths and file names are:
+-->
+Kubernetes 目录 `/etc/kubernetes` 在应用程序中是一个常量,因为在大多数情况下它显然是给定的路径,并且是最直观的位置;
+其他路径常量和文件名有:
+
+<!--  
+- `/etc/kubernetes/manifests` as the path where kubelet should look for static Pod manifests. Names of static Pod manifests are:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` as the path where kubeconfig files with identities for control plane components are stored. Names of kubeconfig files are:
+    - `kubelet.conf` (`bootstrap-kubelet.conf` during TLS bootstrap)
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` for the cluster admin and kubeadm itself
+- Names of certificates and key files :
+    - `ca.crt`, `ca.key` for the Kubernetes certificate authority
+    - `apiserver.crt`, `apiserver.key` for the API server certificate
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` for the client certificate used by the API server to connect to the kubelets securely
+    - `sa.pub`, `sa.key` for the key used by the controller manager when signing ServiceAccount
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` for the front proxy certificate authority
+    - `front-proxy-client.crt`, `front-proxy-client.key` for the front proxy client
+-->
+- `/etc/kubernetes/manifests` 作为 kubelet 查找静态 Pod 清单的路径。静态 Pod 清单的名称为:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` 作为带有控制平面组件身份标识的 kubeconfig 文件的路径。kubeconfig 文件的名称为:
+    - `kubelet.conf` (在 TLS 引导时名称为 `bootstrap-kubelet.conf` )
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` 用于集群管理员和 kubeadm 本身
+- 证书和密钥文件的名称:
+    - `ca.crt`, `ca.key` 用于 Kubernetes 证书颁发机构
+    - `apiserver.crt`, `apiserver.key` 用于 API 服务器证书
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` 用于 API 服务器安全地连接到 kubelet 的客户端证书
+    - `sa.pub`, `sa.key` 用于签署 ServiceAccount 时 控制器管理器使用的密钥
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` 用于前端代理证书颁发机构
+    - `front-proxy-client.crt`, `front-proxy-client.key` 用于前端代理客户端
+
+<!-- ## kubeadm init workflow internal design -->
+## kubeadm init 工作流程内部设计  {#kubeadm-init-workflow-internal-design}
+
+<!--  
+The `kubeadm init` [internal workflow](/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow) consists of a sequence of atomic work tasks to perform,
+as described in `kubeadm init`.
+-->
+`kubeadm init` [内部工作流程](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow)包含一系列要执行的原子工作任务,
+如 `kubeadm init` 中所述。
+
+<!--  
+The [`kubeadm init phase`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) command allows users to invoke each task individually, and ultimately offers a reusable and composable API/toolbox that can be used by other Kubernetes bootstrap tools, by any IT automation tool or by an advanced user for creating custom clusters.
+-->
+[`kubeadm init phase`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) 命令允许用户分别调用每个任务,
+并最终提供可重用且可组合的 API 或工具箱,其他 Kubernetes 引导工具、任何 IT 自动化工具和高级用户都可以使用它用来创建的自定义集群。
+
+<!-- ### Preflight checks -->
+### 预检  {#preflight-checks}
+
+<!-- 
+Kubeadm executes a set of preflight checks before starting the init, with the aim to verify preconditions and avoid common cluster startup problems.
+The user can skip specific preflight checks or all of them with the `--ignore-preflight-errors` option. 
+-->
+Kubeadm 在启动 init 之前执行一组预检,目的是验证先决条件并避免常见的集群启动问题。
+用户可以使用 `--ignore-preflight-errors` 选项跳过特定的预检查或全部检查。
+
+<!--  
+- [warning] If the Kubernetes version to use (specified with the `--kubernetes-version` flag) is at least one minor version higher than the kubeadm CLI version.
+- Kubernetes system requirements:
+  - if running on linux:
+    - [error] if Kernel is older than the minimum required version
+    - [error] if required cgroups subsystem aren't in set up
+  - if using docker:
+    - [warning/error] if Docker service does not exist, if it is disabled, if it is not active.
+    - [error] if Docker endpoint does not exist or does not work
+    - [warning] if docker version is not in the list of validated docker versions
+  - If using other cri engine:
+    - [error] if crictl socket does not answer
+-->
+- [警告] 如果要使用的 Kubernetes 版本(由 `--kubernetes-version` 标志指定)比 kubeadm CLI 版本至少高一个小版本。
+- Kubernetes 系统要求:
+  - 如果在 linux上运行:
+    - [错误] 如果内核早于最低要求的版本
+    - [错误] 如果未设置所需的 cgroups 子系统
+  - 如果使用 docker:
+    - [警告/错误] 如果 Docker 服务不存在、被禁用或未激活。
+    - [错误] 如果 Docker 端点不存在或不起作用
+    - [警告] 如果 docker 版本不在经过验证的 docker 版本列表中
+  - 如果使用其他 cri 引擎:
+    - [错误] 如果 crictl 套接字未应答
+<!--  
+- [error] if user is not root
+- [error] if the machine hostname is not a valid DNS subdomain
+- [warning] if the host name cannot be reached via network lookup
+- [error] if kubelet version is lower that the minimum kubelet version supported by kubeadm (current minor -1)
+- [error] if kubelet version is at least one minor higher than the required controlplane version (unsupported version skew)
+- [warning] if kubelet service does not exist or if it is disabled
+- [warning] if firewalld is active
+- [error] if API server bindPort or ports 10250/10251/10252 are used
+- [Error] if `/etc/kubernetes/manifest` folder already exists and it is not empty
+- [Error] if `/proc/sys/net/bridge/bridge-nf-call-iptables` file does not exist/does not contain 1
+- [Error] if advertise address is ipv6 and `/proc/sys/net/bridge/bridge-nf-call-ip6tables` does not exist/does not contain 1.
+- [Error] if swap is on
+- [Error] if `conntrack`, `ip`, `iptables`,  `mount`, `nsenter` commands are not present in the command path
+- [warning] if `ebtables`, `ethtool`, `socat`, `tc`, `touch`, `crictl` commands are not present in the command path
+- [warning] if extra arg flags for API server, controller manager,  scheduler contains some invalid options
+- [warning] if connection to https://API.AdvertiseAddress:API.BindPort goes through proxy
+- [warning] if connection to services subnet goes through proxy (only first address checked)
+- [warning] if connection to Pods subnet goes through proxy (only first address checked)
+-->
+- [错误] 如果用户不是 root 用户
+- [错误] 如果机器主机名不是有效的 DNS 子域
+- [警告] 如果通过网络查找无法访问主机名
+- [错误] 如果 kubelet 版本低于 kubeadm 支持的最低 kubelet 版本(当前小版本 -1)
+- [错误] 如果 kubelet 版本比所需的控制平面板版本至少高一个小(不支持的版本偏斜)
+- [警告] 如果 kubelet 服务不存在或已被禁用
+- [警告] 如果 firewalld 处于活动状态
+- [错误] 如果使用 API ​​服务器绑定的端口或 10250/10251/10252 端口
+- [错误] 如果 `/etc/kubernetes/manifest` 文件夹已经存在并且不为空
+- [错误] 如果 `/proc/sys/net/bridge/bridge-nf-call-iptables` 文件不存在或不包含 1
+- [错误] 如果建议地址是 ipv6,并且 `/proc/sys/net/bridge/bridge-nf-call-ip6tables` 不存在或不包含 1
+- [错误] 如果启用了交换分区
+- [错误] 如果命令路径中没有 `conntrack`、`ip`、`iptables`、`mount`、`nsenter` 命令
+- [警告] 如果命令路径中没有 `ebtables`、`ethtool`、`socat`、`tc`、`touch`、`crictl` 命令
+- [警告] 如果 API 服务器、控制器管理器、调度程序的其他参数标志包含一些无效选项
+- [警告] 如果与 https://API.AdvertiseAddress:API.BindPort 的连接通过代理
+- [警告] 如果服务子网的连接通过代理(仅检查第一个地址)
+- [警告] 如果 Pod 子网的连接通过代理(仅检查第一个地址)
+<!-- 
+- If external etcd is provided:
+  - [Error] if etcd version is older than the minimum required version
+  - [Error] if etcd certificates or keys are specified, but not provided
+- If external etcd is NOT provided (and thus local etcd will be installed):
+  - [Error] if ports 2379 is used
+  - [Error] if Etcd.DataDir folder already exists and it is not empty
+- If authorization mode is ABAC:
+  - [Error] if abac_policy.json does not exist
+- If authorization mode is WebHook
+  - [Error] if webhook_authz.conf does not exist
+-->
+- 如果提供了外部 etcd:
+  - [错误] 如果 etcd 版本早于最低要求版本
+  - [错误] 如果指定了 etcd 证书或密钥,但无法找到
+- 如果未提供外部 etcd(因此将安装本地 etcd):
+  - [错误] 如果端口 2379 已被占用
+  - [错误] 如果 Etcd.DataDir 文件夹已经存在并且不为空
+- 如果授权模式为 ABAC:
+  - [错误] 如果 abac_policy.json 不存在
+- 如果授权方式为 WebHook
+  - [错误] 如果 webhook_authz.conf 不存在
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. Preflight checks can be invoked individually with the [`kubeadm init phase preflight`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-preflight) command
+-->
+1. 可以使用 [`kubeadm init phase preflight`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-preflight) 命令单独触发预检。
+
+
+<!-- ### Generate the necessary certificates -->
+### 生成必要的证书  {#generate-the-necessary-certificate}
+
+<!-- Kubeadm generates certificate and private key pairs for different purposes: -->
+Kubeadm 生成用于不同目的的证书和私钥对:
+
+ <!-- 
+ - A self signed certificate authority for the Kubernetes cluster saved into `ca.crt` file and `ca.key` private key file 
+ - A serving certificate for the API server, generated using `ca.crt` as the CA, and saved into `apiserver.crt` file with
+   its private key `apiserver.key`. This certificate should contain following alternative names:
+     - The Kubernetes service's internal clusterIP (the first address in the services CIDR, e.g. `10.96.0.1` if service subnet is `10.96.0.0/12`)
+     - Kubernetes DNS names, e.g.  `kubernetes.default.svc.cluster.local` if `--service-dns-domain` flag value is `cluster.local`, plus default DNS names `kubernetes.default.svc`, `kubernetes.default`, `kubernetes`
+     - The node-name
+     - The `--apiserver-advertise-address`
+     - Additional alternative names specified by the user
+ - A client certificate for the API server to connect to the kubelets securely, generated using `ca.crt` as the CA and saved into
+   `apiserver-kubelet-client.crt` file with its private key `apiserver-kubelet-client.key`.
+   This certificate should be in the `system:masters` organization
+ - A private key for signing ServiceAccount Tokens saved into `sa.key` file along with its public key `sa.pub`
+ - A certificate authority for the front proxy saved into `front-proxy-ca.crt` file with its key `front-proxy-ca.key`
+ - A client cert for the front proxy client, generated using `front-proxy-ca.crt` as the CA and saved into `front-proxy-client.crt` file
+   with its private key`front-proxy-client.key`
+-->
+ - Kubernetes 集群的自签名证书颁发机构已保存到 `ca.crt` 文件和 `ca.key` 私钥文件中
+ - 用于 API 服务器的服务证书,使用 `ca.crt` 作为 CA 生成,并将证书保存到 `apiserver.crt` 文件中,私钥保存到 `apiserver.key` 文件中
+   该证书应包含以下备用名称:
+    - Kubernetes 服务的内部 clusterIP(服务 CIDR 的第一个地址,例如:如果服务的子网是 `10.96.0.0/12`,则为 `10.96.0.1`)
+    - Kubernetes DNS 名称,例如:如果 `--service-dns-domain` 标志值是 `cluster.local`,则为 `kubernetes.default.svc.cluster.local`;
+      加上默认的 DNS 名称 `kubernetes.default.svc`、`kubernetes.default` 和 `kubernetes`,
+    - 节点名称
+    - `--apiserver-advertise-address`
+    - 用户指定的其他备用名称 
+  - API 服务器用于安全连接到 kubelet 的客户端证书,使用 `ca.crt` 作为 CA 生成,并保存到 `apiserver-kubelet-client.key`, 
+    私钥保存到 `apiserver-kubelet-client.crt` 文件中。该证书应该在 `system:masters` 组织中
+  - 用于签名 ServiceAccount 令牌的私钥保存到 `sa.key` 文件中,公钥保存到 `sa.pub` 文件中
+  - 用于前端代理的证书颁发机构保存到 `front-proxy-ca.crt` 文件中,私钥保存到 `front-proxy-ca.key` 文件中
+  - 前端代理客户端的客户端证书,使用 `front-proxy-ca.crt` 作为 CA 生成,并保存到 `front-proxy-client.crt` 文件中,
+    私钥保存到 `front-proxy-client.key` 文件中
+
+<!-- 
+Certificates are stored by default in `/etc/kubernetes/pki`, but this directory is configurable using the `--cert-dir` flag. 
+-->
+证书默认情况下存储在 `/etc/kubernetes/pki` 中,但是该目录可以使用 `--cert-dir` 标志进行配置。
+
+ <!-- Please note that: -->
+ 请注意:
+
+<!-- 
+1. If a given certificate and private key pair both exist, and its content is evaluated compliant with the above specs, the existing files will
+   be used and the generation phase for the given certificate skipped. This means the user can, for example, copy an existing CA to
+   `/etc/kubernetes/pki/ca.{crt,key}`, and then kubeadm will use those files for signing the rest of the certs.
+   See also [using custom certificates](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#custom-certificates)
+2. Only for the CA, it is possible to provide the `ca.crt` file but not the `ca.key` file, if all other certificates and kubeconfig files
+   already are in place kubeadm recognize this condition and activates the ExternalCA , which also implies the `csrsigner`controller in
+   controller-manager won't be started
+3. If kubeadm is running in [external CA mode](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode);
+   all the certificates must be provided by the user, because kubeadm cannot generate them by itself
+4. In case of kubeadm is executed in the `--dry-run` mode, certificates files are written in a temporary folder
+5. Certificate generation can be invoked individually with the [`kubeadm init phase certs all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) command
+-->
+1. 如果证书和私钥对都存在,并且其内容经过评估符合上述规范,将使用现有文件,并且跳过给定证书的生成阶段。
+  这意味着用户可以将现有的 CA 复制到 `/etc/kubernetes/pki/ca.{crt,key}`,kubeadm 将使用这些文件对其余证书进行签名。
+  请参阅[使用自定义证书](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#custom-certificates)
+2. 仅对 CA 来说,如果所有其他证书和 kubeconfig 文件都已就位,则可以只提供 `ca.crt` 文件,而不提供 `ca.key` 文件。
+   kubeadm 已经识别出这种情况并启用 ExternalCA,这也意味着了控制器管理器中的 `csrsigner` 控制器将不会启动
+3. 如果 kubeadm 在[外部 CA 模式](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode)下运行;
+   所有证书必须由用户提供,因为 kubeadm 无法自行生成它们
+4. 如果在 `--dry-run` 模式下执行 kubeadm,证书文件将写入一个临时文件夹中
+5. 可以使用 [`kubeadm init phase certs all`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) 
+   命令单独生成证书。
+
+<!-- ### Generate kubeconfig files for control plane components -->
+### 为控制平面组件生成 kubeconfig 文件  {#generate-kubeconfig-files-for-control-plane-components}
+
+<!-- 
+Kubeadm generates kubeconfig files with identities for control plane components:
+-->
+Kubeadm 生成具有用于控制平面组件身份标识的 kubeconfig 文件:
+
+<!--  
+- A kubeconfig file for the kubelet to use during TLS bootstrap - /etc/kubernetes/bootstrap-kubelet.conf. Inside this file there is a bootstrap-token or embedded client certificates for authenticating this node with the cluster.
+  This client cert should:
+    - Be in the `system:nodes` organization, as required by the [Node Authorization](/docs/reference/access-authn-authz/node/) module
+    - Have the Common Name (CN) `system:node:<hostname-lowercased>`
+- A kubeconfig file for controller-manager, `/etc/kubernetes/controller-manager.conf`; inside this file is embedded a client
+  certificate with controller-manager identity. This client cert should have the CN `system:kube-controller-manager`, as defined
+by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
+- A kubeconfig file for scheduler, `/etc/kubernetes/scheduler.conf`; inside this file is embedded a client certificate with scheduler identity.
+  This client cert should have the CN `system:kube-scheduler`, as defined by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
+-->
+- 供 kubelet 在 TLS 引导期间使用的 kubeconfig 文件——`/etc/kubernetes/bootstrap-kubelet.conf`。在此文件中,
+  有一个引导令牌或内嵌的客户端证书,向集群表明此节点身份。
+  此客户端证书应:
+    - 根据[节点鉴权](/zh/docs/reference/access-authn-authz/node/)模块的要求,属于 `system:nodes` 组织
+    - 具有通用名称(CN):`system:node:<hostname-lowercased>`
+- 控制器管理器的 kubeconfig 文件——`/etc/kubernetes/controller-manager.conf`;
+  在此文件中嵌入了一个具有控制器管理器身份标识的客户端证书。
+  此客户端证书应具有 CN:`system:kube-controller-manager`,
+  这是由 [RBAC 核心组件角色](/zh/docs/reference/access-authn-authz/rbac/#core-component-roles)默认定义的。
+- 调度器的 kubeconfig 文件——`/etc/kubernetes/scheduler.conf`;在此文件中嵌入了具有调度器身份标识的客户端证书。
+  此客户端证书应具有 CN:`system:kube-scheduler`,
+  这是由 [RBAC 核心组件角色](/zh/docs/reference/access-authn-authz/rbac/#core-component-roles)默认定义的。
+
+<!-- 
+Additionally, a kubeconfig file for kubeadm itself and the admin is generated and saved into the `/etc/kubernetes/admin.conf` file.
+The "admin" here is defined as the actual person(s) that is administering the cluster and wants to have full control (**root**) over the cluster.
+The embedded client certificate for admin should be in the `system:masters` organization, as defined by default
+[RBAC user facing role bindings](/docs/reference/access-authn-authz/rbac/#user-facing-roles). It should also include a
+CN. Kubeadm uses the `kubernetes-admin` CN.
+-->
+另外,一个用于 kubeadm 本身和 admin 的 kubeconfig 文件也被生成并保存到 `/etc/kubernetes/admin.conf` 文件中。
+此处的 admin 定义为正在管理集群并希望完全控制集群(**root**)的实际人员。
+内嵌的 admin 客户端证书应s  `system:masters` 组织的成员,
内嵌的 admin 客户端证书属于 `system:masters` 组织的成员,

howieyuen

comment created time in 5 days

Pull request review commentkubernetes/website

[zh] translate /docs/reference/setup-tools/kubeadm/implementation-detail

+---
+title: 实现细节
+content_type: concept
+weight: 100
+---
+<!--  
+---
+reviewers:
+- luxas
+- jbeda
+title: Implementation details
+content_type: concept
+weight: 100
+---
+-->
+<!-- overview -->
+
+{{< feature-state for_k8s_version="v1.10" state="stable" >}}
+
+<!--  
+`kubeadm init` and `kubeadm join` together provides a nice user experience for creating a best-practice but bare Kubernetes cluster from scratch.
+However, it might not be obvious _how_ kubeadm does that.
+-->
+`kubeadm init` 和 `kubeadm join` 结合在一起提供了良好的用户体验,因为从头开始创建实践最佳而配置最基本的 Kubernetes 集群。
+但是,kubeadm _如何_ 做到这一点可能并不明显。
+
+<!-- 
+This document provides additional details on what happen under the hood, 
+with the aim of sharing knowledge on Kubernetes cluster best practices. 
+-->
+本文档提供了更多幕后的详细信息,旨在分享有关 Kubernetes 集群最佳实践的知识。
+
+<!-- body -->
+<!-- ## Core design principles -->
+## 核心设计原则    {#core-design-principles}
+
+<!-- The cluster that `kubeadm init` and `kubeadm join` set up should be: -->
+`kubeadm init` 和 `kubeadm join` 设置的集群应为:
+
+<!-- 
+ - **Secure**: It should adopt latest best-practices like:
+   - enforcing RBAC
+   - using the Node Authorizer
+   - using secure communication between the control plane components
+   - using secure communication between the API server and the kubelets
+   - lock-down the kubelet API
+   - locking down access to the API for system components like the kube-proxy and CoreDNS
+   - locking down what a Bootstrap Token can access
+ - **Easy to use**: The user should not have to run anything more than a couple of commands:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **Extendable**:
+   - It should _not_ favor any particular network provider. Configuring the cluster network is out-of-scope
+   - It should provide the possibility to use a config file for customizing various parameters
+ -->
+ - **安全**:它应采用最新的最佳实践,例如:
+   - 应用 RBAC
+   - 使用节点鉴权机制(Node Authorizer)
+   - 在控制平面组件之间使用安全通信
+   - 在 API 服务器和 kubelet 之间使用安全通信
+   - 锁定 kubelet API
+   - 锁定对系统组件(例如 kube-proxy 和 CoreDNS)的 API 的访问
+   - 锁定启动引导令牌(Bootstrap Token)可以访问的内容
+ - **易用**:用户只需要运行几个命令即可:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **可扩展**:
+   - _不_ 应偏向任何特定的网络提供商。不涉及配置集群网络
+   - 应该可以使用配置文件来自定义各种参数
+
+<!-- ## Constants and well-known values and paths -->
+## 常量以及众所周知的值和路径  {#constants-and-well-known-values-and-paths}
+
+<!-- 
+In order to reduce complexity and to simplify development of higher level tools that build on top of kubeadm, it uses a
+limited set of constant values for well-known paths and file names.
+-->
+为了降低复杂性并简化基于 kubeadm 的高级工具的开发,对于众所周知的路径和文件名,它使用了一组有限的常量值。
+
+<!--  
+The Kubernetes directory `/etc/kubernetes` is a constant in the application, since it is clearly the given path
+in a majority of cases, and the most intuitive location; other constants paths and file names are:
+-->
+Kubernetes 目录 `/etc/kubernetes` 在应用程序中是一个常量,因为在大多数情况下它显然是给定的路径,并且是最直观的位置;
+其他路径常量和文件名有:
+
+<!--  
+- `/etc/kubernetes/manifests` as the path where kubelet should look for static Pod manifests. Names of static Pod manifests are:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` as the path where kubeconfig files with identities for control plane components are stored. Names of kubeconfig files are:
+    - `kubelet.conf` (`bootstrap-kubelet.conf` during TLS bootstrap)
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` for the cluster admin and kubeadm itself
+- Names of certificates and key files :
+    - `ca.crt`, `ca.key` for the Kubernetes certificate authority
+    - `apiserver.crt`, `apiserver.key` for the API server certificate
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` for the client certificate used by the API server to connect to the kubelets securely
+    - `sa.pub`, `sa.key` for the key used by the controller manager when signing ServiceAccount
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` for the front proxy certificate authority
+    - `front-proxy-client.crt`, `front-proxy-client.key` for the front proxy client
+-->
+- `/etc/kubernetes/manifests` 作为 kubelet 查找静态 Pod 清单的路径。静态 Pod 清单的名称为:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` 作为带有控制平面组件身份标识的 kubeconfig 文件的路径。kubeconfig 文件的名称为:
+    - `kubelet.conf` (在 TLS 引导时名称为 `bootstrap-kubelet.conf` )
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` 用于集群管理员和 kubeadm 本身
+- 证书和密钥文件的名称:
+    - `ca.crt`, `ca.key` 用于 Kubernetes 证书颁发机构
+    - `apiserver.crt`, `apiserver.key` 用于 API 服务器证书
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` 用于 API 服务器安全地连接到 kubelet 的客户端证书
+    - `sa.pub`, `sa.key` 用于签署 ServiceAccount 时 控制器管理器使用的密钥
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` 用于前端代理证书颁发机构
+    - `front-proxy-client.crt`, `front-proxy-client.key` 用于前端代理客户端
+
+<!-- ## kubeadm init workflow internal design -->
+## kubeadm init 工作流程内部设计  {#kubeadm-init-workflow-internal-design}
+
+<!--  
+The `kubeadm init` [internal workflow](/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow) consists of a sequence of atomic work tasks to perform,
+as described in `kubeadm init`.
+-->
+`kubeadm init` [内部工作流程](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow)包含一系列要执行的原子工作任务,
+如 `kubeadm init` 中所述。
+
+<!--  
+The [`kubeadm init phase`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) command allows users to invoke each task individually, and ultimately offers a reusable and composable API/toolbox that can be used by other Kubernetes bootstrap tools, by any IT automation tool or by an advanced user for creating custom clusters.
+-->
+[`kubeadm init phase`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) 命令允许用户分别调用每个任务,
+并最终提供可重用且可组合的 API 或工具箱,其他 Kubernetes 引导工具、任何 IT 自动化工具和高级用户都可以使用它用来创建的自定义集群。
+
+<!-- ### Preflight checks -->
+### 预检  {#preflight-checks}
+
+<!-- 
+Kubeadm executes a set of preflight checks before starting the init, with the aim to verify preconditions and avoid common cluster startup problems.
+The user can skip specific preflight checks or all of them with the `--ignore-preflight-errors` option. 
+-->
+Kubeadm 在启动 init 之前执行一组预检,目的是验证先决条件并避免常见的集群启动问题。
+用户可以使用 `--ignore-preflight-errors` 选项跳过特定的预检查或全部检查。
+
+<!--  
+- [warning] If the Kubernetes version to use (specified with the `--kubernetes-version` flag) is at least one minor version higher than the kubeadm CLI version.
+- Kubernetes system requirements:
+  - if running on linux:
+    - [error] if Kernel is older than the minimum required version
+    - [error] if required cgroups subsystem aren't in set up
+  - if using docker:
+    - [warning/error] if Docker service does not exist, if it is disabled, if it is not active.
+    - [error] if Docker endpoint does not exist or does not work
+    - [warning] if docker version is not in the list of validated docker versions
+  - If using other cri engine:
+    - [error] if crictl socket does not answer
+-->
+- [警告] 如果要使用的 Kubernetes 版本(由 `--kubernetes-version` 标志指定)比 kubeadm CLI 版本至少高一个小版本。
+- Kubernetes 系统要求:
+  - 如果在 linux上运行:
+    - [错误] 如果内核早于最低要求的版本
+    - [错误] 如果未设置所需的 cgroups 子系统
+  - 如果使用 docker:
+    - [警告/错误] 如果 Docker 服务不存在、被禁用或未激活。
+    - [错误] 如果 Docker 端点不存在或不起作用
+    - [警告] 如果 docker 版本不在经过验证的 docker 版本列表中
+  - 如果使用其他 cri 引擎:
+    - [错误] 如果 crictl 套接字未应答
+<!--  
+- [error] if user is not root
+- [error] if the machine hostname is not a valid DNS subdomain
+- [warning] if the host name cannot be reached via network lookup
+- [error] if kubelet version is lower that the minimum kubelet version supported by kubeadm (current minor -1)
+- [error] if kubelet version is at least one minor higher than the required controlplane version (unsupported version skew)
+- [warning] if kubelet service does not exist or if it is disabled
+- [warning] if firewalld is active
+- [error] if API server bindPort or ports 10250/10251/10252 are used
+- [Error] if `/etc/kubernetes/manifest` folder already exists and it is not empty
+- [Error] if `/proc/sys/net/bridge/bridge-nf-call-iptables` file does not exist/does not contain 1
+- [Error] if advertise address is ipv6 and `/proc/sys/net/bridge/bridge-nf-call-ip6tables` does not exist/does not contain 1.
+- [Error] if swap is on
+- [Error] if `conntrack`, `ip`, `iptables`,  `mount`, `nsenter` commands are not present in the command path
+- [warning] if `ebtables`, `ethtool`, `socat`, `tc`, `touch`, `crictl` commands are not present in the command path
+- [warning] if extra arg flags for API server, controller manager,  scheduler contains some invalid options
+- [warning] if connection to https://API.AdvertiseAddress:API.BindPort goes through proxy
+- [warning] if connection to services subnet goes through proxy (only first address checked)
+- [warning] if connection to Pods subnet goes through proxy (only first address checked)
+-->
+- [错误] 如果用户不是 root 用户
+- [错误] 如果机器主机名不是有效的 DNS 子域
+- [警告] 如果通过网络查找无法访问主机名
+- [错误] 如果 kubelet 版本低于 kubeadm 支持的最低 kubelet 版本(当前小版本 -1)
+- [错误] 如果 kubelet 版本比所需的控制平面板版本至少高一个小(不支持的版本偏斜)
+- [警告] 如果 kubelet 服务不存在或已被禁用
+- [警告] 如果 firewalld 处于活动状态
+- [错误] 如果使用 API ​​服务器绑定的端口或 10250/10251/10252 端口
+- [错误] 如果 `/etc/kubernetes/manifest` 文件夹已经存在并且不为空
+- [错误] 如果 `/proc/sys/net/bridge/bridge-nf-call-iptables` 文件不存在或不包含 1
+- [错误] 如果建议地址是 ipv6,并且 `/proc/sys/net/bridge/bridge-nf-call-ip6tables` 不存在或不包含 1
+- [错误] 如果启用了交换分区
+- [错误] 如果命令路径中没有 `conntrack`、`ip`、`iptables`、`mount`、`nsenter` 命令
+- [警告] 如果命令路径中没有 `ebtables`、`ethtool`、`socat`、`tc`、`touch`、`crictl` 命令
+- [警告] 如果 API 服务器、控制器管理器、调度程序的其他参数标志包含一些无效选项
+- [警告] 如果与 https://API.AdvertiseAddress:API.BindPort 的连接通过代理
+- [警告] 如果服务子网的连接通过代理(仅检查第一个地址)
+- [警告] 如果 Pod 子网的连接通过代理(仅检查第一个地址)
+<!-- 
+- If external etcd is provided:
+  - [Error] if etcd version is older than the minimum required version
+  - [Error] if etcd certificates or keys are specified, but not provided
+- If external etcd is NOT provided (and thus local etcd will be installed):
+  - [Error] if ports 2379 is used
+  - [Error] if Etcd.DataDir folder already exists and it is not empty
+- If authorization mode is ABAC:
+  - [Error] if abac_policy.json does not exist
+- If authorization mode is WebHook
+  - [Error] if webhook_authz.conf does not exist
+-->
+- 如果提供了外部 etcd:
+  - [错误] 如果 etcd 版本早于最低要求版本
+  - [错误] 如果指定了 etcd 证书或密钥,但无法找到
+- 如果未提供外部 etcd(因此将安装本地 etcd):
+  - [错误] 如果端口 2379 已被占用
+  - [错误] 如果 Etcd.DataDir 文件夹已经存在并且不为空
+- 如果授权模式为 ABAC:
+  - [错误] 如果 abac_policy.json 不存在
+- 如果授权方式为 WebHook
+  - [错误] 如果 webhook_authz.conf 不存在
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. Preflight checks can be invoked individually with the [`kubeadm init phase preflight`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-preflight) command
+-->
+1. 可以使用 [`kubeadm init phase preflight`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-preflight) 命令单独触发预检。
+
+
+<!-- ### Generate the necessary certificates -->
+### 生成必要的证书  {#generate-the-necessary-certificate}
+
+<!-- Kubeadm generates certificate and private key pairs for different purposes: -->
+Kubeadm 生成用于不同目的的证书和私钥对:
+
+ <!-- 
+ - A self signed certificate authority for the Kubernetes cluster saved into `ca.crt` file and `ca.key` private key file 
+ - A serving certificate for the API server, generated using `ca.crt` as the CA, and saved into `apiserver.crt` file with
+   its private key `apiserver.key`. This certificate should contain following alternative names:
+     - The Kubernetes service's internal clusterIP (the first address in the services CIDR, e.g. `10.96.0.1` if service subnet is `10.96.0.0/12`)
+     - Kubernetes DNS names, e.g.  `kubernetes.default.svc.cluster.local` if `--service-dns-domain` flag value is `cluster.local`, plus default DNS names `kubernetes.default.svc`, `kubernetes.default`, `kubernetes`
+     - The node-name
+     - The `--apiserver-advertise-address`
+     - Additional alternative names specified by the user
+ - A client certificate for the API server to connect to the kubelets securely, generated using `ca.crt` as the CA and saved into
+   `apiserver-kubelet-client.crt` file with its private key `apiserver-kubelet-client.key`.
+   This certificate should be in the `system:masters` organization
+ - A private key for signing ServiceAccount Tokens saved into `sa.key` file along with its public key `sa.pub`
+ - A certificate authority for the front proxy saved into `front-proxy-ca.crt` file with its key `front-proxy-ca.key`
+ - A client cert for the front proxy client, generated using `front-proxy-ca.crt` as the CA and saved into `front-proxy-client.crt` file
+   with its private key`front-proxy-client.key`
+-->
+ - Kubernetes 集群的自签名证书颁发机构已保存到 `ca.crt` 文件和 `ca.key` 私钥文件中
+ - 用于 API 服务器的服务证书,使用 `ca.crt` 作为 CA 生成,并将证书保存到 `apiserver.crt` 文件中,私钥保存到 `apiserver.key` 文件中
+   该证书应包含以下备用名称:
+    - Kubernetes 服务的内部 clusterIP(服务 CIDR 的第一个地址,例如:如果服务的子网是 `10.96.0.0/12`,则为 `10.96.0.1`)
+    - Kubernetes DNS 名称,例如:如果 `--service-dns-domain` 标志值是 `cluster.local`,则为 `kubernetes.default.svc.cluster.local`;
+      加上默认的 DNS 名称 `kubernetes.default.svc`、`kubernetes.default` 和 `kubernetes`,
+    - 节点名称
+    - `--apiserver-advertise-address`
+    - 用户指定的其他备用名称 
+  - API 服务器用于安全连接到 kubelet 的客户端证书,使用 `ca.crt` 作为 CA 生成,并保存到 `apiserver-kubelet-client.key`, 
+    私钥保存到 `apiserver-kubelet-client.crt` 文件中。该证书应该在 `system:masters` 组织中
+  - 用于签名 ServiceAccount 令牌的私钥保存到 `sa.key` 文件中,公钥保存到 `sa.pub` 文件中
+  - 用于前端代理的证书颁发机构保存到 `front-proxy-ca.crt` 文件中,私钥保存到 `front-proxy-ca.key` 文件中
+  - 前端代理客户端的客户端证书,使用 `front-proxy-ca.crt` 作为 CA 生成,并保存到 `front-proxy-client.crt` 文件中,
+    私钥保存到 `front-proxy-client.key` 文件中
+
+<!-- 
+Certificates are stored by default in `/etc/kubernetes/pki`, but this directory is configurable using the `--cert-dir` flag. 
+-->
+证书默认情况下存储在 `/etc/kubernetes/pki` 中,但是该目录可以使用 `--cert-dir` 标志进行配置。
+
+ <!-- Please note that: -->
+ 请注意:
+
+<!-- 
+1. If a given certificate and private key pair both exist, and its content is evaluated compliant with the above specs, the existing files will
+   be used and the generation phase for the given certificate skipped. This means the user can, for example, copy an existing CA to
+   `/etc/kubernetes/pki/ca.{crt,key}`, and then kubeadm will use those files for signing the rest of the certs.
+   See also [using custom certificates](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#custom-certificates)
+2. Only for the CA, it is possible to provide the `ca.crt` file but not the `ca.key` file, if all other certificates and kubeconfig files
+   already are in place kubeadm recognize this condition and activates the ExternalCA , which also implies the `csrsigner`controller in
+   controller-manager won't be started
+3. If kubeadm is running in [external CA mode](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode);
+   all the certificates must be provided by the user, because kubeadm cannot generate them by itself
+4. In case of kubeadm is executed in the `--dry-run` mode, certificates files are written in a temporary folder
+5. Certificate generation can be invoked individually with the [`kubeadm init phase certs all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) command
+-->
+1. 如果证书和私钥对都存在,并且其内容经过评估符合上述规范,将使用现有文件,并且跳过给定证书的生成阶段。
+  这意味着用户可以将现有的 CA 复制到 `/etc/kubernetes/pki/ca.{crt,key}`,kubeadm 将使用这些文件对其余证书进行签名。
+  请参阅[使用自定义证书](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#custom-certificates)
+2. 仅对 CA 来说,如果所有其他证书和 kubeconfig 文件都已就位,则可以只提供 `ca.crt` 文件,而不提供 `ca.key` 文件。
+   kubeadm 已经识别出这种情况并启用 ExternalCA,这也意味着了控制器管理器中的 `csrsigner` 控制器将不会启动
+3. 如果 kubeadm 在[外部 CA 模式](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode)下运行;
+   所有证书必须由用户提供,因为 kubeadm 无法自行生成它们
+4. 如果在 `--dry-run` 模式下执行 kubeadm,证书文件将写入一个临时文件夹中
+5. 可以使用 [`kubeadm init phase certs all`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) 
+   命令单独生成证书。
+
+<!-- ### Generate kubeconfig files for control plane components -->
+### 为控制平面组件生成 kubeconfig 文件  {#generate-kubeconfig-files-for-control-plane-components}
+
+<!-- 
+Kubeadm generates kubeconfig files with identities for control plane components:
+-->
+Kubeadm 生成具有用于控制平面组件身份标识的 kubeconfig 文件:
+
+<!--  
+- A kubeconfig file for the kubelet to use during TLS bootstrap - /etc/kubernetes/bootstrap-kubelet.conf. Inside this file there is a bootstrap-token or embedded client certificates for authenticating this node with the cluster.
+  This client cert should:
+    - Be in the `system:nodes` organization, as required by the [Node Authorization](/docs/reference/access-authn-authz/node/) module
+    - Have the Common Name (CN) `system:node:<hostname-lowercased>`
+- A kubeconfig file for controller-manager, `/etc/kubernetes/controller-manager.conf`; inside this file is embedded a client
+  certificate with controller-manager identity. This client cert should have the CN `system:kube-controller-manager`, as defined
+by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
+- A kubeconfig file for scheduler, `/etc/kubernetes/scheduler.conf`; inside this file is embedded a client certificate with scheduler identity.
+  This client cert should have the CN `system:kube-scheduler`, as defined by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
+-->
+- 供 kubelet 在 TLS 引导期间使用的 kubeconfig 文件——`/etc/kubernetes/bootstrap-kubelet.conf`。在此文件中,
+  有一个引导令牌或内嵌的客户端证书,向集群表明此节点身份。
+  此客户端证书应:
+    - 根据[节点鉴权](/zh/docs/reference/access-authn-authz/node/)模块的要求,属于 `system:nodes` 组织
+    - 具有通用名称(CN):`system:node:<hostname-lowercased>`
+- 控制器管理器的 kubeconfig 文件——`/etc/kubernetes/controller-manager.conf`;
- 控制器管理器的 kubeconfig 文件 - `/etc/kubernetes/controller-manager.conf`;

howieyuen

comment created time in 5 days

Pull request review commentkubernetes/website

[zh] translate /docs/reference/setup-tools/kubeadm/implementation-detail

+---
+title: 实现细节
+content_type: concept
+weight: 100
+---
+<!--  
+---
+reviewers:
+- luxas
+- jbeda
+title: Implementation details
+content_type: concept
+weight: 100
+---
+-->
+<!-- overview -->
+
+{{< feature-state for_k8s_version="v1.10" state="stable" >}}
+
+<!--  
+`kubeadm init` and `kubeadm join` together provides a nice user experience for creating a best-practice but bare Kubernetes cluster from scratch.
+However, it might not be obvious _how_ kubeadm does that.
+-->
+`kubeadm init` 和 `kubeadm join` 结合在一起提供了良好的用户体验,因为从头开始创建实践最佳而配置最基本的 Kubernetes 集群。
+但是,kubeadm _如何_ 做到这一点可能并不明显。
+
+<!-- 
+This document provides additional details on what happen under the hood, 
+with the aim of sharing knowledge on Kubernetes cluster best practices. 
+-->
+本文档提供了更多幕后的详细信息,旨在分享有关 Kubernetes 集群最佳实践的知识。
+
+<!-- body -->
+<!-- ## Core design principles -->
+## 核心设计原则    {#core-design-principles}
+
+<!-- The cluster that `kubeadm init` and `kubeadm join` set up should be: -->
+`kubeadm init` 和 `kubeadm join` 设置的集群应为:
+
+<!-- 
+ - **Secure**: It should adopt latest best-practices like:
+   - enforcing RBAC
+   - using the Node Authorizer
+   - using secure communication between the control plane components
+   - using secure communication between the API server and the kubelets
+   - lock-down the kubelet API
+   - locking down access to the API for system components like the kube-proxy and CoreDNS
+   - locking down what a Bootstrap Token can access
+ - **Easy to use**: The user should not have to run anything more than a couple of commands:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **Extendable**:
+   - It should _not_ favor any particular network provider. Configuring the cluster network is out-of-scope
+   - It should provide the possibility to use a config file for customizing various parameters
+ -->
+ - **安全**:它应采用最新的最佳实践,例如:
+   - 应用 RBAC
+   - 使用节点鉴权机制(Node Authorizer)
+   - 在控制平面组件之间使用安全通信
+   - 在 API 服务器和 kubelet 之间使用安全通信
+   - 锁定 kubelet API
+   - 锁定对系统组件(例如 kube-proxy 和 CoreDNS)的 API 的访问
+   - 锁定启动引导令牌(Bootstrap Token)可以访问的内容
+ - **易用**:用户只需要运行几个命令即可:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **可扩展**:
+   - _不_ 应偏向任何特定的网络提供商。不涉及配置集群网络
+   - 应该可以使用配置文件来自定义各种参数
+
+<!-- ## Constants and well-known values and paths -->
+## 常量以及众所周知的值和路径  {#constants-and-well-known-values-and-paths}
+
+<!-- 
+In order to reduce complexity and to simplify development of higher level tools that build on top of kubeadm, it uses a
+limited set of constant values for well-known paths and file names.
+-->
+为了降低复杂性并简化基于 kubeadm 的高级工具的开发,对于众所周知的路径和文件名,它使用了一组有限的常量值。
+
+<!--  
+The Kubernetes directory `/etc/kubernetes` is a constant in the application, since it is clearly the given path
+in a majority of cases, and the most intuitive location; other constants paths and file names are:
+-->
+Kubernetes 目录 `/etc/kubernetes` 在应用程序中是一个常量,因为在大多数情况下它显然是给定的路径,并且是最直观的位置;
+其他路径常量和文件名有:
+
+<!--  
+- `/etc/kubernetes/manifests` as the path where kubelet should look for static Pod manifests. Names of static Pod manifests are:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` as the path where kubeconfig files with identities for control plane components are stored. Names of kubeconfig files are:
+    - `kubelet.conf` (`bootstrap-kubelet.conf` during TLS bootstrap)
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` for the cluster admin and kubeadm itself
+- Names of certificates and key files :
+    - `ca.crt`, `ca.key` for the Kubernetes certificate authority
+    - `apiserver.crt`, `apiserver.key` for the API server certificate
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` for the client certificate used by the API server to connect to the kubelets securely
+    - `sa.pub`, `sa.key` for the key used by the controller manager when signing ServiceAccount
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` for the front proxy certificate authority
+    - `front-proxy-client.crt`, `front-proxy-client.key` for the front proxy client
+-->
+- `/etc/kubernetes/manifests` 作为 kubelet 查找静态 Pod 清单的路径。静态 Pod 清单的名称为:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` 作为带有控制平面组件身份标识的 kubeconfig 文件的路径。kubeconfig 文件的名称为:
+    - `kubelet.conf` (在 TLS 引导时名称为 `bootstrap-kubelet.conf` )
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` 用于集群管理员和 kubeadm 本身
+- 证书和密钥文件的名称:
+    - `ca.crt`, `ca.key` 用于 Kubernetes 证书颁发机构
+    - `apiserver.crt`, `apiserver.key` 用于 API 服务器证书
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` 用于 API 服务器安全地连接到 kubelet 的客户端证书
+    - `sa.pub`, `sa.key` 用于签署 ServiceAccount 时 控制器管理器使用的密钥
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` 用于前端代理证书颁发机构
+    - `front-proxy-client.crt`, `front-proxy-client.key` 用于前端代理客户端
+
+<!-- ## kubeadm init workflow internal design -->
+## kubeadm init 工作流程内部设计  {#kubeadm-init-workflow-internal-design}
+
+<!--  
+The `kubeadm init` [internal workflow](/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow) consists of a sequence of atomic work tasks to perform,
+as described in `kubeadm init`.
+-->
+`kubeadm init` [内部工作流程](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow)包含一系列要执行的原子工作任务,
+如 `kubeadm init` 中所述。
+
+<!--  
+The [`kubeadm init phase`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) command allows users to invoke each task individually, and ultimately offers a reusable and composable API/toolbox that can be used by other Kubernetes bootstrap tools, by any IT automation tool or by an advanced user for creating custom clusters.
+-->
+[`kubeadm init phase`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) 命令允许用户分别调用每个任务,
+并最终提供可重用且可组合的 API 或工具箱,其他 Kubernetes 引导工具、任何 IT 自动化工具和高级用户都可以使用它用来创建的自定义集群。
+
+<!-- ### Preflight checks -->
+### 预检  {#preflight-checks}
+
+<!-- 
+Kubeadm executes a set of preflight checks before starting the init, with the aim to verify preconditions and avoid common cluster startup problems.
+The user can skip specific preflight checks or all of them with the `--ignore-preflight-errors` option. 
+-->
+Kubeadm 在启动 init 之前执行一组预检,目的是验证先决条件并避免常见的集群启动问题。
+用户可以使用 `--ignore-preflight-errors` 选项跳过特定的预检查或全部检查。
+
+<!--  
+- [warning] If the Kubernetes version to use (specified with the `--kubernetes-version` flag) is at least one minor version higher than the kubeadm CLI version.
+- Kubernetes system requirements:
+  - if running on linux:
+    - [error] if Kernel is older than the minimum required version
+    - [error] if required cgroups subsystem aren't in set up
+  - if using docker:
+    - [warning/error] if Docker service does not exist, if it is disabled, if it is not active.
+    - [error] if Docker endpoint does not exist or does not work
+    - [warning] if docker version is not in the list of validated docker versions
+  - If using other cri engine:
+    - [error] if crictl socket does not answer
+-->
+- [警告] 如果要使用的 Kubernetes 版本(由 `--kubernetes-version` 标志指定)比 kubeadm CLI 版本至少高一个小版本。
+- Kubernetes 系统要求:
+  - 如果在 linux上运行:
+    - [错误] 如果内核早于最低要求的版本
+    - [错误] 如果未设置所需的 cgroups 子系统
+  - 如果使用 docker:
+    - [警告/错误] 如果 Docker 服务不存在、被禁用或未激活。
+    - [错误] 如果 Docker 端点不存在或不起作用
+    - [警告] 如果 docker 版本不在经过验证的 docker 版本列表中
+  - 如果使用其他 cri 引擎:
+    - [错误] 如果 crictl 套接字未应答
+<!--  
+- [error] if user is not root
+- [error] if the machine hostname is not a valid DNS subdomain
+- [warning] if the host name cannot be reached via network lookup
+- [error] if kubelet version is lower that the minimum kubelet version supported by kubeadm (current minor -1)
+- [error] if kubelet version is at least one minor higher than the required controlplane version (unsupported version skew)
+- [warning] if kubelet service does not exist or if it is disabled
+- [warning] if firewalld is active
+- [error] if API server bindPort or ports 10250/10251/10252 are used
+- [Error] if `/etc/kubernetes/manifest` folder already exists and it is not empty
+- [Error] if `/proc/sys/net/bridge/bridge-nf-call-iptables` file does not exist/does not contain 1
+- [Error] if advertise address is ipv6 and `/proc/sys/net/bridge/bridge-nf-call-ip6tables` does not exist/does not contain 1.
+- [Error] if swap is on
+- [Error] if `conntrack`, `ip`, `iptables`,  `mount`, `nsenter` commands are not present in the command path
+- [warning] if `ebtables`, `ethtool`, `socat`, `tc`, `touch`, `crictl` commands are not present in the command path
+- [warning] if extra arg flags for API server, controller manager,  scheduler contains some invalid options
+- [warning] if connection to https://API.AdvertiseAddress:API.BindPort goes through proxy
+- [warning] if connection to services subnet goes through proxy (only first address checked)
+- [warning] if connection to Pods subnet goes through proxy (only first address checked)
+-->
+- [错误] 如果用户不是 root 用户
+- [错误] 如果机器主机名不是有效的 DNS 子域
+- [警告] 如果通过网络查找无法访问主机名
+- [错误] 如果 kubelet 版本低于 kubeadm 支持的最低 kubelet 版本(当前小版本 -1)
+- [错误] 如果 kubelet 版本比所需的控制平面板版本至少高一个小(不支持的版本偏斜)
+- [警告] 如果 kubelet 服务不存在或已被禁用
+- [警告] 如果 firewalld 处于活动状态
+- [错误] 如果使用 API ​​服务器绑定的端口或 10250/10251/10252 端口
+- [错误] 如果 `/etc/kubernetes/manifest` 文件夹已经存在并且不为空
+- [错误] 如果 `/proc/sys/net/bridge/bridge-nf-call-iptables` 文件不存在或不包含 1
+- [错误] 如果建议地址是 ipv6,并且 `/proc/sys/net/bridge/bridge-nf-call-ip6tables` 不存在或不包含 1
+- [错误] 如果启用了交换分区
+- [错误] 如果命令路径中没有 `conntrack`、`ip`、`iptables`、`mount`、`nsenter` 命令
+- [警告] 如果命令路径中没有 `ebtables`、`ethtool`、`socat`、`tc`、`touch`、`crictl` 命令
+- [警告] 如果 API 服务器、控制器管理器、调度程序的其他参数标志包含一些无效选项
+- [警告] 如果与 https://API.AdvertiseAddress:API.BindPort 的连接通过代理
+- [警告] 如果服务子网的连接通过代理(仅检查第一个地址)
+- [警告] 如果 Pod 子网的连接通过代理(仅检查第一个地址)
+<!-- 
+- If external etcd is provided:
+  - [Error] if etcd version is older than the minimum required version
+  - [Error] if etcd certificates or keys are specified, but not provided
+- If external etcd is NOT provided (and thus local etcd will be installed):
+  - [Error] if ports 2379 is used
+  - [Error] if Etcd.DataDir folder already exists and it is not empty
+- If authorization mode is ABAC:
+  - [Error] if abac_policy.json does not exist
+- If authorization mode is WebHook
+  - [Error] if webhook_authz.conf does not exist
+-->
+- 如果提供了外部 etcd:
+  - [错误] 如果 etcd 版本早于最低要求版本
+  - [错误] 如果指定了 etcd 证书或密钥,但无法找到
+- 如果未提供外部 etcd(因此将安装本地 etcd):
+  - [错误] 如果端口 2379 已被占用
+  - [错误] 如果 Etcd.DataDir 文件夹已经存在并且不为空
+- 如果授权模式为 ABAC:
+  - [错误] 如果 abac_policy.json 不存在
+- 如果授权方式为 WebHook
+  - [错误] 如果 webhook_authz.conf 不存在
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. Preflight checks can be invoked individually with the [`kubeadm init phase preflight`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-preflight) command
+-->
+1. 可以使用 [`kubeadm init phase preflight`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-preflight) 命令单独触发预检。
+
+
+<!-- ### Generate the necessary certificates -->
+### 生成必要的证书  {#generate-the-necessary-certificate}
+
+<!-- Kubeadm generates certificate and private key pairs for different purposes: -->
+Kubeadm 生成用于不同目的的证书和私钥对:
+
+ <!-- 
+ - A self signed certificate authority for the Kubernetes cluster saved into `ca.crt` file and `ca.key` private key file 
+ - A serving certificate for the API server, generated using `ca.crt` as the CA, and saved into `apiserver.crt` file with
+   its private key `apiserver.key`. This certificate should contain following alternative names:
+     - The Kubernetes service's internal clusterIP (the first address in the services CIDR, e.g. `10.96.0.1` if service subnet is `10.96.0.0/12`)
+     - Kubernetes DNS names, e.g.  `kubernetes.default.svc.cluster.local` if `--service-dns-domain` flag value is `cluster.local`, plus default DNS names `kubernetes.default.svc`, `kubernetes.default`, `kubernetes`
+     - The node-name
+     - The `--apiserver-advertise-address`
+     - Additional alternative names specified by the user
+ - A client certificate for the API server to connect to the kubelets securely, generated using `ca.crt` as the CA and saved into
+   `apiserver-kubelet-client.crt` file with its private key `apiserver-kubelet-client.key`.
+   This certificate should be in the `system:masters` organization
+ - A private key for signing ServiceAccount Tokens saved into `sa.key` file along with its public key `sa.pub`
+ - A certificate authority for the front proxy saved into `front-proxy-ca.crt` file with its key `front-proxy-ca.key`
+ - A client cert for the front proxy client, generated using `front-proxy-ca.crt` as the CA and saved into `front-proxy-client.crt` file
+   with its private key`front-proxy-client.key`
+-->
+ - Kubernetes 集群的自签名证书颁发机构已保存到 `ca.crt` 文件和 `ca.key` 私钥文件中
+ - 用于 API 服务器的服务证书,使用 `ca.crt` 作为 CA 生成,并将证书保存到 `apiserver.crt` 文件中,私钥保存到 `apiserver.key` 文件中
+   该证书应包含以下备用名称:
+    - Kubernetes 服务的内部 clusterIP(服务 CIDR 的第一个地址,例如:如果服务的子网是 `10.96.0.0/12`,则为 `10.96.0.1`)
+    - Kubernetes DNS 名称,例如:如果 `--service-dns-domain` 标志值是 `cluster.local`,则为 `kubernetes.default.svc.cluster.local`;
+      加上默认的 DNS 名称 `kubernetes.default.svc`、`kubernetes.default` 和 `kubernetes`,
+    - 节点名称
+    - `--apiserver-advertise-address`
+    - 用户指定的其他备用名称 
+  - API 服务器用于安全连接到 kubelet 的客户端证书,使用 `ca.crt` 作为 CA 生成,并保存到 `apiserver-kubelet-client.key`, 
+    私钥保存到 `apiserver-kubelet-client.crt` 文件中。该证书应该在 `system:masters` 组织中
+  - 用于签名 ServiceAccount 令牌的私钥保存到 `sa.key` 文件中,公钥保存到 `sa.pub` 文件中
+  - 用于前端代理的证书颁发机构保存到 `front-proxy-ca.crt` 文件中,私钥保存到 `front-proxy-ca.key` 文件中
+  - 前端代理客户端的客户端证书,使用 `front-proxy-ca.crt` 作为 CA 生成,并保存到 `front-proxy-client.crt` 文件中,
+    私钥保存到 `front-proxy-client.key` 文件中
+
+<!-- 
+Certificates are stored by default in `/etc/kubernetes/pki`, but this directory is configurable using the `--cert-dir` flag. 
+-->
+证书默认情况下存储在 `/etc/kubernetes/pki` 中,但是该目录可以使用 `--cert-dir` 标志进行配置。
+
+ <!-- Please note that: -->
+ 请注意:
+
+<!-- 
+1. If a given certificate and private key pair both exist, and its content is evaluated compliant with the above specs, the existing files will
+   be used and the generation phase for the given certificate skipped. This means the user can, for example, copy an existing CA to
+   `/etc/kubernetes/pki/ca.{crt,key}`, and then kubeadm will use those files for signing the rest of the certs.
+   See also [using custom certificates](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#custom-certificates)
+2. Only for the CA, it is possible to provide the `ca.crt` file but not the `ca.key` file, if all other certificates and kubeconfig files
+   already are in place kubeadm recognize this condition and activates the ExternalCA , which also implies the `csrsigner`controller in
+   controller-manager won't be started
+3. If kubeadm is running in [external CA mode](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode);
+   all the certificates must be provided by the user, because kubeadm cannot generate them by itself
+4. In case of kubeadm is executed in the `--dry-run` mode, certificates files are written in a temporary folder
+5. Certificate generation can be invoked individually with the [`kubeadm init phase certs all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) command
+-->
+1. 如果证书和私钥对都存在,并且其内容经过评估符合上述规范,将使用现有文件,并且跳过给定证书的生成阶段。
+  这意味着用户可以将现有的 CA 复制到 `/etc/kubernetes/pki/ca.{crt,key}`,kubeadm 将使用这些文件对其余证书进行签名。
+  请参阅[使用自定义证书](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#custom-certificates)
+2. 仅对 CA 来说,如果所有其他证书和 kubeconfig 文件都已就位,则可以只提供 `ca.crt` 文件,而不提供 `ca.key` 文件。
+   kubeadm 已经识别出这种情况并启用 ExternalCA,这也意味着了控制器管理器中的 `csrsigner` 控制器将不会启动
+3. 如果 kubeadm 在[外部 CA 模式](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode)下运行;
+   所有证书必须由用户提供,因为 kubeadm 无法自行生成它们
+4. 如果在 `--dry-run` 模式下执行 kubeadm,证书文件将写入一个临时文件夹中
+5. 可以使用 [`kubeadm init phase certs all`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) 
+   命令单独生成证书。
+
+<!-- ### Generate kubeconfig files for control plane components -->
+### 为控制平面组件生成 kubeconfig 文件  {#generate-kubeconfig-files-for-control-plane-components}
+
+<!-- 
+Kubeadm generates kubeconfig files with identities for control plane components:
+-->
+Kubeadm 生成具有用于控制平面组件身份标识的 kubeconfig 文件:
+
+<!--  
+- A kubeconfig file for the kubelet to use during TLS bootstrap - /etc/kubernetes/bootstrap-kubelet.conf. Inside this file there is a bootstrap-token or embedded client certificates for authenticating this node with the cluster.
+  This client cert should:
+    - Be in the `system:nodes` organization, as required by the [Node Authorization](/docs/reference/access-authn-authz/node/) module
+    - Have the Common Name (CN) `system:node:<hostname-lowercased>`
+- A kubeconfig file for controller-manager, `/etc/kubernetes/controller-manager.conf`; inside this file is embedded a client
+  certificate with controller-manager identity. This client cert should have the CN `system:kube-controller-manager`, as defined
+by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
+- A kubeconfig file for scheduler, `/etc/kubernetes/scheduler.conf`; inside this file is embedded a client certificate with scheduler identity.
+  This client cert should have the CN `system:kube-scheduler`, as defined by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
+-->
+- 供 kubelet 在 TLS 引导期间使用的 kubeconfig 文件——`/etc/kubernetes/bootstrap-kubelet.conf`。在此文件中,
+  有一个引导令牌或内嵌的客户端证书,向集群表明此节点身份。
+  此客户端证书应:
+    - 根据[节点鉴权](/zh/docs/reference/access-authn-authz/node/)模块的要求,属于 `system:nodes` 组织
+    - 具有通用名称(CN):`system:node:<hostname-lowercased>`
+- 控制器管理器的 kubeconfig 文件——`/etc/kubernetes/controller-manager.conf`;
+  在此文件中嵌入了一个具有控制器管理器身份标识的客户端证书。
+  此客户端证书应具有 CN:`system:kube-controller-manager`,
+  这是由 [RBAC 核心组件角色](/zh/docs/reference/access-authn-authz/rbac/#core-component-roles)默认定义的。
+- 调度器的 kubeconfig 文件——`/etc/kubernetes/scheduler.conf`;在此文件中嵌入了具有调度器身份标识的客户端证书。
- 调度器的 kubeconfig 文件 - `/etc/kubernetes/scheduler.conf`;在此文件中嵌入了具有调度器身份标识的客户端证书。

howieyuen

comment created time in 5 days

Pull request review commentkubernetes/website

[zh] translate /docs/reference/setup-tools/kubeadm/implementation-detail

+---
+title: 实现细节
+content_type: concept
+weight: 100
+---
+<!--  
+---
+reviewers:
+- luxas
+- jbeda
+title: Implementation details
+content_type: concept
+weight: 100
+---
+-->
+<!-- overview -->
+
+{{< feature-state for_k8s_version="v1.10" state="stable" >}}
+
+<!--  
+`kubeadm init` and `kubeadm join` together provides a nice user experience for creating a best-practice but bare Kubernetes cluster from scratch.
+However, it might not be obvious _how_ kubeadm does that.
+-->
+`kubeadm init` 和 `kubeadm join` 结合在一起提供了良好的用户体验,因为从头开始创建实践最佳而配置最基本的 Kubernetes 集群。
+但是,kubeadm _如何_ 做到这一点可能并不明显。
+
+<!-- 
+This document provides additional details on what happen under the hood, 
+with the aim of sharing knowledge on Kubernetes cluster best practices. 
+-->
+本文档提供了更多幕后的详细信息,旨在分享有关 Kubernetes 集群最佳实践的知识。
+
+<!-- body -->
+<!-- ## Core design principles -->
+## 核心设计原则    {#core-design-principles}
+
+<!-- The cluster that `kubeadm init` and `kubeadm join` set up should be: -->
+`kubeadm init` 和 `kubeadm join` 设置的集群应为:
+
+<!-- 
+ - **Secure**: It should adopt latest best-practices like:
+   - enforcing RBAC
+   - using the Node Authorizer
+   - using secure communication between the control plane components
+   - using secure communication between the API server and the kubelets
+   - lock-down the kubelet API
+   - locking down access to the API for system components like the kube-proxy and CoreDNS
+   - locking down what a Bootstrap Token can access
+ - **Easy to use**: The user should not have to run anything more than a couple of commands:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **Extendable**:
+   - It should _not_ favor any particular network provider. Configuring the cluster network is out-of-scope
+   - It should provide the possibility to use a config file for customizing various parameters
+ -->
+ - **安全**:它应采用最新的最佳实践,例如:
+   - 应用 RBAC
+   - 使用节点鉴权机制(Node Authorizer)
+   - 在控制平面组件之间使用安全通信
+   - 在 API 服务器和 kubelet 之间使用安全通信
+   - 锁定 kubelet API
+   - 锁定对系统组件(例如 kube-proxy 和 CoreDNS)的 API 的访问
+   - 锁定启动引导令牌(Bootstrap Token)可以访问的内容
+ - **易用**:用户只需要运行几个命令即可:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **可扩展**:
+   - _不_ 应偏向任何特定的网络提供商。不涉及配置集群网络
+   - 应该可以使用配置文件来自定义各种参数
+
+<!-- ## Constants and well-known values and paths -->
+## 常量以及众所周知的值和路径  {#constants-and-well-known-values-and-paths}
+
+<!-- 
+In order to reduce complexity and to simplify development of higher level tools that build on top of kubeadm, it uses a
+limited set of constant values for well-known paths and file names.
+-->
+为了降低复杂性并简化基于 kubeadm 的高级工具的开发,对于众所周知的路径和文件名,它使用了一组有限的常量值。
+
+<!--  
+The Kubernetes directory `/etc/kubernetes` is a constant in the application, since it is clearly the given path
+in a majority of cases, and the most intuitive location; other constants paths and file names are:
+-->
+Kubernetes 目录 `/etc/kubernetes` 在应用程序中是一个常量,因为在大多数情况下它显然是给定的路径,并且是最直观的位置;
+其他路径常量和文件名有:
+
+<!--  
+- `/etc/kubernetes/manifests` as the path where kubelet should look for static Pod manifests. Names of static Pod manifests are:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` as the path where kubeconfig files with identities for control plane components are stored. Names of kubeconfig files are:
+    - `kubelet.conf` (`bootstrap-kubelet.conf` during TLS bootstrap)
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` for the cluster admin and kubeadm itself
+- Names of certificates and key files :
+    - `ca.crt`, `ca.key` for the Kubernetes certificate authority
+    - `apiserver.crt`, `apiserver.key` for the API server certificate
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` for the client certificate used by the API server to connect to the kubelets securely
+    - `sa.pub`, `sa.key` for the key used by the controller manager when signing ServiceAccount
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` for the front proxy certificate authority
+    - `front-proxy-client.crt`, `front-proxy-client.key` for the front proxy client
+-->
+- `/etc/kubernetes/manifests` 作为 kubelet 查找静态 Pod 清单的路径。静态 Pod 清单的名称为:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` 作为带有控制平面组件身份标识的 kubeconfig 文件的路径。kubeconfig 文件的名称为:
+    - `kubelet.conf` (在 TLS 引导时名称为 `bootstrap-kubelet.conf` )
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` 用于集群管理员和 kubeadm 本身
+- 证书和密钥文件的名称:
+    - `ca.crt`, `ca.key` 用于 Kubernetes 证书颁发机构
+    - `apiserver.crt`, `apiserver.key` 用于 API 服务器证书
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` 用于 API 服务器安全地连接到 kubelet 的客户端证书
+    - `sa.pub`, `sa.key` 用于签署 ServiceAccount 时 控制器管理器使用的密钥
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` 用于前端代理证书颁发机构
+    - `front-proxy-client.crt`, `front-proxy-client.key` 用于前端代理客户端
+
+<!-- ## kubeadm init workflow internal design -->
+## kubeadm init 工作流程内部设计  {#kubeadm-init-workflow-internal-design}
+
+<!--  
+The `kubeadm init` [internal workflow](/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow) consists of a sequence of atomic work tasks to perform,
+as described in `kubeadm init`.
+-->
+`kubeadm init` [内部工作流程](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow)包含一系列要执行的原子工作任务,
+如 `kubeadm init` 中所述。
+
+<!--  
+The [`kubeadm init phase`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) command allows users to invoke each task individually, and ultimately offers a reusable and composable API/toolbox that can be used by other Kubernetes bootstrap tools, by any IT automation tool or by an advanced user for creating custom clusters.
+-->
+[`kubeadm init phase`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) 命令允许用户分别调用每个任务,
+并最终提供可重用且可组合的 API 或工具箱,其他 Kubernetes 引导工具、任何 IT 自动化工具和高级用户都可以使用它用来创建的自定义集群。
+
+<!-- ### Preflight checks -->
+### 预检  {#preflight-checks}
+
+<!-- 
+Kubeadm executes a set of preflight checks before starting the init, with the aim to verify preconditions and avoid common cluster startup problems.
+The user can skip specific preflight checks or all of them with the `--ignore-preflight-errors` option. 
+-->
+Kubeadm 在启动 init 之前执行一组预检,目的是验证先决条件并避免常见的集群启动问题。
+用户可以使用 `--ignore-preflight-errors` 选项跳过特定的预检查或全部检查。
+
+<!--  
+- [warning] If the Kubernetes version to use (specified with the `--kubernetes-version` flag) is at least one minor version higher than the kubeadm CLI version.
+- Kubernetes system requirements:
+  - if running on linux:
+    - [error] if Kernel is older than the minimum required version
+    - [error] if required cgroups subsystem aren't in set up
+  - if using docker:
+    - [warning/error] if Docker service does not exist, if it is disabled, if it is not active.
+    - [error] if Docker endpoint does not exist or does not work
+    - [warning] if docker version is not in the list of validated docker versions
+  - If using other cri engine:
+    - [error] if crictl socket does not answer
+-->
+- [警告] 如果要使用的 Kubernetes 版本(由 `--kubernetes-version` 标志指定)比 kubeadm CLI 版本至少高一个小版本。
+- Kubernetes 系统要求:
+  - 如果在 linux上运行:
+    - [错误] 如果内核早于最低要求的版本
+    - [错误] 如果未设置所需的 cgroups 子系统
+  - 如果使用 docker:
+    - [警告/错误] 如果 Docker 服务不存在、被禁用或未激活。
+    - [错误] 如果 Docker 端点不存在或不起作用
+    - [警告] 如果 docker 版本不在经过验证的 docker 版本列表中
+  - 如果使用其他 cri 引擎:
+    - [错误] 如果 crictl 套接字未应答
+<!--  
+- [error] if user is not root
+- [error] if the machine hostname is not a valid DNS subdomain
+- [warning] if the host name cannot be reached via network lookup
+- [error] if kubelet version is lower that the minimum kubelet version supported by kubeadm (current minor -1)
+- [error] if kubelet version is at least one minor higher than the required controlplane version (unsupported version skew)
+- [warning] if kubelet service does not exist or if it is disabled
+- [warning] if firewalld is active
+- [error] if API server bindPort or ports 10250/10251/10252 are used
+- [Error] if `/etc/kubernetes/manifest` folder already exists and it is not empty
+- [Error] if `/proc/sys/net/bridge/bridge-nf-call-iptables` file does not exist/does not contain 1
+- [Error] if advertise address is ipv6 and `/proc/sys/net/bridge/bridge-nf-call-ip6tables` does not exist/does not contain 1.
+- [Error] if swap is on
+- [Error] if `conntrack`, `ip`, `iptables`,  `mount`, `nsenter` commands are not present in the command path
+- [warning] if `ebtables`, `ethtool`, `socat`, `tc`, `touch`, `crictl` commands are not present in the command path
+- [warning] if extra arg flags for API server, controller manager,  scheduler contains some invalid options
+- [warning] if connection to https://API.AdvertiseAddress:API.BindPort goes through proxy
+- [warning] if connection to services subnet goes through proxy (only first address checked)
+- [warning] if connection to Pods subnet goes through proxy (only first address checked)
+-->
+- [错误] 如果用户不是 root 用户
+- [错误] 如果机器主机名不是有效的 DNS 子域
+- [警告] 如果通过网络查找无法访问主机名
+- [错误] 如果 kubelet 版本低于 kubeadm 支持的最低 kubelet 版本(当前小版本 -1)
+- [错误] 如果 kubelet 版本比所需的控制平面板版本至少高一个小(不支持的版本偏斜)
+- [警告] 如果 kubelet 服务不存在或已被禁用
+- [警告] 如果 firewalld 处于活动状态
+- [错误] 如果使用 API ​​服务器绑定的端口或 10250/10251/10252 端口
+- [错误] 如果 `/etc/kubernetes/manifest` 文件夹已经存在并且不为空
+- [错误] 如果 `/proc/sys/net/bridge/bridge-nf-call-iptables` 文件不存在或不包含 1
+- [错误] 如果建议地址是 ipv6,并且 `/proc/sys/net/bridge/bridge-nf-call-ip6tables` 不存在或不包含 1
+- [错误] 如果启用了交换分区
+- [错误] 如果命令路径中没有 `conntrack`、`ip`、`iptables`、`mount`、`nsenter` 命令
+- [警告] 如果命令路径中没有 `ebtables`、`ethtool`、`socat`、`tc`、`touch`、`crictl` 命令
+- [警告] 如果 API 服务器、控制器管理器、调度程序的其他参数标志包含一些无效选项
+- [警告] 如果与 https://API.AdvertiseAddress:API.BindPort 的连接通过代理
+- [警告] 如果服务子网的连接通过代理(仅检查第一个地址)
+- [警告] 如果 Pod 子网的连接通过代理(仅检查第一个地址)
+<!-- 
+- If external etcd is provided:
+  - [Error] if etcd version is older than the minimum required version
+  - [Error] if etcd certificates or keys are specified, but not provided
+- If external etcd is NOT provided (and thus local etcd will be installed):
+  - [Error] if ports 2379 is used
+  - [Error] if Etcd.DataDir folder already exists and it is not empty
+- If authorization mode is ABAC:
+  - [Error] if abac_policy.json does not exist
+- If authorization mode is WebHook
+  - [Error] if webhook_authz.conf does not exist
+-->
+- 如果提供了外部 etcd:
+  - [错误] 如果 etcd 版本早于最低要求版本
+  - [错误] 如果指定了 etcd 证书或密钥,但无法找到
+- 如果未提供外部 etcd(因此将安装本地 etcd):
+  - [错误] 如果端口 2379 已被占用
+  - [错误] 如果 Etcd.DataDir 文件夹已经存在并且不为空
+- 如果授权模式为 ABAC:
+  - [错误] 如果 abac_policy.json 不存在
+- 如果授权方式为 WebHook
- 如果授权方式为 Webhook

howieyuen

comment created time in 5 days

Pull request review commentkubernetes/website

[zh] translate /docs/reference/setup-tools/kubeadm/implementation-detail

+---
+title: 实现细节
+content_type: concept
+weight: 100
+---
+<!--  
+---
+reviewers:
+- luxas
+- jbeda
+title: Implementation details
+content_type: concept
+weight: 100
+---
+-->
+<!-- overview -->
+
+{{< feature-state for_k8s_version="v1.10" state="stable" >}}
+
+<!--  
+`kubeadm init` and `kubeadm join` together provides a nice user experience for creating a best-practice but bare Kubernetes cluster from scratch.
+However, it might not be obvious _how_ kubeadm does that.
+-->
+`kubeadm init` 和 `kubeadm join` 结合在一起提供了良好的用户体验,因为从头开始创建实践最佳而配置最基本的 Kubernetes 集群。
+但是,kubeadm _如何_ 做到这一点可能并不明显。
+
+<!-- 
+This document provides additional details on what happen under the hood, 
+with the aim of sharing knowledge on Kubernetes cluster best practices. 
+-->
+本文档提供了更多幕后的详细信息,旨在分享有关 Kubernetes 集群最佳实践的知识。
+
+<!-- body -->
+<!-- ## Core design principles -->
+## 核心设计原则    {#core-design-principles}
+
+<!-- The cluster that `kubeadm init` and `kubeadm join` set up should be: -->
+`kubeadm init` 和 `kubeadm join` 设置的集群应为:
+
+<!-- 
+ - **Secure**: It should adopt latest best-practices like:
+   - enforcing RBAC
+   - using the Node Authorizer
+   - using secure communication between the control plane components
+   - using secure communication between the API server and the kubelets
+   - lock-down the kubelet API
+   - locking down access to the API for system components like the kube-proxy and CoreDNS
+   - locking down what a Bootstrap Token can access
+ - **Easy to use**: The user should not have to run anything more than a couple of commands:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **Extendable**:
+   - It should _not_ favor any particular network provider. Configuring the cluster network is out-of-scope
+   - It should provide the possibility to use a config file for customizing various parameters
+ -->
+ - **安全**:它应采用最新的最佳实践,例如:
+   - 应用 RBAC
+   - 使用节点鉴权机制(Node Authorizer)
+   - 在控制平面组件之间使用安全通信
+   - 在 API 服务器和 kubelet 之间使用安全通信
+   - 锁定 kubelet API
+   - 锁定对系统组件(例如 kube-proxy 和 CoreDNS)的 API 的访问
+   - 锁定启动引导令牌(Bootstrap Token)可以访问的内容
+ - **易用**:用户只需要运行几个命令即可:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **可扩展**:
+   - _不_ 应偏向任何特定的网络提供商。不涉及配置集群网络
+   - 应该可以使用配置文件来自定义各种参数
+
+<!-- ## Constants and well-known values and paths -->
+## 常量以及众所周知的值和路径  {#constants-and-well-known-values-and-paths}
+
+<!-- 
+In order to reduce complexity and to simplify development of higher level tools that build on top of kubeadm, it uses a
+limited set of constant values for well-known paths and file names.
+-->
+为了降低复杂性并简化基于 kubeadm 的高级工具的开发,对于众所周知的路径和文件名,它使用了一组有限的常量值。
+
+<!--  
+The Kubernetes directory `/etc/kubernetes` is a constant in the application, since it is clearly the given path
+in a majority of cases, and the most intuitive location; other constants paths and file names are:
+-->
+Kubernetes 目录 `/etc/kubernetes` 在应用程序中是一个常量,因为在大多数情况下它显然是给定的路径,并且是最直观的位置;
+其他路径常量和文件名有:
+
+<!--  
+- `/etc/kubernetes/manifests` as the path where kubelet should look for static Pod manifests. Names of static Pod manifests are:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` as the path where kubeconfig files with identities for control plane components are stored. Names of kubeconfig files are:
+    - `kubelet.conf` (`bootstrap-kubelet.conf` during TLS bootstrap)
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` for the cluster admin and kubeadm itself
+- Names of certificates and key files :
+    - `ca.crt`, `ca.key` for the Kubernetes certificate authority
+    - `apiserver.crt`, `apiserver.key` for the API server certificate
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` for the client certificate used by the API server to connect to the kubelets securely
+    - `sa.pub`, `sa.key` for the key used by the controller manager when signing ServiceAccount
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` for the front proxy certificate authority
+    - `front-proxy-client.crt`, `front-proxy-client.key` for the front proxy client
+-->
+- `/etc/kubernetes/manifests` 作为 kubelet 查找静态 Pod 清单的路径。静态 Pod 清单的名称为:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` 作为带有控制平面组件身份标识的 kubeconfig 文件的路径。kubeconfig 文件的名称为:
+    - `kubelet.conf` (在 TLS 引导时名称为 `bootstrap-kubelet.conf` )
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` 用于集群管理员和 kubeadm 本身
+- 证书和密钥文件的名称:
+    - `ca.crt`, `ca.key` 用于 Kubernetes 证书颁发机构
+    - `apiserver.crt`, `apiserver.key` 用于 API 服务器证书
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` 用于 API 服务器安全地连接到 kubelet 的客户端证书
+    - `sa.pub`, `sa.key` 用于签署 ServiceAccount 时 控制器管理器使用的密钥
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` 用于前端代理证书颁发机构
+    - `front-proxy-client.crt`, `front-proxy-client.key` 用于前端代理客户端
+
+<!-- ## kubeadm init workflow internal design -->
+## kubeadm init 工作流程内部设计  {#kubeadm-init-workflow-internal-design}
+
+<!--  
+The `kubeadm init` [internal workflow](/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow) consists of a sequence of atomic work tasks to perform,
+as described in `kubeadm init`.
+-->
+`kubeadm init` [内部工作流程](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow)包含一系列要执行的原子工作任务,
+如 `kubeadm init` 中所述。
+
+<!--  
+The [`kubeadm init phase`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) command allows users to invoke each task individually, and ultimately offers a reusable and composable API/toolbox that can be used by other Kubernetes bootstrap tools, by any IT automation tool or by an advanced user for creating custom clusters.
+-->
+[`kubeadm init phase`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) 命令允许用户分别调用每个任务,
+并最终提供可重用且可组合的 API 或工具箱,其他 Kubernetes 引导工具、任何 IT 自动化工具和高级用户都可以使用它用来创建的自定义集群。
+
+<!-- ### Preflight checks -->
+### 预检  {#preflight-checks}
+
+<!-- 
+Kubeadm executes a set of preflight checks before starting the init, with the aim to verify preconditions and avoid common cluster startup problems.
+The user can skip specific preflight checks or all of them with the `--ignore-preflight-errors` option. 
+-->
+Kubeadm 在启动 init 之前执行一组预检,目的是验证先决条件并避免常见的集群启动问题。
+用户可以使用 `--ignore-preflight-errors` 选项跳过特定的预检查或全部检查。
+
+<!--  
+- [warning] If the Kubernetes version to use (specified with the `--kubernetes-version` flag) is at least one minor version higher than the kubeadm CLI version.
+- Kubernetes system requirements:
+  - if running on linux:
+    - [error] if Kernel is older than the minimum required version
+    - [error] if required cgroups subsystem aren't in set up
+  - if using docker:
+    - [warning/error] if Docker service does not exist, if it is disabled, if it is not active.
+    - [error] if Docker endpoint does not exist or does not work
+    - [warning] if docker version is not in the list of validated docker versions
+  - If using other cri engine:
+    - [error] if crictl socket does not answer
+-->
+- [警告] 如果要使用的 Kubernetes 版本(由 `--kubernetes-version` 标志指定)比 kubeadm CLI 版本至少高一个小版本。
+- Kubernetes 系统要求:
+  - 如果在 linux上运行:
+    - [错误] 如果内核早于最低要求的版本
+    - [错误] 如果未设置所需的 cgroups 子系统
+  - 如果使用 docker:
+    - [警告/错误] 如果 Docker 服务不存在、被禁用或未激活。
+    - [错误] 如果 Docker 端点不存在或不起作用
+    - [警告] 如果 docker 版本不在经过验证的 docker 版本列表中
+  - 如果使用其他 cri 引擎:
+    - [错误] 如果 crictl 套接字未应答
+<!--  
+- [error] if user is not root
+- [error] if the machine hostname is not a valid DNS subdomain
+- [warning] if the host name cannot be reached via network lookup
+- [error] if kubelet version is lower that the minimum kubelet version supported by kubeadm (current minor -1)
+- [error] if kubelet version is at least one minor higher than the required controlplane version (unsupported version skew)
+- [warning] if kubelet service does not exist or if it is disabled
+- [warning] if firewalld is active
+- [error] if API server bindPort or ports 10250/10251/10252 are used
+- [Error] if `/etc/kubernetes/manifest` folder already exists and it is not empty
+- [Error] if `/proc/sys/net/bridge/bridge-nf-call-iptables` file does not exist/does not contain 1
+- [Error] if advertise address is ipv6 and `/proc/sys/net/bridge/bridge-nf-call-ip6tables` does not exist/does not contain 1.
+- [Error] if swap is on
+- [Error] if `conntrack`, `ip`, `iptables`,  `mount`, `nsenter` commands are not present in the command path
+- [warning] if `ebtables`, `ethtool`, `socat`, `tc`, `touch`, `crictl` commands are not present in the command path
+- [warning] if extra arg flags for API server, controller manager,  scheduler contains some invalid options
+- [warning] if connection to https://API.AdvertiseAddress:API.BindPort goes through proxy
+- [warning] if connection to services subnet goes through proxy (only first address checked)
+- [warning] if connection to Pods subnet goes through proxy (only first address checked)
+-->
+- [错误] 如果用户不是 root 用户
+- [错误] 如果机器主机名不是有效的 DNS 子域
+- [警告] 如果通过网络查找无法访问主机名
+- [错误] 如果 kubelet 版本低于 kubeadm 支持的最低 kubelet 版本(当前小版本 -1)
+- [错误] 如果 kubelet 版本比所需的控制平面板版本至少高一个小(不支持的版本偏斜)
+- [警告] 如果 kubelet 服务不存在或已被禁用
+- [警告] 如果 firewalld 处于活动状态
+- [错误] 如果使用 API ​​服务器绑定的端口或 10250/10251/10252 端口
+- [错误] 如果 `/etc/kubernetes/manifest` 文件夹已经存在并且不为空
+- [错误] 如果 `/proc/sys/net/bridge/bridge-nf-call-iptables` 文件不存在或不包含 1
+- [错误] 如果建议地址是 ipv6,并且 `/proc/sys/net/bridge/bridge-nf-call-ip6tables` 不存在或不包含 1
+- [错误] 如果启用了交换分区
+- [错误] 如果命令路径中没有 `conntrack`、`ip`、`iptables`、`mount`、`nsenter` 命令
+- [警告] 如果命令路径中没有 `ebtables`、`ethtool`、`socat`、`tc`、`touch`、`crictl` 命令
+- [警告] 如果 API 服务器、控制器管理器、调度程序的其他参数标志包含一些无效选项
+- [警告] 如果与 https://API.AdvertiseAddress:API.BindPort 的连接通过代理
+- [警告] 如果服务子网的连接通过代理(仅检查第一个地址)
+- [警告] 如果 Pod 子网的连接通过代理(仅检查第一个地址)
+<!-- 
+- If external etcd is provided:
+  - [Error] if etcd version is older than the minimum required version
+  - [Error] if etcd certificates or keys are specified, but not provided
+- If external etcd is NOT provided (and thus local etcd will be installed):
+  - [Error] if ports 2379 is used
+  - [Error] if Etcd.DataDir folder already exists and it is not empty
+- If authorization mode is ABAC:
+  - [Error] if abac_policy.json does not exist
+- If authorization mode is WebHook
+  - [Error] if webhook_authz.conf does not exist
+-->
+- 如果提供了外部 etcd:
+  - [错误] 如果 etcd 版本早于最低要求版本
+  - [错误] 如果指定了 etcd 证书或密钥,但无法找到
+- 如果未提供外部 etcd(因此将安装本地 etcd):
+  - [错误] 如果端口 2379 已被占用
+  - [错误] 如果 Etcd.DataDir 文件夹已经存在并且不为空
+- 如果授权模式为 ABAC:
+  - [错误] 如果 abac_policy.json 不存在
+- 如果授权方式为 WebHook
+  - [错误] 如果 webhook_authz.conf 不存在
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. Preflight checks can be invoked individually with the [`kubeadm init phase preflight`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-preflight) command
+-->
+1. 可以使用 [`kubeadm init phase preflight`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-preflight) 命令单独触发预检。
+
+
+<!-- ### Generate the necessary certificates -->
+### 生成必要的证书  {#generate-the-necessary-certificate}
+
+<!-- Kubeadm generates certificate and private key pairs for different purposes: -->
+Kubeadm 生成用于不同目的的证书和私钥对:
+
+ <!-- 
+ - A self signed certificate authority for the Kubernetes cluster saved into `ca.crt` file and `ca.key` private key file 
+ - A serving certificate for the API server, generated using `ca.crt` as the CA, and saved into `apiserver.crt` file with
+   its private key `apiserver.key`. This certificate should contain following alternative names:
+     - The Kubernetes service's internal clusterIP (the first address in the services CIDR, e.g. `10.96.0.1` if service subnet is `10.96.0.0/12`)
+     - Kubernetes DNS names, e.g.  `kubernetes.default.svc.cluster.local` if `--service-dns-domain` flag value is `cluster.local`, plus default DNS names `kubernetes.default.svc`, `kubernetes.default`, `kubernetes`
+     - The node-name
+     - The `--apiserver-advertise-address`
+     - Additional alternative names specified by the user
+ - A client certificate for the API server to connect to the kubelets securely, generated using `ca.crt` as the CA and saved into
+   `apiserver-kubelet-client.crt` file with its private key `apiserver-kubelet-client.key`.
+   This certificate should be in the `system:masters` organization
+ - A private key for signing ServiceAccount Tokens saved into `sa.key` file along with its public key `sa.pub`
+ - A certificate authority for the front proxy saved into `front-proxy-ca.crt` file with its key `front-proxy-ca.key`
+ - A client cert for the front proxy client, generated using `front-proxy-ca.crt` as the CA and saved into `front-proxy-client.crt` file
+   with its private key`front-proxy-client.key`
+-->
+ - Kubernetes 集群的自签名证书颁发机构已保存到 `ca.crt` 文件和 `ca.key` 私钥文件中
+ - 用于 API 服务器的服务证书,使用 `ca.crt` 作为 CA 生成,并将证书保存到 `apiserver.crt` 文件中,私钥保存到 `apiserver.key` 文件中
+   该证书应包含以下备用名称:
+    - Kubernetes 服务的内部 clusterIP(服务 CIDR 的第一个地址,例如:如果服务的子网是 `10.96.0.0/12`,则为 `10.96.0.1`)
+    - Kubernetes DNS 名称,例如:如果 `--service-dns-domain` 标志值是 `cluster.local`,则为 `kubernetes.default.svc.cluster.local`;
+      加上默认的 DNS 名称 `kubernetes.default.svc`、`kubernetes.default` 和 `kubernetes`,
+    - 节点名称
+    - `--apiserver-advertise-address`
+    - 用户指定的其他备用名称 
+  - API 服务器用于安全连接到 kubelet 的客户端证书,使用 `ca.crt` 作为 CA 生成,并保存到 `apiserver-kubelet-client.key`, 
+    私钥保存到 `apiserver-kubelet-client.crt` 文件中。该证书应该在 `system:masters` 组织中
+  - 用于签名 ServiceAccount 令牌的私钥保存到 `sa.key` 文件中,公钥保存到 `sa.pub` 文件中
+  - 用于前端代理的证书颁发机构保存到 `front-proxy-ca.crt` 文件中,私钥保存到 `front-proxy-ca.key` 文件中
+  - 前端代理客户端的客户端证书,使用 `front-proxy-ca.crt` 作为 CA 生成,并保存到 `front-proxy-client.crt` 文件中,
+    私钥保存到 `front-proxy-client.key` 文件中
+
+<!-- 
+Certificates are stored by default in `/etc/kubernetes/pki`, but this directory is configurable using the `--cert-dir` flag. 
+-->
+证书默认情况下存储在 `/etc/kubernetes/pki` 中,但是该目录可以使用 `--cert-dir` 标志进行配置。
+
+ <!-- Please note that: -->
+ 请注意:
+
+<!-- 
+1. If a given certificate and private key pair both exist, and its content is evaluated compliant with the above specs, the existing files will
+   be used and the generation phase for the given certificate skipped. This means the user can, for example, copy an existing CA to
+   `/etc/kubernetes/pki/ca.{crt,key}`, and then kubeadm will use those files for signing the rest of the certs.
+   See also [using custom certificates](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#custom-certificates)
+2. Only for the CA, it is possible to provide the `ca.crt` file but not the `ca.key` file, if all other certificates and kubeconfig files
+   already are in place kubeadm recognize this condition and activates the ExternalCA , which also implies the `csrsigner`controller in
+   controller-manager won't be started
+3. If kubeadm is running in [external CA mode](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode);
+   all the certificates must be provided by the user, because kubeadm cannot generate them by itself
+4. In case of kubeadm is executed in the `--dry-run` mode, certificates files are written in a temporary folder
+5. Certificate generation can be invoked individually with the [`kubeadm init phase certs all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) command
+-->
+1. 如果证书和私钥对都存在,并且其内容经过评估符合上述规范,将使用现有文件,并且跳过给定证书的生成阶段。
+  这意味着用户可以将现有的 CA 复制到 `/etc/kubernetes/pki/ca.{crt,key}`,kubeadm 将使用这些文件对其余证书进行签名。
+  请参阅[使用自定义证书](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#custom-certificates)
+2. 仅对 CA 来说,如果所有其他证书和 kubeconfig 文件都已就位,则可以只提供 `ca.crt` 文件,而不提供 `ca.key` 文件。
+   kubeadm 已经识别出这种情况并启用 ExternalCA,这也意味着了控制器管理器中的 `csrsigner` 控制器将不会启动
+3. 如果 kubeadm 在[外部 CA 模式](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode)下运行;
+   所有证书必须由用户提供,因为 kubeadm 无法自行生成它们
+4. 如果在 `--dry-run` 模式下执行 kubeadm,证书文件将写入一个临时文件夹中
+5. 可以使用 [`kubeadm init phase certs all`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) 
+   命令单独生成证书。
+
+<!-- ### Generate kubeconfig files for control plane components -->
+### 为控制平面组件生成 kubeconfig 文件  {#generate-kubeconfig-files-for-control-plane-components}
+
+<!-- 
+Kubeadm generates kubeconfig files with identities for control plane components:
+-->
+Kubeadm 生成具有用于控制平面组件身份标识的 kubeconfig 文件:
+
+<!--  
+- A kubeconfig file for the kubelet to use during TLS bootstrap - /etc/kubernetes/bootstrap-kubelet.conf. Inside this file there is a bootstrap-token or embedded client certificates for authenticating this node with the cluster.
+  This client cert should:
+    - Be in the `system:nodes` organization, as required by the [Node Authorization](/docs/reference/access-authn-authz/node/) module
+    - Have the Common Name (CN) `system:node:<hostname-lowercased>`
+- A kubeconfig file for controller-manager, `/etc/kubernetes/controller-manager.conf`; inside this file is embedded a client
+  certificate with controller-manager identity. This client cert should have the CN `system:kube-controller-manager`, as defined
+by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
+- A kubeconfig file for scheduler, `/etc/kubernetes/scheduler.conf`; inside this file is embedded a client certificate with scheduler identity.
+  This client cert should have the CN `system:kube-scheduler`, as defined by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
+-->
+- 供 kubelet 在 TLS 引导期间使用的 kubeconfig 文件——`/etc/kubernetes/bootstrap-kubelet.conf`。在此文件中,
+  有一个引导令牌或内嵌的客户端证书,向集群表明此节点身份。
+  此客户端证书应:
  此客户端证书应为:

howieyuen

comment created time in 5 days

Pull request review commentkubernetes/website

[zh] translate /docs/reference/setup-tools/kubeadm/implementation-detail

+---
+title: 实现细节
+content_type: concept
+weight: 100
+---
+<!--  
+---
+reviewers:
+- luxas
+- jbeda
+title: Implementation details
+content_type: concept
+weight: 100
+---
+-->
+<!-- overview -->
+
+{{< feature-state for_k8s_version="v1.10" state="stable" >}}
+
+<!--  
+`kubeadm init` and `kubeadm join` together provides a nice user experience for creating a best-practice but bare Kubernetes cluster from scratch.
+However, it might not be obvious _how_ kubeadm does that.
+-->
+`kubeadm init` 和 `kubeadm join` 结合在一起提供了良好的用户体验,因为从头开始创建实践最佳而配置最基本的 Kubernetes 集群。
+但是,kubeadm _如何_ 做到这一点可能并不明显。
+
+<!-- 
+This document provides additional details on what happen under the hood, 
+with the aim of sharing knowledge on Kubernetes cluster best practices. 
+-->
+本文档提供了更多幕后的详细信息,旨在分享有关 Kubernetes 集群最佳实践的知识。
+
+<!-- body -->
+<!-- ## Core design principles -->
+## 核心设计原则    {#core-design-principles}
+
+<!-- The cluster that `kubeadm init` and `kubeadm join` set up should be: -->
+`kubeadm init` 和 `kubeadm join` 设置的集群应为:
+
+<!-- 
+ - **Secure**: It should adopt latest best-practices like:
+   - enforcing RBAC
+   - using the Node Authorizer
+   - using secure communication between the control plane components
+   - using secure communication between the API server and the kubelets
+   - lock-down the kubelet API
+   - locking down access to the API for system components like the kube-proxy and CoreDNS
+   - locking down what a Bootstrap Token can access
+ - **Easy to use**: The user should not have to run anything more than a couple of commands:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **Extendable**:
+   - It should _not_ favor any particular network provider. Configuring the cluster network is out-of-scope
+   - It should provide the possibility to use a config file for customizing various parameters
+ -->
+ - **安全**:它应采用最新的最佳实践,例如:
+   - 应用 RBAC
+   - 使用节点鉴权机制(Node Authorizer)
+   - 在控制平面组件之间使用安全通信
+   - 在 API 服务器和 kubelet 之间使用安全通信
+   - 锁定 kubelet API
+   - 锁定对系统组件(例如 kube-proxy 和 CoreDNS)的 API 的访问
+   - 锁定启动引导令牌(Bootstrap Token)可以访问的内容
+ - **易用**:用户只需要运行几个命令即可:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **可扩展**:
+   - _不_ 应偏向任何特定的网络提供商。不涉及配置集群网络
+   - 应该可以使用配置文件来自定义各种参数
+
+<!-- ## Constants and well-known values and paths -->
+## 常量以及众所周知的值和路径  {#constants-and-well-known-values-and-paths}
+
+<!-- 
+In order to reduce complexity and to simplify development of higher level tools that build on top of kubeadm, it uses a
+limited set of constant values for well-known paths and file names.
+-->
+为了降低复杂性并简化基于 kubeadm 的高级工具的开发,对于众所周知的路径和文件名,它使用了一组有限的常量值。
+
+<!--  
+The Kubernetes directory `/etc/kubernetes` is a constant in the application, since it is clearly the given path
+in a majority of cases, and the most intuitive location; other constants paths and file names are:
+-->
+Kubernetes 目录 `/etc/kubernetes` 在应用程序中是一个常量,因为在大多数情况下它显然是给定的路径,并且是最直观的位置;
+其他路径常量和文件名有:
+
+<!--  
+- `/etc/kubernetes/manifests` as the path where kubelet should look for static Pod manifests. Names of static Pod manifests are:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` as the path where kubeconfig files with identities for control plane components are stored. Names of kubeconfig files are:
+    - `kubelet.conf` (`bootstrap-kubelet.conf` during TLS bootstrap)
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` for the cluster admin and kubeadm itself
+- Names of certificates and key files :
+    - `ca.crt`, `ca.key` for the Kubernetes certificate authority
+    - `apiserver.crt`, `apiserver.key` for the API server certificate
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` for the client certificate used by the API server to connect to the kubelets securely
+    - `sa.pub`, `sa.key` for the key used by the controller manager when signing ServiceAccount
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` for the front proxy certificate authority
+    - `front-proxy-client.crt`, `front-proxy-client.key` for the front proxy client
+-->
+- `/etc/kubernetes/manifests` 作为 kubelet 查找静态 Pod 清单的路径。静态 Pod 清单的名称为:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` 作为带有控制平面组件身份标识的 kubeconfig 文件的路径。kubeconfig 文件的名称为:
+    - `kubelet.conf` (在 TLS 引导时名称为 `bootstrap-kubelet.conf` )
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` 用于集群管理员和 kubeadm 本身
+- 证书和密钥文件的名称:
+    - `ca.crt`, `ca.key` 用于 Kubernetes 证书颁发机构
+    - `apiserver.crt`, `apiserver.key` 用于 API 服务器证书
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` 用于 API 服务器安全地连接到 kubelet 的客户端证书
+    - `sa.pub`, `sa.key` 用于签署 ServiceAccount 时 控制器管理器使用的密钥
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` 用于前端代理证书颁发机构
+    - `front-proxy-client.crt`, `front-proxy-client.key` 用于前端代理客户端
+
+<!-- ## kubeadm init workflow internal design -->
+## kubeadm init 工作流程内部设计  {#kubeadm-init-workflow-internal-design}
+
+<!--  
+The `kubeadm init` [internal workflow](/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow) consists of a sequence of atomic work tasks to perform,
+as described in `kubeadm init`.
+-->
+`kubeadm init` [内部工作流程](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow)包含一系列要执行的原子工作任务,
+如 `kubeadm init` 中所述。
+
+<!--  
+The [`kubeadm init phase`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) command allows users to invoke each task individually, and ultimately offers a reusable and composable API/toolbox that can be used by other Kubernetes bootstrap tools, by any IT automation tool or by an advanced user for creating custom clusters.
+-->
+[`kubeadm init phase`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) 命令允许用户分别调用每个任务,
+并最终提供可重用且可组合的 API 或工具箱,其他 Kubernetes 引导工具、任何 IT 自动化工具和高级用户都可以使用它用来创建的自定义集群。
+
+<!-- ### Preflight checks -->
+### 预检  {#preflight-checks}
+
+<!-- 
+Kubeadm executes a set of preflight checks before starting the init, with the aim to verify preconditions and avoid common cluster startup problems.
+The user can skip specific preflight checks or all of them with the `--ignore-preflight-errors` option. 
+-->
+Kubeadm 在启动 init 之前执行一组预检,目的是验证先决条件并避免常见的集群启动问题。
+用户可以使用 `--ignore-preflight-errors` 选项跳过特定的预检查或全部检查。
+
+<!--  
+- [warning] If the Kubernetes version to use (specified with the `--kubernetes-version` flag) is at least one minor version higher than the kubeadm CLI version.
+- Kubernetes system requirements:
+  - if running on linux:
+    - [error] if Kernel is older than the minimum required version
+    - [error] if required cgroups subsystem aren't in set up
+  - if using docker:
+    - [warning/error] if Docker service does not exist, if it is disabled, if it is not active.
+    - [error] if Docker endpoint does not exist or does not work
+    - [warning] if docker version is not in the list of validated docker versions
+  - If using other cri engine:
+    - [error] if crictl socket does not answer
+-->
+- [警告] 如果要使用的 Kubernetes 版本(由 `--kubernetes-version` 标志指定)比 kubeadm CLI 版本至少高一个小版本。
+- Kubernetes 系统要求:
+  - 如果在 linux上运行:
+    - [错误] 如果内核早于最低要求的版本
+    - [错误] 如果未设置所需的 cgroups 子系统
+  - 如果使用 docker:
+    - [警告/错误] 如果 Docker 服务不存在、被禁用或未激活。
+    - [错误] 如果 Docker 端点不存在或不起作用
+    - [警告] 如果 docker 版本不在经过验证的 docker 版本列表中
+  - 如果使用其他 cri 引擎:
+    - [错误] 如果 crictl 套接字未应答
+<!--  
+- [error] if user is not root
+- [error] if the machine hostname is not a valid DNS subdomain
+- [warning] if the host name cannot be reached via network lookup
+- [error] if kubelet version is lower that the minimum kubelet version supported by kubeadm (current minor -1)
+- [error] if kubelet version is at least one minor higher than the required controlplane version (unsupported version skew)
+- [warning] if kubelet service does not exist or if it is disabled
+- [warning] if firewalld is active
+- [error] if API server bindPort or ports 10250/10251/10252 are used
+- [Error] if `/etc/kubernetes/manifest` folder already exists and it is not empty
+- [Error] if `/proc/sys/net/bridge/bridge-nf-call-iptables` file does not exist/does not contain 1
+- [Error] if advertise address is ipv6 and `/proc/sys/net/bridge/bridge-nf-call-ip6tables` does not exist/does not contain 1.
+- [Error] if swap is on
+- [Error] if `conntrack`, `ip`, `iptables`,  `mount`, `nsenter` commands are not present in the command path
+- [warning] if `ebtables`, `ethtool`, `socat`, `tc`, `touch`, `crictl` commands are not present in the command path
+- [warning] if extra arg flags for API server, controller manager,  scheduler contains some invalid options
+- [warning] if connection to https://API.AdvertiseAddress:API.BindPort goes through proxy
+- [warning] if connection to services subnet goes through proxy (only first address checked)
+- [warning] if connection to Pods subnet goes through proxy (only first address checked)
+-->
+- [错误] 如果用户不是 root 用户
+- [错误] 如果机器主机名不是有效的 DNS 子域
+- [警告] 如果通过网络查找无法访问主机名
+- [错误] 如果 kubelet 版本低于 kubeadm 支持的最低 kubelet 版本(当前小版本 -1)
+- [错误] 如果 kubelet 版本比所需的控制平面板版本至少高一个小(不支持的版本偏斜)
+- [警告] 如果 kubelet 服务不存在或已被禁用
+- [警告] 如果 firewalld 处于活动状态
+- [错误] 如果使用 API ​​服务器绑定的端口或 10250/10251/10252 端口
+- [错误] 如果 `/etc/kubernetes/manifest` 文件夹已经存在并且不为空
+- [错误] 如果 `/proc/sys/net/bridge/bridge-nf-call-iptables` 文件不存在或不包含 1
+- [错误] 如果建议地址是 ipv6,并且 `/proc/sys/net/bridge/bridge-nf-call-ip6tables` 不存在或不包含 1
+- [错误] 如果启用了交换分区
+- [错误] 如果命令路径中没有 `conntrack`、`ip`、`iptables`、`mount`、`nsenter` 命令
+- [警告] 如果命令路径中没有 `ebtables`、`ethtool`、`socat`、`tc`、`touch`、`crictl` 命令
+- [警告] 如果 API 服务器、控制器管理器、调度程序的其他参数标志包含一些无效选项
+- [警告] 如果与 https://API.AdvertiseAddress:API.BindPort 的连接通过代理
+- [警告] 如果服务子网的连接通过代理(仅检查第一个地址)
+- [警告] 如果 Pod 子网的连接通过代理(仅检查第一个地址)
+<!-- 
+- If external etcd is provided:
+  - [Error] if etcd version is older than the minimum required version
+  - [Error] if etcd certificates or keys are specified, but not provided
+- If external etcd is NOT provided (and thus local etcd will be installed):
+  - [Error] if ports 2379 is used
+  - [Error] if Etcd.DataDir folder already exists and it is not empty
+- If authorization mode is ABAC:
+  - [Error] if abac_policy.json does not exist
+- If authorization mode is WebHook
+  - [Error] if webhook_authz.conf does not exist
+-->
+- 如果提供了外部 etcd:
+  - [错误] 如果 etcd 版本早于最低要求版本
+  - [错误] 如果指定了 etcd 证书或密钥,但无法找到
+- 如果未提供外部 etcd(因此将安装本地 etcd):
+  - [错误] 如果端口 2379 已被占用
+  - [错误] 如果 Etcd.DataDir 文件夹已经存在并且不为空
+- 如果授权模式为 ABAC:
+  - [错误] 如果 abac_policy.json 不存在
+- 如果授权方式为 WebHook
+  - [错误] 如果 webhook_authz.conf 不存在
+
+<!-- Please note that: -->
+请注意:
+
+<!--  
+1. Preflight checks can be invoked individually with the [`kubeadm init phase preflight`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-preflight) command
+-->
+1. 可以使用 [`kubeadm init phase preflight`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-preflight) 命令单独触发预检。
+
+
+<!-- ### Generate the necessary certificates -->
+### 生成必要的证书  {#generate-the-necessary-certificate}
+
+<!-- Kubeadm generates certificate and private key pairs for different purposes: -->
+Kubeadm 生成用于不同目的的证书和私钥对:
+
+ <!-- 
+ - A self signed certificate authority for the Kubernetes cluster saved into `ca.crt` file and `ca.key` private key file 
+ - A serving certificate for the API server, generated using `ca.crt` as the CA, and saved into `apiserver.crt` file with
+   its private key `apiserver.key`. This certificate should contain following alternative names:
+     - The Kubernetes service's internal clusterIP (the first address in the services CIDR, e.g. `10.96.0.1` if service subnet is `10.96.0.0/12`)
+     - Kubernetes DNS names, e.g.  `kubernetes.default.svc.cluster.local` if `--service-dns-domain` flag value is `cluster.local`, plus default DNS names `kubernetes.default.svc`, `kubernetes.default`, `kubernetes`
+     - The node-name
+     - The `--apiserver-advertise-address`
+     - Additional alternative names specified by the user
+ - A client certificate for the API server to connect to the kubelets securely, generated using `ca.crt` as the CA and saved into
+   `apiserver-kubelet-client.crt` file with its private key `apiserver-kubelet-client.key`.
+   This certificate should be in the `system:masters` organization
+ - A private key for signing ServiceAccount Tokens saved into `sa.key` file along with its public key `sa.pub`
+ - A certificate authority for the front proxy saved into `front-proxy-ca.crt` file with its key `front-proxy-ca.key`
+ - A client cert for the front proxy client, generated using `front-proxy-ca.crt` as the CA and saved into `front-proxy-client.crt` file
+   with its private key`front-proxy-client.key`
+-->
+ - Kubernetes 集群的自签名证书颁发机构已保存到 `ca.crt` 文件和 `ca.key` 私钥文件中
+ - 用于 API 服务器的服务证书,使用 `ca.crt` 作为 CA 生成,并将证书保存到 `apiserver.crt` 文件中,私钥保存到 `apiserver.key` 文件中
+   该证书应包含以下备用名称:
+    - Kubernetes 服务的内部 clusterIP(服务 CIDR 的第一个地址,例如:如果服务的子网是 `10.96.0.0/12`,则为 `10.96.0.1`)
+    - Kubernetes DNS 名称,例如:如果 `--service-dns-domain` 标志值是 `cluster.local`,则为 `kubernetes.default.svc.cluster.local`;
+      加上默认的 DNS 名称 `kubernetes.default.svc`、`kubernetes.default` 和 `kubernetes`,
+    - 节点名称
+    - `--apiserver-advertise-address`
+    - 用户指定的其他备用名称 
+  - API 服务器用于安全连接到 kubelet 的客户端证书,使用 `ca.crt` 作为 CA 生成,并保存到 `apiserver-kubelet-client.key`, 
+    私钥保存到 `apiserver-kubelet-client.crt` 文件中。该证书应该在 `system:masters` 组织中
+  - 用于签名 ServiceAccount 令牌的私钥保存到 `sa.key` 文件中,公钥保存到 `sa.pub` 文件中
+  - 用于前端代理的证书颁发机构保存到 `front-proxy-ca.crt` 文件中,私钥保存到 `front-proxy-ca.key` 文件中
+  - 前端代理客户端的客户端证书,使用 `front-proxy-ca.crt` 作为 CA 生成,并保存到 `front-proxy-client.crt` 文件中,
+    私钥保存到 `front-proxy-client.key` 文件中
+
+<!-- 
+Certificates are stored by default in `/etc/kubernetes/pki`, but this directory is configurable using the `--cert-dir` flag. 
+-->
+证书默认情况下存储在 `/etc/kubernetes/pki` 中,但是该目录可以使用 `--cert-dir` 标志进行配置。
+
+ <!-- Please note that: -->
+ 请注意:
+
+<!-- 
+1. If a given certificate and private key pair both exist, and its content is evaluated compliant with the above specs, the existing files will
+   be used and the generation phase for the given certificate skipped. This means the user can, for example, copy an existing CA to
+   `/etc/kubernetes/pki/ca.{crt,key}`, and then kubeadm will use those files for signing the rest of the certs.
+   See also [using custom certificates](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#custom-certificates)
+2. Only for the CA, it is possible to provide the `ca.crt` file but not the `ca.key` file, if all other certificates and kubeconfig files
+   already are in place kubeadm recognize this condition and activates the ExternalCA , which also implies the `csrsigner`controller in
+   controller-manager won't be started
+3. If kubeadm is running in [external CA mode](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode);
+   all the certificates must be provided by the user, because kubeadm cannot generate them by itself
+4. In case of kubeadm is executed in the `--dry-run` mode, certificates files are written in a temporary folder
+5. Certificate generation can be invoked individually with the [`kubeadm init phase certs all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) command
+-->
+1. 如果证书和私钥对都存在,并且其内容经过评估符合上述规范,将使用现有文件,并且跳过给定证书的生成阶段。
+  这意味着用户可以将现有的 CA 复制到 `/etc/kubernetes/pki/ca.{crt,key}`,kubeadm 将使用这些文件对其余证书进行签名。
+  请参阅[使用自定义证书](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#custom-certificates)
+2. 仅对 CA 来说,如果所有其他证书和 kubeconfig 文件都已就位,则可以只提供 `ca.crt` 文件,而不提供 `ca.key` 文件。
+   kubeadm 已经识别出这种情况并启用 ExternalCA,这也意味着了控制器管理器中的 `csrsigner` 控制器将不会启动
+3. 如果 kubeadm 在[外部 CA 模式](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode)下运行;
+   所有证书必须由用户提供,因为 kubeadm 无法自行生成它们
+4. 如果在 `--dry-run` 模式下执行 kubeadm,证书文件将写入一个临时文件夹中
+5. 可以使用 [`kubeadm init phase certs all`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) 
+   命令单独生成证书。
+
+<!-- ### Generate kubeconfig files for control plane components -->
+### 为控制平面组件生成 kubeconfig 文件  {#generate-kubeconfig-files-for-control-plane-components}
+
+<!-- 
+Kubeadm generates kubeconfig files with identities for control plane components:
+-->
+Kubeadm 生成具有用于控制平面组件身份标识的 kubeconfig 文件:
+
+<!--  
+- A kubeconfig file for the kubelet to use during TLS bootstrap - /etc/kubernetes/bootstrap-kubelet.conf. Inside this file there is a bootstrap-token or embedded client certificates for authenticating this node with the cluster.
+  This client cert should:
+    - Be in the `system:nodes` organization, as required by the [Node Authorization](/docs/reference/access-authn-authz/node/) module
+    - Have the Common Name (CN) `system:node:<hostname-lowercased>`
+- A kubeconfig file for controller-manager, `/etc/kubernetes/controller-manager.conf`; inside this file is embedded a client
+  certificate with controller-manager identity. This client cert should have the CN `system:kube-controller-manager`, as defined
+by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
+- A kubeconfig file for scheduler, `/etc/kubernetes/scheduler.conf`; inside this file is embedded a client certificate with scheduler identity.
+  This client cert should have the CN `system:kube-scheduler`, as defined by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
+-->
+- 供 kubelet 在 TLS 引导期间使用的 kubeconfig 文件——`/etc/kubernetes/bootstrap-kubelet.conf`。在此文件中,
- 供 kubelet 在 TLS 引导期间使用的 kubeconfig 文件 - `/etc/kubernetes/bootstrap-kubelet.conf`。在此文件中,

howieyuen

comment created time in 5 days

Pull request review commentkubernetes/website

[zh] translate /docs/reference/setup-tools/kubeadm/implementation-detail

+---
+title: 实现细节
+content_type: concept
+weight: 100
+---
+<!--  
+---
+reviewers:
+- luxas
+- jbeda
+title: Implementation details
+content_type: concept
+weight: 100
+---
+-->
+<!-- overview -->
+
+{{< feature-state for_k8s_version="v1.10" state="stable" >}}
+
+<!--  
+`kubeadm init` and `kubeadm join` together provides a nice user experience for creating a best-practice but bare Kubernetes cluster from scratch.
+However, it might not be obvious _how_ kubeadm does that.
+-->
+`kubeadm init` 和 `kubeadm join` 结合在一起提供了良好的用户体验,因为从头开始创建实践最佳而配置最基本的 Kubernetes 集群。
+但是,kubeadm _如何_ 做到这一点可能并不明显。
+
+<!-- 
+This document provides additional details on what happen under the hood, 
+with the aim of sharing knowledge on Kubernetes cluster best practices. 
+-->
+本文档提供了更多幕后的详细信息,旨在分享有关 Kubernetes 集群最佳实践的知识。
+
+<!-- body -->
+<!-- ## Core design principles -->
+## 核心设计原则    {#core-design-principles}
+
+<!-- The cluster that `kubeadm init` and `kubeadm join` set up should be: -->
+`kubeadm init` 和 `kubeadm join` 设置的集群应为:
+
+<!-- 
+ - **Secure**: It should adopt latest best-practices like:
+   - enforcing RBAC
+   - using the Node Authorizer
+   - using secure communication between the control plane components
+   - using secure communication between the API server and the kubelets
+   - lock-down the kubelet API
+   - locking down access to the API for system components like the kube-proxy and CoreDNS
+   - locking down what a Bootstrap Token can access
+ - **Easy to use**: The user should not have to run anything more than a couple of commands:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **Extendable**:
+   - It should _not_ favor any particular network provider. Configuring the cluster network is out-of-scope
+   - It should provide the possibility to use a config file for customizing various parameters
+ -->
+ - **安全**:它应采用最新的最佳实践,例如:
+   - 应用 RBAC
+   - 使用节点鉴权机制(Node Authorizer)
+   - 在控制平面组件之间使用安全通信
+   - 在 API 服务器和 kubelet 之间使用安全通信
+   - 锁定 kubelet API
+   - 锁定对系统组件(例如 kube-proxy 和 CoreDNS)的 API 的访问
+   - 锁定启动引导令牌(Bootstrap Token)可以访问的内容
+ - **易用**:用户只需要运行几个命令即可:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **可扩展**:
+   - _不_ 应偏向任何特定的网络提供商。不涉及配置集群网络
+   - 应该可以使用配置文件来自定义各种参数
+
+<!-- ## Constants and well-known values and paths -->
+## 常量以及众所周知的值和路径  {#constants-and-well-known-values-and-paths}
+
+<!-- 
+In order to reduce complexity and to simplify development of higher level tools that build on top of kubeadm, it uses a
+limited set of constant values for well-known paths and file names.
+-->
+为了降低复杂性并简化基于 kubeadm 的高级工具的开发,对于众所周知的路径和文件名,它使用了一组有限的常量值。
+
+<!--  
+The Kubernetes directory `/etc/kubernetes` is a constant in the application, since it is clearly the given path
+in a majority of cases, and the most intuitive location; other constants paths and file names are:
+-->
+Kubernetes 目录 `/etc/kubernetes` 在应用程序中是一个常量,因为在大多数情况下它显然是给定的路径,并且是最直观的位置;
+其他路径常量和文件名有:
+
+<!--  
+- `/etc/kubernetes/manifests` as the path where kubelet should look for static Pod manifests. Names of static Pod manifests are:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` as the path where kubeconfig files with identities for control plane components are stored. Names of kubeconfig files are:
+    - `kubelet.conf` (`bootstrap-kubelet.conf` during TLS bootstrap)
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` for the cluster admin and kubeadm itself
+- Names of certificates and key files :
+    - `ca.crt`, `ca.key` for the Kubernetes certificate authority
+    - `apiserver.crt`, `apiserver.key` for the API server certificate
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` for the client certificate used by the API server to connect to the kubelets securely
+    - `sa.pub`, `sa.key` for the key used by the controller manager when signing ServiceAccount
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` for the front proxy certificate authority
+    - `front-proxy-client.crt`, `front-proxy-client.key` for the front proxy client
+-->
+- `/etc/kubernetes/manifests` 作为 kubelet 查找静态 Pod 清单的路径。静态 Pod 清单的名称为:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` 作为带有控制平面组件身份标识的 kubeconfig 文件的路径。kubeconfig 文件的名称为:
+    - `kubelet.conf` (在 TLS 引导时名称为 `bootstrap-kubelet.conf` )
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` 用于集群管理员和 kubeadm 本身
+- 证书和密钥文件的名称:
+    - `ca.crt`, `ca.key` 用于 Kubernetes 证书颁发机构
+    - `apiserver.crt`, `apiserver.key` 用于 API 服务器证书
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` 用于 API 服务器安全地连接到 kubelet 的客户端证书
+    - `sa.pub`, `sa.key` 用于签署 ServiceAccount 时 控制器管理器使用的密钥
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` 用于前端代理证书颁发机构
+    - `front-proxy-client.crt`, `front-proxy-client.key` 用于前端代理客户端
+
+<!-- ## kubeadm init workflow internal design -->
+## kubeadm init 工作流程内部设计  {#kubeadm-init-workflow-internal-design}
+
+<!--  
+The `kubeadm init` [internal workflow](/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow) consists of a sequence of atomic work tasks to perform,
+as described in `kubeadm init`.
+-->
+`kubeadm init` [内部工作流程](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow)包含一系列要执行的原子工作任务,
+如 `kubeadm init` 中所述。
+
+<!--  
+The [`kubeadm init phase`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) command allows users to invoke each task individually, and ultimately offers a reusable and composable API/toolbox that can be used by other Kubernetes bootstrap tools, by any IT automation tool or by an advanced user for creating custom clusters.
+-->
+[`kubeadm init phase`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) 命令允许用户分别调用每个任务,
+并最终提供可重用且可组合的 API 或工具箱,其他 Kubernetes 引导工具、任何 IT 自动化工具和高级用户都可以使用它用来创建的自定义集群。
+
+<!-- ### Preflight checks -->
+### 预检  {#preflight-checks}
+
+<!-- 
+Kubeadm executes a set of preflight checks before starting the init, with the aim to verify preconditions and avoid common cluster startup problems.
+The user can skip specific preflight checks or all of them with the `--ignore-preflight-errors` option. 
+-->
+Kubeadm 在启动 init 之前执行一组预检,目的是验证先决条件并避免常见的集群启动问题。
+用户可以使用 `--ignore-preflight-errors` 选项跳过特定的预检查或全部检查。
+
+<!--  
+- [warning] If the Kubernetes version to use (specified with the `--kubernetes-version` flag) is at least one minor version higher than the kubeadm CLI version.
+- Kubernetes system requirements:
+  - if running on linux:
+    - [error] if Kernel is older than the minimum required version
+    - [error] if required cgroups subsystem aren't in set up
+  - if using docker:
+    - [warning/error] if Docker service does not exist, if it is disabled, if it is not active.
+    - [error] if Docker endpoint does not exist or does not work
+    - [warning] if docker version is not in the list of validated docker versions
+  - If using other cri engine:
+    - [error] if crictl socket does not answer
+-->
+- [警告] 如果要使用的 Kubernetes 版本(由 `--kubernetes-version` 标志指定)比 kubeadm CLI 版本至少高一个小版本。
+- Kubernetes 系统要求:
+  - 如果在 linux上运行:
+    - [错误] 如果内核早于最低要求的版本
+    - [错误] 如果未设置所需的 cgroups 子系统
+  - 如果使用 docker:
+    - [警告/错误] 如果 Docker 服务不存在、被禁用或未激活。
+    - [错误] 如果 Docker 端点不存在或不起作用
+    - [警告] 如果 docker 版本不在经过验证的 docker 版本列表中
+  - 如果使用其他 cri 引擎:
+    - [错误] 如果 crictl 套接字未应答
+<!--  
+- [error] if user is not root
+- [error] if the machine hostname is not a valid DNS subdomain
+- [warning] if the host name cannot be reached via network lookup
+- [error] if kubelet version is lower that the minimum kubelet version supported by kubeadm (current minor -1)
+- [error] if kubelet version is at least one minor higher than the required controlplane version (unsupported version skew)
+- [warning] if kubelet service does not exist or if it is disabled
+- [warning] if firewalld is active
+- [error] if API server bindPort or ports 10250/10251/10252 are used
+- [Error] if `/etc/kubernetes/manifest` folder already exists and it is not empty
+- [Error] if `/proc/sys/net/bridge/bridge-nf-call-iptables` file does not exist/does not contain 1
+- [Error] if advertise address is ipv6 and `/proc/sys/net/bridge/bridge-nf-call-ip6tables` does not exist/does not contain 1.
+- [Error] if swap is on
+- [Error] if `conntrack`, `ip`, `iptables`,  `mount`, `nsenter` commands are not present in the command path
+- [warning] if `ebtables`, `ethtool`, `socat`, `tc`, `touch`, `crictl` commands are not present in the command path
+- [warning] if extra arg flags for API server, controller manager,  scheduler contains some invalid options
+- [warning] if connection to https://API.AdvertiseAddress:API.BindPort goes through proxy
+- [warning] if connection to services subnet goes through proxy (only first address checked)
+- [warning] if connection to Pods subnet goes through proxy (only first address checked)
+-->
+- [错误] 如果用户不是 root 用户
+- [错误] 如果机器主机名不是有效的 DNS 子域
+- [警告] 如果通过网络查找无法访问主机名
+- [错误] 如果 kubelet 版本低于 kubeadm 支持的最低 kubelet 版本(当前小版本 -1)
+- [错误] 如果 kubelet 版本比所需的控制平面板版本至少高一个小(不支持的版本偏斜)
+- [警告] 如果 kubelet 服务不存在或已被禁用
+- [警告] 如果 firewalld 处于活动状态
+- [错误] 如果使用 API ​​服务器绑定的端口或 10250/10251/10252 端口
+- [错误] 如果 `/etc/kubernetes/manifest` 文件夹已经存在并且不为空
+- [错误] 如果 `/proc/sys/net/bridge/bridge-nf-call-iptables` 文件不存在或不包含 1
+- [错误] 如果建议地址是 ipv6,并且 `/proc/sys/net/bridge/bridge-nf-call-ip6tables` 不存在或不包含 1
+- [错误] 如果启用了交换分区
+- [错误] 如果命令路径中没有 `conntrack`、`ip`、`iptables`、`mount`、`nsenter` 命令
+- [警告] 如果命令路径中没有 `ebtables`、`ethtool`、`socat`、`tc`、`touch`、`crictl` 命令
+- [警告] 如果 API 服务器、控制器管理器、调度程序的其他参数标志包含一些无效选项
+- [警告] 如果与 https://API.AdvertiseAddress:API.BindPort 的连接通过代理
+- [警告] 如果服务子网的连接通过代理(仅检查第一个地址)
+- [警告] 如果 Pod 子网的连接通过代理(仅检查第一个地址)
+<!-- 
+- If external etcd is provided:
+  - [Error] if etcd version is older than the minimum required version
+  - [Error] if etcd certificates or keys are specified, but not provided
+- If external etcd is NOT provided (and thus local etcd will be installed):
+  - [Error] if ports 2379 is used
+  - [Error] if Etcd.DataDir folder already exists and it is not empty
+- If authorization mode is ABAC:
+  - [Error] if abac_policy.json does not exist
+- If authorization mode is WebHook
+  - [Error] if webhook_authz.conf does not exist
+-->
+- 如果提供了外部 etcd:
+  - [错误] 如果 etcd 版本早于最低要求版本
  - [错误] 如果 etcd 版本低于最低要求版本

howieyuen

comment created time in 5 days

Pull request review commentkubernetes/website

[zh] translate /docs/reference/setup-tools/kubeadm/implementation-detail

+---
+title: 实现细节
+content_type: concept
+weight: 100
+---
+<!--  
+---
+reviewers:
+- luxas
+- jbeda
+title: Implementation details
+content_type: concept
+weight: 100
+---
+-->
+<!-- overview -->
+
+{{< feature-state for_k8s_version="v1.10" state="stable" >}}
+
+<!--  
+`kubeadm init` and `kubeadm join` together provides a nice user experience for creating a best-practice but bare Kubernetes cluster from scratch.
+However, it might not be obvious _how_ kubeadm does that.
+-->
+`kubeadm init` 和 `kubeadm join` 结合在一起提供了良好的用户体验,因为从头开始创建实践最佳而配置最基本的 Kubernetes 集群。
+但是,kubeadm _如何_ 做到这一点可能并不明显。
+
+<!-- 
+This document provides additional details on what happen under the hood, 
+with the aim of sharing knowledge on Kubernetes cluster best practices. 
+-->
+本文档提供了更多幕后的详细信息,旨在分享有关 Kubernetes 集群最佳实践的知识。
+
+<!-- body -->
+<!-- ## Core design principles -->
+## 核心设计原则    {#core-design-principles}
+
+<!-- The cluster that `kubeadm init` and `kubeadm join` set up should be: -->
+`kubeadm init` 和 `kubeadm join` 设置的集群应为:
+
+<!-- 
+ - **Secure**: It should adopt latest best-practices like:
+   - enforcing RBAC
+   - using the Node Authorizer
+   - using secure communication between the control plane components
+   - using secure communication between the API server and the kubelets
+   - lock-down the kubelet API
+   - locking down access to the API for system components like the kube-proxy and CoreDNS
+   - locking down what a Bootstrap Token can access
+ - **Easy to use**: The user should not have to run anything more than a couple of commands:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **Extendable**:
+   - It should _not_ favor any particular network provider. Configuring the cluster network is out-of-scope
+   - It should provide the possibility to use a config file for customizing various parameters
+ -->
+ - **安全**:它应采用最新的最佳实践,例如:
+   - 应用 RBAC
+   - 使用节点鉴权机制(Node Authorizer)
+   - 在控制平面组件之间使用安全通信
+   - 在 API 服务器和 kubelet 之间使用安全通信
+   - 锁定 kubelet API
+   - 锁定对系统组件(例如 kube-proxy 和 CoreDNS)的 API 的访问
+   - 锁定启动引导令牌(Bootstrap Token)可以访问的内容
+ - **易用**:用户只需要运行几个命令即可:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **可扩展**:
+   - _不_ 应偏向任何特定的网络提供商。不涉及配置集群网络
+   - 应该可以使用配置文件来自定义各种参数
+
+<!-- ## Constants and well-known values and paths -->
+## 常量以及众所周知的值和路径  {#constants-and-well-known-values-and-paths}
+
+<!-- 
+In order to reduce complexity and to simplify development of higher level tools that build on top of kubeadm, it uses a
+limited set of constant values for well-known paths and file names.
+-->
+为了降低复杂性并简化基于 kubeadm 的高级工具的开发,对于众所周知的路径和文件名,它使用了一组有限的常量值。
+
+<!--  
+The Kubernetes directory `/etc/kubernetes` is a constant in the application, since it is clearly the given path
+in a majority of cases, and the most intuitive location; other constants paths and file names are:
+-->
+Kubernetes 目录 `/etc/kubernetes` 在应用程序中是一个常量,因为在大多数情况下它显然是给定的路径,并且是最直观的位置;
+其他路径常量和文件名有:
+
+<!--  
+- `/etc/kubernetes/manifests` as the path where kubelet should look for static Pod manifests. Names of static Pod manifests are:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` as the path where kubeconfig files with identities for control plane components are stored. Names of kubeconfig files are:
+    - `kubelet.conf` (`bootstrap-kubelet.conf` during TLS bootstrap)
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` for the cluster admin and kubeadm itself
+- Names of certificates and key files :
+    - `ca.crt`, `ca.key` for the Kubernetes certificate authority
+    - `apiserver.crt`, `apiserver.key` for the API server certificate
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` for the client certificate used by the API server to connect to the kubelets securely
+    - `sa.pub`, `sa.key` for the key used by the controller manager when signing ServiceAccount
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` for the front proxy certificate authority
+    - `front-proxy-client.crt`, `front-proxy-client.key` for the front proxy client
+-->
+- `/etc/kubernetes/manifests` 作为 kubelet 查找静态 Pod 清单的路径。静态 Pod 清单的名称为:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` 作为带有控制平面组件身份标识的 kubeconfig 文件的路径。kubeconfig 文件的名称为:
+    - `kubelet.conf` (在 TLS 引导时名称为 `bootstrap-kubelet.conf` )
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` 用于集群管理员和 kubeadm 本身
+- 证书和密钥文件的名称:
+    - `ca.crt`, `ca.key` 用于 Kubernetes 证书颁发机构
+    - `apiserver.crt`, `apiserver.key` 用于 API 服务器证书
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` 用于 API 服务器安全地连接到 kubelet 的客户端证书
+    - `sa.pub`, `sa.key` 用于签署 ServiceAccount 时 控制器管理器使用的密钥
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` 用于前端代理证书颁发机构
+    - `front-proxy-client.crt`, `front-proxy-client.key` 用于前端代理客户端
+
+<!-- ## kubeadm init workflow internal design -->
+## kubeadm init 工作流程内部设计  {#kubeadm-init-workflow-internal-design}
+
+<!--  
+The `kubeadm init` [internal workflow](/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow) consists of a sequence of atomic work tasks to perform,
+as described in `kubeadm init`.
+-->
+`kubeadm init` [内部工作流程](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow)包含一系列要执行的原子工作任务,
+如 `kubeadm init` 中所述。
+
+<!--  
+The [`kubeadm init phase`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) command allows users to invoke each task individually, and ultimately offers a reusable and composable API/toolbox that can be used by other Kubernetes bootstrap tools, by any IT automation tool or by an advanced user for creating custom clusters.
+-->
+[`kubeadm init phase`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) 命令允许用户分别调用每个任务,
+并最终提供可重用且可组合的 API 或工具箱,其他 Kubernetes 引导工具、任何 IT 自动化工具和高级用户都可以使用它用来创建的自定义集群。
+
+<!-- ### Preflight checks -->
+### 预检  {#preflight-checks}
+
+<!-- 
+Kubeadm executes a set of preflight checks before starting the init, with the aim to verify preconditions and avoid common cluster startup problems.
+The user can skip specific preflight checks or all of them with the `--ignore-preflight-errors` option. 
+-->
+Kubeadm 在启动 init 之前执行一组预检,目的是验证先决条件并避免常见的集群启动问题。
+用户可以使用 `--ignore-preflight-errors` 选项跳过特定的预检查或全部检查。
+
+<!--  
+- [warning] If the Kubernetes version to use (specified with the `--kubernetes-version` flag) is at least one minor version higher than the kubeadm CLI version.
+- Kubernetes system requirements:
+  - if running on linux:
+    - [error] if Kernel is older than the minimum required version
+    - [error] if required cgroups subsystem aren't in set up
+  - if using docker:
+    - [warning/error] if Docker service does not exist, if it is disabled, if it is not active.
+    - [error] if Docker endpoint does not exist or does not work
+    - [warning] if docker version is not in the list of validated docker versions
+  - If using other cri engine:
+    - [error] if crictl socket does not answer
+-->
+- [警告] 如果要使用的 Kubernetes 版本(由 `--kubernetes-version` 标志指定)比 kubeadm CLI 版本至少高一个小版本。
+- Kubernetes 系统要求:
+  - 如果在 linux上运行:
+    - [错误] 如果内核早于最低要求的版本
+    - [错误] 如果未设置所需的 cgroups 子系统
+  - 如果使用 docker:
+    - [警告/错误] 如果 Docker 服务不存在、被禁用或未激活。
+    - [错误] 如果 Docker 端点不存在或不起作用
+    - [警告] 如果 docker 版本不在经过验证的 docker 版本列表中
+  - 如果使用其他 cri 引擎:
+    - [错误] 如果 crictl 套接字未应答
+<!--  
+- [error] if user is not root
+- [error] if the machine hostname is not a valid DNS subdomain
+- [warning] if the host name cannot be reached via network lookup
+- [error] if kubelet version is lower that the minimum kubelet version supported by kubeadm (current minor -1)
+- [error] if kubelet version is at least one minor higher than the required controlplane version (unsupported version skew)
+- [warning] if kubelet service does not exist or if it is disabled
+- [warning] if firewalld is active
+- [error] if API server bindPort or ports 10250/10251/10252 are used
+- [Error] if `/etc/kubernetes/manifest` folder already exists and it is not empty
+- [Error] if `/proc/sys/net/bridge/bridge-nf-call-iptables` file does not exist/does not contain 1
+- [Error] if advertise address is ipv6 and `/proc/sys/net/bridge/bridge-nf-call-ip6tables` does not exist/does not contain 1.
+- [Error] if swap is on
+- [Error] if `conntrack`, `ip`, `iptables`,  `mount`, `nsenter` commands are not present in the command path
+- [warning] if `ebtables`, `ethtool`, `socat`, `tc`, `touch`, `crictl` commands are not present in the command path
+- [warning] if extra arg flags for API server, controller manager,  scheduler contains some invalid options
+- [warning] if connection to https://API.AdvertiseAddress:API.BindPort goes through proxy
+- [warning] if connection to services subnet goes through proxy (only first address checked)
+- [warning] if connection to Pods subnet goes through proxy (only first address checked)
+-->
+- [错误] 如果用户不是 root 用户
+- [错误] 如果机器主机名不是有效的 DNS 子域
+- [警告] 如果通过网络查找无法访问主机名
+- [错误] 如果 kubelet 版本低于 kubeadm 支持的最低 kubelet 版本(当前小版本 -1)
+- [错误] 如果 kubelet 版本比所需的控制平面板版本至少高一个小(不支持的版本偏斜)
+- [警告] 如果 kubelet 服务不存在或已被禁用
+- [警告] 如果 firewalld 处于活动状态
+- [错误] 如果使用 API ​​服务器绑定的端口或 10250/10251/10252 端口
- [错误] 如果 API ​​服务器绑定的端口或 10250/10251/10252 端口已被占用

howieyuen

comment created time in 5 days

Pull request review commentkubernetes/website

[zh] translate /docs/reference/setup-tools/kubeadm/implementation-detail

+---
+title: 实现细节
+content_type: concept
+weight: 100
+---
+<!--  
+---
+reviewers:
+- luxas
+- jbeda
+title: Implementation details
+content_type: concept
+weight: 100
+---
+-->
+<!-- overview -->
+
+{{< feature-state for_k8s_version="v1.10" state="stable" >}}
+
+<!--  
+`kubeadm init` and `kubeadm join` together provides a nice user experience for creating a best-practice but bare Kubernetes cluster from scratch.
+However, it might not be obvious _how_ kubeadm does that.
+-->
+`kubeadm init` 和 `kubeadm join` 结合在一起提供了良好的用户体验,因为从头开始创建实践最佳而配置最基本的 Kubernetes 集群。
+但是,kubeadm _如何_ 做到这一点可能并不明显。
+
+<!-- 
+This document provides additional details on what happen under the hood, 
+with the aim of sharing knowledge on Kubernetes cluster best practices. 
+-->
+本文档提供了更多幕后的详细信息,旨在分享有关 Kubernetes 集群最佳实践的知识。
+
+<!-- body -->
+<!-- ## Core design principles -->
+## 核心设计原则    {#core-design-principles}
+
+<!-- The cluster that `kubeadm init` and `kubeadm join` set up should be: -->
+`kubeadm init` 和 `kubeadm join` 设置的集群应为:
+
+<!-- 
+ - **Secure**: It should adopt latest best-practices like:
+   - enforcing RBAC
+   - using the Node Authorizer
+   - using secure communication between the control plane components
+   - using secure communication between the API server and the kubelets
+   - lock-down the kubelet API
+   - locking down access to the API for system components like the kube-proxy and CoreDNS
+   - locking down what a Bootstrap Token can access
+ - **Easy to use**: The user should not have to run anything more than a couple of commands:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **Extendable**:
+   - It should _not_ favor any particular network provider. Configuring the cluster network is out-of-scope
+   - It should provide the possibility to use a config file for customizing various parameters
+ -->
+ - **安全**:它应采用最新的最佳实践,例如:
+   - 应用 RBAC
+   - 使用节点鉴权机制(Node Authorizer)
+   - 在控制平面组件之间使用安全通信
+   - 在 API 服务器和 kubelet 之间使用安全通信
+   - 锁定 kubelet API
+   - 锁定对系统组件(例如 kube-proxy 和 CoreDNS)的 API 的访问
+   - 锁定启动引导令牌(Bootstrap Token)可以访问的内容
+ - **易用**:用户只需要运行几个命令即可:
+   - `kubeadm init`
+   - `export KUBECONFIG=/etc/kubernetes/admin.conf`
+   - `kubectl apply -f <network-of-choice.yaml>`
+   - `kubeadm join --token <token> <master-ip>:<master-port>`
+ - **可扩展**:
+   - _不_ 应偏向任何特定的网络提供商。不涉及配置集群网络
+   - 应该可以使用配置文件来自定义各种参数
+
+<!-- ## Constants and well-known values and paths -->
+## 常量以及众所周知的值和路径  {#constants-and-well-known-values-and-paths}
+
+<!-- 
+In order to reduce complexity and to simplify development of higher level tools that build on top of kubeadm, it uses a
+limited set of constant values for well-known paths and file names.
+-->
+为了降低复杂性并简化基于 kubeadm 的高级工具的开发,对于众所周知的路径和文件名,它使用了一组有限的常量值。
+
+<!--  
+The Kubernetes directory `/etc/kubernetes` is a constant in the application, since it is clearly the given path
+in a majority of cases, and the most intuitive location; other constants paths and file names are:
+-->
+Kubernetes 目录 `/etc/kubernetes` 在应用程序中是一个常量,因为在大多数情况下它显然是给定的路径,并且是最直观的位置;
+其他路径常量和文件名有:
+
+<!--  
+- `/etc/kubernetes/manifests` as the path where kubelet should look for static Pod manifests. Names of static Pod manifests are:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` as the path where kubeconfig files with identities for control plane components are stored. Names of kubeconfig files are:
+    - `kubelet.conf` (`bootstrap-kubelet.conf` during TLS bootstrap)
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` for the cluster admin and kubeadm itself
+- Names of certificates and key files :
+    - `ca.crt`, `ca.key` for the Kubernetes certificate authority
+    - `apiserver.crt`, `apiserver.key` for the API server certificate
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` for the client certificate used by the API server to connect to the kubelets securely
+    - `sa.pub`, `sa.key` for the key used by the controller manager when signing ServiceAccount
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` for the front proxy certificate authority
+    - `front-proxy-client.crt`, `front-proxy-client.key` for the front proxy client
+-->
+- `/etc/kubernetes/manifests` 作为 kubelet 查找静态 Pod 清单的路径。静态 Pod 清单的名称为:
+    - `etcd.yaml`
+    - `kube-apiserver.yaml`
+    - `kube-controller-manager.yaml`
+    - `kube-scheduler.yaml`
+- `/etc/kubernetes/` 作为带有控制平面组件身份标识的 kubeconfig 文件的路径。kubeconfig 文件的名称为:
+    - `kubelet.conf` (在 TLS 引导时名称为 `bootstrap-kubelet.conf` )
+    - `controller-manager.conf`
+    - `scheduler.conf`
+    - `admin.conf` 用于集群管理员和 kubeadm 本身
+- 证书和密钥文件的名称:
+    - `ca.crt`, `ca.key` 用于 Kubernetes 证书颁发机构
+    - `apiserver.crt`, `apiserver.key` 用于 API 服务器证书
+    - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` 用于 API 服务器安全地连接到 kubelet 的客户端证书
+    - `sa.pub`, `sa.key` 用于签署 ServiceAccount 时 控制器管理器使用的密钥
+    - `front-proxy-ca.crt`, `front-proxy-ca.key` 用于前端代理证书颁发机构
+    - `front-proxy-client.crt`, `front-proxy-client.key` 用于前端代理客户端
+
+<!-- ## kubeadm init workflow internal design -->
+## kubeadm init 工作流程内部设计  {#kubeadm-init-workflow-internal-design}
+
+<!--  
+The `kubeadm init` [internal workflow](/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow) consists of a sequence of atomic work tasks to perform,
+as described in `kubeadm init`.
+-->
+`kubeadm init` [内部工作流程](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow)包含一系列要执行的原子工作任务,
+如 `kubeadm init` 中所述。
+
+<!--  
+The [`kubeadm init phase`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) command allows users to invoke each task individually, and ultimately offers a reusable and composable API/toolbox that can be used by other Kubernetes bootstrap tools, by any IT automation tool or by an advanced user for creating custom clusters.
+-->
+[`kubeadm init phase`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) 命令允许用户分别调用每个任务,
+并最终提供可重用且可组合的 API 或工具箱,其他 Kubernetes 引导工具、任何 IT 自动化工具和高级用户都可以使用它用来创建的自定义集群。
+
+<!-- ### Preflight checks -->
+### 预检  {#preflight-checks}
+
+<!-- 
+Kubeadm executes a set of preflight checks before starting the init, with the aim to verify preconditions and avoid common cluster startup problems.
+The user can skip specific preflight checks or all of them with the `--ignore-preflight-errors` option. 
+-->
+Kubeadm 在启动 init 之前执行一组预检,目的是验证先决条件并避免常见的集群启动问题。
+用户可以使用 `--ignore-preflight-errors` 选项跳过特定的预检查或全部检查。
+
+<!--  
+- [warning] If the Kubernetes version to use (specified with the `--kubernetes-version` flag) is at least one minor version higher than the kubeadm CLI version.
+- Kubernetes system requirements:
+  - if running on linux:
+    - [error] if Kernel is older than the minimum required version
+    - [error] if required cgroups subsystem aren't in set up
+  - if using docker:
+    - [warning/error] if Docker service does not exist, if it is disabled, if it is not active.
+    - [error] if Docker endpoint does not exist or does not work
+    - [warning] if docker version is not in the list of validated docker versions
+  - If using other cri engine:
+    - [error] if crictl socket does not answer
+-->
+- [警告] 如果要使用的 Kubernetes 版本(由 `--kubernetes-version` 标志指定)比 kubeadm CLI 版本至少高一个小版本。
+- Kubernetes 系统要求:
+  - 如果在 linux上运行:
+    - [错误] 如果内核早于最低要求的版本
+    - [错误] 如果未设置所需的 cgroups 子系统
+  - 如果使用 docker:
+    - [警告/错误] 如果 Docker 服务不存在、被禁用或未激活。
+    - [错误] 如果 Docker 端点不存在或不起作用
+    - [警告] 如果 docker 版本不在经过验证的 docker 版本列表中
+  - 如果使用其他 cri 引擎:
+    - [错误] 如果 crictl 套接字未应答
+<!--  
+- [error] if user is not root
+- [error] if the machine hostname is not a valid DNS subdomain
+- [warning] if the host name cannot be reached via network lookup
+- [error] if kubelet version is lower that the minimum kubelet version supported by kubeadm (current minor -1)
+- [error] if kubelet version is at least one minor higher than the required controlplane version (unsupported version skew)
+- [warning] if kubelet service does not exist or if it is disabled
+- [warning] if firewalld is active
+- [error] if API server bindPort or ports 10250/10251/10252 are used
+- [Error] if `/etc/kubernetes/manifest` folder already exists and it is not empty
+- [Error] if `/proc/sys/net/bridge/bridge-nf-call-iptables` file does not exist/does not contain 1
+- [Error] if advertise address is ipv6 and `/proc/sys/net/bridge/bridge-nf-call-ip6tables` does not exist/does not contain 1.
+- [Error] if swap is on
+- [Error] if `conntrack`, `ip`, `iptables`,  `mount`, `nsenter` commands are not present in the command path
+- [warning] if `ebtables`, `ethtool`, `socat`, `tc`, `touch`, `crictl` commands are not present in the command path
+- [warning] if extra arg flags for API server, controller manager,  scheduler contains some invalid options
+- [warning] if connection to https://API.AdvertiseAddress:API.BindPort goes through proxy
+- [warning] if connection to services subnet goes through proxy (only first address checked)
+- [warning] if connection to Pods subnet goes through proxy (only first address checked)
+-->
+- [错误] 如果用户不是 root 用户
+- [错误] 如果机器主机名不是有效的 DNS 子域
+- [警告] 如果通过网络查找无法访问主机名
+- [错误] 如果 kubelet 版本低于 kubeadm 支持的最低 kubelet 版本(当前小版本 -1)
+- [错误] 如果 kubelet 版本比所需的控制平面板版本至少高一个小(不支持的版本偏斜)
+- [警告] 如果 kubelet 服务不存在或已被禁用
+- [警告] 如果 firewalld 处于活动状态

如果防火墙处于开启状态 防火墙一般是开启、关闭之说。

howieyuen

comment created time in 5 days

PullRequestReviewEvent
PullRequestReviewEvent
PullRequestReviewEvent

issue closedistio/istio

How to set the pilot-discovery parameter when you install istio 1.6.0 cluster?

How to change the default parameter values for pilot-discovery when installing istio 1.6.0 cluster?

see https://istio.io/v1.6/docs/reference/commands/pilot-discovery/.

Can the parameters be set like this for pilot-discovery?

istioctl install --set values.global.controlPlaneSecurityEnabled=true

closed time in 6 days

tanjunchen

pull request commentkubernetes/dashboard

update chinese translation

/lgtm

hwdef

comment created time in 7 days

Pull request review commentkubernetes/dashboard

update chinese translation

         <x id="START_TAG_MAT-ICON" ctype="x-mat-icon" equiv-text="&lt;mat-icon>"/>open_in_new<x id="CLOSE_TAG_MAT-ICON" ctype="x-mat-icon" equiv-text="&lt;/mat-icon>"/>       <x id="CLOSE_LINK" ctype="x-a" equiv-text="&lt;/a>"/> to learn more.     </source>-        <target state="new">-      You can <x id="START_LINK" ctype="x-a" equiv-text="&lt;a>"/>deploy a containerized app<x id="CLOSE_LINK" ctype="x-a" equiv-text="&lt;/a>"/>, select other namespace or-      <x id="START_LINK_1" ctype="x-a" equiv-text="&lt;a>"/>take the Dashboard Tour+        <target>+      你能 <x id="START_LINK" ctype="x-a" equiv-text="&lt;a>"/>部署一个容器化应用<x id="CLOSE_LINK" ctype="x-a" equiv-text="&lt;/a>"/>, 选择其他 namespace,或者+      <x id="START_LINK_1" ctype="x-a" equiv-text="&lt;a>"/>阅读 Dashboard Tour

Tour -> 教程 或者 说明?

hwdef

comment created time in 7 days

PullRequestReviewEvent
PullRequestReviewEvent

Pull request review commentkubernetes/website

update traslation: safely-drain-node

 For a given eviction request, there are two cases: - 至少匹配一个预算。在这种情况下,上述三种回答中的任何一种都可能适用。  <!-- -In some cases, an application may reach a broken state where it will never return anything-other than 429 or 500. This can happen, for example, if the replacement pod created by the-application's controller does not become ready, or if the last pod evicted has a very long-termination grace period.+## Stuck evictions++In some cases, an application may reach a broken state, one where unless you intervene the+eviction API will never return anything other than 429 or 500.++For example: this can happen if ReplicaSet is creating Pods for your application but+the replacement Pods do not become `Ready`. You can also see similar symptoms if the+last Pod evicted has a very long termination grace period.  In this case, there are two potential solutions: -- Abort or pause the automated operation. Investigate the reason for the stuck application, and restart the automation.-- After a suitably long wait, `DELETE` the pod instead of using the eviction API.+- Abort or pause the automated operation. Investigate the reason for the stuck application,

could you sync there changes ?

lianghao208

comment created time in 8 days

PullRequestReviewEvent
PullRequestReviewEvent

pull request commentkubernetes/website

fix typo "跟据" to "根据"

/check-cla /retest

shizhyy

comment created time in 8 days

PullRequestReviewEvent
PullRequestReviewEvent
PullRequestReviewEvent

startedzq2599/blog_demos

started time in 9 days

issue openedistio/istio

How to set the pilot-discovery parameter when you install istio 1.6.0 cluster?

How to change the default parameter values for pilot-discovery when installing istio 1.6.0 cluster?

see https://istio.io/v1.6/docs/reference/commands/pilot-discovery/.

Can the parameters be set like this for pilot-discovery?

istioctl install --set values.global.controlPlaneSecurityEnabled=true

created time in 10 days

PullRequestReviewEvent
PullRequestReviewEvent
PullRequestReviewEvent

issue commentkubernetes/kubernetes

client-go DeleteCollection doesn't work as expected.

/sig api-machinery

tanjunchen

comment created time in 11 days

issue commentkubernetes/kubernetes

client-go DeleteCollection doesn't work as expected.

sig/api-machinery

tanjunchen

comment created time in 11 days

issue openedkubernetes/kubernetes

client-go DeleteCollection doesn't work as expected.

<!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!

If the matter is security related, please disclose it privately via https://kubernetes.io/security/ -->

What happened:

see https://github.com/istio/istio/issues/27910

client-go DeleteCollection doesn't work as expected.The function call returns normally without any error but it doesn't delete anything.

What you expected to happen:

I hope that deleting a non-existent resource will report an error. No error is reported when deleting non-existent resources. I don't know if it was designed like this on purpose. No error is reported, why is the return type error?

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):

  • Cloud provider or hardware configuration:

  • OS (e.g: cat /etc/os-release):

  • Kernel (e.g. uname -a):

  • Install tools:

  • Network plugin and version (if this is a network-related bug):

  • Others:

created time in 11 days

issue commentistio/istio

istio DeleteCollection operation fails can't return error

@howardjohn
I am trying to delete a resource that does not exist, but no error is reported.

tanjunchen

comment created time in 11 days

create barnchtanjunchen/kubernetes

branch : schedule-dependency-20201013

created branch time in 11 days

issue commentistio/istio

istio DeleteCollection operation fails can't return error

see


// DeleteCollection deletes a collection of objects.
func (c *gateways) DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error {
	var timeout time.Duration
	if listOpts.TimeoutSeconds != nil {
		timeout = time.Duration(*listOpts.TimeoutSeconds) * time.Second
	}
	return c.client.Delete().
		Namespace(c.ns).
		Resource("gateways").
		VersionedParams(&listOpts, scheme.ParameterCodec).
		Timeout(timeout).
		Body(&opts).
		Do(ctx).
		Error()
}
tanjunchen

comment created time in 11 days

issue openedistio/istio

istio DeleteCollection operation fails can't return error

(NOTE: This is used to report product bugs: To report a security vulnerability, please visit https://istio.io/about/security-vulnerabilities/ To ask questions about how to use Istio, please visit https://discuss.istio.io )

Bug description

	clusterConfig, err := clientcmd.BuildConfigFromFlags("", Kubeconfig())
	if err != nil {
		fmt.Println(err)
	}
	istio, err := versionedclient.NewForConfig(clusterConfig)
	err = istio.NetworkingV1beta1().Gateways("test").DeleteCollection(context.TODO(), metav1.DeleteOptions{}, metav1.ListOptions{})
	fmt.Println("===>", err)

The DeleteCollection operation will not return an error regardless of whether the resource exists.

Affected product area (please put an X in all that apply)

[ ] Docs [ ] Installation [] Networking [ ] Performance and Scalability [ ] Extensions and Telemetry [ ] Security [ ] Test and Release [ ] User Experience [x] Developer Infrastructure

Affected features (please put an X in all that apply)

[ ] Multi Cluster [ ] Virtual Machine [ ] Multi Control Plane

Expected behavior

Steps to reproduce the bug

Version (include the output of istioctl version --remote and kubectl version --short and helm version if you used Helm)

[root@test-10 xxx]# istioctl version --remote
client version: 1.6.0
control plane version: 1.6.0
data plane version: 1.6.0 (3 proxies)

How was Istio installed?

Environment where bug was observed (cloud vendor, OS, etc)

operator installation.

Additionally, please consider attaching a cluster state archive by attaching the dump file to this issue.

created time in 11 days

issue commentkubernetes/kubernetes

Make sub e2e framework independent as possible

/remove-lifecycle stale

oomichi

comment created time in 13 days

pull request commentkubernetes/website

Update cluster-intro.html: Fix Chinese typo

I signed it /retest /check-cla

purplemysticx

comment created time in 14 days

pull request commentkubernetes/website

Update cluster-intro.html: Fix Chinese typo

/ok-to-test

purplemysticx

comment created time in 14 days

Pull request review commentkubernetes/website

translate docs/setup/production-environment/tools/kubespray.md

+---+title: 使用 Kubespray 安装 Kubernetes+content_type: concept+weight: 30+---+<!--+title: Installing Kubernetes with Kubespray+content_type: concept+weight: 30+-->++<!-- overview -->++<!--+This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray).+-->+此快速入门有助于使用 [Kubespray](https://github.com/kubernetes-sigs) 安装在 GCE、Azure、OpenStack、AWS、vSphere、Packet(裸机)、Oracle Cloud Infrastructure(实验性)或 Baremetal 上托管的 Kubernetes 集群。++<!--+Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides:+-->+Kubespray 是一个由 [Ansible](https://docs.ansible.com/) playbooks、[清单(inventory)](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md)、供应工具和通用 OS/Kubernetes 集群配置管理任务的领域知识组成的。 Kubespray 提供:++<!--+* a highly available cluster+* composable attributes+* support for most popular Linux distributions+  * Ubuntu 16.04, 18.04, 20.04+  * CentOS/RHEL/Oracle Linux 7, 8+  * Debian Buster, Jessie, Stretch, Wheezy+  * Fedora 31, 32+  * Fedora CoreOS+  * openSUSE Leap 15+  * Flatcar Container Linux by Kinvolk+* continuous integration tests+-->+* 高可用性集群+* 可组合属性+* 支持大多数流行的 Linux 发行版+   * Ubuntu 16.04、18.04、20.04+   * CentOS / RHEL / Oracle Linux 7、8+   * Debian Buster,Jessie,Stretch,Wheezy+   * Fedora 31、32+   * Fedora CoreOS+   * openSUSE Leap 15+   * Kinvolk 的 Flatcar Container Linux+* 持续集成测试++<!--+To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to+[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).+-->+要选择最适合你的用例的工具,请阅读[此比较](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md)以+ [kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/) 和 [kops](/zh/docs/setup/production-environment/tools/kops/) 。+<!-- body -->++<!--+## Creating a cluster++### (1/5) Meet the underlay requirements++-->++## 创建集群++### (1/5)满足下层设施要求++<!--+Provision servers with the following [requirements](https://github.com/kubernetes-sigs/kubespray#requirements):+-->+按以下[要求](https://github.com/kubernetes-sigs/kubespray#requirements)来配置服务器:++<!--+* **Ansible v2.9 and python-netaddr is installed on the machine that will run Ansible commands**+* **Jinja 2.11 (or newer) is required to run the Ansible Playbooks**+* The target servers must have access to the Internet in order to pull docker images. Otherwise, additional configuration is required ([See Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))+* The target servers are configured to allow **IPv4 forwarding**+* **Your ssh key must be copied** to all the servers part of your inventory+* The **firewalls are not managed**, you'll need to implement your own rules the way you used to. in order to avoid any issue during deployment you should disable your firewall+* If kubespray is ran from non-root user account, correct privilege escalation method should be configured in the target servers. Then the `ansible_become` flag or command parameters `--become` or `-b` should be specified+-->+* 在将运行 Ansible 命令的计算机上安装 Ansible v2.9 和 python-netaddr+* **运行 Ansible Playbook 需要 Jinja 2.11(或更高版本)**+* 目标服务器必须有权访问 Internet 才能拉取 Docker 镜像。否则,需要其他配置([请参见离线环境](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))+* 目标服务器配置为允许 IPv4 转发+* **你的 SSH 密钥必须复制**到清单中的所有服务器部分+* 防火墙不受管理,你将需要按照以前的方式实施自己的规则。为了避免在部署过程中出现任何问题,你应该禁用防火墙+* 如果从非 root 用户帐户运行 kubespray,则应在目标服务器中配置正确的特权升级方法。然后应指定“ansible_become” 标志或命令参数 “--become” 或 “-b”++<!--+Kubespray provides the following utilities to help provision your environment:++* [Terraform](https://www.terraform.io/) scripts for the following cloud providers:+  * [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws)+  * [OpenStack](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/openstack)+  * [Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet)+-->+Kubespray提供以下实用程序来帮助你设置环境:++* 为以下云驱动提供的 [Terraform](https://www.terraform.io/) 脚本:+* [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws)+* [OpenStack](http://sitebeskuethree/contrigetbernform/contribeskubernform/contribeskupernform/https/sitebesku/master/)+* [Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet)++<!--+### (2/5) Compose an inventory file++After you provision your servers, create an [inventory file for Ansible](https://docs.ansible.com/ansible/intro_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)".++### (3/5) Plan your cluster deployment++Kubespray provides the ability to customize many aspects of the deployment:++-->+### (2/5)编写清单文件++设置服务器后,请创建一个 [Ansible 的清单文件](https://docs.ansible.com/ansible/intro_inventory.html)。你可以手动执行此操作,也可以通过动态清单脚本执行此操作。有关更多信息,请参阅“[建立你自己的清单](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)”。++### (3/5)规划集群部署++Kubespray 能够自定义部署的许多方面:++<!--+* Choice deployment mode: kubeadm or non-kubeadm+* CNI (networking) plugins+* DNS configuration+* Choice of control plane: native/binary or containerized+* Component versions+* Calico route reflectors+* Component runtime options+  * {{< glossary_tooltip term_id="docker" >}}+  * {{< glossary_tooltip term_id="containerd" >}}+  * {{< glossary_tooltip term_id="cri-o" >}}+* Certificate generation methods+-->+* 选择部署模式: kubeadm 或非 kubeadm+* CNI(网络)插件+* DNS 配置+* 控制平面的选择:本机/可执行文件或容器化+* 组件版本+* Calico 路由反射器+* 组件运行时选项+  * {{< glossary_tooltip term_id="docker" >}}+  * {{< glossary_tooltip term_id="containerd" >}}+  * {{< glossary_tooltip term_id="cri-o" >}}+* 证书生成方式++<!--+Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes.+-->++可以修改[变量文件](https://docs.ansible.com/ansible/playbooks_variables.html)以进行 Kubespray 定制。+如果你刚刚开始使用 Kubespray,请考虑使用 Kubespray 默认设置来部署你的集群并探索 Kubernetes 。+<!--+### (4/5) Deploy a Cluster++Next, deploy your cluster:++Cluster deployment using [ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment).+-->++### (4/5)部署集群++接下来,部署你的集群:++使用 [ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment) 进行群集部署。

其他同理

CriaHu

comment created time in 14 days

Pull request review commentkubernetes/website

translate docs/setup/production-environment/tools/kubespray.md

+---+title: 使用 Kubespray 安装 Kubernetes+content_type: concept+weight: 30+---+<!--+title: Installing Kubernetes with Kubespray+content_type: concept+weight: 30+-->++<!-- overview -->++<!--+This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray).+-->+此快速入门有助于使用 [Kubespray](https://github.com/kubernetes-sigs) 安装在 GCE、Azure、OpenStack、AWS、vSphere、Packet(裸机)、Oracle Cloud Infrastructure(实验性)或 Baremetal 上托管的 Kubernetes 集群。++<!--+Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides:+-->+Kubespray 是一个由 [Ansible](https://docs.ansible.com/) playbooks、[清单(inventory)](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md)、供应工具和通用 OS/Kubernetes 集群配置管理任务的领域知识组成的。 Kubespray 提供:++<!--+* a highly available cluster+* composable attributes+* support for most popular Linux distributions+  * Ubuntu 16.04, 18.04, 20.04+  * CentOS/RHEL/Oracle Linux 7, 8+  * Debian Buster, Jessie, Stretch, Wheezy+  * Fedora 31, 32+  * Fedora CoreOS+  * openSUSE Leap 15+  * Flatcar Container Linux by Kinvolk+* continuous integration tests+-->+* 高可用性集群+* 可组合属性+* 支持大多数流行的 Linux 发行版+   * Ubuntu 16.04、18.04、20.04+   * CentOS / RHEL / Oracle Linux 7、8+   * Debian Buster,Jessie,Stretch,Wheezy+   * Fedora 31、32+   * Fedora CoreOS+   * openSUSE Leap 15+   * Kinvolk 的 Flatcar Container Linux+* 持续集成测试++<!--+To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to+[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).+-->+要选择最适合你的用例的工具,请阅读[此比较](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md)以+ [kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/) 和 [kops](/zh/docs/setup/production-environment/tools/kops/) 。+<!-- body -->++<!--+## Creating a cluster++### (1/5) Meet the underlay requirements++-->++## 创建集群++### (1/5)满足下层设施要求++<!--+Provision servers with the following [requirements](https://github.com/kubernetes-sigs/kubespray#requirements):+-->+按以下[要求](https://github.com/kubernetes-sigs/kubespray#requirements)来配置服务器:++<!--+* **Ansible v2.9 and python-netaddr is installed on the machine that will run Ansible commands**+* **Jinja 2.11 (or newer) is required to run the Ansible Playbooks**+* The target servers must have access to the Internet in order to pull docker images. Otherwise, additional configuration is required ([See Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))+* The target servers are configured to allow **IPv4 forwarding**+* **Your ssh key must be copied** to all the servers part of your inventory+* The **firewalls are not managed**, you'll need to implement your own rules the way you used to. in order to avoid any issue during deployment you should disable your firewall+* If kubespray is ran from non-root user account, correct privilege escalation method should be configured in the target servers. Then the `ansible_become` flag or command parameters `--become` or `-b` should be specified+-->+* 在将运行 Ansible 命令的计算机上安装 Ansible v2.9 和 python-netaddr+* **运行 Ansible Playbook 需要 Jinja 2.11(或更高版本)**+* 目标服务器必须有权访问 Internet 才能拉取 Docker 镜像。否则,需要其他配置([请参见离线环境](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))+* 目标服务器配置为允许 IPv4 转发+* **你的 SSH 密钥必须复制**到清单中的所有服务器部分+* 防火墙不受管理,你将需要按照以前的方式实施自己的规则。为了避免在部署过程中出现任何问题,你应该禁用防火墙+* 如果从非 root 用户帐户运行 kubespray,则应在目标服务器中配置正确的特权升级方法。然后应指定“ansible_become” 标志或命令参数 “--become” 或 “-b”++<!--+Kubespray provides the following utilities to help provision your environment:++* [Terraform](https://www.terraform.io/) scripts for the following cloud providers:+  * [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws)+  * [OpenStack](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/openstack)+  * [Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet)+-->+Kubespray提供以下实用程序来帮助你设置环境:++* 为以下云驱动提供的 [Terraform](https://www.terraform.io/) 脚本:+* [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws)+* [OpenStack](http://sitebeskuethree/contrigetbernform/contribeskubernform/contribeskupernform/https/sitebesku/master/)+* [Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet)++<!--+### (2/5) Compose an inventory file++After you provision your servers, create an [inventory file for Ansible](https://docs.ansible.com/ansible/intro_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)".++### (3/5) Plan your cluster deployment++Kubespray provides the ability to customize many aspects of the deployment:++-->+### (2/5)编写清单文件++设置服务器后,请创建一个 [Ansible 的清单文件](https://docs.ansible.com/ansible/intro_inventory.html)。你可以手动执行此操作,也可以通过动态清单脚本执行此操作。有关更多信息,请参阅“[建立你自己的清单](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)”。++### (3/5)规划集群部署++Kubespray 能够自定义部署的许多方面:++<!--+* Choice deployment mode: kubeadm or non-kubeadm+* CNI (networking) plugins+* DNS configuration+* Choice of control plane: native/binary or containerized+* Component versions+* Calico route reflectors+* Component runtime options+  * {{< glossary_tooltip term_id="docker" >}}+  * {{< glossary_tooltip term_id="containerd" >}}+  * {{< glossary_tooltip term_id="cri-o" >}}+* Certificate generation methods+-->+* 选择部署模式: kubeadm 或非 kubeadm+* CNI(网络)插件+* DNS 配置+* 控制平面的选择:本机/可执行文件或容器化+* 组件版本+* Calico 路由反射器+* 组件运行时选项+  * {{< glossary_tooltip term_id="docker" >}}+  * {{< glossary_tooltip term_id="containerd" >}}+  * {{< glossary_tooltip term_id="cri-o" >}}+* 证书生成方式++<!--+Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes.+-->++可以修改[变量文件](https://docs.ansible.com/ansible/playbooks_variables.html)以进行 Kubespray 定制。+如果你刚刚开始使用 Kubespray,请考虑使用 Kubespray 默认设置来部署你的集群并探索 Kubernetes 。+<!--+### (4/5) Deploy a Cluster++Next, deploy your cluster:++Cluster deployment using [ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment).+-->++### (4/5)部署集群++接下来,部署你的集群:++使用 [ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment) 进行群集部署。++```shell+ansible-playbook -i your/inventory/inventory.ini cluster.yml -b -v \+  --private-key=~/.ssh/private_key+```+<!--+Large deployments (100+ nodes) may require [specific adjustments](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/large-deployments.md) for best results.+-->+大型部署(超过 100 个节点)可能需要[特定的调整](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/large-deployments.md),以获得最佳效果。++<!--+### (5/5) Verify the deployment++Kubespray provides a way to verify inter-pod connectivity and DNS resolve with [Netchecker](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md). Netchecker ensures the netchecker-agents pods can resolve DNS requests and ping each over within the default namespace. Those pods mimic similar behavior of the rest of the workloads and serve as cluster health indicators.+-->+### (5/5)验证部署++Kubespray 提供了一种使用 [Netchecker](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md)+验证 Pod 间连接和 DNS 解析的方法。+Netchecker 确保 netchecker-agents pod 可以解析。+DNS 请求并在默认名称空间内对每个请求执行 ping 操作。+这些 Pods 模仿其余工作负载的类似行为,并用作群集运行状况指示器。+<!--+## Cluster operations++Kubespray provides additional playbooks to manage your cluster: _scale_ and _upgrade_.+-->+## 集群操作++Kubespray 提供了其他 Playbooks 来管理集群: _scale_ 和 _upgrade_。+<!--+### Scale your cluster++You can add worker nodes from your cluster by running the scale playbook. For more information, see "[Adding nodes](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#adding-nodes)".+You can remove worker nodes from your cluster by running the remove-node playbook. For more information, see "[Remove nodes](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#remove-nodes)".+-->+### 扩缩集群++你可以通过运行 scale playbook 向集群中添加工作节点。有关更多信息,+请参见“ [添加节点](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#adding-nodes)”。+你可以通过运行 remove-node playbook 来从集群中删除工作节点。有关更多信息,+请参见“[删除节点](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#remove-nodes)”。+<!--+### Upgrade your cluster++You can upgrade your cluster by running the upgrade-cluster playbook. For more information, see "[Upgrades](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/upgrades.md)".+-->+### 升级集群++你可以通过运行 upgrade-cluster Playbook 来升级群集。有关更多信息,请参见+“[升级](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/upgrades.md)”。+<!--+## Cleanup++You can reset your nodes and wipe out all components installed with Kubespray via the [reset playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/reset.yml).++{{< caution >}}+When running the reset playbook, be sure not to accidentally target your production cluster!+{{< /caution >}}+-->+## 清理++你可以通过[reset](https://github.com/kubernetes-sigs/kubespray/blob/master/reset.yml) Playbook
你可以通过 [reset](https://github.com/kubernetes-sigs/kubespray/blob/master/reset.yml) Playbook
CriaHu

comment created time in 14 days

Pull request review commentkubernetes/website

translate docs/setup/production-environment/tools/kubespray.md

+---+title: 使用 Kubespray 安装 Kubernetes+content_type: concept+weight: 30+---+<!--+title: Installing Kubernetes with Kubespray+content_type: concept+weight: 30+-->++<!-- overview -->++<!--+This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray).+-->+此快速入门有助于使用 [Kubespray](https://github.com/kubernetes-sigs) 安装在 GCE、Azure、OpenStack、AWS、vSphere、Packet(裸机)、Oracle Cloud Infrastructure(实验性)或 Baremetal 上托管的 Kubernetes 集群。++<!--+Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides:+-->+Kubespray 是一个由 [Ansible](https://docs.ansible.com/) playbooks、[清单(inventory)](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md)、供应工具和通用 OS/Kubernetes 集群配置管理任务的领域知识组成的。 Kubespray 提供:++<!--+* a highly available cluster+* composable attributes+* support for most popular Linux distributions+  * Ubuntu 16.04, 18.04, 20.04+  * CentOS/RHEL/Oracle Linux 7, 8+  * Debian Buster, Jessie, Stretch, Wheezy+  * Fedora 31, 32+  * Fedora CoreOS+  * openSUSE Leap 15+  * Flatcar Container Linux by Kinvolk+* continuous integration tests+-->+* 高可用性集群+* 可组合属性+* 支持大多数流行的 Linux 发行版+   * Ubuntu 16.04、18.04、20.04+   * CentOS / RHEL / Oracle Linux 7、8+   * Debian Buster,Jessie,Stretch,Wheezy+   * Fedora 31、32+   * Fedora CoreOS+   * openSUSE Leap 15+   * Kinvolk 的 Flatcar Container Linux+* 持续集成测试++<!--+To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to+[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).+-->+要选择最适合你的用例的工具,请阅读[此比较](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md)以+ [kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/) 和 [kops](/zh/docs/setup/production-environment/tools/kops/) 。+<!-- body -->++<!--+## Creating a cluster++### (1/5) Meet the underlay requirements++-->++## 创建集群++### (1/5)满足下层设施要求++<!--+Provision servers with the following [requirements](https://github.com/kubernetes-sigs/kubespray#requirements):+-->+按以下[要求](https://github.com/kubernetes-sigs/kubespray#requirements)来配置服务器:++<!--+* **Ansible v2.9 and python-netaddr is installed on the machine that will run Ansible commands**+* **Jinja 2.11 (or newer) is required to run the Ansible Playbooks**+* The target servers must have access to the Internet in order to pull docker images. Otherwise, additional configuration is required ([See Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))+* The target servers are configured to allow **IPv4 forwarding**+* **Your ssh key must be copied** to all the servers part of your inventory+* The **firewalls are not managed**, you'll need to implement your own rules the way you used to. in order to avoid any issue during deployment you should disable your firewall+* If kubespray is ran from non-root user account, correct privilege escalation method should be configured in the target servers. Then the `ansible_become` flag or command parameters `--become` or `-b` should be specified+-->+* 在将运行 Ansible 命令的计算机上安装 Ansible v2.9 和 python-netaddr+* **运行 Ansible Playbook 需要 Jinja 2.11(或更高版本)**+* 目标服务器必须有权访问 Internet 才能拉取 Docker 镜像。否则,需要其他配置([请参见离线环境](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))+* 目标服务器配置为允许 IPv4 转发+* **你的 SSH 密钥必须复制**到清单中的所有服务器部分+* 防火墙不受管理,你将需要按照以前的方式实施自己的规则。为了避免在部署过程中出现任何问题,你应该禁用防火墙+* 如果从非 root 用户帐户运行 kubespray,则应在目标服务器中配置正确的特权升级方法。然后应指定“ansible_become” 标志或命令参数 “--become” 或 “-b”++<!--+Kubespray provides the following utilities to help provision your environment:++* [Terraform](https://www.terraform.io/) scripts for the following cloud providers:+  * [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws)+  * [OpenStack](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/openstack)+  * [Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet)+-->+Kubespray提供以下实用程序来帮助你设置环境:++* 为以下云驱动提供的 [Terraform](https://www.terraform.io/) 脚本:+* [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws)+* [OpenStack](http://sitebeskuethree/contrigetbernform/contribeskubernform/contribeskupernform/https/sitebesku/master/)+* [Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet)++<!--+### (2/5) Compose an inventory file++After you provision your servers, create an [inventory file for Ansible](https://docs.ansible.com/ansible/intro_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)".++### (3/5) Plan your cluster deployment++Kubespray provides the ability to customize many aspects of the deployment:++-->+### (2/5)编写清单文件++设置服务器后,请创建一个 [Ansible 的清单文件](https://docs.ansible.com/ansible/intro_inventory.html)。你可以手动执行此操作,也可以通过动态清单脚本执行此操作。有关更多信息,请参阅“[建立你自己的清单](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)”。++### (3/5)规划集群部署++Kubespray 能够自定义部署的许多方面:++<!--+* Choice deployment mode: kubeadm or non-kubeadm+* CNI (networking) plugins+* DNS configuration+* Choice of control plane: native/binary or containerized+* Component versions+* Calico route reflectors+* Component runtime options+  * {{< glossary_tooltip term_id="docker" >}}+  * {{< glossary_tooltip term_id="containerd" >}}+  * {{< glossary_tooltip term_id="cri-o" >}}+* Certificate generation methods+-->+* 选择部署模式: kubeadm 或非 kubeadm+* CNI(网络)插件+* DNS 配置+* 控制平面的选择:本机/可执行文件或容器化+* 组件版本+* Calico 路由反射器+* 组件运行时选项+  * {{< glossary_tooltip term_id="docker" >}}+  * {{< glossary_tooltip term_id="containerd" >}}+  * {{< glossary_tooltip term_id="cri-o" >}}+* 证书生成方式++<!--+Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes.+-->++可以修改[变量文件](https://docs.ansible.com/ansible/playbooks_variables.html)以进行 Kubespray 定制。+如果你刚刚开始使用 Kubespray,请考虑使用 Kubespray 默认设置来部署你的集群并探索 Kubernetes 。+<!--+### (4/5) Deploy a Cluster++Next, deploy your cluster:++Cluster deployment using [ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment).+-->++### (4/5)部署集群++接下来,部署你的集群:++使用 [ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment) 进行群集部署。++```shell+ansible-playbook -i your/inventory/inventory.ini cluster.yml -b -v \+  --private-key=~/.ssh/private_key+```+<!--+Large deployments (100+ nodes) may require [specific adjustments](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/large-deployments.md) for best results.+-->+大型部署(超过 100 个节点)可能需要[特定的调整](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/large-deployments.md),以获得最佳效果。++<!--+### (5/5) Verify the deployment++Kubespray provides a way to verify inter-pod connectivity and DNS resolve with [Netchecker](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md). Netchecker ensures the netchecker-agents pods can resolve DNS requests and ping each over within the default namespace. Those pods mimic similar behavior of the rest of the workloads and serve as cluster health indicators.+-->+### (5/5)验证部署++Kubespray 提供了一种使用 [Netchecker](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md)+验证 Pod 间连接和 DNS 解析的方法。+Netchecker 确保 netchecker-agents pod 可以解析。+DNS 请求并在默认名称空间内对每个请求执行 ping 操作。+这些 Pods 模仿其余工作负载的类似行为,并用作群集运行状况指示器。+<!--+## Cluster operations++Kubespray provides additional playbooks to manage your cluster: _scale_ and _upgrade_.+-->+## 集群操作++Kubespray 提供了其他 Playbooks 来管理集群: _scale_ 和 _upgrade_。+<!--+### Scale your cluster++You can add worker nodes from your cluster by running the scale playbook. For more information, see "[Adding nodes](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#adding-nodes)".+You can remove worker nodes from your cluster by running the remove-node playbook. For more information, see "[Remove nodes](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#remove-nodes)".+-->+### 扩缩集群++你可以通过运行 scale playbook 向集群中添加工作节点。有关更多信息,+请参见“ [添加节点](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#adding-nodes)”。+你可以通过运行 remove-node playbook 来从集群中删除工作节点。有关更多信息,+请参见“[删除节点](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#remove-nodes)”。+<!--+### Upgrade your cluster++You can upgrade your cluster by running the upgrade-cluster playbook. For more information, see "[Upgrades](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/upgrades.md)".+-->+### 升级集群++你可以通过运行 upgrade-cluster Playbook 来升级群集。有关更多信息,请参见

1

CriaHu

comment created time in 14 days

Pull request review commentkubernetes/website

translate docs/setup/production-environment/tools/kubespray.md

+---+title: 使用 Kubespray 安装 Kubernetes+content_type: concept+weight: 30+---+<!--+title: Installing Kubernetes with Kubespray+content_type: concept+weight: 30+-->++<!-- overview -->++<!--+This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray).+-->+此快速入门有助于使用 [Kubespray](https://github.com/kubernetes-sigs) 安装在 GCE、Azure、OpenStack、AWS、vSphere、Packet(裸机)、Oracle Cloud Infrastructure(实验性)或 Baremetal 上托管的 Kubernetes 集群。++<!--+Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides:+-->+Kubespray 是一个由 [Ansible](https://docs.ansible.com/) playbooks、[清单(inventory)](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md)、供应工具和通用 OS/Kubernetes 集群配置管理任务的领域知识组成的。 Kubespray 提供:++<!--+* a highly available cluster+* composable attributes+* support for most popular Linux distributions+  * Ubuntu 16.04, 18.04, 20.04+  * CentOS/RHEL/Oracle Linux 7, 8+  * Debian Buster, Jessie, Stretch, Wheezy+  * Fedora 31, 32+  * Fedora CoreOS+  * openSUSE Leap 15+  * Flatcar Container Linux by Kinvolk+* continuous integration tests+-->+* 高可用性集群+* 可组合属性+* 支持大多数流行的 Linux 发行版+   * Ubuntu 16.04、18.04、20.04+   * CentOS / RHEL / Oracle Linux 7、8+   * Debian Buster,Jessie,Stretch,Wheezy+   * Fedora 31、32+   * Fedora CoreOS+   * openSUSE Leap 15+   * Kinvolk 的 Flatcar Container Linux+* 持续集成测试++<!--+To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to+[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).+-->+要选择最适合你的用例的工具,请阅读[此比较](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md)以+ [kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/) 和 [kops](/zh/docs/setup/production-environment/tools/kops/) 。+<!-- body -->++<!--+## Creating a cluster++### (1/5) Meet the underlay requirements++-->++## 创建集群++### (1/5)满足下层设施要求++<!--+Provision servers with the following [requirements](https://github.com/kubernetes-sigs/kubespray#requirements):+-->+按以下[要求](https://github.com/kubernetes-sigs/kubespray#requirements)来配置服务器:++<!--+* **Ansible v2.9 and python-netaddr is installed on the machine that will run Ansible commands**+* **Jinja 2.11 (or newer) is required to run the Ansible Playbooks**+* The target servers must have access to the Internet in order to pull docker images. Otherwise, additional configuration is required ([See Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))+* The target servers are configured to allow **IPv4 forwarding**+* **Your ssh key must be copied** to all the servers part of your inventory+* The **firewalls are not managed**, you'll need to implement your own rules the way you used to. in order to avoid any issue during deployment you should disable your firewall+* If kubespray is ran from non-root user account, correct privilege escalation method should be configured in the target servers. Then the `ansible_become` flag or command parameters `--become` or `-b` should be specified+-->+* 在将运行 Ansible 命令的计算机上安装 Ansible v2.9 和 python-netaddr+* **运行 Ansible Playbook 需要 Jinja 2.11(或更高版本)**+* 目标服务器必须有权访问 Internet 才能拉取 Docker 镜像。否则,需要其他配置([请参见离线环境](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))+* 目标服务器配置为允许 IPv4 转发+* **你的 SSH 密钥必须复制**到清单中的所有服务器部分+* 防火墙不受管理,你将需要按照以前的方式实施自己的规则。为了避免在部署过程中出现任何问题,你应该禁用防火墙+* 如果从非 root 用户帐户运行 kubespray,则应在目标服务器中配置正确的特权升级方法。然后应指定“ansible_become” 标志或命令参数 “--become” 或 “-b”++<!--+Kubespray provides the following utilities to help provision your environment:++* [Terraform](https://www.terraform.io/) scripts for the following cloud providers:+  * [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws)+  * [OpenStack](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/openstack)+  * [Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet)+-->+Kubespray提供以下实用程序来帮助你设置环境:++* 为以下云驱动提供的 [Terraform](https://www.terraform.io/) 脚本:+* [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws)+* [OpenStack](http://sitebeskuethree/contrigetbernform/contribeskubernform/contribeskupernform/https/sitebesku/master/)+* [Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet)++<!--+### (2/5) Compose an inventory file++After you provision your servers, create an [inventory file for Ansible](https://docs.ansible.com/ansible/intro_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)".++### (3/5) Plan your cluster deployment++Kubespray provides the ability to customize many aspects of the deployment:++-->+### (2/5)编写清单文件++设置服务器后,请创建一个 [Ansible 的清单文件](https://docs.ansible.com/ansible/intro_inventory.html)。你可以手动执行此操作,也可以通过动态清单脚本执行此操作。有关更多信息,请参阅“[建立你自己的清单](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)”。++### (3/5)规划集群部署++Kubespray 能够自定义部署的许多方面:++<!--+* Choice deployment mode: kubeadm or non-kubeadm+* CNI (networking) plugins+* DNS configuration+* Choice of control plane: native/binary or containerized+* Component versions+* Calico route reflectors+* Component runtime options+  * {{< glossary_tooltip term_id="docker" >}}+  * {{< glossary_tooltip term_id="containerd" >}}+  * {{< glossary_tooltip term_id="cri-o" >}}+* Certificate generation methods+-->+* 选择部署模式: kubeadm 或非 kubeadm+* CNI(网络)插件+* DNS 配置+* 控制平面的选择:本机/可执行文件或容器化+* 组件版本+* Calico 路由反射器+* 组件运行时选项+  * {{< glossary_tooltip term_id="docker" >}}+  * {{< glossary_tooltip term_id="containerd" >}}+  * {{< glossary_tooltip term_id="cri-o" >}}+* 证书生成方式++<!--+Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes.+-->++可以修改[变量文件](https://docs.ansible.com/ansible/playbooks_variables.html)以进行 Kubespray 定制。+如果你刚刚开始使用 Kubespray,请考虑使用 Kubespray 默认设置来部署你的集群并探索 Kubernetes 。+<!--+### (4/5) Deploy a Cluster++Next, deploy your cluster:++Cluster deployment using [ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment).+-->++### (4/5)部署集群++接下来,部署你的集群:++使用 [ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment) 进行群集部署。++```shell+ansible-playbook -i your/inventory/inventory.ini cluster.yml -b -v \+  --private-key=~/.ssh/private_key+```+<!--+Large deployments (100+ nodes) may require [specific adjustments](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/large-deployments.md) for best results.+-->+大型部署(超过 100 个节点)可能需要[特定的调整](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/large-deployments.md),以获得最佳效果。++<!--+### (5/5) Verify the deployment++Kubespray provides a way to verify inter-pod connectivity and DNS resolve with [Netchecker](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md). Netchecker ensures the netchecker-agents pods can resolve DNS requests and ping each over within the default namespace. Those pods mimic similar behavior of the rest of the workloads and serve as cluster health indicators.+-->+### (5/5)验证部署++Kubespray 提供了一种使用 [Netchecker](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md)+验证 Pod 间连接和 DNS 解析的方法。+Netchecker 确保 netchecker-agents pod 可以解析。+DNS 请求并在默认名称空间内对每个请求执行 ping 操作。+这些 Pods 模仿其余工作负载的类似行为,并用作群集运行状况指示器。+<!--+## Cluster operations++Kubespray provides additional playbooks to manage your cluster: _scale_ and _upgrade_.+-->+## 集群操作++Kubespray 提供了其他 Playbooks 来管理集群: _scale_ 和 _upgrade_。+<!--+### Scale your cluster++You can add worker nodes from your cluster by running the scale playbook. For more information, see "[Adding nodes](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#adding-nodes)".+You can remove worker nodes from your cluster by running the remove-node playbook. For more information, see "[Remove nodes](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#remove-nodes)".+-->+### 扩缩集群++你可以通过运行 scale playbook 向集群中添加工作节点。有关更多信息,+请参见“ [添加节点](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#adding-nodes)”。+你可以通过运行 remove-node playbook 来从集群中删除工作节点。有关更多信息,+请参见“[删除节点](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#remove-nodes)”。
请参见 “[删除节点](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#remove-nodes)”。
CriaHu

comment created time in 14 days

Pull request review commentkubernetes/website

translate docs/setup/production-environment/tools/kubespray.md

+---+title: 使用 Kubespray 安装 Kubernetes+content_type: concept+weight: 30+---+<!--+title: Installing Kubernetes with Kubespray+content_type: concept+weight: 30+-->++<!-- overview -->++<!--+This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray).+-->+此快速入门有助于使用 [Kubespray](https://github.com/kubernetes-sigs) 安装在 GCE、Azure、OpenStack、AWS、vSphere、Packet(裸机)、Oracle Cloud Infrastructure(实验性)或 Baremetal 上托管的 Kubernetes 集群。++<!--+Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides:+-->+Kubespray 是一个由 [Ansible](https://docs.ansible.com/) playbooks、[清单(inventory)](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md)、供应工具和通用 OS/Kubernetes 集群配置管理任务的领域知识组成的。 Kubespray 提供:++<!--+* a highly available cluster+* composable attributes+* support for most popular Linux distributions+  * Ubuntu 16.04, 18.04, 20.04+  * CentOS/RHEL/Oracle Linux 7, 8+  * Debian Buster, Jessie, Stretch, Wheezy+  * Fedora 31, 32+  * Fedora CoreOS+  * openSUSE Leap 15+  * Flatcar Container Linux by Kinvolk+* continuous integration tests+-->+* 高可用性集群+* 可组合属性+* 支持大多数流行的 Linux 发行版+   * Ubuntu 16.04、18.04、20.04+   * CentOS / RHEL / Oracle Linux 7、8+   * Debian Buster,Jessie,Stretch,Wheezy+   * Fedora 31、32+   * Fedora CoreOS+   * openSUSE Leap 15+   * Kinvolk 的 Flatcar Container Linux+* 持续集成测试++<!--+To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to+[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).+-->+要选择最适合你的用例的工具,请阅读[此比较](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md)以+ [kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/) 和 [kops](/zh/docs/setup/production-environment/tools/kops/) 。+<!-- body -->++<!--+## Creating a cluster++### (1/5) Meet the underlay requirements++-->++## 创建集群++### (1/5)满足下层设施要求++<!--+Provision servers with the following [requirements](https://github.com/kubernetes-sigs/kubespray#requirements):+-->+按以下[要求](https://github.com/kubernetes-sigs/kubespray#requirements)来配置服务器:++<!--+* **Ansible v2.9 and python-netaddr is installed on the machine that will run Ansible commands**+* **Jinja 2.11 (or newer) is required to run the Ansible Playbooks**+* The target servers must have access to the Internet in order to pull docker images. Otherwise, additional configuration is required ([See Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))+* The target servers are configured to allow **IPv4 forwarding**+* **Your ssh key must be copied** to all the servers part of your inventory+* The **firewalls are not managed**, you'll need to implement your own rules the way you used to. in order to avoid any issue during deployment you should disable your firewall+* If kubespray is ran from non-root user account, correct privilege escalation method should be configured in the target servers. Then the `ansible_become` flag or command parameters `--become` or `-b` should be specified+-->+* 在将运行 Ansible 命令的计算机上安装 Ansible v2.9 和 python-netaddr+* **运行 Ansible Playbook 需要 Jinja 2.11(或更高版本)**+* 目标服务器必须有权访问 Internet 才能拉取 Docker 镜像。否则,需要其他配置([请参见离线环境](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))+* 目标服务器配置为允许 IPv4 转发+* **你的 SSH 密钥必须复制**到清单中的所有服务器部分+* 防火墙不受管理,你将需要按照以前的方式实施自己的规则。为了避免在部署过程中出现任何问题,你应该禁用防火墙+* 如果从非 root 用户帐户运行 kubespray,则应在目标服务器中配置正确的特权升级方法。然后应指定“ansible_become” 标志或命令参数 “--become” 或 “-b”++<!--+Kubespray provides the following utilities to help provision your environment:++* [Terraform](https://www.terraform.io/) scripts for the following cloud providers:+  * [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws)+  * [OpenStack](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/openstack)+  * [Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet)+-->+Kubespray提供以下实用程序来帮助你设置环境:++* 为以下云驱动提供的 [Terraform](https://www.terraform.io/) 脚本:+* [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws)+* [OpenStack](http://sitebeskuethree/contrigetbernform/contribeskubernform/contribeskupernform/https/sitebesku/master/)+* [Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet)++<!--+### (2/5) Compose an inventory file++After you provision your servers, create an [inventory file for Ansible](https://docs.ansible.com/ansible/intro_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)".++### (3/5) Plan your cluster deployment++Kubespray provides the ability to customize many aspects of the deployment:++-->+### (2/5)编写清单文件++设置服务器后,请创建一个 [Ansible 的清单文件](https://docs.ansible.com/ansible/intro_inventory.html)。你可以手动执行此操作,也可以通过动态清单脚本执行此操作。有关更多信息,请参阅“[建立你自己的清单](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)”。++### (3/5)规划集群部署++Kubespray 能够自定义部署的许多方面:++<!--+* Choice deployment mode: kubeadm or non-kubeadm+* CNI (networking) plugins+* DNS configuration+* Choice of control plane: native/binary or containerized+* Component versions+* Calico route reflectors+* Component runtime options+  * {{< glossary_tooltip term_id="docker" >}}+  * {{< glossary_tooltip term_id="containerd" >}}+  * {{< glossary_tooltip term_id="cri-o" >}}+* Certificate generation methods+-->+* 选择部署模式: kubeadm 或非 kubeadm+* CNI(网络)插件+* DNS 配置+* 控制平面的选择:本机/可执行文件或容器化+* 组件版本+* Calico 路由反射器+* 组件运行时选项+  * {{< glossary_tooltip term_id="docker" >}}+  * {{< glossary_tooltip term_id="containerd" >}}+  * {{< glossary_tooltip term_id="cri-o" >}}+* 证书生成方式++<!--+Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes.+-->++可以修改[变量文件](https://docs.ansible.com/ansible/playbooks_variables.html)以进行 Kubespray 定制。+如果你刚刚开始使用 Kubespray,请考虑使用 Kubespray 默认设置来部署你的集群并探索 Kubernetes 。+<!--+### (4/5) Deploy a Cluster++Next, deploy your cluster:++Cluster deployment using [ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment).+-->++### (4/5)部署集群++接下来,部署你的集群:++使用 [ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment) 进行群集部署。++```shell+ansible-playbook -i your/inventory/inventory.ini cluster.yml -b -v \+  --private-key=~/.ssh/private_key+```+<!--+Large deployments (100+ nodes) may require [specific adjustments](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/large-deployments.md) for best results.+-->+大型部署(超过 100 个节点)可能需要[特定的调整](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/large-deployments.md),以获得最佳效果。++<!--+### (5/5) Verify the deployment++Kubespray provides a way to verify inter-pod connectivity and DNS resolve with [Netchecker](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md). Netchecker ensures the netchecker-agents pods can resolve DNS requests and ping each over within the default namespace. Those pods mimic similar behavior of the rest of the workloads and serve as cluster health indicators.+-->+### (5/5)验证部署++Kubespray 提供了一种使用 [Netchecker](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md)+验证 Pod 间连接和 DNS 解析的方法。+Netchecker 确保 netchecker-agents pod 可以解析。+DNS 请求并在默认名称空间内对每个请求执行 ping 操作。+这些 Pods 模仿其余工作负载的类似行为,并用作群集运行状况指示器。+<!--+## Cluster operations++Kubespray provides additional playbooks to manage your cluster: _scale_ and _upgrade_.+-->+## 集群操作++Kubespray 提供了其他 Playbooks 来管理集群: _scale_ 和 _upgrade_。+<!--+### Scale your cluster++You can add worker nodes from your cluster by running the scale playbook. For more information, see "[Adding nodes](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#adding-nodes)".+You can remove worker nodes from your cluster by running the remove-node playbook. For more information, see "[Remove nodes](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#remove-nodes)".+-->+### 扩缩集群++你可以通过运行 scale playbook 向集群中添加工作节点。有关更多信息,+请参见“ [添加节点](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#adding-nodes)”。
请参见 “[添加节点](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#adding-nodes)”。
CriaHu

comment created time in 14 days

Pull request review commentkubernetes/website

translate docs/setup/production-environment/tools/kubespray.md

+---+title: 使用 Kubespray 安装 Kubernetes+content_type: concept+weight: 30+---+<!--+title: Installing Kubernetes with Kubespray+content_type: concept+weight: 30+-->++<!-- overview -->++<!--+This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray).+-->+此快速入门有助于使用 [Kubespray](https://github.com/kubernetes-sigs) 安装在 GCE、Azure、OpenStack、AWS、vSphere、Packet(裸机)、Oracle Cloud Infrastructure(实验性)或 Baremetal 上托管的 Kubernetes 集群。++<!--+Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides:+-->+Kubespray 是一个由 [Ansible](https://docs.ansible.com/) playbooks、[清单(inventory)](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md)、供应工具和通用 OS/Kubernetes 集群配置管理任务的领域知识组成的。 Kubespray 提供:++<!--+* a highly available cluster+* composable attributes+* support for most popular Linux distributions+  * Ubuntu 16.04, 18.04, 20.04+  * CentOS/RHEL/Oracle Linux 7, 8+  * Debian Buster, Jessie, Stretch, Wheezy+  * Fedora 31, 32+  * Fedora CoreOS+  * openSUSE Leap 15+  * Flatcar Container Linux by Kinvolk+* continuous integration tests+-->+* 高可用性集群+* 可组合属性+* 支持大多数流行的 Linux 发行版+   * Ubuntu 16.04、18.04、20.04+   * CentOS / RHEL / Oracle Linux 7、8+   * Debian Buster,Jessie,Stretch,Wheezy+   * Fedora 31、32+   * Fedora CoreOS+   * openSUSE Leap 15+   * Kinvolk 的 Flatcar Container Linux+* 持续集成测试++<!--+To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to+[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).+-->+要选择最适合你的用例的工具,请阅读[此比较](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md)以+ [kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/) 和 [kops](/zh/docs/setup/production-environment/tools/kops/) 。+<!-- body -->++<!--+## Creating a cluster++### (1/5) Meet the underlay requirements++-->++## 创建集群++### (1/5)满足下层设施要求++<!--+Provision servers with the following [requirements](https://github.com/kubernetes-sigs/kubespray#requirements):+-->+按以下[要求](https://github.com/kubernetes-sigs/kubespray#requirements)来配置服务器:++<!--+* **Ansible v2.9 and python-netaddr is installed on the machine that will run Ansible commands**+* **Jinja 2.11 (or newer) is required to run the Ansible Playbooks**+* The target servers must have access to the Internet in order to pull docker images. Otherwise, additional configuration is required ([See Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))+* The target servers are configured to allow **IPv4 forwarding**+* **Your ssh key must be copied** to all the servers part of your inventory+* The **firewalls are not managed**, you'll need to implement your own rules the way you used to. in order to avoid any issue during deployment you should disable your firewall+* If kubespray is ran from non-root user account, correct privilege escalation method should be configured in the target servers. Then the `ansible_become` flag or command parameters `--become` or `-b` should be specified+-->+* 在将运行 Ansible 命令的计算机上安装 Ansible v2.9 和 python-netaddr+* **运行 Ansible Playbook 需要 Jinja 2.11(或更高版本)**+* 目标服务器必须有权访问 Internet 才能拉取 Docker 镜像。否则,需要其他配置([请参见离线环境](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))+* 目标服务器配置为允许 IPv4 转发+* **你的 SSH 密钥必须复制**到清单中的所有服务器部分+* 防火墙不受管理,你将需要按照以前的方式实施自己的规则。为了避免在部署过程中出现任何问题,你应该禁用防火墙+* 如果从非 root 用户帐户运行 kubespray,则应在目标服务器中配置正确的特权升级方法。然后应指定“ansible_become” 标志或命令参数 “--become” 或 “-b”++<!--+Kubespray provides the following utilities to help provision your environment:++* [Terraform](https://www.terraform.io/) scripts for the following cloud providers:+  * [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws)+  * [OpenStack](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/openstack)+  * [Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet)+-->+Kubespray提供以下实用程序来帮助你设置环境:++* 为以下云驱动提供的 [Terraform](https://www.terraform.io/) 脚本:+* [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws)+* [OpenStack](http://sitebeskuethree/contrigetbernform/contribeskubernform/contribeskupernform/https/sitebesku/master/)+* [Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet)++<!--+### (2/5) Compose an inventory file++After you provision your servers, create an [inventory file for Ansible](https://docs.ansible.com/ansible/intro_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)".++### (3/5) Plan your cluster deployment++Kubespray provides the ability to customize many aspects of the deployment:++-->+### (2/5)编写清单文件++设置服务器后,请创建一个 [Ansible 的清单文件](https://docs.ansible.com/ansible/intro_inventory.html)。你可以手动执行此操作,也可以通过动态清单脚本执行此操作。有关更多信息,请参阅“[建立你自己的清单](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)”。++### (3/5)规划集群部署++Kubespray 能够自定义部署的许多方面:++<!--+* Choice deployment mode: kubeadm or non-kubeadm+* CNI (networking) plugins+* DNS configuration+* Choice of control plane: native/binary or containerized+* Component versions+* Calico route reflectors+* Component runtime options+  * {{< glossary_tooltip term_id="docker" >}}+  * {{< glossary_tooltip term_id="containerd" >}}+  * {{< glossary_tooltip term_id="cri-o" >}}+* Certificate generation methods+-->+* 选择部署模式: kubeadm 或非 kubeadm+* CNI(网络)插件+* DNS 配置+* 控制平面的选择:本机/可执行文件或容器化+* 组件版本+* Calico 路由反射器+* 组件运行时选项+  * {{< glossary_tooltip term_id="docker" >}}+  * {{< glossary_tooltip term_id="containerd" >}}+  * {{< glossary_tooltip term_id="cri-o" >}}+* 证书生成方式++<!--+Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes.+-->++可以修改[变量文件](https://docs.ansible.com/ansible/playbooks_variables.html)以进行 Kubespray 定制。+如果你刚刚开始使用 Kubespray,请考虑使用 Kubespray 默认设置来部署你的集群并探索 Kubernetes 。+<!--+### (4/5) Deploy a Cluster++Next, deploy your cluster:++Cluster deployment using [ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment).+-->++### (4/5)部署集群++接下来,部署你的集群:++使用 [ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment) 进行群集部署。++```shell+ansible-playbook -i your/inventory/inventory.ini cluster.yml -b -v \+  --private-key=~/.ssh/private_key+```+<!--+Large deployments (100+ nodes) may require [specific adjustments](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/large-deployments.md) for best results.+-->+大型部署(超过 100 个节点)可能需要[特定的调整](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/large-deployments.md),以获得最佳效果。++<!--+### (5/5) Verify the deployment++Kubespray provides a way to verify inter-pod connectivity and DNS resolve with [Netchecker](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md). Netchecker ensures the netchecker-agents pods can resolve DNS requests and ping each over within the default namespace. Those pods mimic similar behavior of the rest of the workloads and serve as cluster health indicators.+-->+### (5/5)验证部署++Kubespray 提供了一种使用 [Netchecker](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md)+验证 Pod 间连接和 DNS 解析的方法。+Netchecker 确保 netchecker-agents pod 可以解析。+DNS 请求并在默认名称空间内对每个请求执行 ping 操作。+这些 Pods 模仿其余工作负载的类似行为,并用作群集运行状况指示器。

1

CriaHu

comment created time in 14 days

Pull request review commentkubernetes/website

translate docs/setup/production-environment/tools/kubespray.md

+---+title: 使用 Kubespray 安装 Kubernetes+content_type: concept+weight: 30+---+<!--+title: Installing Kubernetes with Kubespray+content_type: concept+weight: 30+-->++<!-- overview -->++<!--+This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray).+-->+此快速入门有助于使用 [Kubespray](https://github.com/kubernetes-sigs) 安装在 GCE、Azure、OpenStack、AWS、vSphere、Packet(裸机)、Oracle Cloud Infrastructure(实验性)或 Baremetal 上托管的 Kubernetes 集群。++<!--+Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides:+-->+Kubespray 是一个由 [Ansible](https://docs.ansible.com/) playbooks、[清单(inventory)](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md)、供应工具和通用 OS/Kubernetes 集群配置管理任务的领域知识组成的。 Kubespray 提供:++<!--+* a highly available cluster+* composable attributes+* support for most popular Linux distributions+  * Ubuntu 16.04, 18.04, 20.04+  * CentOS/RHEL/Oracle Linux 7, 8+  * Debian Buster, Jessie, Stretch, Wheezy+  * Fedora 31, 32+  * Fedora CoreOS+  * openSUSE Leap 15+  * Flatcar Container Linux by Kinvolk+* continuous integration tests+-->+* 高可用性集群+* 可组合属性+* 支持大多数流行的 Linux 发行版+   * Ubuntu 16.04、18.04、20.04+   * CentOS / RHEL / Oracle Linux 7、8+   * Debian Buster,Jessie,Stretch,Wheezy+   * Fedora 31、32+   * Fedora CoreOS+   * openSUSE Leap 15+   * Kinvolk 的 Flatcar Container Linux+* 持续集成测试++<!--+To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to+[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).+-->+要选择最适合你的用例的工具,请阅读[此比较](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md)以+ [kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/) 和 [kops](/zh/docs/setup/production-environment/tools/kops/) 。+<!-- body -->++<!--+## Creating a cluster++### (1/5) Meet the underlay requirements++-->++## 创建集群++### (1/5)满足下层设施要求++<!--+Provision servers with the following [requirements](https://github.com/kubernetes-sigs/kubespray#requirements):+-->+按以下[要求](https://github.com/kubernetes-sigs/kubespray#requirements)来配置服务器:++<!--+* **Ansible v2.9 and python-netaddr is installed on the machine that will run Ansible commands**+* **Jinja 2.11 (or newer) is required to run the Ansible Playbooks**+* The target servers must have access to the Internet in order to pull docker images. Otherwise, additional configuration is required ([See Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))+* The target servers are configured to allow **IPv4 forwarding**+* **Your ssh key must be copied** to all the servers part of your inventory+* The **firewalls are not managed**, you'll need to implement your own rules the way you used to. in order to avoid any issue during deployment you should disable your firewall+* If kubespray is ran from non-root user account, correct privilege escalation method should be configured in the target servers. Then the `ansible_become` flag or command parameters `--become` or `-b` should be specified+-->+* 在将运行 Ansible 命令的计算机上安装 Ansible v2.9 和 python-netaddr+* **运行 Ansible Playbook 需要 Jinja 2.11(或更高版本)**+* 目标服务器必须有权访问 Internet 才能拉取 Docker 镜像。否则,需要其他配置([请参见离线环境](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))+* 目标服务器配置为允许 IPv4 转发+* **你的 SSH 密钥必须复制**到清单中的所有服务器部分+* 防火墙不受管理,你将需要按照以前的方式实施自己的规则。为了避免在部署过程中出现任何问题,你应该禁用防火墙+* 如果从非 root 用户帐户运行 kubespray,则应在目标服务器中配置正确的特权升级方法。然后应指定“ansible_become” 标志或命令参数 “--become” 或 “-b”++<!--+Kubespray provides the following utilities to help provision your environment:++* [Terraform](https://www.terraform.io/) scripts for the following cloud providers:+  * [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws)+  * [OpenStack](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/openstack)+  * [Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet)+-->+Kubespray提供以下实用程序来帮助你设置环境:++* 为以下云驱动提供的 [Terraform](https://www.terraform.io/) 脚本:+* [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws)+* [OpenStack](http://sitebeskuethree/contrigetbernform/contribeskubernform/contribeskupernform/https/sitebesku/master/)+* [Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet)++<!--+### (2/5) Compose an inventory file++After you provision your servers, create an [inventory file for Ansible](https://docs.ansible.com/ansible/intro_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)".++### (3/5) Plan your cluster deployment++Kubespray provides the ability to customize many aspects of the deployment:++-->+### (2/5)编写清单文件++设置服务器后,请创建一个 [Ansible 的清单文件](https://docs.ansible.com/ansible/intro_inventory.html)。你可以手动执行此操作,也可以通过动态清单脚本执行此操作。有关更多信息,请参阅“[建立你自己的清单](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)”。++### (3/5)规划集群部署++Kubespray 能够自定义部署的许多方面:++<!--+* Choice deployment mode: kubeadm or non-kubeadm+* CNI (networking) plugins+* DNS configuration+* Choice of control plane: native/binary or containerized+* Component versions+* Calico route reflectors+* Component runtime options+  * {{< glossary_tooltip term_id="docker" >}}+  * {{< glossary_tooltip term_id="containerd" >}}+  * {{< glossary_tooltip term_id="cri-o" >}}+* Certificate generation methods+-->+* 选择部署模式: kubeadm 或非 kubeadm+* CNI(网络)插件+* DNS 配置+* 控制平面的选择:本机/可执行文件或容器化+* 组件版本+* Calico 路由反射器+* 组件运行时选项+  * {{< glossary_tooltip term_id="docker" >}}+  * {{< glossary_tooltip term_id="containerd" >}}+  * {{< glossary_tooltip term_id="cri-o" >}}+* 证书生成方式++<!--+Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes.+-->++可以修改[变量文件](https://docs.ansible.com/ansible/playbooks_variables.html)以进行 Kubespray 定制。+如果你刚刚开始使用 Kubespray,请考虑使用 Kubespray 默认设置来部署你的集群并探索 Kubernetes 。+<!--+### (4/5) Deploy a Cluster++Next, deploy your cluster:++Cluster deployment using [ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment).+-->++### (4/5)部署集群++接下来,部署你的集群:++使用 [ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment) 进行群集部署。
使用 [ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment) 进行j集群部署。
CriaHu

comment created time in 14 days

Pull request review commentkubernetes/website

translate docs/setup/production-environment/tools/kubespray.md

+---+title: 使用 Kubespray 安装 Kubernetes+content_type: concept+weight: 30+---+<!--+title: Installing Kubernetes with Kubespray+content_type: concept+weight: 30+-->++<!-- overview -->++<!--+This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray).+-->+此快速入门有助于使用 [Kubespray](https://github.com/kubernetes-sigs) 安装在 GCE、Azure、OpenStack、AWS、vSphere、Packet(裸机)、Oracle Cloud Infrastructure(实验性)或 Baremetal 上托管的 Kubernetes 集群。++<!--+Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides:+-->+Kubespray 是一个由 [Ansible](https://docs.ansible.com/) playbooks、[清单(inventory)](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md)、供应工具和通用 OS/Kubernetes 集群配置管理任务的领域知识组成的。 Kubespray 提供:++<!--+* a highly available cluster+* composable attributes+* support for most popular Linux distributions+  * Ubuntu 16.04, 18.04, 20.04+  * CentOS/RHEL/Oracle Linux 7, 8+  * Debian Buster, Jessie, Stretch, Wheezy+  * Fedora 31, 32+  * Fedora CoreOS+  * openSUSE Leap 15+  * Flatcar Container Linux by Kinvolk+* continuous integration tests+-->+* 高可用性集群+* 可组合属性+* 支持大多数流行的 Linux 发行版+   * Ubuntu 16.04、18.04、20.04+   * CentOS / RHEL / Oracle Linux 7、8+   * Debian Buster,Jessie,Stretch,Wheezy+   * Fedora 31、32+   * Fedora CoreOS+   * openSUSE Leap 15+   * Kinvolk 的 Flatcar Container Linux+* 持续集成测试++<!--+To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to+[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).+-->+要选择最适合你的用例的工具,请阅读[此比较](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md)以+ [kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/) 和 [kops](/zh/docs/setup/production-environment/tools/kops/) 。+<!-- body -->++<!--+## Creating a cluster++### (1/5) Meet the underlay requirements++-->++## 创建集群++### (1/5)满足下层设施要求++<!--+Provision servers with the following [requirements](https://github.com/kubernetes-sigs/kubespray#requirements):+-->+按以下[要求](https://github.com/kubernetes-sigs/kubespray#requirements)来配置服务器:++<!--+* **Ansible v2.9 and python-netaddr is installed on the machine that will run Ansible commands**+* **Jinja 2.11 (or newer) is required to run the Ansible Playbooks**+* The target servers must have access to the Internet in order to pull docker images. Otherwise, additional configuration is required ([See Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))+* The target servers are configured to allow **IPv4 forwarding**+* **Your ssh key must be copied** to all the servers part of your inventory+* The **firewalls are not managed**, you'll need to implement your own rules the way you used to. in order to avoid any issue during deployment you should disable your firewall+* If kubespray is ran from non-root user account, correct privilege escalation method should be configured in the target servers. Then the `ansible_become` flag or command parameters `--become` or `-b` should be specified+-->+* 在将运行 Ansible 命令的计算机上安装 Ansible v2.9 和 python-netaddr+* **运行 Ansible Playbook 需要 Jinja 2.11(或更高版本)**+* 目标服务器必须有权访问 Internet 才能拉取 Docker 镜像。否则,需要其他配置([请参见离线环境](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))+* 目标服务器配置为允许 IPv4 转发+* **你的 SSH 密钥必须复制**到清单中的所有服务器部分+* 防火墙不受管理,你将需要按照以前的方式实施自己的规则。为了避免在部署过程中出现任何问题,你应该禁用防火墙+* 如果从非 root 用户帐户运行 kubespray,则应在目标服务器中配置正确的特权升级方法。然后应指定“ansible_become” 标志或命令参数 “--become” 或 “-b”++<!--+Kubespray provides the following utilities to help provision your environment:++* [Terraform](https://www.terraform.io/) scripts for the following cloud providers:+  * [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws)+  * [OpenStack](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/openstack)+  * [Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet)+-->+Kubespray提供以下实用程序来帮助你设置环境:
Kubespray 提供以下实用程序来帮助你设置环境:
CriaHu

comment created time in 14 days

PullRequestReviewEvent
PullRequestReviewEvent
PullRequestReviewEvent

pull request commentkubernetes/website

Update cluster-intro.html: Fix Chinese typo

I signed it /retest /check-cla

purplemysticx

comment created time in 14 days

Pull request review commentkubernetes/website

[zh] Sync changes to Hello Minikube page

 menu:     title: "Get Started"     weight: 10     post: >-      <p>Ready to get your hands dirty? Build a simple Kubernetes cluster that runs "Hello World" for Node.js.</p>+      <p>Ready to get your hands dirty? Build a simple Kubernetes cluster that runs a sample app.</p> card:   name: tutorials   weight: 10---- -->  <!-- overview -->  <!---This tutorial shows you how to run a simple Hello World Node.js app+This tutorial shows you how to run a sample app on Kubernetes using [Minikube](/docs/setup/learning-environment/minikube) and Katacoda. Katacoda provides a free, in-browser Kubernetes environment. -->-本教程向您展示如何使用 [Minikube](/zh/docs/setup/learning-environment/minikube) 和 Katacoda 在 Kubernetes 上运行一个简单的 “Hello World” Node.js 应用程序。Katacoda 提供免费的浏览器内 Kubernetes 环境。+本教程向你展示如何使用 [Minikube](/zh/docs/setup/learning-environment/minikube) 和 Katacoda+在 Kubernetes 上运行一个应用示例。Katacoda 提供免费的浏览器内 Kubernetes 环境。 -{{< note >}} <!-- You can also follow this tutorial if you've installed [Minikube locally](/docs/tasks/tools/install-minikube/). -->-如果您已在本地安装 [Minikube](/zh/docs/tasks/tools/install-minikube/),也可以按照本教程操作。-+{{< note >}}+如果你已在本地安装 [Minikube](/zh/docs/tasks/tools/install-minikube/),也可以按照本教程操作。 {{< /note >}}  - ## {{% heading "objectives" %}} - <!-- * Deploy a hello world application to Minikube.
Deploy a sample application to Minikube.
Run the app.
View application logs.
tengqm

comment created time in 14 days

Pull request review commentkubernetes/website

[zh] Sync changes to Hello Minikube page

 menu:     title: "Get Started"     weight: 10     post: >-      <p>Ready to get your hands dirty? Build a simple Kubernetes cluster that runs "Hello World" for Node.js.</p>+      <p>Ready to get your hands dirty? Build a simple Kubernetes cluster that runs a sample app.</p> card:   name: tutorials   weight: 10---- -->  <!-- overview -->  <!---This tutorial shows you how to run a simple Hello World Node.js app+This tutorial shows you how to run a sample app on Kubernetes using [Minikube](/docs/setup/learning-environment/minikube) and Katacoda. Katacoda provides a free, in-browser Kubernetes environment. -->-本教程向您展示如何使用 [Minikube](/zh/docs/setup/learning-environment/minikube) 和 Katacoda 在 Kubernetes 上运行一个简单的 “Hello World” Node.js 应用程序。Katacoda 提供免费的浏览器内 Kubernetes 环境。+本教程向你展示如何使用 [Minikube](/zh/docs/setup/learning-environment/minikube) 和 Katacoda+在 Kubernetes 上运行一个应用示例。Katacoda 提供免费的浏览器内 Kubernetes 环境。 -{{< note >}} <!-- You can also follow this tutorial if you've installed [Minikube locally](/docs/tasks/tools/install-minikube/). -->-如果您已在本地安装 [Minikube](/zh/docs/tasks/tools/install-minikube/),也可以按照本教程操作。-+{{< note >}}+如果你已在本地安装 [Minikube](/zh/docs/tasks/tools/install-minikube/),也可以按照本教程操作。 {{< /note >}}  - ## {{% heading "objectives" %}} - <!-- * Deploy a hello world application to Minikube.

Deploy a sample application to Minikube.

tengqm

comment created time in 14 days

Pull request review commentkubernetes/website

[zh] Sync changes to Hello Minikube page

 menu:     title: "Get Started"     weight: 10     post: >-      <p>Ready to get your hands dirty? Build a simple Kubernetes cluster that runs "Hello World" for Node.js.</p>+      <p>Ready to get your hands dirty? Build a simple Kubernetes cluster that runs a sample app.</p> card:   name: tutorials   weight: 10---- -->  <!-- overview -->  <!---This tutorial shows you how to run a simple Hello World Node.js app+This tutorial shows you how to run a sample app on Kubernetes using [Minikube](/docs/setup/learning-environment/minikube) and Katacoda. Katacoda provides a free, in-browser Kubernetes environment. -->-本教程向您展示如何使用 [Minikube](/zh/docs/setup/learning-environment/minikube) 和 Katacoda 在 Kubernetes 上运行一个简单的 “Hello World” Node.js 应用程序。Katacoda 提供免费的浏览器内 Kubernetes 环境。+本教程向你展示如何使用 [Minikube](/zh/docs/setup/learning-environment/minikube) 和 Katacoda+在 Kubernetes 上运行一个应用示例。Katacoda 提供免费的浏览器内 Kubernetes 环境。 -{{< note >}} <!-- You can also follow this tutorial if you've installed [Minikube locally](/docs/tasks/tools/install-minikube/). -->-如果您已在本地安装 [Minikube](/zh/docs/tasks/tools/install-minikube/),也可以按照本教程操作。-+{{< note >}}+如果你已在本地安装 [Minikube](/zh/docs/tasks/tools/install-minikube/),也可以按照本教程操作。 {{< /note >}}  - ## {{% heading "objectives" %}} - <!-- * Deploy a hello world application to Minikube. * Run the app. * View application logs. -->-* 将 "Hello World" 应用程序部署到 Minikube。+* 将一个示例应用部署到 Minikube。

see https://github.com/kubernetes/website/pull/24465/files#diff-6086b884d55865ab09248ee626122302R24, 可能没有同步更新英文吧

tengqm

comment created time in 14 days

PullRequestReviewEvent
PullRequestReviewEvent
PullRequestReviewEvent

issue closedmosn/mosn

Does mson support istio 1.6.x?

Your question

Describe your question clearly.

Does mson support istio 1.6.x?

Is there a corresponding relationship diagram between mosn version and istio?

Environment

  • MOSN Version

Logs

  • Paste the logs you see.

closed time in 15 days

tanjunchen

issue commentmosn/mosn

Does mson support istio 1.6.x?

@wangfakang Thanks for your reply. I get it .

tanjunchen

comment created time in 15 days

issue closedmosn/mosn

In kubernetes 1.17, mosn 1.5.2 failed to replace envoy

Your question

According to https://katacoda.com/mosn/courses/istio/mosn-with-istio, it runs well in k8s(1.14.0),istio (1.5.2),mosn(1.5.2). I try to do it in k8s 1.17.0 , but there are some problems.

image

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"app":"details","version":"v1"},"name":"details-v1","namespace":"default"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"details","version":"v1"}},"strategy":{},"template":{"metadata":{"annotations":{"sidecar.istio.io/interceptionMode":"REDIRECT","sidecar.istio.io/status":"{\"version\":\"fca84600f9d5ec316cf1cf577da902f38bac258ab0fd595ee208ec0203dc0c6d\",\"initContainers\":[\"istio-init\"],\"containers\":[\"istio-proxy\"],\"volumes\":[\"istio-envoy\",\"podinfo\",\"istiod-ca-cert\"],\"imagePullSecrets\":null}","traffic.sidecar.istio.io/excludeInboundPorts":"15020","traffic.sidecar.istio.io/includeInboundPorts":"9080","traffic.sidecar.istio.io/includeOutboundIPRanges":"*"},"creationTimestamp":null,"labels":{"app":"details","security.istio.io/tlsMode":"istio","version":"v1"}},"spec":{"containers":[{"image":"docker.io/istio/examples-bookinfo-details-v1:1.15.0","imagePullPolicy":"IfNotPresent","name":"details","ports":[{"containerPort":9080}],"resources":{}},{"args":["proxy","sidecar","--domain","$(POD_NAMESPACE).svc.cluster.local","--configPath","/etc/istio/proxy","--binaryPath","/usr/local/bin/mosn","--serviceCluster","details.$(POD_NAMESPACE)","--drainDuration","45s","--parentShutdownDuration","1m0s","--discoveryAddress","istiod.istio-system.svc:15012","--zipkinAddress","zipkin.istio-system:9411","--proxyLogLevel=warning","--proxyComponentLogLevel=misc:error","--connectTimeout","10s","--proxyAdminPort","15000","--concurrency","2","--controlPlaneAuthPolicy","NONE","--dnsRefreshRate","300s","--statusPort","15020","--trust-domain=cluster.local","--controlPlaneBootstrap=false"],"env":[{"name":"JWT_POLICY","value":"first-party-jwt"},{"name":"PILOT_CERT_PROVIDER","value":"istiod"},{"name":"CA_ADDR","value":"istio-pilot.istio-system.svc:15012"},{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"INSTANCE_IP","valueFrom":{"fieldRef":{"fieldPath":"status.podIP"}}},{"name":"SERVICE_ACCOUNT","valueFrom":{"fieldRef":{"fieldPath":"spec.serviceAccountName"}}},{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"ISTIO_META_POD_PORTS","value":"[\n    {\"containerPort\":9080}\n]"},{"name":"ISTIO_META_APP_CONTAINERS","value":"[\n    details\n]"},{"name":"ISTIO_META_CLUSTER_ID","value":"Kubernetes"},{"name":"ISTIO_META_POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"ISTIO_META_CONFIG_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"ISTIO_META_INTERCEPTION_MODE","value":"REDIRECT"},{"name":"ISTIO_META_WORKLOAD_NAME","value":"details-v1"},{"name":"ISTIO_META_OWNER","value":"kubernetes://apis/apps/v1/namespaces/default/deployments/details-v1"},{"name":"ISTIO_META_MESH_ID","value":"cluster.local"},{"name":"ISTIO_KUBE_APP_PROBERS","value":"{}"}],"image":"mosnio/proxyv2:1.5.2-mosn","imagePullPolicy":"IfNotPresent","name":"istio-proxy","ports":[{"containerPort":15090,"name":"http-envoy-prom","protocol":"TCP"}],"readinessProbe":{"failureThreshold":30,"httpGet":{"path":"/healthz/ready","port":15020},"initialDelaySeconds":1,"periodSeconds":2},"resources":{"limits":{"cpu":"2","memory":"1Gi"},"requests":{"cpu":"100m","memory":"128Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"privileged":false,"readOnlyRootFilesystem":true,"runAsGroup":1337,"runAsNonRoot":true,"runAsUser":1337},"volumeMounts":[{"mountPath":"/var/run/secrets/istio","name":"istiod-ca-cert"},{"mountPath":"/etc/istio/proxy","name":"istio-envoy"},{"mountPath":"/etc/istio/pod","name":"podinfo"}]}],"initContainers":[{"command":["istio-iptables","-p","15001","-z","15006","-u","1337","-m","REDIRECT","-i","*","-x","","-b","*","-d","15090,15020"],"image":"docker.io/istio/proxyv2:1.5.2","imagePullPolicy":"IfNotPresent","name":"istio-init","resources":{"limits":{"cpu":"100m","memory":"50Mi"},"requests":{"cpu":"10m","memory":"10Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"add":["NET_ADMIN","NET_RAW"],"drop":["ALL"]},"privileged":false,"readOnlyRootFilesystem":false,"runAsGroup":0,"runAsNonRoot":false,"runAsUser":0}}],"securityContext":{"fsGroup":1337},"serviceAccountName":"bookinfo-details","volumes":[{"emptyDir":{"medium":"Memory"},"name":"istio-envoy"},{"downwardAPI":{"items":[{"fieldRef":{"fieldPath":"metadata.labels"},"path":"labels"},{"fieldRef":{"fieldPath":"metadata.annotations"},"path":"annotations"}]},"name":"podinfo"},{"configMap":{"name":"istio-ca-root-cert"},"name":"istiod-ca-cert"}]}}},"status":{}}
  creationTimestamp: "2020-10-09T06:14:05Z"
  generation: 1
  labels:
    app: details
    version: v1
  name: details-v1
  namespace: default
  resourceVersion: "1328104"
  selfLink: /apis/apps/v1/namespaces/default/deployments/details-v1
  uid: 1f6920bc-9356-49b8-bfd5-360194f2b8aa
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: details
      version: v1
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        sidecar.istio.io/interceptionMode: REDIRECT
        sidecar.istio.io/status: '{"version":"fca84600f9d5ec316cf1cf577da902f38bac258ab0fd595ee208ec0203dc0c6d","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","podinfo","istiod-ca-cert"],"imagePullSecrets":null}'
        traffic.sidecar.istio.io/excludeInboundPorts: "15020"
        traffic.sidecar.istio.io/includeInboundPorts: "9080"
        traffic.sidecar.istio.io/includeOutboundIPRanges: '*'
      creationTimestamp: null
      labels:
        app: details
        security.istio.io/tlsMode: istio
        version: v1
    spec:
      containers:
      - image: docker.io/istio/examples-bookinfo-details-v1:1.15.0
        imagePullPolicy: IfNotPresent
        name: details
        ports:
        - containerPort: 9080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - args:
        - proxy
        - sidecar
        - --domain
        - $(POD_NAMESPACE).svc.cluster.local
        - --configPath
        - /etc/istio/proxy
        - --binaryPath
        - /usr/local/bin/mosn
        - --serviceCluster
        - details.$(POD_NAMESPACE)
        - --drainDuration
        - 45s
        - --parentShutdownDuration
        - 1m0s
        - --discoveryAddress
        - istiod.istio-system.svc:15012
        - --zipkinAddress
        - zipkin.istio-system:9411
        - --proxyLogLevel=warning
        - --proxyComponentLogLevel=misc:error
        - --connectTimeout
        - 10s
        - --proxyAdminPort
        - "15000"
        - --concurrency
        - "2"
        - --controlPlaneAuthPolicy
        - NONE
        - --dnsRefreshRate
        - 300s
        - --statusPort
        - "15020"
        - --trust-domain=cluster.local
        - --controlPlaneBootstrap=false
        env:
        - name: JWT_POLICY
          value: first-party-jwt
        - name: PILOT_CERT_PROVIDER
          value: istiod
        - name: CA_ADDR
          value: istio-pilot.istio-system.svc:15012
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.serviceAccountName
        - name: HOST_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
        - name: ISTIO_META_POD_PORTS
          value: |-
            [
                {"containerPort":9080}
            ]
        - name: ISTIO_META_APP_CONTAINERS
          value: |-
            [
                details
            ]
        - name: ISTIO_META_CLUSTER_ID
          value: Kubernetes
        - name: ISTIO_META_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: ISTIO_META_CONFIG_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: ISTIO_META_INTERCEPTION_MODE
          value: REDIRECT
        - name: ISTIO_META_WORKLOAD_NAME
          value: details-v1
        - name: ISTIO_META_OWNER
          value: kubernetes://apis/apps/v1/namespaces/default/deployments/details-v1
        - name: ISTIO_META_MESH_ID
          value: cluster.local
        - name: ISTIO_KUBE_APP_PROBERS
          value: '{}'
        image: mosnio/proxyv2:1.5.2-mosn
        imagePullPolicy: IfNotPresent
        name: istio-proxy
        ports:
        - containerPort: 15090
          name: http-envoy-prom
          protocol: TCP
        readinessProbe:
          failureThreshold: 30
          httpGet:
            path: /healthz/ready
            port: 15020
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 2
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: "2"
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 128Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
          runAsGroup: 1337
          runAsNonRoot: true
          runAsUser: 1337
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/run/secrets/istio
          name: istiod-ca-cert
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /etc/istio/pod
          name: podinfo
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - istio-iptables
        - -p
        - "15001"
        - -z
        - "15006"
        - -u
        - "1337"
        - -m
        - REDIRECT
        - -i
        - '*'
        - -x
        - ""
        - -b
        - '*'
        - -d
        - 15090,15020
        image: docker.io/istio/proxyv2:1.5.2
        imagePullPolicy: IfNotPresent
        name: istio-init
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 10m
            memory: 10Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: false
          runAsGroup: 0
          runAsNonRoot: false
          runAsUser: 0
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1337
      serviceAccount: bookinfo-details
      serviceAccountName: bookinfo-details
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
      - configMap:
          defaultMode: 420
          name: istio-ca-root-cert
        name: istiod-ca-cert
status:
  conditions:
  - lastTransitionTime: "2020-10-09T06:14:05Z"
    lastUpdateTime: "2020-10-09T06:14:05Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2020-10-09T06:24:06Z"
    lastUpdateTime: "2020-10-09T06:24:06Z"
    message: ReplicaSet "details-v1-7f9bbc5cf9" has timed out progressing.
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Progressing
  observedGeneration: 1
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1

Environment

root@k8s-master:/home/k8s-master/istio-1.5.2# kubectl get pods -n istio-system
NAME                                   READY   STATUS             RESTARTS   AGE
istio-ingressgateway-fd9cc74f8-9c6mn   1/1     Running            0          4h9m
istiod-5bb879d86c-2xrft                1/1     Running            0          4h10m
prometheus-79bf89d485-68lpz            1/2     InvalidImageName   0          4h9m
root@k8s-master:/home/k8s-master/istio-1.5.2# kubectl get pods 
NAME                              READY   STATUS             RESTARTS   AGE
details-v1-7f9bbc5cf9-q5mwn       1/2     CrashLoopBackOff   10         29m
productpage-v1-77474875fd-k987s   1/2     CrashLoopBackOff   10         29m
ratings-v1-6c8d855bf5-cwxdm       1/2     CrashLoopBackOff   10         29m
reviews-v1-764b669ddd-zgxmk       1/2     CrashLoopBackOff   10         29m
reviews-v2-fd7f585d-nbw95         1/2     CrashLoopBackOff   10         29m
reviews-v3-c85454984-sbct2        1/2     CrashLoopBackOff   10         29m
root@k8s-master:/home/k8s-master/istio-1.5.2# kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-master:/home/k8s-master/istio-1.5.2# istioctl version
client version: 1.5.2
ingressgateway version: 51b751d94d56d3bdd66f6f414bf99b14ea25ddde-dirty
pilot version: 1.5.2
data plane version: 1.5.0 (1 proxies)

Logs

root@k8s-master:/home/k8s-master/istio-1.5.2# kubectl logs -f details-v1-7f9bbc5cf9-q5mwn
error: a container name must be specified for pod details-v1-7f9bbc5cf9-q5mwn, choose one of: [details istio-proxy] or one of the init containers: [istio-init]
root@k8s-master:/home/k8s-master/istio-1.5.2# kubectl logs -f details-v1-7f9bbc5cf9-q5mwn -c istio-proxy
2020-10-09T06:40:29.068305Z	info	FLAG: --binaryPath="/usr/local/bin/envoy"
2020-10-09T06:40:29.068326Z	info	FLAG: --concurrency="2"
2020-10-09T06:40:29.068329Z	info	FLAG: --configPath="/etc/istio/proxy"
2020-10-09T06:40:29.068332Z	info	FLAG: --connectTimeout="10s"
2020-10-09T06:40:29.068334Z	info	FLAG: --controlPlaneAuthPolicy="NONE"
2020-10-09T06:40:29.068337Z	info	FLAG: --controlPlaneBootstrap="false"
2020-10-09T06:40:29.068339Z	info	FLAG: --customConfigFile=""
2020-10-09T06:40:29.068395Z	info	FLAG: --datadogAgentAddress=""
2020-10-09T06:40:29.068398Z	info	FLAG: --disableInternalTelemetry="false"
2020-10-09T06:40:29.068400Z	info	FLAG: --discoveryAddress="istiod.istio-system.svc:15012"
2020-10-09T06:40:29.068402Z	info	FLAG: --dnsRefreshRate="300s"
2020-10-09T06:40:29.068404Z	info	FLAG: --domain="default.svc.cluster.local"
2020-10-09T06:40:29.068406Z	info	FLAG: --drainDuration="45s"
2020-10-09T06:40:29.068408Z	info	FLAG: --envoyAccessLogService=""
2020-10-09T06:40:29.068409Z	info	FLAG: --envoyMetricsService=""
2020-10-09T06:40:29.068411Z	info	FLAG: --help="false"
2020-10-09T06:40:29.068413Z	info	FLAG: --id=""
2020-10-09T06:40:29.068415Z	info	FLAG: --ip=""
2020-10-09T06:40:29.068416Z	info	FLAG: --lightstepAccessToken=""
2020-10-09T06:40:29.068418Z	info	FLAG: --lightstepAddress=""
2020-10-09T06:40:29.068420Z	info	FLAG: --lightstepCacertPath=""
2020-10-09T06:40:29.068422Z	info	FLAG: --lightstepSecure="false"
2020-10-09T06:40:29.068424Z	info	FLAG: --log_as_json="false"
2020-10-09T06:40:29.068425Z	info	FLAG: --log_caller=""
2020-10-09T06:40:29.068427Z	info	FLAG: --log_output_level="default:info"
2020-10-09T06:40:29.068429Z	info	FLAG: --log_rotate=""
2020-10-09T06:40:29.068431Z	info	FLAG: --log_rotate_max_age="30"
2020-10-09T06:40:29.068433Z	info	FLAG: --log_rotate_max_backups="1000"
2020-10-09T06:40:29.068438Z	info	FLAG: --log_rotate_max_size="104857600"
2020-10-09T06:40:29.068440Z	info	FLAG: --log_stacktrace_level="default:none"
2020-10-09T06:40:29.068445Z	info	FLAG: --log_target="[stdout]"
2020-10-09T06:40:29.068447Z	info	FLAG: --mixerIdentity=""
2020-10-09T06:40:29.068449Z	info	FLAG: --outlierLogPath=""
2020-10-09T06:40:29.068451Z	info	FLAG: --parentShutdownDuration="1m0s"
2020-10-09T06:40:29.068452Z	info	FLAG: --pilotIdentity=""
2020-10-09T06:40:29.068455Z	info	FLAG: --proxyAdminPort="15000"
2020-10-09T06:40:29.068457Z	info	FLAG: --proxyComponentLogLevel="misc:error"
2020-10-09T06:40:29.068458Z	info	FLAG: --proxyLogLevel="warning"
2020-10-09T06:40:29.068460Z	info	FLAG: --serviceCluster="details.default"
2020-10-09T06:40:29.068462Z	info	FLAG: --serviceregistry="Kubernetes"
2020-10-09T06:40:29.068464Z	info	FLAG: --statsdUdpAddress=""
2020-10-09T06:40:29.068466Z	info	FLAG: --statusPort="15020"
2020-10-09T06:40:29.068468Z	info	FLAG: --stsPort="0"
2020-10-09T06:40:29.068470Z	info	FLAG: --templateFile=""
2020-10-09T06:40:29.068472Z	info	FLAG: --tokenManagerPlugin="GoogleTokenExchange"
2020-10-09T06:40:29.068474Z	info	FLAG: --trust-domain="cluster.local"
2020-10-09T06:40:29.068475Z	info	FLAG: --zipkinAddress="zipkin.istio-system:9411"
2020-10-09T06:40:29.068494Z	info	Version 51b751d94d56d3bdd66f6f414bf99b14ea25ddde-dirty-51b751d94d56d3bdd66f6f414bf99b14ea25ddde-dirty-Modified
2020-10-09T06:40:29.068648Z	info	Obtained private IP [172.20.85.218]
2020-10-09T06:40:29.068668Z	info	Proxy role: &model.Proxy{ClusterID:"", Type:"sidecar", IPAddresses:[]string{"172.20.85.218", "172.20.85.218"}, ID:"details-v1-7f9bbc5cf9-q5mwn.default", Locality:(*envoy_api_v2_core.Locality)(nil), DNSDomain:"default.svc.cluster.local", ConfigNamespace:"", Metadata:(*model.NodeMetadata)(nil), SidecarScope:(*model.SidecarScope)(nil), MergedGateway:(*model.MergedGateway)(nil), ServiceInstances:[]*model.ServiceInstance(nil), WorkloadLabels:labels.Collection(nil), IstioVersion:(*model.IstioVersion)(nil)}
2020-10-09T06:40:29.068673Z	info	PilotSAN []string(nil)
2020-10-09T06:40:29.068674Z	info	MixerSAN []string(nil)
2020-10-09T06:40:29.069677Z	info	Effective config: binaryPath: /usr/local/bin/envoy
concurrency: 2
configPath: /etc/istio/proxy
connectTimeout: 10s
discoveryAddress: istiod.istio-system.svc:15012
drainDuration: 45s
envoyAccessLogService: {}
envoyMetricsService: {}
parentShutdownDuration: 60s
proxyAdminPort: 15000
serviceCluster: details.default
statNameLength: 189
tracing:
  zipkin:
    address: zipkin.istio-system:9411

2020-10-09T06:40:29.069687Z	info	JWT policy is first-party-jwt
2020-10-09T06:40:29.069726Z	info	Using user-configured CA istio-pilot.istio-system.svc:15012
2020-10-09T06:40:29.069728Z	info	istiod uses self-issued certificate
2020-10-09T06:40:29.069796Z	info	the CA cert of istiod is: -----BEGIN CERTIFICATE-----
MIIC3TCCAcWgAwIBAgIQRhyBKy3iNkraK4BjaxVYsTANBgkqhkiG9w0BAQsFADAY
MRYwFAYDVQQKEw1jbHVzdGVyLmxvY2FsMB4XDTIwMTAwOTAyMzQxMloXDTMwMTAw
NzAyMzQxMlowGDEWMBQGA1UEChMNY2x1c3Rlci5sb2NhbDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBANDxbyYT+cvne1sWZaWVfXBB1fkCx52fteTZ4evH
mS1U8uBBYQrEkDJtrnALoB0mEGPepFE/iCSoQbnKyUi+v2e4xlQ6pveT6LB1yAxS
wtoAE8w/ebUcfaxsEUvihvw4S0vY0DUDihbK4GW+mRm1iuKi2nT0anabgJEFl4Hs
rUQjjdx7SwEwRoGJeFlUQh5mcgBEnU62pD0sZ0gChPxp6QJnLa7jQSmVij+Bkcug
alrsnv6bp3hRw2zhCzAjQqNoylLRSDw5CzoaNvg7fagEg6nXB2KlHYXghuf7Arb5
xGX+zs8w6RhEIHcP/z7Qc1EbnyvXJsmP7liRgK1ABwtCIycCAwEAAaMjMCEwDgYD
VR0PAQH/BAQDAgIEMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEB
AEM4IaKyxMQZHrdVLbvCdL49pNdFMUebkO7kZt96tXigvQohcJgFfMoq5Xa5C2Do
h0U1+7+83AhrZvKwqQPuev//gXk9FFoAbGbEt7KSKVn/crR8jUh7lxO9P0Of0KRz
fu4VhXv5zWvpBlhnEu/6q3PFTa6Y3YpeTt4NL8al4aAbOmkCRuS5MpdAIf6AaYyo
OljIUPXQUz0NuIudDdxHdv008Z9zcqQjPKiuOVLslt2nZfEVW+6nIWO+EhYvkA0h
FJNbIavttTSab8g8zLzGHnpoddN7X4E/obf+ywNDkFU6AHRxXVCdZKv4jaSRo+6S
/74nup1rNT90XedIyTphCv0=
-----END CERTIFICATE-----

2020-10-09T06:40:29.069917Z	info	parsed scheme: ""
2020-10-09T06:40:29.069924Z	info	scheme "" not registered, fallback to default scheme
2020-10-09T06:40:29.069935Z	info	ccResolverWrapper: sending update to cc: {[{istio-pilot.istio-system.svc:15012  <nil> 0 <nil>}] <nil> <nil>}
2020-10-09T06:40:29.069938Z	info	ClientConn switching balancer to "pick_first"
2020-10-09T06:40:29.081666Z	info	pickfirstBalancer: HandleSubConnStateChange: 0xc0002abe70, {CONNECTING <nil>}
2020-10-09T06:40:29.112756Z	info	sds	SDS gRPC server for workload UDS starts, listening on "/etc/istio/proxy/SDS" 

2020-10-09T06:40:29.112939Z	info	PilotSAN []string{"istiod.istio-system.svc"}
2020-10-09T06:40:29.113007Z	info	Starting proxy agent
2020-10-09T06:40:29.113216Z	info	sds	Start SDS grpc server
2020-10-09T06:40:29.113315Z	info	Opening status port 15020

2020-10-09T06:40:29.116826Z	info	Received new config, creating new Envoy epoch 0
2020-10-09T06:40:29.116865Z	info	Epoch 0 starting
2020-10-09T06:40:29.138023Z	info	Envoy command: [start --config /etc/istio/proxy/envoy-rev0.json --service-cluster details.default --service-node sidecar~172.20.85.218~details-v1-7f9bbc5cf9-q5mwn.default~default.svc.cluster.local]
2020-10-09T06:40:29.138262Z	error	Epoch 0 exited with error: fork/exec /usr/local/bin/envoy: no such file or directory
2020-10-09T06:40:29.138268Z	info	No more active epochs, terminating

closed time in 15 days

tanjunchen

issue commentmosn/mosn

In kubernetes 1.17, mosn 1.5.2 failed to replace envoy

@wangfakang sure. Thanks.

tanjunchen

comment created time in 15 days

fork tanjunchen/mosn

MOSN is a cloud native proxy for edge or service mesh. https://mosn.io

fork in 16 days

issue openedmosn/mosn

In kubernetes 1.17, mosn 1.5.2 failed to replace envoy

Your question

According to https://katacoda.com/mosn/courses/istio/mosn-with-istio, it runs well in k8s(1.14.0),istio (1.5.2),mosn(1.5.2). I try to do it in k8s 1.17.0 , but there are some problems.

image

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"app":"details","version":"v1"},"name":"details-v1","namespace":"default"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"details","version":"v1"}},"strategy":{},"template":{"metadata":{"annotations":{"sidecar.istio.io/interceptionMode":"REDIRECT","sidecar.istio.io/status":"{\"version\":\"fca84600f9d5ec316cf1cf577da902f38bac258ab0fd595ee208ec0203dc0c6d\",\"initContainers\":[\"istio-init\"],\"containers\":[\"istio-proxy\"],\"volumes\":[\"istio-envoy\",\"podinfo\",\"istiod-ca-cert\"],\"imagePullSecrets\":null}","traffic.sidecar.istio.io/excludeInboundPorts":"15020","traffic.sidecar.istio.io/includeInboundPorts":"9080","traffic.sidecar.istio.io/includeOutboundIPRanges":"*"},"creationTimestamp":null,"labels":{"app":"details","security.istio.io/tlsMode":"istio","version":"v1"}},"spec":{"containers":[{"image":"docker.io/istio/examples-bookinfo-details-v1:1.15.0","imagePullPolicy":"IfNotPresent","name":"details","ports":[{"containerPort":9080}],"resources":{}},{"args":["proxy","sidecar","--domain","$(POD_NAMESPACE).svc.cluster.local","--configPath","/etc/istio/proxy","--binaryPath","/usr/local/bin/mosn","--serviceCluster","details.$(POD_NAMESPACE)","--drainDuration","45s","--parentShutdownDuration","1m0s","--discoveryAddress","istiod.istio-system.svc:15012","--zipkinAddress","zipkin.istio-system:9411","--proxyLogLevel=warning","--proxyComponentLogLevel=misc:error","--connectTimeout","10s","--proxyAdminPort","15000","--concurrency","2","--controlPlaneAuthPolicy","NONE","--dnsRefreshRate","300s","--statusPort","15020","--trust-domain=cluster.local","--controlPlaneBootstrap=false"],"env":[{"name":"JWT_POLICY","value":"first-party-jwt"},{"name":"PILOT_CERT_PROVIDER","value":"istiod"},{"name":"CA_ADDR","value":"istio-pilot.istio-system.svc:15012"},{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"INSTANCE_IP","valueFrom":{"fieldRef":{"fieldPath":"status.podIP"}}},{"name":"SERVICE_ACCOUNT","valueFrom":{"fieldRef":{"fieldPath":"spec.serviceAccountName"}}},{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"ISTIO_META_POD_PORTS","value":"[\n    {\"containerPort\":9080}\n]"},{"name":"ISTIO_META_APP_CONTAINERS","value":"[\n    details\n]"},{"name":"ISTIO_META_CLUSTER_ID","value":"Kubernetes"},{"name":"ISTIO_META_POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"ISTIO_META_CONFIG_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"ISTIO_META_INTERCEPTION_MODE","value":"REDIRECT"},{"name":"ISTIO_META_WORKLOAD_NAME","value":"details-v1"},{"name":"ISTIO_META_OWNER","value":"kubernetes://apis/apps/v1/namespaces/default/deployments/details-v1"},{"name":"ISTIO_META_MESH_ID","value":"cluster.local"},{"name":"ISTIO_KUBE_APP_PROBERS","value":"{}"}],"image":"mosnio/proxyv2:1.5.2-mosn","imagePullPolicy":"IfNotPresent","name":"istio-proxy","ports":[{"containerPort":15090,"name":"http-envoy-prom","protocol":"TCP"}],"readinessProbe":{"failureThreshold":30,"httpGet":{"path":"/healthz/ready","port":15020},"initialDelaySeconds":1,"periodSeconds":2},"resources":{"limits":{"cpu":"2","memory":"1Gi"},"requests":{"cpu":"100m","memory":"128Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"privileged":false,"readOnlyRootFilesystem":true,"runAsGroup":1337,"runAsNonRoot":true,"runAsUser":1337},"volumeMounts":[{"mountPath":"/var/run/secrets/istio","name":"istiod-ca-cert"},{"mountPath":"/etc/istio/proxy","name":"istio-envoy"},{"mountPath":"/etc/istio/pod","name":"podinfo"}]}],"initContainers":[{"command":["istio-iptables","-p","15001","-z","15006","-u","1337","-m","REDIRECT","-i","*","-x","","-b","*","-d","15090,15020"],"image":"docker.io/istio/proxyv2:1.5.2","imagePullPolicy":"IfNotPresent","name":"istio-init","resources":{"limits":{"cpu":"100m","memory":"50Mi"},"requests":{"cpu":"10m","memory":"10Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"add":["NET_ADMIN","NET_RAW"],"drop":["ALL"]},"privileged":false,"readOnlyRootFilesystem":false,"runAsGroup":0,"runAsNonRoot":false,"runAsUser":0}}],"securityContext":{"fsGroup":1337},"serviceAccountName":"bookinfo-details","volumes":[{"emptyDir":{"medium":"Memory"},"name":"istio-envoy"},{"downwardAPI":{"items":[{"fieldRef":{"fieldPath":"metadata.labels"},"path":"labels"},{"fieldRef":{"fieldPath":"metadata.annotations"},"path":"annotations"}]},"name":"podinfo"},{"configMap":{"name":"istio-ca-root-cert"},"name":"istiod-ca-cert"}]}}},"status":{}}
  creationTimestamp: "2020-10-09T06:14:05Z"
  generation: 1
  labels:
    app: details
    version: v1
  name: details-v1
  namespace: default
  resourceVersion: "1328104"
  selfLink: /apis/apps/v1/namespaces/default/deployments/details-v1
  uid: 1f6920bc-9356-49b8-bfd5-360194f2b8aa
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: details
      version: v1
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        sidecar.istio.io/interceptionMode: REDIRECT
        sidecar.istio.io/status: '{"version":"fca84600f9d5ec316cf1cf577da902f38bac258ab0fd595ee208ec0203dc0c6d","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","podinfo","istiod-ca-cert"],"imagePullSecrets":null}'
        traffic.sidecar.istio.io/excludeInboundPorts: "15020"
        traffic.sidecar.istio.io/includeInboundPorts: "9080"
        traffic.sidecar.istio.io/includeOutboundIPRanges: '*'
      creationTimestamp: null
      labels:
        app: details
        security.istio.io/tlsMode: istio
        version: v1
    spec:
      containers:
      - image: docker.io/istio/examples-bookinfo-details-v1:1.15.0
        imagePullPolicy: IfNotPresent
        name: details
        ports:
        - containerPort: 9080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - args:
        - proxy
        - sidecar
        - --domain
        - $(POD_NAMESPACE).svc.cluster.local
        - --configPath
        - /etc/istio/proxy
        - --binaryPath
        - /usr/local/bin/mosn
        - --serviceCluster
        - details.$(POD_NAMESPACE)
        - --drainDuration
        - 45s
        - --parentShutdownDuration
        - 1m0s
        - --discoveryAddress
        - istiod.istio-system.svc:15012
        - --zipkinAddress
        - zipkin.istio-system:9411
        - --proxyLogLevel=warning
        - --proxyComponentLogLevel=misc:error
        - --connectTimeout
        - 10s
        - --proxyAdminPort
        - "15000"
        - --concurrency
        - "2"
        - --controlPlaneAuthPolicy
        - NONE
        - --dnsRefreshRate
        - 300s
        - --statusPort
        - "15020"
        - --trust-domain=cluster.local
        - --controlPlaneBootstrap=false
        env:
        - name: JWT_POLICY
          value: first-party-jwt
        - name: PILOT_CERT_PROVIDER
          value: istiod
        - name: CA_ADDR
          value: istio-pilot.istio-system.svc:15012
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.serviceAccountName
        - name: HOST_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
        - name: ISTIO_META_POD_PORTS
          value: |-
            [
                {"containerPort":9080}
            ]
        - name: ISTIO_META_APP_CONTAINERS
          value: |-
            [
                details
            ]
        - name: ISTIO_META_CLUSTER_ID
          value: Kubernetes
        - name: ISTIO_META_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: ISTIO_META_CONFIG_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: ISTIO_META_INTERCEPTION_MODE
          value: REDIRECT
        - name: ISTIO_META_WORKLOAD_NAME
          value: details-v1
        - name: ISTIO_META_OWNER
          value: kubernetes://apis/apps/v1/namespaces/default/deployments/details-v1
        - name: ISTIO_META_MESH_ID
          value: cluster.local
        - name: ISTIO_KUBE_APP_PROBERS
          value: '{}'
        image: mosnio/proxyv2:1.5.2-mosn
        imagePullPolicy: IfNotPresent
        name: istio-proxy
        ports:
        - containerPort: 15090
          name: http-envoy-prom
          protocol: TCP
        readinessProbe:
          failureThreshold: 30
          httpGet:
            path: /healthz/ready
            port: 15020
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 2
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: "2"
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 128Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
          runAsGroup: 1337
          runAsNonRoot: true
          runAsUser: 1337
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/run/secrets/istio
          name: istiod-ca-cert
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /etc/istio/pod
          name: podinfo
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - istio-iptables
        - -p
        - "15001"
        - -z
        - "15006"
        - -u
        - "1337"
        - -m
        - REDIRECT
        - -i
        - '*'
        - -x
        - ""
        - -b
        - '*'
        - -d
        - 15090,15020
        image: docker.io/istio/proxyv2:1.5.2
        imagePullPolicy: IfNotPresent
        name: istio-init
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 10m
            memory: 10Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: false
          runAsGroup: 0
          runAsNonRoot: false
          runAsUser: 0
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1337
      serviceAccount: bookinfo-details
      serviceAccountName: bookinfo-details
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
      - configMap:
          defaultMode: 420
          name: istio-ca-root-cert
        name: istiod-ca-cert
status:
  conditions:
  - lastTransitionTime: "2020-10-09T06:14:05Z"
    lastUpdateTime: "2020-10-09T06:14:05Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2020-10-09T06:24:06Z"
    lastUpdateTime: "2020-10-09T06:24:06Z"
    message: ReplicaSet "details-v1-7f9bbc5cf9" has timed out progressing.
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Progressing
  observedGeneration: 1
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1

Environment

root@k8s-master:/home/k8s-master/istio-1.5.2# kubectl get pods -n istio-system
NAME                                   READY   STATUS             RESTARTS   AGE
istio-ingressgateway-fd9cc74f8-9c6mn   1/1     Running            0          4h9m
istiod-5bb879d86c-2xrft                1/1     Running            0          4h10m
prometheus-79bf89d485-68lpz            1/2     InvalidImageName   0          4h9m
root@k8s-master:/home/k8s-master/istio-1.5.2# kubectl get pods 
NAME                              READY   STATUS             RESTARTS   AGE
details-v1-7f9bbc5cf9-q5mwn       1/2     CrashLoopBackOff   10         29m
productpage-v1-77474875fd-k987s   1/2     CrashLoopBackOff   10         29m
ratings-v1-6c8d855bf5-cwxdm       1/2     CrashLoopBackOff   10         29m
reviews-v1-764b669ddd-zgxmk       1/2     CrashLoopBackOff   10         29m
reviews-v2-fd7f585d-nbw95         1/2     CrashLoopBackOff   10         29m
reviews-v3-c85454984-sbct2        1/2     CrashLoopBackOff   10         29m
root@k8s-master:/home/k8s-master/istio-1.5.2# kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-master:/home/k8s-master/istio-1.5.2# istioctl version
client version: 1.5.2
ingressgateway version: 51b751d94d56d3bdd66f6f414bf99b14ea25ddde-dirty
pilot version: 1.5.2
data plane version: 1.5.0 (1 proxies)

Logs

root@k8s-master:/home/k8s-master/istio-1.5.2# kubectl logs -f details-v1-7f9bbc5cf9-q5mwn
error: a container name must be specified for pod details-v1-7f9bbc5cf9-q5mwn, choose one of: [details istio-proxy] or one of the init containers: [istio-init]
root@k8s-master:/home/k8s-master/istio-1.5.2# kubectl logs -f details-v1-7f9bbc5cf9-q5mwn -c istio-proxy
2020-10-09T06:40:29.068305Z	info	FLAG: --binaryPath="/usr/local/bin/envoy"
2020-10-09T06:40:29.068326Z	info	FLAG: --concurrency="2"
2020-10-09T06:40:29.068329Z	info	FLAG: --configPath="/etc/istio/proxy"
2020-10-09T06:40:29.068332Z	info	FLAG: --connectTimeout="10s"
2020-10-09T06:40:29.068334Z	info	FLAG: --controlPlaneAuthPolicy="NONE"
2020-10-09T06:40:29.068337Z	info	FLAG: --controlPlaneBootstrap="false"
2020-10-09T06:40:29.068339Z	info	FLAG: --customConfigFile=""
2020-10-09T06:40:29.068395Z	info	FLAG: --datadogAgentAddress=""
2020-10-09T06:40:29.068398Z	info	FLAG: --disableInternalTelemetry="false"
2020-10-09T06:40:29.068400Z	info	FLAG: --discoveryAddress="istiod.istio-system.svc:15012"
2020-10-09T06:40:29.068402Z	info	FLAG: --dnsRefreshRate="300s"
2020-10-09T06:40:29.068404Z	info	FLAG: --domain="default.svc.cluster.local"
2020-10-09T06:40:29.068406Z	info	FLAG: --drainDuration="45s"
2020-10-09T06:40:29.068408Z	info	FLAG: --envoyAccessLogService=""
2020-10-09T06:40:29.068409Z	info	FLAG: --envoyMetricsService=""
2020-10-09T06:40:29.068411Z	info	FLAG: --help="false"
2020-10-09T06:40:29.068413Z	info	FLAG: --id=""
2020-10-09T06:40:29.068415Z	info	FLAG: --ip=""
2020-10-09T06:40:29.068416Z	info	FLAG: --lightstepAccessToken=""
2020-10-09T06:40:29.068418Z	info	FLAG: --lightstepAddress=""
2020-10-09T06:40:29.068420Z	info	FLAG: --lightstepCacertPath=""
2020-10-09T06:40:29.068422Z	info	FLAG: --lightstepSecure="false"
2020-10-09T06:40:29.068424Z	info	FLAG: --log_as_json="false"
2020-10-09T06:40:29.068425Z	info	FLAG: --log_caller=""
2020-10-09T06:40:29.068427Z	info	FLAG: --log_output_level="default:info"
2020-10-09T06:40:29.068429Z	info	FLAG: --log_rotate=""
2020-10-09T06:40:29.068431Z	info	FLAG: --log_rotate_max_age="30"
2020-10-09T06:40:29.068433Z	info	FLAG: --log_rotate_max_backups="1000"
2020-10-09T06:40:29.068438Z	info	FLAG: --log_rotate_max_size="104857600"
2020-10-09T06:40:29.068440Z	info	FLAG: --log_stacktrace_level="default:none"
2020-10-09T06:40:29.068445Z	info	FLAG: --log_target="[stdout]"
2020-10-09T06:40:29.068447Z	info	FLAG: --mixerIdentity=""
2020-10-09T06:40:29.068449Z	info	FLAG: --outlierLogPath=""
2020-10-09T06:40:29.068451Z	info	FLAG: --parentShutdownDuration="1m0s"
2020-10-09T06:40:29.068452Z	info	FLAG: --pilotIdentity=""
2020-10-09T06:40:29.068455Z	info	FLAG: --proxyAdminPort="15000"
2020-10-09T06:40:29.068457Z	info	FLAG: --proxyComponentLogLevel="misc:error"
2020-10-09T06:40:29.068458Z	info	FLAG: --proxyLogLevel="warning"
2020-10-09T06:40:29.068460Z	info	FLAG: --serviceCluster="details.default"
2020-10-09T06:40:29.068462Z	info	FLAG: --serviceregistry="Kubernetes"
2020-10-09T06:40:29.068464Z	info	FLAG: --statsdUdpAddress=""
2020-10-09T06:40:29.068466Z	info	FLAG: --statusPort="15020"
2020-10-09T06:40:29.068468Z	info	FLAG: --stsPort="0"
2020-10-09T06:40:29.068470Z	info	FLAG: --templateFile=""
2020-10-09T06:40:29.068472Z	info	FLAG: --tokenManagerPlugin="GoogleTokenExchange"
2020-10-09T06:40:29.068474Z	info	FLAG: --trust-domain="cluster.local"
2020-10-09T06:40:29.068475Z	info	FLAG: --zipkinAddress="zipkin.istio-system:9411"
2020-10-09T06:40:29.068494Z	info	Version 51b751d94d56d3bdd66f6f414bf99b14ea25ddde-dirty-51b751d94d56d3bdd66f6f414bf99b14ea25ddde-dirty-Modified
2020-10-09T06:40:29.068648Z	info	Obtained private IP [172.20.85.218]
2020-10-09T06:40:29.068668Z	info	Proxy role: &model.Proxy{ClusterID:"", Type:"sidecar", IPAddresses:[]string{"172.20.85.218", "172.20.85.218"}, ID:"details-v1-7f9bbc5cf9-q5mwn.default", Locality:(*envoy_api_v2_core.Locality)(nil), DNSDomain:"default.svc.cluster.local", ConfigNamespace:"", Metadata:(*model.NodeMetadata)(nil), SidecarScope:(*model.SidecarScope)(nil), MergedGateway:(*model.MergedGateway)(nil), ServiceInstances:[]*model.ServiceInstance(nil), WorkloadLabels:labels.Collection(nil), IstioVersion:(*model.IstioVersion)(nil)}
2020-10-09T06:40:29.068673Z	info	PilotSAN []string(nil)
2020-10-09T06:40:29.068674Z	info	MixerSAN []string(nil)
2020-10-09T06:40:29.069677Z	info	Effective config: binaryPath: /usr/local/bin/envoy
concurrency: 2
configPath: /etc/istio/proxy
connectTimeout: 10s
discoveryAddress: istiod.istio-system.svc:15012
drainDuration: 45s
envoyAccessLogService: {}
envoyMetricsService: {}
parentShutdownDuration: 60s
proxyAdminPort: 15000
serviceCluster: details.default
statNameLength: 189
tracing:
  zipkin:
    address: zipkin.istio-system:9411

2020-10-09T06:40:29.069687Z	info	JWT policy is first-party-jwt
2020-10-09T06:40:29.069726Z	info	Using user-configured CA istio-pilot.istio-system.svc:15012
2020-10-09T06:40:29.069728Z	info	istiod uses self-issued certificate
2020-10-09T06:40:29.069796Z	info	the CA cert of istiod is: -----BEGIN CERTIFICATE-----
MIIC3TCCAcWgAwIBAgIQRhyBKy3iNkraK4BjaxVYsTANBgkqhkiG9w0BAQsFADAY
MRYwFAYDVQQKEw1jbHVzdGVyLmxvY2FsMB4XDTIwMTAwOTAyMzQxMloXDTMwMTAw
NzAyMzQxMlowGDEWMBQGA1UEChMNY2x1c3Rlci5sb2NhbDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBANDxbyYT+cvne1sWZaWVfXBB1fkCx52fteTZ4evH
mS1U8uBBYQrEkDJtrnALoB0mEGPepFE/iCSoQbnKyUi+v2e4xlQ6pveT6LB1yAxS
wtoAE8w/ebUcfaxsEUvihvw4S0vY0DUDihbK4GW+mRm1iuKi2nT0anabgJEFl4Hs
rUQjjdx7SwEwRoGJeFlUQh5mcgBEnU62pD0sZ0gChPxp6QJnLa7jQSmVij+Bkcug
alrsnv6bp3hRw2zhCzAjQqNoylLRSDw5CzoaNvg7fagEg6nXB2KlHYXghuf7Arb5
xGX+zs8w6RhEIHcP/z7Qc1EbnyvXJsmP7liRgK1ABwtCIycCAwEAAaMjMCEwDgYD
VR0PAQH/BAQDAgIEMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEB
AEM4IaKyxMQZHrdVLbvCdL49pNdFMUebkO7kZt96tXigvQohcJgFfMoq5Xa5C2Do
h0U1+7+83AhrZvKwqQPuev//gXk9FFoAbGbEt7KSKVn/crR8jUh7lxO9P0Of0KRz
fu4VhXv5zWvpBlhnEu/6q3PFTa6Y3YpeTt4NL8al4aAbOmkCRuS5MpdAIf6AaYyo
OljIUPXQUz0NuIudDdxHdv008Z9zcqQjPKiuOVLslt2nZfEVW+6nIWO+EhYvkA0h
FJNbIavttTSab8g8zLzGHnpoddN7X4E/obf+ywNDkFU6AHRxXVCdZKv4jaSRo+6S
/74nup1rNT90XedIyTphCv0=
-----END CERTIFICATE-----

2020-10-09T06:40:29.069917Z	info	parsed scheme: ""
2020-10-09T06:40:29.069924Z	info	scheme "" not registered, fallback to default scheme
2020-10-09T06:40:29.069935Z	info	ccResolverWrapper: sending update to cc: {[{istio-pilot.istio-system.svc:15012  <nil> 0 <nil>}] <nil> <nil>}
2020-10-09T06:40:29.069938Z	info	ClientConn switching balancer to "pick_first"
2020-10-09T06:40:29.081666Z	info	pickfirstBalancer: HandleSubConnStateChange: 0xc0002abe70, {CONNECTING <nil>}
2020-10-09T06:40:29.112756Z	info	sds	SDS gRPC server for workload UDS starts, listening on "/etc/istio/proxy/SDS" 

2020-10-09T06:40:29.112939Z	info	PilotSAN []string{"istiod.istio-system.svc"}
2020-10-09T06:40:29.113007Z	info	Starting proxy agent
2020-10-09T06:40:29.113216Z	info	sds	Start SDS grpc server
2020-10-09T06:40:29.113315Z	info	Opening status port 15020

2020-10-09T06:40:29.116826Z	info	Received new config, creating new Envoy epoch 0
2020-10-09T06:40:29.116865Z	info	Epoch 0 starting
2020-10-09T06:40:29.138023Z	info	Envoy command: [start --config /etc/istio/proxy/envoy-rev0.json --service-cluster details.default --service-node sidecar~172.20.85.218~details-v1-7f9bbc5cf9-q5mwn.default~default.svc.cluster.local]
2020-10-09T06:40:29.138262Z	error	Epoch 0 exited with error: fork/exec /usr/local/bin/envoy: no such file or directory
2020-10-09T06:40:29.138268Z	info	No more active epochs, terminating

created time in 16 days

PullRequestReviewEvent

delete branch tanjunchen/kubernetes

delete branch : replace-k8s-master

delete time in 16 days

issue commentmosn/mosn

Does mson support istio 1.6.x?

I see it from https://mosn.io/docs/community/, from image, it seems that we can support 1.7.x now , really?

tanjunchen

comment created time in 16 days

push eventtanjunchen/grpc-debug-k8s

tanjunchen

commit sha 57bcfba427aa4b4061ca8f2f1a96ccb2d040489d

add test

view details

push time in 17 days

push eventtanjunchen/grpc-debug-k8s

tanjunchen

commit sha d70fff3e27edb44f7edbaf4f8387d94d272bed7b

test

view details

push time in 18 days

create barnchtanjunchen/grpc-debug-k8s

branch : master

created branch time in 18 days

created repositorytanjunchen/grpc-debug-k8s

created time in 18 days

PullRequestReviewEvent
more