
DDDDDDDDD

yangxg committed fd38c5a173 4 years ago
6 changed files with 1117 additions and 0 deletions
  1. Dockerfile-kibana (+1, -0)
  2. README.md (+871, -0)
  3. build/Dockerfile (+8, -0)
  4. deploy-tmp-kibana.yaml (+65, -0)
  5. drone.kibana.yml (+93, -0)
  6. k8s-deploy.sh (+79, -0)

+ 1 - 0
Dockerfile-kibana

@@ -0,0 +1 @@
+FROM docker.elastic.co/kibana/kibana:7.6.2

+ 871 - 0
README.md

@@ -0,0 +1,871 @@
+---
+title: kubectl Cheat Sheet
+content_type: concept
+card:
+  name: reference
+  weight: 30
+---
+<!-- ---
+title: kubectl Cheat Sheet
+reviewers:
+- erictune
+- krousey
+- clove
+content_type: concept
+card:
+  name: reference
+  weight: 30
+--- -->
+
+<!-- overview -->
+
+<!--
+This page contains a list of commonly used `kubectl` commands and flags.
+-->
+
+This page contains a list of commonly used `kubectl` commands and flags.
+
+<!-- body -->
+
+<!--
+## Kubectl autocomplete
+
+### BASH
+-->
+
+## Kubectl autocomplete
+
+### BASH
+
+<!--
+```bash
+source <(kubectl completion bash) # setup autocomplete in bash into the current shell, bash-completion package should be installed first.
+echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.
+```
+
+You can also use a shorthand alias for `kubectl` that also works with completion:
+-->
+```bash
+source <(kubectl completion bash) # set up autocomplete in bash for the current shell; the bash-completion package should be installed first
+echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell
+```
+
+You can also use a shorthand alias for `kubectl` that also works with completion:
+
+```bash
+alias k=kubectl
+complete -F __start_kubectl k
+```
+
+### ZSH
+
+<!--
+```bash
+source <(kubectl completion zsh)  # setup autocomplete in zsh into the current shell
+echo "[[ $commands[kubectl] ]] && source <(kubectl completion zsh)" >> ~/.zshrc # add autocomplete permanently to your zsh shell
+```
+-->
+```bash
+source <(kubectl completion zsh)  # set up autocomplete in zsh for the current shell
+echo "[[ $commands[kubectl] ]] && source <(kubectl completion zsh)" >> ~/.zshrc # add autocomplete permanently to your zsh shell
+```
+
+<!--
+## Kubectl Context and Configuration
+
+Set which Kubernetes cluster `kubectl` communicates with and modifies configuration
+information. See [Authenticating Across Clusters with kubeconfig](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) documentation for
+detailed config file information.
+-->
+## Kubectl context and configuration
+
+Set which Kubernetes cluster `kubectl` communicates with and modifies configuration
+information. See the [Authenticating Across Clusters with kubeconfig](/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
+documentation for detailed config file information.
+
+<!--
+```bash
+kubectl config view # Show Merged kubeconfig settings.
+
+# use multiple kubeconfig files at the same time and view merged config
+KUBECONFIG=~/.kube/config:~/.kube/kubconfig2
+
+kubectl config view
+
+# get the password for the e2e user
+kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'
+
+kubectl config view -o jsonpath='{.users[].name}'    # display the first user
+kubectl config view -o jsonpath='{.users[*].name}'   # get a list of users
+kubectl config get-contexts                          # display list of contexts
+kubectl config current-context           # display the current-context
+kubectl config use-context my-cluster-name           # set the default context to my-cluster-name
+
+# add a new user to your kubeconf that supports basic auth
+kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword
+
+# permanently save the namespace for all subsequent kubectl commands in that context.
+kubectl config set-context --current --namespace=ggckad-s2
+
+# set a context utilizing a specific username and namespace.
+kubectl config set-context gce --user=cluster-admin --namespace=foo \
+  && kubectl config use-context gce
+
+kubectl config unset users.foo                       # delete user foo
+```
+-->
+```bash
+kubectl config view # show merged kubeconfig settings
+
+# use multiple kubeconfig files at the same time and view the merged config
+KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view
+
+# get the password for the e2e user
+kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'
+
+kubectl config view -o jsonpath='{.users[].name}'    # display the first user
+kubectl config view -o jsonpath='{.users[*].name}'   # get a list of users
+kubectl config get-contexts                          # display the list of contexts
+kubectl config current-context                       # display the current context
+kubectl config use-context my-cluster-name           # set the default context to my-cluster-name
+
+# add a new user to your kubeconfig that supports basic auth
+kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword
+
+# permanently save the namespace for all subsequent kubectl commands in that context
+kubectl config set-context --current --namespace=ggckad-s2
+
+# set a context utilizing a specific username and namespace
+kubectl config set-context gce --user=cluster-admin --namespace=foo \
+  && kubectl config use-context gce
+
+kubectl config unset users.foo                       # delete user foo
+```
+
+<!--
+## Kubectl apply
+`apply` manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster through running `kubectl apply`. This is the recommended way of managing Kubernetes applications on production. See [Kubectl Book](https://kubectl.docs.kubernetes.io).
+-->
+## Kubectl apply
+
+`apply` manages applications through files defining Kubernetes resources.
+It creates and updates resources in a cluster by running `kubectl apply`.
+This is the recommended way of managing Kubernetes applications in production.
+See the [Kubectl Book](https://kubectl.docs.kubernetes.io).
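+
+As a minimal sketch of this declarative workflow (the file name `deployment.yaml` is only an example), the manifest lives in version control and `apply` is simply re-run after each change, while `kubectl diff` previews what an apply would modify:
+
+```bash
+# the first run creates the resources; later runs only reconcile what changed
+kubectl apply -f ./deployment.yaml
+
+# preview the changes a manifest would make before applying it
+kubectl diff -f ./deployment.yaml
+```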
+
+<!--
+## Creating objects
+
+Kubernetes manifests can be defined in YAML or JSON. The file extension `.yaml`,
+`.yml`, and `.json` can be used.
+-->
+## Creating objects {#creating-objects}
+
+Kubernetes manifests can be defined in YAML or JSON. The file extensions
+`.yaml`, `.yml`, and `.json` can be used.
+
+<!--
+```bash
+kubectl apply -f ./my-manifest.yaml            # create resource(s)
+kubectl apply -f ./my1.yaml -f ./my2.yaml      # create from multiple files
+kubectl apply -f ./dir                         # create resource(s) in all manifest files in dir
+kubectl apply -f https://git.io/vPieo          # create resource(s) from url
+kubectl create deployment nginx --image=nginx  # start a single instance of nginx
+
+# create a Job which prints "Hello World"
+kubectl create job hello --image=busybox -- echo "Hello World" 
+
+# create a CronJob that prints "Hello World" every minute
+kubectl create cronjob hello --image=busybox   --schedule="*/1 * * * *" -- echo "Hello World"    
+
+kubectl explain pods                           # get the documentation for pod manifests
+
+# Create multiple YAML objects from stdin
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: busybox-sleep
+spec:
+  containers:
+  - name: busybox
+    image: busybox
+    args:
+    - sleep
+    - "1000000"
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: busybox-sleep-less
+spec:
+  containers:
+  - name: busybox
+    image: busybox
+    args:
+    - sleep
+    - "1000"
+EOF
+
+# Create a secret with several keys
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Secret
+metadata:
+  name: mysecret
+type: Opaque
+data:
+  password: $(echo -n "s33msi4" | base64 -w0)
+  username: $(echo -n "jane" | base64 -w0)
+EOF
+```
+-->
+```bash
+kubectl apply -f ./my-manifest.yaml           # create resource(s)
+kubectl apply -f ./my1.yaml -f ./my2.yaml     # create from multiple files
+kubectl apply -f ./dir                        # create resource(s) from all manifest files in dir
+kubectl apply -f https://git.io/vPieo         # create resource(s) from a URL
+kubectl create deployment nginx --image=nginx # start a single instance of nginx
+
+# create a Job which prints "Hello World"
+kubectl create job hello --image=busybox -- echo "Hello World"
+
+# create a CronJob that prints "Hello World" every minute
+kubectl create cronjob hello --image=busybox --schedule="*/1 * * * *" -- echo "Hello World"
+
+kubectl explain pods                          # get the documentation for pod manifests
+
+# create multiple YAML objects from stdin
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: busybox-sleep
+spec:
+  containers:
+  - name: busybox
+    image: busybox
+    args:
+    - sleep
+    - "1000000"
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: busybox-sleep-less
+spec:
+  containers:
+  - name: busybox
+    image: busybox
+    args:
+    - sleep
+    - "1000"
+EOF
+
+# create a Secret with several keys
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Secret
+metadata:
+  name: mysecret
+type: Opaque
+data:
+  password: $(echo -n "s33msi4" | base64 -w0)
+  username: $(echo -n "jane" | base64 -w0)
+EOF
+```
+
+<!--
+## Viewing, finding resources
+-->
+## Viewing and finding resources
+
+<!--
+```bash
+# Get commands with basic output
+kubectl get services                          # List all services in the namespace
+kubectl get pods --all-namespaces             # List all pods in all namespaces
+kubectl get pods -o wide                      # List all pods in the current namespace, with more details
+kubectl get deployment my-dep                 # List a particular deployment
+kubectl get pods                              # List all pods in the namespace
+kubectl get pod my-pod -o yaml                # Get a pod's YAML
+
+# Describe commands with verbose output
+kubectl describe nodes my-node
+kubectl describe pods my-pod
+
+# List Services Sorted by Name
+kubectl get services --sort-by=.metadata.name
+
+# List pods Sorted by Restart Count
+kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'
+
+# List PersistentVolumes sorted by capacity
+kubectl get pv --sort-by=.spec.capacity.storage
+
+# Get the version label of all pods with label app=cassandra
+kubectl get pods --selector=app=cassandra -o \
+  jsonpath='{.items[*].metadata.labels.version}'
+
+# Retrieve the value of a key with dots, e.g. 'ca.crt'
+kubectl get configmap myconfig \
+  -o jsonpath='{.data.ca\.crt}'
+
+# Get all worker nodes (use a selector to exclude results that have a label
+# named 'node-role.kubernetes.io/master')
+kubectl get node --selector='!node-role.kubernetes.io/master'
+
+# Get all running pods in the namespace
+kubectl get pods --field-selector=status.phase=Running
+
+# Get ExternalIPs of all nodes
+kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
+
+# List Names of Pods that belong to Particular RC
+# "jq" command useful for transformations that are too complex for jsonpath, it can be found at https://stedolan.github.io/jq/
+sel=${$(kubectl get rc my-rc --output=json | jq -j '.spec.selector | to_entries | .[] | "\(.key)=\(.value),"')%?}
+echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name})
+
+# Show labels for all pods (or any other Kubernetes object that supports labelling)
+kubectl get pods --show-labels
+
+# Check which nodes are ready
+JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
+ && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"
+
+# Output decoded secrets without external tools
+kubectl get secret my-secret -o go-template='{{range $k,$v := .data}}{{"### "}}{{$k}}{{"\n"}}{{$v|base64decode}}{{"\n\n"}}{{end}}'
+
+# List all Secrets currently in use by a pod
+kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq
+
+# List all containerIDs of initContainer of all pods
+# Helpful when cleaning up stopped containers, while avoiding removal of initContainers.
+kubectl get pods --all-namespaces -o jsonpath='{range .items[*].status.initContainerStatuses[*]}{.containerID}{"\n"}{end}' | cut -d/ -f3
+
+# List Events sorted by timestamp
+kubectl get events --sort-by=.metadata.creationTimestamp
+
+# Compares the current state of the cluster against the state that the cluster would be in if the manifest was applied.
+kubectl diff -f ./my-manifest.yaml
+
+# Produce a period-delimited tree of all keys returned for nodes
+# Helpful when locating a key within a complex nested JSON structure
+kubectl get nodes -o json | jq -c 'path(..)|[.[]|tostring]|join(".")'
+
+# Produce a period-delimited tree of all keys returned for pods, etc
+kubectl get pods -o json | jq -c 'path(..)|[.[]|tostring]|join(".")'
+
+# Produce ENV for all pods, assuming you have a default container for the pods, default namespace and the `env` command is supported.
+# Helpful when running any supported command across all pods, not just `env`
+for pod in $(kubectl get po --output=jsonpath={.items..metadata.name}); do echo $pod && kubectl exec -it $pod env; done
+```
+-->
+```bash
+# get commands with basic output
+kubectl get services                          # list all services in the namespace
+kubectl get pods --all-namespaces             # list all pods in all namespaces
+kubectl get pods -o wide                      # list all pods in the current namespace, with more details
+kubectl get deployment my-dep                 # list a particular deployment
+kubectl get pods                              # list all pods in the namespace
+kubectl get pod my-pod -o yaml                # get a pod's YAML
+
+# describe commands with verbose output
+kubectl describe nodes my-node
+kubectl describe pods my-pod
+
+# list services sorted by name
+kubectl get services --sort-by=.metadata.name
+
+# list pods sorted by restart count
+kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'
+
+# list PersistentVolumes sorted by capacity
+kubectl get pv --sort-by=.spec.capacity.storage
+
+# get the version label of all pods with label app=cassandra
+kubectl get pods --selector=app=cassandra -o \
+  jsonpath='{.items[*].metadata.labels.version}'
+
+# retrieve the value of a key with dots, e.g. 'ca.crt'
+kubectl get configmap myconfig \
+  -o jsonpath='{.data.ca\.crt}'
+
+# get all worker nodes (use a selector to exclude results that have the label 'node-role.kubernetes.io/master')
+kubectl get node --selector='!node-role.kubernetes.io/master'
+
+# get all running pods in the current namespace
+kubectl get pods --field-selector=status.phase=Running
+
+# get the ExternalIPs of all nodes
+kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
+
+# list the names of pods that belong to a particular RC
+# "jq" is useful for transformations that are too complex for jsonpath; see https://stedolan.github.io/jq/
+sel=${$(kubectl get rc my-rc --output=json | jq -j '.spec.selector | to_entries | .[] | "\(.key)=\(.value),"')%?}
+echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name})
+
+# show labels for all pods (or any other Kubernetes object that supports labelling)
+kubectl get pods --show-labels
+
+# check which nodes are ready
+JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
+ && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"
+
+# output decoded Secrets without external tools
+kubectl get secret my-secret -o go-template='{{range $k,$v := .data}}{{"### "}}{{$k}}{{"\n"}}{{$v|base64decode}}{{"\n\n"}}{{end}}'
+
+# list all Secrets currently in use by a pod
+kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq
+
+# list the containerIDs of the initContainers of all pods
+# helpful when cleaning up stopped containers, while avoiding removal of initContainers
+kubectl get pods --all-namespaces -o jsonpath='{range .items[*].status.initContainerStatuses[*]}{.containerID}{"\n"}{end}' | cut -d/ -f3
+
+# list Events sorted by timestamp
+kubectl get events --sort-by=.metadata.creationTimestamp
+
+# compare the current state of the cluster against the state the cluster would be in if the manifest were applied
+kubectl diff -f ./my-manifest.yaml
+
+# produce a period-delimited tree of all keys returned for nodes
+# helpful when locating a key within a complex nested JSON structure
+kubectl get nodes -o json | jq -c 'path(..)|[.[]|tostring]|join(".")'
+
+# produce a period-delimited tree of all keys returned for pods, etc.
+kubectl get pods -o json | jq -c 'path(..)|[.[]|tostring]|join(".")'
+
+# print ENV for all pods, assuming you have a default container for the pods, the default namespace, and that the `env` command is supported
+# helpful for running any supported command across all pods, not just `env`
+for pod in $(kubectl get po --output=jsonpath={.items..metadata.name}); do echo $pod && kubectl exec -it $pod env; done
+```
+
+<!--
+## Updating resources
+-->
+## Updating resources
+
+<!--
+```bash
+kubectl set image deployment/frontend www=image:v2               # Rolling update "www" containers of "frontend" deployment, updating the image
+kubectl rollout history deployment/frontend                      # Check the history of deployments including the revision
+kubectl rollout undo deployment/frontend                         # Rollback to the previous deployment
+kubectl rollout undo deployment/frontend --to-revision=2         # Rollback to a specific revision
+kubectl rollout status -w deployment/frontend                    # Watch rolling update status of "frontend" deployment until completion
+kubectl rollout restart deployment/frontend                      # Rolling restart of the "frontend" deployment
+
+cat pod.json | kubectl replace -f -                              # Replace a pod based on the JSON passed into std
+
+# Force replace, delete and then re-create the resource. Will cause a service outage.
+kubectl replace --force -f ./pod.json
+
+# Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000
+kubectl expose rc nginx --port=80 --target-port=8000
+
+# Update a single-container pod's image version (tag) to v4
+kubectl get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | kubectl replace -f -
+
+kubectl label pods my-pod new-label=awesome                      # Add a Label
+kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq       # Add an annotation
+kubectl autoscale deployment foo --min=2 --max=10                # Auto scale a deployment "foo"
+```
+-->
+```bash
+kubectl set image deployment/frontend www=image:v2               # rolling update the "www" containers of the "frontend" deployment, updating the image
+kubectl rollout history deployment/frontend                      # check the history of the deployment, including its revisions
+kubectl rollout undo deployment/frontend                         # roll back to the previous deployment
+kubectl rollout undo deployment/frontend --to-revision=2         # roll back to a specific revision
+kubectl rollout status -w deployment/frontend                    # watch the rolling update status of the "frontend" deployment until completion
+kubectl rollout restart deployment/frontend                      # rolling restart of the "frontend" deployment
+
+cat pod.json | kubectl replace -f -                              # replace a pod based on the JSON passed into stdin
+
+# force replace: delete and then re-create the resource. Will cause a service outage.
+kubectl replace --force -f ./pod.json
+
+# create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000
+kubectl expose rc nginx --port=80 --target-port=8000
+
+# update a single-container pod's image version (tag) to v4
+kubectl get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | kubectl replace -f -
+
+kubectl label pods my-pod new-label=awesome                      # add a label
+kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq       # add an annotation
+kubectl autoscale deployment foo --min=2 --max=10                # auto scale the "foo" deployment
+```
+
+<!-- ## Patching resources -->
+## Patching resources
+
+<!--
+```bash
+# Partially update a node
+kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'
+
+# Update a container's image; spec.containers[*].name is required because it's a merge key
+kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'
+
+# Update a container's image using a json patch with positional arrays
+kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'
+
+# Disable a deployment livenessProbe using a json patch with positional arrays
+kubectl patch deployment valid-deployment  --type json   -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]'
+
+# Add a new element to a positional array
+kubectl patch sa default --type='json' -p='[{"op": "add", "path": "/secrets/1", "value": {"name": "whatever" } }]'
+```
+-->
+```bash
+# partially update a node
+kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'
+
+# update a container's image; spec.containers[*].name is required because it is a merge key
+kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'
+
+# update a container's image using a JSON patch with positional arrays
+kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'
+
+# disable a deployment's livenessProbe using a JSON patch with positional arrays
+kubectl patch deployment valid-deployment  --type json   -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]'
+
+# add a new element to a positional array
+kubectl patch sa default --type='json' -p='[{"op": "add", "path": "/secrets/1", "value": {"name": "whatever" } }]'
+```
+
+<!--
+## Editing resources
+
+Edit any API resource in your preferred editor.
+-->
+## Editing resources
+
+Edit any API resource in your preferred editor.
+
+<!--
+```bash
+kubectl edit svc/docker-registry                      # Edit the service named docker-registry
+KUBE_EDITOR="nano" kubectl edit svc/docker-registry   # Use an alternative editor
+```
+-->
+```bash
+kubectl edit svc/docker-registry                      # edit the service named docker-registry
+KUBE_EDITOR="nano" kubectl edit svc/docker-registry   # use an alternative editor
+```
+
+<!--
+## Scaling resources
+-->
+## Scaling resources
+
+<!-- ```bash
+kubectl scale --replicas=3 rs/foo                                 # Scale a replicaset named 'foo' to 3
+kubectl scale --replicas=3 -f foo.yaml                            # Scale a resource specified in "foo.yaml" to 3
+kubectl scale --current-replicas=2 --replicas=3 deployment/mysql  # If the deployment named mysql's current size is 2, scale mysql to 3
+kubectl scale --replicas=5 rc/foo rc/bar rc/baz                   # Scale multiple replication controllers
+```
+-->
+```bash
+kubectl scale --replicas=3 rs/foo                                 # scale the replicaset named 'foo' to 3 replicas
+kubectl scale --replicas=3 -f foo.yaml                            # scale the resource specified in "foo.yaml" to 3 replicas
+kubectl scale --current-replicas=2 --replicas=3 deployment/mysql  # if the deployment named mysql currently has 2 replicas, scale it to 3
+kubectl scale --replicas=5 rc/foo rc/bar rc/baz                   # scale multiple replication controllers
+```
+
+<!--
+## Deleting resources
+-->
+## Deleting resources
+
+<!-- ```bash
+kubectl delete -f ./pod.json                                              # Delete a pod using the type and name specified in pod.json
+kubectl delete pod,service baz foo                                        # Delete pods and services with same names "baz" and "foo"
+kubectl delete pods,services -l name=myLabel                              # Delete pods and services with label name=myLabel
+kubectl -n my-ns delete pod,svc --all                                      # Delete all pods and services in namespace my-ns,
+# Delete all pods matching the awk pattern1 or pattern2
+kubectl get pods  -n mynamespace --no-headers=true | awk '/pattern1|pattern2/{print $1}' | xargs  kubectl delete -n mynamespace pod
+```
+-->
+```bash
+kubectl delete -f ./pod.json                                              # delete a pod using the type and name specified in pod.json
+kubectl delete pod,service baz foo                                        # delete pods and services with the names "baz" and "foo"
+kubectl delete pods,services -l name=myLabel                              # delete pods and services with label name=myLabel
+kubectl -n my-ns delete pod,svc --all                                     # delete all pods and services in namespace my-ns
+# delete all pods matching the awk pattern1 or pattern2
+kubectl get pods  -n mynamespace --no-headers=true | awk '/pattern1|pattern2/{print $1}' | xargs  kubectl delete -n mynamespace pod
+```
+
+<!--
+## Interacting with running Pods
+-->
+## Interacting with running Pods
+
+<!--
+```bash
+kubectl logs my-pod                                 # dump pod logs (stdout)
+kubectl logs -l name=myLabel                        # dump pod logs, with label name=myLabel (stdout)
+kubectl logs my-pod --previous                      # dump pod logs (stdout) for a previous instantiation of a container
+kubectl logs my-pod -c my-container                 # dump pod container logs (stdout, multi-container case)
+kubectl logs -l name=myLabel -c my-container        # dump pod logs, with label name=myLabel (stdout)
+kubectl logs my-pod -c my-container --previous      # dump pod container logs (stdout, multi-container case) for a previous instantiation of a container
+kubectl logs -f my-pod                              # stream pod logs (stdout)
+kubectl logs -f my-pod -c my-container              # stream pod container logs (stdout, multi-container case)
+kubectl logs -f -l name=myLabel --all-containers    # stream all pods logs with label name=myLabel (stdout)
+kubectl run -i --tty busybox --image=busybox -- sh  # Run pod as interactive shell
+kubectl run nginx --image=nginx -n mynamespace      # Run pod nginx in a specific namespace
+kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml
+                                                    # Run pod nginx and write its spec into a file called pod.yaml
+
+kubectl attach my-pod -i                            # Attach to Running Container
+kubectl port-forward my-pod 5000:6000               # Listen on port 5000 on the local machine and forward to port 6000 on my-pod
+kubectl exec my-pod -- ls /                         # Run command in existing pod (1 container case)
+kubectl exec --stdin --tty my-pod -- /bin/sh        # Interactive shell access to a running pod (1 container case) 
+kubectl exec my-pod -c my-container -- ls /         # Run command in existing pod (multi-container case)
+kubectl top pod POD_NAME --containers               # Show metrics for a given pod and its containers
+kubectl top pod POD_NAME --sort-by=cpu              # Show metrics for a given pod and sort it by 'cpu' or 'memory'
+```
+-->
+```bash
+kubectl logs my-pod                                 # dump pod logs (stdout)
+kubectl logs -l name=myLabel                        # dump logs of pods with label name=myLabel (stdout)
+kubectl logs my-pod --previous                      # dump pod logs (stdout) for a previous instantiation of a container
+kubectl logs my-pod -c my-container                 # dump pod container logs (stdout, multi-container case)
+kubectl logs -l name=myLabel -c my-container        # dump container logs of pods with label name=myLabel (stdout, multi-container case)
+kubectl logs my-pod -c my-container --previous      # dump pod container logs (stdout, multi-container case) for a previous instantiation of a container
+kubectl logs -f my-pod                              # stream pod logs (stdout)
+kubectl logs -f my-pod -c my-container              # stream pod container logs (stdout, multi-container case)
+kubectl logs -f -l name=myLabel --all-containers    # stream logs of all pods with label name=myLabel (stdout)
+kubectl run -i --tty busybox --image=busybox -- sh  # run pod as an interactive shell
+kubectl run nginx --image=nginx -n mynamespace      # run pod nginx in a specific namespace
+kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml
+                                                    # run pod nginx and write its spec into a file called pod.yaml
+
+kubectl attach my-pod -i                            # attach to a running container
+kubectl port-forward my-pod 5000:6000               # listen on port 5000 on the local machine and forward to port 6000 on my-pod
+kubectl exec my-pod -- ls /                         # run a command in an existing pod (single-container case)
+kubectl exec --stdin --tty my-pod -- /bin/sh        # interactive shell access to a running pod (single-container case)
+kubectl exec my-pod -c my-container -- ls /         # run a command in an existing pod (multi-container case)
+kubectl top pod POD_NAME --containers               # show metrics for a given pod and its containers
+kubectl top pod POD_NAME --sort-by=cpu              # show metrics for a given pod and sort them by 'cpu' or 'memory'
+```
+
+<!--
+## Interacting with Deployments and Services
+-->
+## Interacting with Deployments and Services
+
+<!--
+```bash
+kubectl logs deploy/my-deployment                         # dump Pod logs for a Deployment (single-container case)
+kubectl logs deploy/my-deployment -c my-container         # dump Pod logs for a Deployment (multi-container case)
+
+kubectl port-forward svc/my-service 5000                  # listen on local port 5000 and forward to port 5000 on Service backend
+kubectl port-forward svc/my-service 5000:my-service-port  # listen on local port 5000 and forward to Service target port with name <my-service-port>
+
+kubectl port-forward deploy/my-deployment 5000:6000       # listen on local port 5000 and forward to port 6000 on a Pod created by <my-deployment>
+kubectl exec deploy/my-deployment -- ls                   # run command in first Pod and first container in Deployment (single- or multi-container cases)
+```
+-->
+```bash
+kubectl logs deploy/my-deployment                         # dump pod logs for a Deployment (single-container case)
+kubectl logs deploy/my-deployment -c my-container         # dump pod logs for a Deployment (multi-container case)
+
+kubectl port-forward svc/my-service 5000                  # listen on local port 5000 and forward to port 5000 on the Service backend
+kubectl port-forward svc/my-service 5000:my-service-port  # listen on local port 5000 and forward to the Service target port named <my-service-port>
+
+kubectl port-forward deploy/my-deployment 5000:6000       # listen on local port 5000 and forward to port 6000 on a pod created by <my-deployment>
+kubectl exec deploy/my-deployment -- ls                   # run a command in the first pod and first container of the Deployment (single- or multi-container cases)
+```
+
+<!--
+## Interacting with Nodes and Cluster
+-->
+## Interacting with Nodes and the Cluster
+
+<!--
+```bash
+kubectl cordon my-node                                                # Mark my-node as unschedulable
+kubectl drain my-node                                                 # Drain my-node in preparation for maintenance
+kubectl uncordon my-node                                              # Mark my-node as schedulable
+kubectl top node my-node                                              # Show metrics for a given node
+kubectl cluster-info                                                  # Display addresses of the master and services
+kubectl cluster-info dump                                             # Dump current cluster state to stdout
+kubectl cluster-info dump --output-directory=/path/to/cluster-state   # Dump current cluster state to /path/to/cluster-state
+
+# If a taint with that key and effect already exists, its value is replaced as specified.
+kubectl taint nodes foo dedicated=special-user:NoSchedule
+```
+-->
+```bash
+kubectl cordon my-node                                                # mark my-node as unschedulable
+kubectl drain my-node                                                 # drain my-node in preparation for maintenance
+kubectl uncordon my-node                                              # mark my-node as schedulable
+kubectl top node my-node                                              # show metrics for a given node
+kubectl cluster-info                                                  # display addresses of the master and services
+kubectl cluster-info dump                                             # dump the current cluster state to stdout
+kubectl cluster-info dump --output-directory=/path/to/cluster-state   # dump the current cluster state to /path/to/cluster-state
+
+# if a taint with that key and effect already exists, its value is replaced as specified
+kubectl taint nodes foo dedicated=special-user:NoSchedule
+```
+
+<!--
+### Resource types
+-->
+### Resource types
+
+<!--
+List all supported resource types along with their shortnames, [API group](/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning), whether they are [namespaced](/docs/concepts/overview/working-with-objects/namespaces), and [Kind](/docs/concepts/overview/working-with-objects/kubernetes-objects):
+-->
+List all supported resource types along with their shortnames, [API group](/zh/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning), whether they are [namespaced](/zh/docs/concepts/overview/working-with-objects/namespaces), and [Kind](/zh/docs/concepts/overview/working-with-objects/kubernetes-objects):
+
+```bash
+kubectl api-resources
+```
+
+<!--
+Other operations for exploring API resources:
+-->
+Other operations for exploring API resources:
+
+<!--
+```bash
+kubectl api-resources --namespaced=true      # All namespaced resources
+kubectl api-resources --namespaced=false     # All non-namespaced resources
+kubectl api-resources -o name                # All resources with simple output (only the resource name)
+kubectl api-resources -o wide                # All resources with expanded (aka "wide") output
+kubectl api-resources --verbs=list,get       # All resources that support the "list" and "get" request verbs
+kubectl api-resources --api-group=extensions # All resources in the "extensions" API group
+```
+-->
+```bash
+kubectl api-resources --namespaced=true      # all namespaced resources
+kubectl api-resources --namespaced=false     # all non-namespaced resources
+kubectl api-resources -o name                # all resources with simple output (only the resource name)
+kubectl api-resources -o wide                # all resources with expanded (aka "wide") output
+kubectl api-resources --verbs=list,get       # all resources that support the "list" and "get" request verbs
+kubectl api-resources --api-group=extensions # all resources in the "extensions" API group
+```
+
+<!--
+### Formatting output
+
+To output details to your terminal window in a specific format, add the `-o` (or `--output`) flag to a supported `kubectl` command.
+-->
+### Formatting output
+
+To output details to your terminal window in a specific format, add the `-o` (or `--output`) flag to a supported `kubectl` command. An example follows the table below.
+
+<!--
+Output format | Description
+--------------| -----------
+`-o=custom-columns=<spec>` | Print a table using a comma separated list of custom columns
+`-o=custom-columns-file=<filename>` | Print a table using the custom columns template in the `<filename>` file
+`-o=json`     | Output a JSON formatted API object
+`-o=jsonpath=<template>` | Print the fields defined in a [jsonpath](/docs/reference/kubectl/jsonpath) expression
+`-o=jsonpath-file=<filename>` | Print the fields defined by the [jsonpath](/docs/reference/kubectl/jsonpath) expression in the `<filename>` file
+`-o=name`     | Print only the resource name and nothing else
+`-o=wide`     | Output in the plain-text format with any additional information, and for pods, the node name is included
+`-o=yaml`     | Output a YAML formatted API object
+-->
+Output format | Description
+--------------| -----------
+`-o=custom-columns=<spec>` | Print a table using a comma-separated list of custom columns
+`-o=custom-columns-file=<filename>` | Print a table using the custom columns template in the `<filename>` file
+`-o=json`     | Output a JSON formatted API object
+`-o=jsonpath=<template>` | Print the fields defined in a [jsonpath](/zh/docs/reference/kubectl/jsonpath) expression
+`-o=jsonpath-file=<filename>` | Print the fields defined by the [jsonpath](/zh/docs/reference/kubectl/jsonpath) expression in the `<filename>` file
+`-o=name`     | Print only the resource name and nothing else
+`-o=wide`     | Output in the plain-text format with any additional information; for pods, the node name is included
+`-o=yaml`     | Output a YAML formatted API object
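+
+For instance (the pod name `my-pod` is only illustrative), the same object can be rendered in several of these formats:
+
+```bash
+kubectl get pods -o name                              # one pod/<name> line per pod
+kubectl get pod my-pod -o jsonpath='{.status.podIP}'  # a single field selected with jsonpath
+kubectl get pod my-pod -o yaml                        # the full object as YAML
+```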
+
+<!--
+Examples using `-o=custom-columns`:
+
+```bash
+# All images running in a cluster
+kubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image'
+
+# All images running in namespace: default, grouped by Pod
+kubectl get pods --namespace default --output=custom-columns="NAME:.metadata.name,IMAGE:.spec.containers[*].image"
+
+# All images excluding "k8s.gcr.io/coredns:1.6.2"
+kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="k8s.gcr.io/coredns:1.6.2")].image'
+
+# All fields under metadata regardless of name
+kubectl get pods -A -o=custom-columns='DATA:metadata.*'
+```
+
+More examples in the kubectl [reference documentation](/docs/reference/kubectl/overview/#custom-columns).
+-->
+Examples using `-o=custom-columns`:
+
+```bash
+# all images running in the cluster
+kubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image'
+
+# all images running in the default namespace, grouped by pod
+kubectl get pods --namespace default --output=custom-columns="NAME:.metadata.name,IMAGE:.spec.containers[*].image"
+
+# all images excluding "k8s.gcr.io/coredns:1.6.2"
+kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="k8s.gcr.io/coredns:1.6.2")].image'
+
+# all fields under metadata, regardless of name
+kubectl get pods -A -o=custom-columns='DATA:metadata.*'
+```
+
+For more examples, see the kubectl [reference documentation](/zh/docs/reference/kubectl/overview/#custom-columns).
+
+<!--
+### Kubectl output verbosity and debugging
+
+Kubectl verbosity is controlled with the `-v` or `--v` flags followed by an integer representing the log level. General Kubernetes logging conventions and the associated log levels are described [here](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md).
+-->
+### Kubectl output verbosity and debugging
+
+Kubectl verbosity is controlled with the `-v` or `--v` flag followed by an integer representing the log level.
+General Kubernetes logging conventions and the associated log levels are described
+[here](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md). A short usage example follows the table below.
+
+<!--
+Verbosity | Description
+--------------| -----------
+`--v=0` | Generally useful for this to *always* be visible to a cluster operator.
+`--v=1` | A reasonable default log level if you don't want verbosity.
+`--v=2` | Useful steady state information about the service and important log messages that may correlate to significant changes in the system. This is the recommended default log level for most systems.
+`--v=3` | Extended information about changes.
+`--v=4` | Debug level verbosity.
+`--v=5` | Trace level verbosity.
+`--v=6` | Display requested resources.
+`--v=7` | Display HTTP request headers.
+`--v=8` | Display HTTP request contents.
+`--v=9` | Display HTTP request contents without truncation of contents.
+-->
+Verbosity     | Description
+--------------| -----------
+`--v=0` | Generally useful for this to *always* be visible to a cluster operator.
+`--v=1` | A reasonable default log level if you don't want verbosity.
+`--v=2` | Useful steady-state information about the service and important log messages that may correlate to significant changes in the system. This is the recommended default log level for most systems.
+`--v=3` | Extended information about changes.
+`--v=4` | Debug-level verbosity.
+`--v=5` | Trace-level verbosity.
+`--v=6` | Display requested resources.
+`--v=7` | Display HTTP request headers.
+`--v=8` | Display HTTP request contents.
+`--v=9` | Display HTTP request contents without truncation.
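+
+As a quick illustration (any read-only command works the same way), raising the verbosity makes kubectl print the API requests it issues, which helps when debugging access or connectivity problems:
+
+```bash
+# log the request URLs and response codes behind a normal get
+kubectl get pods --v=6
+
+# also log full HTTP request and response contents
+kubectl get pods --v=8
+```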
+
+## {{% heading "whatsnext" %}}
+
+<!--
+
+* Read the [kubectl overview](/docs/reference/kubectl/overview/) and learn about [JsonPath](/docs/reference/kubectl/jsonpath).
+
+* See [kubectl](/docs/reference/kubectl/kubectl/) options.
+
+* Also read [kubectl Usage Conventions](/docs/reference/kubectl/conventions/) to understand how to use kubectl in reusable scripts.
+
+* See more community [kubectl cheatsheets](https://github.com/dennyzhang/cheatsheet-kubernetes-A4).
+-->
+* Read the [kubectl overview](/zh/docs/reference/kubectl/overview/) and learn about [JsonPath](/zh/docs/reference/kubectl/jsonpath).
+* See [kubectl](/zh/docs/reference/kubectl/kubectl/) options.
+* Read the [kubectl usage conventions](/zh/docs/reference/kubectl/conventions/) to understand how to use kubectl in reusable scripts.
+* See more community [kubectl cheat sheets](https://github.com/dennyzhang/cheatsheet-kubernetes-A4).

+ 8 - 0
build/Dockerfile

@@ -0,0 +1,8 @@
+FROM hub.evbj.easou.com/pro/base-alpine:v1.0
+VOLUME /tmp
+ARG JAR_FILE
+COPY ${JAR_FILE} app.jar
+EXPOSE 8080
+#JAVA_OPTS="-server -Xmx2g -Xms2g -Xss512k -Djava.awt.headless=true -Djava.security.egd=file:/dev/./urandom "
+#DEBUG_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,address=8001,suspend=n "
+ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -jar -Dlogging.config=classpath:logback-k8s.xml -Dserver.port=8080 app.jar"]

+ 65 - 0
deploy-tmp-kibana.yaml

@@ -0,0 +1,65 @@
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: kibana-svc
+  namespace: qa
+  labels:
+    app: kibana-svc
+spec:
+  ports:
+    - port: 80
+      name: client
+      targetPort: 5601
+  type: NodePort
+  selector:
+    app: kibana
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: kibana
+  namespace: qa
+  labels:
+    app: kibana
+spec:
+  serviceName: kibana
+  selector:
+    matchLabels:
+      app: kibana
+  template:
+    metadata:
+      labels:
+        app: kibana
+    spec:
+      terminationGracePeriodSeconds: 180
+      containers:
+        - name: kibana
+          image: docker.elastic.co/kibana/kibana:7.6.2
+          resources:
+            limits:
+              cpu: 1000m
+            requests:
+              cpu: 1000m
+          env:
+            - name: ELASTICSEARCH_HOSTS
+              value: http://elasticsearch:9200
+          ports:
+            - containerPort: 5601
+---
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+  name: kibana-ingress
+  namespace: qa
+  annotations:
+    kubernetes.io/ingress.class: nginx
+spec:
+  rules:
+    - host: qa-kibana.ieasou.cn
+      http:
+        paths:
+          - path: /
+            backend:
+              serviceName: kibana-svc
+              servicePort: 80

+ 93 - 0
drone.kibana.yml

@@ -0,0 +1,93 @@
+kind: pipeline
+type: kubernetes
+name: kibana
+
+clone:
+  disable: true
+
+metadata:
+  namespace: drone
+
+steps:
+  - name: clone
+    image: hub.evbj.easou.com/dev/alpine-git:20200622
+    commands:
+      - git clone $DRONE_GIT_HTTP_URL .
+
+  - name: build
+    image: plugins/docker
+    settings:
+      username: admin
+      password: Easou2)1*
+      insecure: true
+      mirror: https://ci7pm4nx.mirror.aliyuncs.com
+      registry: hub.evbj.easou.com
+      repo: hub.evbj.easou.com/qa/kibana
+      tag: v1.0.39
+  #    build_args:
+  #      - JAR_FILE=
+  #      - SERVER_PORT=8080
+
+  - name: deployment
+    image: hub.evbj.easou.com/dev/drone-agent:v2.0.0
+    pull: always
+    # privileged: true
+    environment:
+      JNLP_ENV: qa
+      JNLP_REPLICAS: 1
+      JNLP_TAG: v1.0
+      JNLP_VERSION: v1  # default v1, v2 for canary
+      DEPLOY_ENV: k8s-2 # choose the cluster to deploy the application to: k8s-1 or k8s-2
+      JNLP_REPO: hub.evbj.easou.com
+      JNLP_CONTAINER_PORT: 80
+      JNLP_INGRESS: nginx
+      JNLP_SVC_MODE: http
+      JNLP_STORAGE_CLASS: rbd
+      JNLP_DOMAIN: .ieasou.cn
+      JNLP_CONTROL: StatefulSet
+      JNLP_STORAGE_CAPACITY: 100Gi
+      JNLP_MOUNT_PATH: /data
+      JNLP_LIVENESS_INIT: 30
+      JNLP_LIVENESS_PER: 15
+      JNLP_READINESS_INIT: 30
+      JNLP_READINESS_PER: 15
+      JNLP_INGRESS_PUB: no
+    commands:
+      # switch to the target cluster
+      - kubecm s $DEPLOY_ENV
+      # generate the configuration YAML file
+      - python3 /root/tools.py -b
+      # deploy the service to Kubernetes
+      - python3 /root/tools.py -a
+      # check the deployment status of the service
+      - python3 /root/tools.py -c
+      # add a DNS record and generate the access domain
+      - python3 /root/tools.py -d
+
+  - name: success
+    image: hub.evbj.easou.com/dev/drone-wechat:20200622
+    settings:
+      corpid: ww419ee4063735e1c0
+      corp_secret: zpiRBLETH9eLwIMQ4eJ_r_dcm3BPSGeLHvTcft8Ot-M
+      agent_id: 1000004
+      title: "Pipeline qa/kibana:v1.0 Success"
+      description: "${DRONE_BUILD_LINK} qa/kibana:v1.0 部署完成"
+      msg_url: ${DRONE_BUILD_LINK}
+      btn_txt: "否"
+    when:
+      status:
+        - success
+
+  - name: failure
+    image: hub.evbj.easou.com/dev/drone-wechat:20200622
+    settings:
+      corpid: ww419ee4063735e1c0
+      corp_secret: zpiRBLETH9eLwIMQ4eJ_r_dcm3BPSGeLHvTcft8Ot-M
+      agent_id: 1000004
+      title: "Pipeline qa/kibana:v1.0 Failure"
+      description: "${DRONE_BUILD_LINK} 部署失败,请检查配置!"
+      msg_url: ${DRONE_BUILD_LINK}
+      btn_txt: "否"
+    when:
+      status:
+        - failure

+ 79 - 0
k8s-deploy.sh

@@ -0,0 +1,79 @@
+#!/bin/bash
+app_names=("zookeeper" "kibana" "elasticsearch")
+envs=("qa" "pro")
+
+# current directory
+DEPLOY_HOME=$(
+  cd $(dirname $0)
+  pwd
+)
+
+echo "----------------------------------"
+echo "选择要部署的应用:"
+echo "(0) zookeeper"
+echo "(1) kibana"
+echo "(2) elasticsearch"
+echo "----------------------------------"
+while [ 1 ]; do
+  read input1
+  if ! [[ $input1 =~ ^[0-9]+$ ]]; then
+    echo "Please enter a number between 0 and 2:"
+    continue
+  fi
+
+  if [ $input1 -ge 0 -a $input1 -le 2 ]; then
+    currApp=${app_names[$input1]}
+    break
+  else
+    echo "Please enter a number between 0 and 2:"
+  fi
+done
+
+echo "----------------------------------"
+echo "运行环境:"
+echo "(0) qa"
+echo "(1) pro"
+echo "----------------------------------"
+while [ 1 ]; do
+  read input1
+  if ! [[ $input1 =~ ^[0-9]+$ ]]; then
+    echo "Please enter 0 or 1:"
+    continue
+  fi
+
+  if [ $input1 -ge 0 -a $input1 -le 1 ]; then
+    ENV=${envs[$input1]}
+    break
+  else
+    echo "Please enter 0 or 1:"
+  fi
+done
+
+echo "----------------------------------"
+echo "最终要部署的应用${currApp}"
+echo "环境${ENV}"
+echo "----------------------------------"
+
+cd $DEPLOY_HOME || exit
+rm -rf .tmp && mkdir .tmp && cd .tmp || exit
+
+git config --global user.email "yangxg_yang@easou.cn"
+git config --global user.name "${USER}"
+git config --global credential.helper store
+
+
+git clone http://yangxg:yangtxy1256@gogs.ieasou.cn/yangxg/${currApp}.git ${currApp} && cd ${currApp} || exit
+cp -f ../../deploy-tmp-${currApp}.yaml deploy-tmp.yaml
+cp -f ../../drone.${currApp}.yml .drone.yml
+cp -f ../../Dockerfile-${currApp} Dockerfile
+
+git add deploy-tmp.yaml
+git add .drone.yml
+git add Dockerfile
+echo "git commit"
+git commit --allow-empty -m "deploy $ENV/${currApp}:1.0"
+echo "git push"
+git push origin master
+
+cd $DEPLOY_HOME || exit
+rm -rf .tmp