Kubernetes The Hard Way
- Notes from working through Kubernetes The Hard Way
- A good companion read: Kubernetes The Hard Way Explained
Step 1 Google Cloud SDK
zhanming.cui@ITs-MacBook-Pro:~/Downloads/google-cloud-sdk|
⇒ ./install.sh
Welcome to the Google Cloud SDK!
To help improve the quality of this product, we collect anonymized usage data
and anonymized stacktraces when crashes are encountered; additional information
is available at <https://cloud.google.com/sdk/usage-statistics>. You may choose
to opt out of this collection now (by choosing 'N' at the below prompt), or at
any time in the future by running the following command:
gcloud config set disable_usage_reporting true
Do you want to help improve the Google Cloud SDK (Y/n)? n
Your current Cloud SDK version is: 180.0.1
The latest available version is: 180.0.1
┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Components │
├───────────────┬──────────────────────────────────────────────────────┬──────────────────────────┬───────────┤
│ Status │ Name │ ID │ Size │
├───────────────┼──────────────────────────────────────────────────────┼──────────────────────────┼───────────┤
│ Not Installed │ App Engine Go Extensions │ app-engine-go │ 97.7 MiB │
│ Not Installed │ Cloud Bigtable Command Line Tool │ cbt │ 4.0 MiB │
│ Not Installed │ Cloud Bigtable Emulator │ bigtable │ 3.5 MiB │
│ Not Installed │ Cloud Datalab Command Line Tool │ datalab │ < 1 MiB │
│ Not Installed │ Cloud Datastore Emulator │ cloud-datastore-emulator │ 17.7 MiB │
│ Not Installed │ Cloud Datastore Emulator (Legacy) │ gcd-emulator │ 38.1 MiB │
│ Not Installed │ Cloud Pub/Sub Emulator │ pubsub-emulator │ 33.2 MiB │
│ Not Installed │ Emulator Reverse Proxy │ emulator-reverse-proxy │ 14.5 MiB │
│ Not Installed │ Google Container Local Builder │ container-builder-local │ 3.7 MiB │
│ Not Installed │ Google Container Registry's Docker credential helper │ docker-credential-gcr │ 2.2 MiB │
│ Not Installed │ gcloud Alpha Commands │ alpha │ < 1 MiB │
│ Not Installed │ gcloud Beta Commands │ beta │ < 1 MiB │
│ Not Installed │ gcloud app Java Extensions │ app-engine-java │ 118.4 MiB │
│ Not Installed │ gcloud app PHP Extensions │ app-engine-php │ 21.9 MiB │
│ Not Installed │ gcloud app Python Extensions │ app-engine-python │ 6.2 MiB │
│ Not Installed │ kubectl │ kubectl │ 12.2 MiB │
│ Installed │ BigQuery Command Line Tool │ bq │ < 1 MiB │
│ Installed │ Cloud SDK Core Libraries │ core │ 7.5 MiB │
│ Installed │ Cloud Storage Command Line Tool │ gsutil │ 3.3 MiB │
└───────────────┴──────────────────────────────────────────────────────┴──────────────────────────┴───────────┘
To install or remove components at your current SDK version [180.0.1], run:
$ gcloud components install COMPONENT_ID
$ gcloud components remove COMPONENT_ID
To update your SDK installation to the latest version [180.0.1], run:
$ gcloud components update
==> Source [/Users/zhanming.cui/Downloads/google-cloud-sdk/completion.bash.inc] in your profile to enable shell command completion for gcloud.
==> Source [/Users/zhanming.cui/Downloads/google-cloud-sdk/path.bash.inc] in your profile to add the Google Cloud SDK command line tools to your $PATH.
For more information on how to get started, please visit:
https://cloud.google.com/sdk/docs/quickstarts
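The two `Source [...]` hints above can be made permanent by adding lines like the following to your shell profile. This is a config fragment, not a command to run; the paths are from this particular install, so adjust them to wherever you unpacked the SDK:

```shell
# ~/.bash_profile — put gcloud on $PATH and enable completion
# (guarded with -f so a moved/removed SDK doesn't break shell startup)
if [ -f "$HOME/Downloads/google-cloud-sdk/path.bash.inc" ]; then
  . "$HOME/Downloads/google-cloud-sdk/path.bash.inc"
fi
if [ -f "$HOME/Downloads/google-cloud-sdk/completion.bash.inc" ]; then
  . "$HOME/Downloads/google-cloud-sdk/completion.bash.inc"
fi
```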
Step 2 Local Client Tools - cfssl, cfssljson, and kubectl
CFSSL: Cloudflare’s PKI and TLS toolkit
zhanming.cui@ITs-MacBook-Pro:~|⇒ curl -o cfssl https://pkg.cfssl.org/R1.2/cfssl_darwin-amd64
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 9.8M 100 9.8M 0 0 1544k 0 0:00:06 0:00:06 --:--:-- 2264k
zhanming.cui@ITs-MacBook-Pro:~|⇒ curl -o cfssljson https://pkg.cfssl.org/R1.2/cfssljson_darwin-amd64
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2176k 100 2176k 0 0 844k 0 0:00:02 0:00:02 --:--:-- 844k
zhanming.cui@ITs-MacBook-Pro:~|⇒ chmod +x cfssl cfssljson
zhanming.cui@ITs-MacBook-Pro:~|⇒ sudo mv cfssl cfssljson /usr/local/bin/
Password:
zhanming.cui@ITs-MacBook-Pro:~|⇒ cfssl version
Version: 1.2.0
Revision: dev
Runtime: go1.6
zhanming.cui@ITs-MacBook-Pro:~|⇒ curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/darwin/amd64/kubectl
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 49.4M 100 49.4M 0 0 2722k 0 0:00:18 0:00:18 --:--:-- 2827k
zhanming.cui@ITs-MacBook-Pro:~|⇒ chmod +x kubectl
zhanming.cui@ITs-MacBook-Pro:~|⇒ sudo mv kubectl /usr/local/bin/
zhanming.cui@ITs-MacBook-Pro:~|⇒ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Step 3 Provisioning Compute Resources
A network load balancer (fronted by the static public IP reserved below) will expose the Kubernetes API servers to external remote clients.
zhanming.cui@ITs-MacBook-Pro:~|⇒ gcloud compute networks create csihome-kubernetes-hard-way-vpc-network --mode custom
WARNING: mode is deprecated. Please use subnet-mode instead.
Created [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/global/networks/csihome-kubernetes-hard-way-vpc-network].
NAME SUBNET_MODE BGP_ROUTING_MODE IPV4_RANGE GATEWAY_IPV4
csihome-kubernetes-hard-way-vpc-network CUSTOM REGIONAL
Instances on this network will not be reachable until firewall rules
are created. As an example, you can allow all internal traffic between
instances as well as SSH, RDP, and ICMP by running:
$ gcloud compute firewall-rules create <FIREWALL_NAME> --network csihome-kubernetes-hard-way-vpc-network --allow tcp,udp,icmp --source-ranges <IP_RANGE>
$ gcloud compute firewall-rules create <FIREWALL_NAME> --network csihome-kubernetes-hard-way-vpc-network --allow tcp:22,tcp:3389,icmp
zhanming.cui@ITs-MacBook-Pro:~|⇒ gcloud compute networks subnets create csihome-kubernetes-hard-way-vpc-subnet \
--network csihome-kubernetes-hard-way-vpc-network \
--range 10.240.0.0/24
Created [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/regions/us-west1/subnetworks/csihome-kubernetes-hard-way-vpc-subnet].
NAME REGION NETWORK RANGE
csihome-kubernetes-hard-way-vpc-subnet us-west1 csihome-kubernetes-hard-way-vpc-network 10.240.0.0/24
zhanming.cui@ITs-MacBook-Pro:~|⇒ gcloud compute firewall-rules create csihome-kubernetes-hard-way-allow-internal \
--allow tcp,udp,icmp \
--network csihome-kubernetes-hard-way-vpc-network \
--source-ranges 10.240.0.0/24,10.200.0.0/16
Creating firewall...\Created [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/global/firewalls/csihome-kubernetes-hard-way-allow-internal].
Creating firewall...done.
NAME NETWORK DIRECTION PRIORITY ALLOW DENY
csihome-kubernetes-hard-way-allow-internal csihome-kubernetes-hard-way-vpc-network INGRESS 1000 tcp,udp,icmp
zhanming.cui@ITs-MacBook-Pro:~|⇒ gcloud compute firewall-rules create csihome-kubernetes-hard-way-allow-external \
--allow tcp:22,tcp:6443,icmp \
--network csihome-kubernetes-hard-way-vpc-network \
--source-ranges 0.0.0.0/0
Creating firewall...|Created [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/global/firewalls/csihome-kubernetes-hard-way-allow-external].
Creating firewall...done.
NAME NETWORK DIRECTION PRIORITY ALLOW DENY
csihome-kubernetes-hard-way-allow-external csihome-kubernetes-hard-way-vpc-network INGRESS 1000 tcp:22,tcp:6443,icmp
zhanming.cui@ITs-MacBook-Pro:~|⇒ gcloud compute firewall-rules list
NAME NETWORK DIRECTION PRIORITY ALLOW DENY
csihome-kubernetes-hard-way-allow-external csihome-kubernetes-hard-way-vpc-network INGRESS 1000 tcp:22,tcp:6443,icmp
csihome-kubernetes-hard-way-allow-internal csihome-kubernetes-hard-way-vpc-network INGRESS 1000 tcp,udp,icmp
default-allow-icmp default INGRESS 65534 icmp
default-allow-internal default INGRESS 65534 tcp:0-65535,udp:0-65535,icmp
default-allow-rdp default INGRESS 65534 tcp:3389
default-allow-ssh default INGRESS 65534 tcp:22
To show all fields of the firewall, please show in JSON format: --format=json
To show all fields in table format, please see the examples in --help.
zhanming.cui@ITs-MacBook-Pro:~|⇒ gcloud compute firewall-rules list --filter "network: csihome-kubernetes-hard-way-vpc-network"
NAME NETWORK DIRECTION PRIORITY ALLOW DENY
csihome-kubernetes-hard-way-allow-external csihome-kubernetes-hard-way-vpc-network INGRESS 1000 tcp:22,tcp:6443,icmp
csihome-kubernetes-hard-way-allow-internal csihome-kubernetes-hard-way-vpc-network INGRESS 1000 tcp,udp,icmp
zhanming.cui@ITs-MacBook-Pro:~|⇒ gcloud compute addresses create csihome-kubernetes-hard-way-public-ip --region $(gcloud config get-value compute/region)
Created [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/regions/us-west1/addresses/csihome-kubernetes-hard-way-public-ip].
zhanming.cui@ITs-MacBook-Pro:~|⇒ gcloud compute addresses list --filter="name=('csihome-kubernetes-hard-way-public-ip')"
NAME REGION ADDRESS STATUS
csihome-kubernetes-hard-way-public-ip us-west1 104.198.96.15 RESERVED
zhanming.cui@ITs-MacBook-Pro:~|⇒
for i in 0 1 2; do
gcloud compute instances create controller-${i} \
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-1604-lts \
--image-project ubuntu-os-cloud \
--machine-type n1-standard-1 \
--private-network-ip 10.240.0.1${i} \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--subnet csihome-kubernetes-hard-way-vpc-subnet \
--tags csihome-kubernetes-hard-way,controller
done
Instance creation in progress for [controller-0]: https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/zones/us-west1-c/operations/operation-1512474236262-55f9659959670-b700e8be-71680c21
Use [gcloud compute operations describe URI] command to check the status of the operation(s).
Instance creation in progress for [controller-1]: https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/zones/us-west1-c/operations/operation-1512474238297-55f9659b4a3a9-5163af4b-83322e15
Use [gcloud compute operations describe URI] command to check the status of the operation(s).
Instance creation in progress for [controller-2]: https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/zones/us-west1-c/operations/operation-1512474240592-55f9659d7a881-749cd376-2e7fd332
Use [gcloud compute operations describe URI] command to check the status of the operation(s).
zhanming.cui@ITs-MacBook-Pro:~|⇒
for i in 0 1 2; do
gcloud compute instances create worker-${i} \
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-1604-lts \
--image-project ubuntu-os-cloud \
--machine-type n1-standard-1 \
--metadata pod-cidr=10.200.${i}.0/24 \
--private-network-ip 10.240.0.2${i} \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--subnet csihome-kubernetes-hard-way-vpc-subnet \
--tags csihome-kubernetes-hard-way,worker
done
Instance creation in progress for [worker-0]: https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/zones/us-west1-c/operations/operation-1512474368260-55f966173b7a1-e407a310-34c39758
Use [gcloud compute operations describe URI] command to check the status of the operation(s).
Instance creation in progress for [worker-1]: https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/zones/us-west1-c/operations/operation-1512474370308-55f966192f7a0-72e0b6b0-e1d0ebf0
Use [gcloud compute operations describe URI] command to check the status of the operation(s).
Instance creation in progress for [worker-2]: https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/zones/us-west1-c/operations/operation-1512474372610-55f9661b617d1-02fce5d0-de297aed
Use [gcloud compute operations describe URI] command to check the status of the operation(s).
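Each worker's pod-cidr metadata carves a /24 out of 10.200.0.0/16, which is exactly the second source range the allow-internal firewall rule admits. The ranges the creation loop assigns expand to:

```shell
# Expand the per-worker pod CIDRs exactly as the worker creation loop does
for i in 0 1 2; do
  echo "worker-${i}: pod-cidr=10.200.${i}.0/24"
done
```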
Step 4 Provisioning a CA and Generating TLS Certificates
zhanming.cui@ITs-MacBook-Pro:~|⇒ cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "8760h"
}
}
}
}
EOF
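A heredoc typo (an unbalanced brace, say) produces invalid JSON that cfssl only rejects later with a confusing error, so it is cheap to validate the config as soon as it is written. A sketch, using a temp copy so it is self-contained:

```shell
# Write the signing config to a temp file and confirm it parses as JSON
tmp=$(mktemp)
cat > "$tmp" <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF
python3 -m json.tool "$tmp" > /dev/null && echo "ca-config.json: valid JSON"
rm -f "$tmp"
```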
zhanming.cui@ITs-MacBook-Pro:~|⇒ cat > ca-csr.json <<EOF
{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Ireland",
"O": "Kubernetes",
"OU": "CA",
"ST": "Oregon"
}
]
}
EOF
zhanming.cui@ITs-MacBook-Pro:~|⇒ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2017/12/05 12:58:05 [INFO] generating a new CA key and certificate from CSR
2017/12/05 12:58:05 [INFO] generate received request
2017/12/05 12:58:05 [INFO] received CSR
2017/12/05 12:58:05 [INFO] generating key: rsa-2048
2017/12/05 12:58:05 [INFO] encoded CSR
2017/12/05 12:58:05 [INFO] signed certificate with serial number 473454518536420911696746876058919207616070185911
zhanming.cui@ITs-MacBook-Pro:~|⇒ ll ca*
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 232B 5 Dec 11:52 ca-config.json
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 210B 5 Dec 12:57 ca-csr.json
-rw------- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.6K 5 Dec 12:58 ca-key.pem
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.0K 5 Dec 12:58 ca.csr
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.3K 5 Dec 12:58 ca.pem
zhanming.cui@ITs-MacBook-Pro:~|⇒ cat > admin-csr.json <<EOF
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Ireland",
"O": "system:masters",
"OU": "CSiHome Kubernetes Hard Way",
"ST": "Oregon"
}
]
}
EOF
zhanming.cui@ITs-MacBook-Pro:~|⇒ cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
2017/12/05 13:01:48 [INFO] generate received request
2017/12/05 13:01:48 [INFO] received CSR
2017/12/05 13:01:48 [INFO] generating key: rsa-2048
2017/12/05 13:01:48 [INFO] encoded CSR
2017/12/05 13:01:48 [INFO] signed certificate with serial number 382419483808196200761151533934940841005303259361
2017/12/05 13:01:48 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
zhanming.cui@ITs-MacBook-Pro:~|⇒ ll admin*
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 234B 5 Dec 13:01 admin-csr.json
-rw------- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.6K 5 Dec 13:01 admin-key.pem
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.0K 5 Dec 13:01 admin.csr
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.4K 5 Dec 13:01 admin.pem
zhanming.cui@ITs-MacBook-Pro:~|⇒ for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
{
"CN": "system:node:${instance}",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Ireland",
"O": "system:nodes",
"OU": "CSiHome Kubernetes Hard Way",
"ST": "Oregon"
}
]
}
EOF
EXTERNAL_IP=$(gcloud compute instances describe ${instance} --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
INTERNAL_IP=$(gcloud compute instances describe ${instance} --format 'value(networkInterfaces[0].networkIP)')
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \
-profile=kubernetes \
${instance}-csr.json | cfssljson -bare ${instance}
done
2017/12/05 13:05:04 [INFO] generate received request
2017/12/05 13:05:04 [INFO] received CSR
2017/12/05 13:05:04 [INFO] generating key: rsa-2048
2017/12/05 13:05:04 [INFO] encoded CSR
2017/12/05 13:05:04 [INFO] signed certificate with serial number 701134318840789870925870541482261884496709965056
2017/12/05 13:05:06 [INFO] generate received request
2017/12/05 13:05:06 [INFO] received CSR
2017/12/05 13:05:06 [INFO] generating key: rsa-2048
2017/12/05 13:05:06 [INFO] encoded CSR
2017/12/05 13:05:06 [INFO] signed certificate with serial number 328127926199457124178739188180484059607871046106
2017/12/05 13:05:08 [INFO] generate received request
2017/12/05 13:05:08 [INFO] received CSR
2017/12/05 13:05:08 [INFO] generating key: rsa-2048
2017/12/05 13:05:08 [INFO] encoded CSR
2017/12/05 13:05:08 [INFO] signed certificate with serial number 165534030365814208958761022058453229783467903986
zhanming.cui@ITs-MacBook-Pro:~|⇒ ll worker*
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 247B 5 Dec 13:05 worker-0-csr.json
-rw------- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.6K 5 Dec 13:05 worker-0-key.pem
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.0K 5 Dec 13:05 worker-0.csr
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.5K 5 Dec 13:05 worker-0.pem
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 247B 5 Dec 13:05 worker-1-csr.json
-rw------- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.6K 5 Dec 13:05 worker-1-key.pem
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.0K 5 Dec 13:05 worker-1.csr
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.5K 5 Dec 13:05 worker-1.pem
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 247B 5 Dec 13:05 worker-2-csr.json
-rw------- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.6K 5 Dec 13:05 worker-2-key.pem
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.0K 5 Dec 13:05 worker-2.csr
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.5K 5 Dec 13:05 worker-2.pem
zhanming.cui@ITs-MacBook-Pro:~|⇒ cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Ireland",
"O": "system:node-proxier",
"OU": "CSiHome Kubernetes Hard Way",
"ST": "Oregon"
}
]
}
EOF
zhanming.cui@ITs-MacBook-Pro:~|⇒ cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
2017/12/05 13:07:07 [INFO] generate received request
2017/12/05 13:07:07 [INFO] received CSR
2017/12/05 13:07:07 [INFO] generating key: rsa-2048
2017/12/05 13:07:08 [INFO] encoded CSR
2017/12/05 13:07:08 [INFO] signed certificate with serial number 655208721038723377281714771575570498196232365668
2017/12/05 13:07:08 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
zhanming.cui@ITs-MacBook-Pro:~|⇒ ll kube-proxy*
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 251B 5 Dec 13:06 kube-proxy-csr.json
-rw------- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.6K 5 Dec 13:07 kube-proxy-key.pem
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.0K 5 Dec 13:07 kube-proxy.csr
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.4K 5 Dec 13:07 kube-proxy.pem
zhanming.cui@ITs-MacBook-Pro:~|⇒ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe csihome-kubernetes-hard-way-public-ip \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
zhanming.cui@ITs-MacBook-Pro:~|⇒ echo $KUBERNETES_PUBLIC_ADDRESS
104.198.96.15
zhanming.cui@ITs-MacBook-Pro:~|⇒ cat > kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Ireland",
"O": "Kubernetes",
"OU": "CSiHome Kubernetes Hard Way",
"ST": "Oregon"
}
]
}
EOF
zhanming.cui@ITs-MacBook-Pro:~|⇒ cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,kubernetes.default \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
2017/12/05 13:10:47 [INFO] generate received request
2017/12/05 13:10:47 [INFO] received CSR
2017/12/05 13:10:47 [INFO] generating key: rsa-2048
2017/12/05 13:10:47 [INFO] encoded CSR
2017/12/05 13:10:47 [INFO] signed certificate with serial number 378149102979566650655720981774879335903725932334
zhanming.cui@ITs-MacBook-Pro:~|⇒ ll kubernetes*
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 235B 5 Dec 13:10 kubernetes-csr.json
-rw------- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.6K 5 Dec 13:10 kubernetes-key.pem
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.0K 5 Dec 13:10 kubernetes.csr
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.5K 5 Dec 13:10 kubernetes.pem
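The -hostname list above bakes every name the API server may be reached by into the certificate: the three controllers' private IPs, the reserved public address, 127.0.0.1, kubernetes.default for in-cluster clients, and 10.32.0.1. That last address is assumed here to be the first host of the tutorial's 10.32.0.0/24 service CIDR, which the built-in kubernetes service later claims:

```shell
# First usable host of the assumed service CIDR -> the kubernetes service ClusterIP
python3 - <<'PY'
import ipaddress
service_cidr = ipaddress.ip_network("10.32.0.0/24")  # assumption: tutorial's service range
print(next(service_cidr.hosts()))                    # -> 10.32.0.1
PY
```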
zhanming.cui@ITs-MacBook-Pro:~|⇒ for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done
WARNING: The public SSH key file for gcloud does not exist.
WARNING: The private SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
WARNING: SSH keygen will be executed to generate a key.
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /Users/zhanming.cui/.ssh/google_compute_engine.
Your public key has been saved in /Users/zhanming.cui/.ssh/google_compute_engine.pub.
The key fingerprint is:
SHA256:+KxAq/OMtQM0F/SVSnOceYyCKIf8ZCNAezs+34Z33JU zhanming.cui@ITs-MacBook-Pro
The key's randomart image is:
+---[RSA 2048]----+
|=o o.. ..* |
|oo++o.+.B o |
| +=..o.= . |
| +.o .. |
| . =. . S . |
| o... o E |
| ++ . + . . |
| .=++.+ o . |
| oo+o+.. |
+----[SHA256]-----+
Updating project ssh metadata...|Updated [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way].
Updating project ssh metadata...done.
Waiting for SSH key to propagate.
Warning: Permanently added 'compute.2993069986757163503' (ECDSA) to the list of known hosts.
ca.pem 100% 1363 1.3KB/s 00:00
worker-0-key.pem 100% 1675 1.6KB/s 00:00
worker-0.pem 100% 1497 1.5KB/s 00:00
Warning: Permanently added 'compute.751477201505565165' (ECDSA) to the list of known hosts.
ca.pem 100% 1363 1.3KB/s 00:00
worker-1-key.pem 100% 1679 1.6KB/s 00:00
worker-1.pem 100% 1497 1.5KB/s 00:00
Warning: Permanently added 'compute.7142377151995263466' (ECDSA) to the list of known hosts.
ca.pem 100% 1363 1.3KB/s 00:00
worker-2-key.pem 100% 1679 1.6KB/s 00:00
worker-2.pem
zhanming.cui@ITs-MacBook-Pro:~|⇒ for instance in controller-0 controller-1 controller-2; do
gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem ${instance}:~/
done
Warning: Permanently added 'compute.9084268665520208019' (ECDSA) to the list of known hosts.
ca.pem 100% 1363 1.3KB/s 00:00
ca-key.pem 100% 1675 1.6KB/s 00:00
kubernetes-key.pem 100% 1675 1.6KB/s 00:00
kubernetes.pem 100% 1525 1.5KB/s 00:00
Warning: Permanently added 'compute.4914927260257695889' (ECDSA) to the list of known hosts.
ca.pem 100% 1363 1.3KB/s 00:00
ca-key.pem 100% 1675 1.6KB/s 00:00
kubernetes-key.pem 100% 1675 1.6KB/s 00:00
kubernetes.pem 100% 1525 1.5KB/s 00:00
Warning: Permanently added 'compute.7410645559537252463' (ECDSA) to the list of known hosts.
ca.pem 100% 1363 1.3KB/s 00:00
ca-key.pem 100% 1675 1.6KB/s 00:00
kubernetes-key.pem 100% 1675 1.6KB/s 00:00
kubernetes.pem
Step 5 Generating Kubernetes Configuration Files for Authentication
zhanming.cui@ITs-MacBook-Pro:~|⇒ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe csihome-kubernetes-hard-way-public-ip \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
zhanming.cui@ITs-MacBook-Pro:~|⇒ for instance in worker-0 worker-1 worker-2; do
kubectl config set-cluster csihome-kubernetes-hard-way-worker-cluster \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=${instance}.kubeconfig
kubectl config set-credentials system:node:${instance} \
--client-certificate=${instance}.pem \
--client-key=${instance}-key.pem \
--embed-certs=true \
--kubeconfig=${instance}.kubeconfig
kubectl config set-context default \
--cluster=csihome-kubernetes-hard-way-worker-cluster \
--user=system:node:${instance} \
--kubeconfig=${instance}.kubeconfig
kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done
Cluster "csihome-kubernetes-hard-way-worker-cluster" set.
User "system:node:worker-0" set.
Context "default" created.
Switched to context "default".
Cluster "csihome-kubernetes-hard-way-worker-cluster" set.
User "system:node:worker-1" set.
Context "default" created.
Switched to context "default".
Cluster "csihome-kubernetes-hard-way-worker-cluster" set.
User "system:node:worker-2" set.
Context "default" created.
Switched to context "default".
zhanming.cui@ITs-MacBook-Pro:~|⇒ ll worker*.kubeconfig
-rw------- 1 zhanming.cui SYNCHRONOSS\Domain Users 6.3K 5 Dec 13:22 worker-0.kubeconfig
-rw------- 1 zhanming.cui SYNCHRONOSS\Domain Users 6.3K 5 Dec 13:22 worker-1.kubeconfig
-rw------- 1 zhanming.cui SYNCHRONOSS\Domain Users 6.3K 5 Dec 13:22 worker-2.kubeconfig
zhanming.cui@ITs-MacBook-Pro:~|⇒ kubectl config set-cluster csihome-kubernetes-hard-way-worker-cluster \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=kube-proxy.kubeconfig
zhanming.cui@ITs-MacBook-Pro:~|⇒ kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
zhanming.cui@ITs-MacBook-Pro:~|⇒ kubectl config set-context default \
--cluster=csihome-kubernetes-hard-way-worker-cluster \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
zhanming.cui@ITs-MacBook-Pro:~|⇒ kubectl config use-context default \
--kubeconfig=kube-proxy.kubeconfig
Cluster "csihome-kubernetes-hard-way-worker-cluster" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
zhanming.cui@ITs-MacBook-Pro:~|⇒ ll *proxy*
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 251B 5 Dec 13:06 kube-proxy-csr.json
-rw------- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.6K 5 Dec 13:07 kube-proxy-key.pem
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.0K 5 Dec 13:07 kube-proxy.csr
-rw------- 1 zhanming.cui SYNCHRONOSS\Domain Users 6.3K 5 Dec 13:25 kube-proxy.kubeconfig
-rw-r--r-- 1 zhanming.cui SYNCHRONOSS\Domain Users 1.4K 5 Dec 13:07 kube-proxy.pem
zhanming.cui@ITs-MacBook-Pro:~|⇒ for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done
worker-0.kubeconfig 100% 6488 6.3KB/s 00:00
kube-proxy.kubeconfig 100% 6420 6.3KB/s 00:00
worker-1.kubeconfig 100% 6492 6.3KB/s 00:00
kube-proxy.kubeconfig 100% 6420 6.3KB/s 00:00
worker-2.kubeconfig 100% 6492 6.3KB/s 00:00
kube-proxy.kubeconfig 100% 6420 6.3KB/s 00:00
Step 6 Generating the Data Encryption Config and Key
zhanming.cui@ITs-MacBook-Pro:~|⇒ ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
zhanming.cui@ITs-MacBook-Pro:~|⇒ cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: ${ENCRYPTION_KEY}
- identity: {}
EOF
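The tutorial's key is 32 random bytes (AES-256), and base64 of 32 bytes always comes out at 44 characters, which makes for a quick sanity check before shipping the file to the controllers:

```shell
# 32 random bytes, base64-encoded: must be exactly 44 characters
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64 | tr -d '\n')
[ "${#ENCRYPTION_KEY}" -eq 44 ] && echo "key length OK"
```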
zhanming.cui@ITs-MacBook-Pro:~|⇒ for instance in controller-0 controller-1 controller-2; do
gcloud compute scp encryption-config.yaml ${instance}:~/
done
Step 7 Bootstrapping the etcd Cluster
zhanming.cui@ITs-MacBook-Pro:~|⇒ gcloud compute ssh controller-0
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-40-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
0 packages can be updated.
0 updates are security updates.
_____________________________________________________________________
WARNING! Your environment specifies an invalid locale.
The unknown environment variables are:
LC_CTYPE=en_IE.UTF-8 LC_ALL=
This can affect your user experience significantly, including the
ability to manage packages. You may install the locales by running:
sudo apt-get install language-pack-en
or
sudo locale-gen en_IE.UTF-8
To see all available language packs, run:
apt-cache search "^language-pack-[a-z][a-z]$"
To disable this message for all users, run:
sudo touch /var/lib/cloud/instance/locale-check.skip
_____________________________________________________________________
zhanming.cui@controller-0:~$ wget -q --show-progress --https-only --timestamping "https://github.com/coreos/etcd/releases/download/v3.2.8/etcd-v3.2.8-linux-amd64.tar.gz"
etcd-v3.2.8-linux-amd64.tar.gz 100%[=========================================================================================================================================>] 9.71M 2.83MB/s in 3.4s
zhanming.cui@controller-0:~$ tar -xvf etcd-v3.2.8-linux-amd64.tar.gz
etcd-v3.2.8-linux-amd64/
etcd-v3.2.8-linux-amd64/Documentation/
etcd-v3.2.8-linux-amd64/Documentation/tuning.md
etcd-v3.2.8-linux-amd64/Documentation/rfc/
etcd-v3.2.8-linux-amd64/Documentation/rfc/v3api.md
etcd-v3.2.8-linux-amd64/Documentation/dl_build.md
etcd-v3.2.8-linux-amd64/Documentation/metrics.md
etcd-v3.2.8-linux-amd64/Documentation/v2/
etcd-v3.2.8-linux-amd64/Documentation/v2/runtime-configuration.md
etcd-v3.2.8-linux-amd64/Documentation/v2/admin_guide.md
etcd-v3.2.8-linux-amd64/Documentation/v2/tuning.md
etcd-v3.2.8-linux-amd64/Documentation/v2/glossary.md
etcd-v3.2.8-linux-amd64/Documentation/v2/rfc/
etcd-v3.2.8-linux-amd64/Documentation/v2/rfc/v3api.md
etcd-v3.2.8-linux-amd64/Documentation/v2/discovery_protocol.md
etcd-v3.2.8-linux-amd64/Documentation/v2/errorcode.md
etcd-v3.2.8-linux-amd64/Documentation/v2/metrics.md
etcd-v3.2.8-linux-amd64/Documentation/v2/security.md
etcd-v3.2.8-linux-amd64/Documentation/v2/configuration.md
etcd-v3.2.8-linux-amd64/Documentation/v2/docker_guide.md
etcd-v3.2.8-linux-amd64/Documentation/v2/dev/
etcd-v3.2.8-linux-amd64/Documentation/v2/dev/release.md
etcd-v3.2.8-linux-amd64/Documentation/v2/members_api.md
etcd-v3.2.8-linux-amd64/Documentation/v2/auth_api.md
etcd-v3.2.8-linux-amd64/Documentation/v2/backward_compatibility.md
etcd-v3.2.8-linux-amd64/Documentation/v2/etcd_alert.rules
etcd-v3.2.8-linux-amd64/Documentation/v2/platforms/
... (etcd-v3.2.8-linux-amd64/Documentation/* listing omitted) ...
etcd-v3.2.8-linux-amd64/README-etcdctl.md
etcd-v3.2.8-linux-amd64/etcdctl
etcd-v3.2.8-linux-amd64/etcd
etcd-v3.2.8-linux-amd64/README.md
etcd-v3.2.8-linux-amd64/READMEv2-etcdctl.md
zhanming.cui@controller-0:~$ sudo mv etcd-v3.2.8-linux-amd64/etcd* /usr/local/bin/
zhanming.cui@controller-0:~$ sudo mkdir -p /etc/etcd /var/lib/etcd
zhanming.cui@controller-0:~$ sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
zhanming.cui@controller-0:~$ INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
zhanming.cui@controller-0:~$ echo $INTERNAL_IP
10.240.0.10
zhanming.cui@controller-0:~$ ETCD_NAME=$(hostname -s)
zhanming.cui@controller-0:~$ echo $ETCD_NAME
controller-0
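These two values are what make the unit file below a reusable template: each controller renders it with its own hostname and its own metadata-server IP. Under the instance naming used in this walkthrough, the mapping the `--initial-cluster` flag assumes can be sketched locally (a sketch only; the names and octets come from the earlier provisioning steps):

```shell
# Sketch: the name -> IP mapping assumed by --initial-cluster below
# (controller-N lives at 10.240.0.1N in this walkthrough's VPC).
for i in 0 1 2; do
  echo "controller-${i} https://10.240.0.1${i}:2380"
done
```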
zhanming.cui@controller-0:~$ cat > etcd.service <<EOF
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\
--peer-cert-file=/etc/etcd/kubernetes.pem \\
--peer-key-file=/etc/etcd/kubernetes-key.pem \\
--trusted-ca-file=/etc/etcd/ca.pem \\
--peer-trusted-ca-file=/etc/etcd/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379,http://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
zhanming.cui@controller-0:~$ sudo mv etcd.service /etc/systemd/system
zhanming.cui@controller-0:~$ sudo systemctl daemon-reload
zhanming.cui@controller-0:~$ sudo systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
zhanming.cui@controller-0:~$ sudo systemctl start etcd
zhanming.cui@controller-2:~$ ETCDCTL_API=3 etcdctl member list
3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379
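Note that no client certificates were passed to `etcdctl` here: the unit file also listens on `http://127.0.0.1:2379`, so local queries bypass TLS. To script the check that all three members reach `started`, the output above can be parsed field-by-field (sample inlined from the transcript; on a live node pipe `ETCDCTL_API=3 etcdctl member list` into the same awk):

```shell
# Count members whose second comma-separated field is "started".
# The sample mirrors the member list captured above.
members='3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379'
started=$(printf '%s\n' "$members" | awk -F', ' '$2 == "started" {n++} END {print n+0}')
echo "started members: ${started}"   # expect 3 for a healthy cluster
```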
8. Bootstrapping the Kubernetes Control Plane
zhanming.cui@ITs-MacBook-Pro:~|⇒ gcloud compute ssh controller-0
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-40-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
0 packages can be updated.
0 updates are security updates.
Last login: Tue Dec 5 13:58:47 2017 from 87.198.172.217
zhanming.cui@controller-0:~$ wget -q --show-progress --https-only --timestamping \
> "https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kube-apiserver" \
> "https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kube-controller-manager" \
> "https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kube-scheduler" \
> "https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubectl"
kube-apiserver 100%[=========================================================================================================================================>] 183.97M 234MB/s in 0.8s
kube-controller-manager 100%[=========================================================================================================================================>] 122.15M 252MB/s in 0.5s
kube-scheduler 100%[=========================================================================================================================================>] 51.26M 156MB/s in 0.3s
kubectl 100%[=========================================================================================================================================>] 49.84M 218MB/s in 0.2s
zhanming.cui@controller-0:~$ chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
zhanming.cui@controller-0:~$ sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
zhanming.cui@controller-0:~$ sudo mkdir -p /var/lib/kubernetes/
zhanming.cui@controller-0:~$ sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem encryption-config.yaml /var/lib/kubernetes/
zhanming.cui@controller-0:~$ INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
> http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
zhanming.cui@controller-0:~$ cat > kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--admission-control=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--enable-swagger-ui=true \\
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
--etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
--event-ttl=1h \\
--experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--insecure-bind-address=127.0.0.1 \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--kubelet-https=true \\
--runtime-config=api/all \\
--service-account-key-file=/var/lib/kubernetes/ca-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-ca-file=/var/lib/kubernetes/ca.pem \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
zhanming.cui@controller-0:~$ cat > kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--address=0.0.0.0 \\
--cluster-cidr=10.200.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--leader-elect=true \\
--master=http://127.0.0.1:8080 \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/ca-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
zhanming.cui@controller-0:~$ cat > kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--leader-elect=true \\
--master=http://127.0.0.1:8080 \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
zhanming.cui@controller-0:~$ sudo mv kube-apiserver.service kube-scheduler.service kube-controller-manager.service /etc/systemd/system/
zhanming.cui@controller-0:~$ sudo systemctl daemon-reload
zhanming.cui@controller-0:~$ sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /etc/systemd/system/kube-apiserver.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /etc/systemd/system/kube-controller-manager.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /etc/systemd/system/kube-scheduler.service.
zhanming.cui@controller-0:~$ sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
zhanming.cui@controller-0:~$ kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
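For scripting, the same check can fail fast when any component is unhealthy; a sketch that parses output shaped like the above (sample inlined; live use would pipe `kubectl get componentstatuses --no-headers` into the awk):

```shell
# Exits the happy path only when every component row reports Healthy.
status='controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}'
unhealthy=$(printf '%s\n' "$status" | awk '$2 != "Healthy" {n++} END {print n+0}')
[ "$unhealthy" -eq 0 ] && echo "all components healthy"
```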
zhanming.cui@ITs-MacBook-Pro:~|⇒ gcloud compute ssh controller-0
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-40-generic x86_64)
Last login: Tue Dec 5 14:25:19 2017 from 87.198.172.217
zhanming.cui@controller-0:~$ cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
- ""
resources:
- nodes/proxy
- nodes/stats
- nodes/log
- nodes/spec
- nodes/metrics
verbs:
- "*"
EOF
clusterrole "system:kube-apiserver-to-kubelet" created
zhanming.cui@controller-0:~$ cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: system:kube-apiserver
namespace: ""
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: kubernetes
EOF
clusterrolebinding "system:kube-apiserver" created
zhanming.cui@ITs-MacBook-Pro:~|⇒ gcloud compute target-pools create csihome-kubernetes-hard-way-lb-pool
Created [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/regions/us-west1/targetPools/csihome-kubernetes-hard-way-lb-pool].
NAME                                  REGION    SESSION_AFFINITY   BACKUP   HEALTH_CHECKS
csihome-kubernetes-hard-way-lb-pool   us-west1  NONE
zhanming.cui@ITs-MacBook-Pro:~|⇒ gcloud compute target-pools add-instances csihome-kubernetes-hard-way-lb-pool --instances controller-0,controller-1,controller-2
Updated [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/regions/us-west1/targetPools/csihome-kubernetes-hard-way-lb-pool].
zhanming.cui@ITs-MacBook-Pro:~|⇒ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe csihome-kubernetes-hard-way-public-ip \
--region $(gcloud config get-value compute/region) \
--format 'value(name)')
zhanming.cui@ITs-MacBook-Pro:~|⇒ echo $KUBERNETES_PUBLIC_ADDRESS
csihome-kubernetes-hard-way-public-ip
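The echo above prints the reserved address's resource *name*, not an IP, because the describe command asked for `--format 'value(name)'`. gcloud resolves the name when creating the forwarding rule, but the `curl` verification (and the kubectl config in step 10, where the same variable echoes `104.198.96.15`) needs the literal IP, i.e. `--format 'value(address)'`. The difference, mocked below without gcloud using the two values seen in this log (`describe` is a hypothetical stand-in for the real call):

```shell
# Corrected command (assumes the reserved address exists):
#   KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe \
#     csihome-kubernetes-hard-way-public-ip \
#     --region $(gcloud config get-value compute/region) \
#     --format 'value(address)')
#
# Mock of the two projections, with the values observed in this walkthrough:
describe() {
  case "$1" in
    name)    echo "csihome-kubernetes-hard-way-public-ip" ;;  # value(name)
    address) echo "104.198.96.15" ;;                          # value(address)
  esac
}
echo "$(describe name)"      # resource name: not usable in a URL
echo "$(describe address)"   # literal IP: what curl/kubectl need
```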
zhanming.cui@ITs-MacBook-Pro:~|⇒ gcloud compute forwarding-rules create csihome-kubernetes-hard-way-forwarding-rule --address ${KUBERNETES_PUBLIC_ADDRESS} --ports 6443 --region $(gcloud config get-value compute/region) --target-pool csihome-kubernetes-hard-way-lb-pool
Created [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/regions/us-west1/forwardingRules/csihome-kubernetes-hard-way-forwarding-rule].
zhanming.cui@ITs-MacBook-Pro:~|⇒ curl --cacert ca.pem https://$KUBERNETES_PUBLIC_ADDRESS:6443/version
{
"major": "1",
"minor": "8",
"gitVersion": "v1.8.0",
"gitCommit": "6e937839ac04a38cac63e6a7a306c5d035fe7b0a",
"gitTreeState": "clean",
"buildDate": "2017-09-28T22:46:41Z",
"goVersion": "go1.8.3",
"compiler": "gc",
"platform": "linux/amd64"
}%
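When scripting this verification, a single field can be pulled out of the `/version` response; a minimal sed sketch (the JSON sample mirrors the response above, and live use would pipe the `curl` output into the same sed):

```shell
# Extract gitVersion from a /version response like the one captured above.
version_json='{ "major": "1", "minor": "8", "gitVersion": "v1.8.0" }'
git_version=$(printf '%s' "$version_json" | sed -n 's/.*"gitVersion": *"\([^"]*\)".*/\1/p')
echo "$git_version"   # expect v1.8.0
```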
9. Bootstrapping the Kubernetes Worker Nodes
zhanming.cui@ITs-MacBook-Pro:~|⇒ gcloud compute ssh worker-0
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-40-generic x86_64)
_____________________________________________________________________
WARNING! Your environment specifies an invalid locale.
The unknown environment variables are:
LC_CTYPE=en_IE.UTF-8 LC_ALL=
This can affect your user experience significantly, including the
ability to manage packages. You may install the locales by running:
sudo apt-get install language-pack-en
or
sudo locale-gen en_IE.UTF-8
To see all available language packs, run:
apt-cache search "^language-pack-[a-z][a-z]$"
To disable this message for all users, run:
sudo touch /var/lib/cloud/instance/locale-check.skip
_____________________________________________________________________
zhanming.cui@worker-0:~$ sudo apt-get -y install socat
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
socat
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 321 kB of archives.
After this operation, 941 kB of additional disk space will be used.
Get:1 http://us-west1.gce.archive.ubuntu.com/ubuntu xenial/universe amd64 socat amd64 1.7.3.1-1 [321 kB]
Fetched 321 kB in 0s (9975 kB/s)
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LC_CTYPE = "en_IE.UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
Selecting previously unselected package socat.
(Reading database ... 67089 files and directories currently installed.)
Preparing to unpack .../socat_1.7.3.1-1_amd64.deb ...
Unpacking socat (1.7.3.1-1) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up socat (1.7.3.1-1) ...
zhanming.cui@worker-0:~$ wget -q --show-progress --https-only --timestamping \
> https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
> https://github.com/kubernetes-incubator/cri-containerd/releases/download/v1.0.0-alpha.0/cri-containerd-1.0.0-alpha.0.tar.gz \
> https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubectl \
> https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kube-proxy \
> https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubelet
cni-plugins-amd64-v0.6.0.tgz 100%[=========================================================================================================================================>] 14.67M 3.01MB/s in 6.4s
cri-containerd-1.0.0-alpha.0.tar.gz 100%[=========================================================================================================================================>] 50.95M 12.8MB/s in 6.0s
kubectl 100%[=========================================================================================================================================>] 49.84M 271MB/s in 0.2s
kube-proxy 100%[=========================================================================================================================================>] 45.64M 154MB/s in 0.3s
kubelet 100%[=========================================================================================================================================>] 131.13M 167MB/s in 0.8s
zhanming.cui@worker-0:~$ sudo mkdir -p \
> /etc/cni/net.d \
> /opt/cni/bin \
> /var/lib/kubelet \
> /var/lib/kube-proxy \
> /var/lib/kubernetes \
> /var/run/kubernetes
zhanming.cui@worker-0:~$ sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
./
./flannel
./ptp
./host-local
./portmap
./tuning
./vlan
./sample
./dhcp
./ipvlan
./macvlan
./loopback
./bridge
zhanming.cui@worker-0:~$ sudo tar -xvf cri-containerd-1.0.0-alpha.0.tar.gz -C /
./
./usr/
./usr/local/
./usr/local/sbin/
./usr/local/sbin/runc
./usr/local/bin/
./usr/local/bin/crictl
./usr/local/bin/containerd
./usr/local/bin/containerd-stress
./usr/local/bin/containerd-shim
./usr/local/bin/ctr
./usr/local/bin/cri-containerd
./etc/
./etc/systemd/
./etc/systemd/system/
./etc/systemd/system/containerd.service
./etc/systemd/system/cri-containerd.service
zhanming.cui@worker-0:~$ chmod +x kubectl kube-proxy kubelet
zhanming.cui@worker-0:~$ sudo mv kubectl kube-proxy kubelet /usr/local/bin/
zhanming.cui@worker-0:~$ POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
> http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
zhanming.cui@worker-0:~$ echo $POD_CIDR
10.200.0.0/24
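The `pod-cidr` attribute was stamped onto each instance at creation time; per the provisioning convention assumed in these notes, worker-N carries `10.200.N.0/24`:

```shell
# Sketch of the assumed per-worker pod CIDR assignment
# (set as instance metadata during provisioning).
for i in 0 1 2; do
  echo "worker-${i} 10.200.${i}.0/24"
done
```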
zhanming.cui@worker-0:~$ cat > 10-bridge.conf <<EOF
> {
> "cniVersion": "0.3.1",
> "name": "bridge",
> "type": "bridge",
> "bridge": "cnio0",
> "isGateway": true,
> "ipMasq": true,
> "ipam": {
> "type": "host-local",
> "ranges": [
> [{"subnet": "${POD_CIDR}"}]
> ],
> "routes": [{"dst": "0.0.0.0/0"}]
> }
> }
> EOF
zhanming.cui@worker-0:~$ cat > 99-loopback.conf <<EOF
> {
> "cniVersion": "0.3.1",
> "type": "loopback"
> }
> EOF
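Both CNI configs are written with *unquoted* heredocs, so the shell expands `${POD_CIDR}` before the JSON hits disk; a self-contained check of that rendering (using a throwaway temp file and a trimmed-down fragment of the bridge config):

```shell
# Demonstrates the unquoted-heredoc expansion used for 10-bridge.conf:
# ${POD_CIDR} is substituted by the shell at write time.
POD_CIDR="10.200.0.0/24"
tmp=$(mktemp)
cat > "$tmp" <<EOF
{ "ipam": { "ranges": [[{"subnet": "${POD_CIDR}"}]] } }
EOF
grep -q '"subnet": "10.200.0.0/24"' "$tmp" && echo "CIDR rendered into config"
rm -f "$tmp"
```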
zhanming.cui@worker-0:~$ sudo mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
zhanming.cui@worker-0:~$ sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
zhanming.cui@worker-0:~$ sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
zhanming.cui@worker-0:~$ sudo mv ca.pem /var/lib/kubernetes/
zhanming.cui@worker-0:~$ cat > kubelet.service <<EOF
> [Unit]
> Description=Kubernetes Kubelet
> Documentation=https://github.com/GoogleCloudPlatform/kubernetes
> After=cri-containerd.service
> Requires=cri-containerd.service
>
> [Service]
> ExecStart=/usr/local/bin/kubelet \\
> --allow-privileged=true \\
> --anonymous-auth=false \\
> --authorization-mode=Webhook \\
> --client-ca-file=/var/lib/kubernetes/ca.pem \\
> --cluster-dns=10.32.0.10 \\
> --cluster-domain=cluster.local \\
> --container-runtime=remote \\
> --container-runtime-endpoint=unix:///var/run/cri-containerd.sock \\
> --image-pull-progress-deadline=2m \\
> --kubeconfig=/var/lib/kubelet/kubeconfig \\
> --network-plugin=cni \\
> --pod-cidr=${POD_CIDR} \\
> --register-node=true \\
> --require-kubeconfig \\
> --runtime-request-timeout=15m \\
> --tls-cert-file=/var/lib/kubelet/${HOSTNAME}.pem \\
> --tls-private-key-file=/var/lib/kubelet/${HOSTNAME}-key.pem \\
> --v=2
> Restart=on-failure
> RestartSec=5
>
> [Install]
> WantedBy=multi-user.target
> EOF
zhanming.cui@worker-0:~$ sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
zhanming.cui@worker-0:~$ cat > kube-proxy.service <<EOF
> [Unit]
> Description=Kubernetes Kube Proxy
> Documentation=https://github.com/GoogleCloudPlatform/kubernetes
>
> [Service]
> ExecStart=/usr/local/bin/kube-proxy \\
> --cluster-cidr=10.200.0.0/16 \\
> --kubeconfig=/var/lib/kube-proxy/kubeconfig \\
> --proxy-mode=iptables \\
> --v=2
> Restart=on-failure
> RestartSec=5
>
> [Install]
> WantedBy=multi-user.target
> EOF
zhanming.cui@worker-0:~$ sudo mv kubelet.service kube-proxy.service /etc/systemd/system/
zhanming.cui@worker-0:~$ sudo systemctl daemon-reload
zhanming.cui@worker-0:~$ sudo systemctl enable containerd cri-containerd kubelet kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/cri-containerd.service to /etc/systemd/system/cri-containerd.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /etc/systemd/system/kube-proxy.service.
zhanming.cui@worker-0:~$ sudo systemctl start containerd cri-containerd kubelet kube-proxy
zhanming.cui@worker-0:~$ exit
zhanming.cui@ITs-MacBook-Pro:~|⇒ gcloud compute ssh controller-0
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-40-generic x86_64)
Last login: Tue Dec 5 15:24:47 2017 from 87.198.172.217
zhanming.cui@controller-0:~$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
worker-0   Ready     <none>    10m       v1.8.0
worker-1   Ready     <none>    2m        v1.8.0
worker-2   Ready     <none>    57s       v1.8.0
10. Configuring kubectl for Remote Access
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ echo $KUBERNETES_PUBLIC_ADDRESS
104.198.96.15
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ kubectl config set-cluster csihome-kubernetes-hard-way-worker-cluster --certificate-authority=ca.pem --embed-certs=true --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
Cluster "csihome-kubernetes-hard-way-worker-cluster" set.
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem
User "admin" set.
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ kubectl config set-context csihome-kubernetes-hard-way-context --cluster=csihome-kubernetes-hard-way-worker-cluster --user=admin
Context "csihome-kubernetes-hard-way-context" created.
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ kubectl config use-context csihome-kubernetes-hard-way-context
Switched to context "csihome-kubernetes-hard-way-context".
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
worker-0   Ready     <none>    25m       v1.8.0
worker-1   Ready     <none>    17m       v1.8.0
worker-2   Ready     <none>    16m       v1.8.0
11. Provisioning Pod Network Routes
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ for instance in worker-0 worker-1 worker-2; do
gcloud compute instances describe ${instance} \
--format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
done
10.240.0.20 10.200.0.0/24
10.240.0.21 10.200.1.0/24
10.240.0.22 10.200.2.0/24
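The describe loop confirms the alignment the route-creation step depends on: worker-N's internal IP ends in `2N` and its pod CIDR is `10.200.N.0/24`. The (next-hop, destination) pairs the loop encodes can be sketched as:

```shell
# Pairs the route loop creates: traffic for worker-N's pod CIDR
# is sent to worker-N's internal IP as next hop.
for i in 0 1 2; do
  echo "next-hop 10.240.0.2${i} -> dest 10.200.${i}.0/24"
done
```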
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ for i in 0 1 2; do
gcloud compute routes create csihome-kubernetes-hard-way-route-10-200-${i}-0-24 \
--network csihome-kubernetes-hard-way-vpc-network \
--next-hop-address 10.240.0.2${i} \
--destination-range 10.200.${i}.0/24
done
Created [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/global/routes/csihome-kubernetes-hard-way-route-10-200-0-0-24].
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
csihome-kubernetes-hard-way-route-10-200-0-0-24 csihome-kubernetes-hard-way-vpc-network 10.200.0.0/24 10.240.0.20 1000
Created [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/global/routes/csihome-kubernetes-hard-way-route-10-200-1-0-24].
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
csihome-kubernetes-hard-way-route-10-200-1-0-24 csihome-kubernetes-hard-way-vpc-network 10.200.1.0/24 10.240.0.21 1000
Created [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/global/routes/csihome-kubernetes-hard-way-route-10-200-2-0-24].
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
csihome-kubernetes-hard-way-route-10-200-2-0-24 csihome-kubernetes-hard-way-vpc-network 10.200.2.0/24 10.240.0.22 1000
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ gcloud compute routes list --filter "network: csihome-kubernetes-hard-way-vpc-network"
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
csihome-kubernetes-hard-way-route-10-200-0-0-24 csihome-kubernetes-hard-way-vpc-network 10.200.0.0/24 10.240.0.20 1000
csihome-kubernetes-hard-way-route-10-200-1-0-24 csihome-kubernetes-hard-way-vpc-network 10.200.1.0/24 10.240.0.21 1000
csihome-kubernetes-hard-way-route-10-200-2-0-24 csihome-kubernetes-hard-way-vpc-network 10.200.2.0/24 10.240.0.22 1000
default-route-4c74819f2a2023d0 csihome-kubernetes-hard-way-vpc-network 0.0.0.0/0 default-internet-gateway 1000
default-route-9a52320ea474c88f csihome-kubernetes-hard-way-vpc-network 10.240.0.0/24 1000
12. Deploying the DNS Cluster Add-on
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ kubectl create -f https://storage.googleapis.com/kubernetes-the-hard-way/kube-dns.yaml
serviceaccount "kube-dns" created
configmap "kube-dns" created
service "kube-dns" created
deployment "kube-dns" created
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ kubectl get pods -l k8s-app=kube-dns -n kube-system
NAME                        READY     STATUS    RESTARTS   AGE
kube-dns-7797cb8758-6l8m7   3/3       Running   0          2m
kube-dns-7797cb8758-csn9n   3/3       Running   0          2m
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ kubectl run busybox --image=busybox --command -- sleep 3600
deployment "busybox" created
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ kubectl get pods -l run=busybox
NAME                       READY     STATUS              RESTARTS   AGE
busybox-56db8bd9d7-w2vsv   0/1       ContainerCreating   0          6s
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ echo $POD_NAME
busybox-56db8bd9d7-w2vsv
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ kubectl exec -ti $POD_NAME -- nslookup kubernetes
Server: 10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local %
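The short name `kubernetes` resolves because the kubelet writes the pod's resolv.conf with search domains built from the namespace and the `--cluster-domain=cluster.local` flag set in step 9; the FQDN it expands to:

```shell
# How the search path expands the bare service name
# (namespace "default", domain from --cluster-domain).
service=kubernetes
namespace=default
domain=cluster.local
echo "${service}.${namespace}.svc.${domain}"   # expect kubernetes.default.svc.cluster.local
```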
13. Smoke Test
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ kubectl create secret generic csihome-kubernetes-hard-way-smoke-secret \
--from-literal="mykey=mydata"
secret "csihome-kubernetes-hard-way-smoke-secret" created
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ gcloud compute ssh controller-0 \
--command "ETCDCTL_API=3 etcdctl get /registry/secrets/default/csihome-kubernetes-hard-way-smoke-secret | hexdump -C"
00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
00000010 73 2f 64 65 66 61 75 6c 74 2f 63 73 69 68 6f 6d |s/default/csihom|
00000020 65 2d 6b 75 62 65 72 6e 65 74 65 73 2d 68 61 72 |e-kubernetes-har|
00000030 64 2d 77 61 79 2d 73 6d 6f 6b 65 2d 73 65 63 72 |d-way-smoke-secr|
00000040 65 74 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 |et.k8s:enc:aescb|
00000050 63 3a 76 31 3a 6b 65 79 31 3a c4 3b 91 d6 e3 53 |c:v1:key1:.;...S|
00000060 d8 e0 e1 14 2e 81 75 68 c2 08 de 18 84 b1 57 e0 |......uh......W.|
00000070 84 bd 7b 8d 33 5a db cf ca ec 9d 0a 04 d4 dc 8f |..{.3Z..........|
00000080 91 e4 46 99 7b 51 eb 09 45 ee 6f 40 06 69 db a7 |..F.{Q..E.o@.i..|
00000090 a8 b8 31 e8 cc 16 38 7e e9 44 d2 44 4a 79 ae 7e |..1...8~.D.DJy.~|
000000a0 fa da 12 38 05 7e 3e 87 5d 3c f3 7b ae f7 bc 90 |...8.~>.]<.{....|
000000b0 45 a0 1e 3c 69 ff 6b a0 f2 74 97 6b 1c ae e1 23 |E..<i.k..t.k...#|
000000c0 73 41 66 98 04 60 44 47 af a9 53 c4 c4 ca 10 cc |sAf..`DG..S.....|
000000d0 fd ce e0 66 68 03 6a 4f f9 f8 54 0a a5 64 0f e0 |...fh.jO..T..d..|
000000e0 e3 02 be 11 73 e4 59 69 04 5e 72 fa d2 c3 21 1a |....s.Yi.^r...!.|
000000f0 3b ff d8 7a c7 8a dd 02 52 44 8c fc aa b2 3a c9 |;..z....RD....:.|
00000100 73 06 e4 de eb d2 e3 09 18 07 8e 81 a9 9b c9 dd |s...............|
00000110 14 2f 9b ff b0 be fa a5 e1 6e 0a |./.......n.|
0000011b
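The `k8s:enc:aescbc:v1:key1:` prefix visible in the dump is what confirms the secret is stored encrypted at rest with the aescbc provider under key1; with encryption disabled, the secret payload would appear in plaintext. That eyeball check can be made mechanical — a sketch where the raw etcd value is passed in as a string (fetching it over SSH works exactly as in the transcript above).

```shell
# Sketch: assert that a secret's raw etcd value carries the aescbc
# encryption prefix rather than plaintext payload.
assert_encrypted() {
  if printf '%s' "$1" | grep -q 'k8s:enc:aescbc:v1:'; then
    echo "encrypted at rest"
  else
    echo "PLAINTEXT - encryption at rest not enabled" >&2
    return 1
  fi
}
```

Usage would look like `assert_encrypted "$(gcloud compute ssh controller-0 --command "ETCDCTL_API=3 etcdctl get /registry/secrets/default/csihome-kubernetes-hard-way-smoke-secret")"`.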
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ kubectl run nginx --image=nginx
deployment "nginx" created
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ kubectl get pods -l run=nginx
NAME READY STATUS RESTARTS AGE
nginx-7c87f569d-6pld7 1/1 Running 0 33s
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}")
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ echo $POD_NAME
nginx-7c87f569d-6pld7
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ kubectl port-forward $POD_NAME 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Handling connection for 8080
^C
zhanming.cui@ITs-MacBook-Pro:~|
⇒ curl --head http://127.0.0.1:8080
HTTP/1.1 200 OK
Server: nginx/1.13.7
Date: Tue, 05 Dec 2017 18:07:27 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 21 Nov 2017 14:28:04 GMT
Connection: keep-alive
ETag: "5a1437f4-264"
Accept-Ranges: bytes
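The port-forward check above needs two terminals: `kubectl port-forward` blocks in one while curl runs in the other (the transcript's curl was issued from a second shell in the home directory). Backgrounding the forward collapses it into one script — a sketch assuming the same `$POD_NAME` and ports as the transcript; the fixed sleep is a crude readiness wait.

```shell
# Sketch: one-shot port-forward probe, replacing the two-terminal dance.
# Assumes $POD_NAME is set as in the transcript; the sleep is a crude wait.
smoke_forward() {
  kubectl port-forward "$POD_NAME" 8080:80 >/dev/null 2>&1 &
  pf_pid=$!
  sleep 2                                # let the tunnel come up
  curl --head --silent http://127.0.0.1:8080 | head -n 1
  kill "$pf_pid" 2>/dev/null
}
```

Calling `smoke_forward` should print the `HTTP/1.1 200 OK` status line and then tear the tunnel down.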
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ kubectl logs $POD_NAME
127.0.0.1 - - [05/Dec/2017:18:07:27 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.43.0" "-"
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ kubectl exec -ti $POD_NAME -- nginx -v
nginx version: nginx/1.13.7
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ kubectl expose deployment nginx --port 80 --type NodePort
service "nginx" exposed
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ NODE_PORT=$(kubectl get svc nginx --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ echo $NODE_PORT
32508
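The `{range .spec.ports[0]}{.nodePort}` expression above happens to work even without a closing `{end}`, but a plain path expression is simpler and returns the same value — a sketch of the equivalent lookup.

```shell
# Sketch: same NodePort lookup with a plain jsonpath expression.
get_node_port() {
  kubectl get svc nginx --output=jsonpath='{.spec.ports[0].nodePort}'
}
```

Then `NODE_PORT=$(get_node_port)` fills the variable the same way.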
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ gcloud compute firewall-rules create csihome-kubernetes-hard-way-allow-nginx-service \
--allow=tcp:${NODE_PORT} \
--network csihome-kubernetes-hard-way-vpc-network
Created [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/global/firewalls/csihome-kubernetes-hard-way-allow-nginx-service].
Creating firewall...done.
NAME NETWORK DIRECTION PRIORITY ALLOW DENY
csihome-kubernetes-hard-way-allow-nginx-service csihome-kubernetes-hard-way-vpc-network INGRESS 1000 tcp:32508
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ echo $EXTERNAL_IP
35.197.60.255
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ curl -I http://${EXTERNAL_IP}:${NODE_PORT}
HTTP/1.1 200 OK
Server: nginx/1.13.7
Date: Tue, 05 Dec 2017 18:14:36 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 21 Nov 2017 14:28:04 GMT
Connection: keep-alive
ETag: "5a1437f4-264"
Accept-Ranges: bytes
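Firewall rules can take a moment to propagate, so the external probe is worth retrying rather than running once — a sketch using the `$EXTERNAL_IP` and `$NODE_PORT` variables from the transcript; the retry count and sleep are arbitrary choices.

```shell
# Sketch: retry the external NodePort probe while the firewall rule
# propagates. Assumes $EXTERNAL_IP and $NODE_PORT are set as above.
probe_external() {
  for attempt in 1 2 3 4 5; do
    if curl -fsI "http://${EXTERNAL_IP}:${NODE_PORT}" >/dev/null; then
      echo "nginx reachable at ${EXTERNAL_IP}:${NODE_PORT}"
      return 0
    fi
    sleep 3
  done
  echo "nginx NOT reachable" >&2
  return 1
}
```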
14. Clean up
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ gcloud -q compute instances delete \
controller-0 controller-1 controller-2 \
worker-0 worker-1 worker-2
Deleted [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/zones/us-west1-c/instances/controller-2].
Deleted [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/zones/us-west1-c/instances/worker-0].
Deleted [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/zones/us-west1-c/instances/worker-2].
Deleted [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/zones/us-west1-c/instances/controller-0].
Deleted [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/zones/us-west1-c/instances/controller-1].
Deleted [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/zones/us-west1-c/instances/worker-1].
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ gcloud --quiet compute forwarding-rules delete csihome-kubernetes-hard-way-forwarding-rule --region $(gcloud config get-value compute/region)
Deleted [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/regions/us-west1/forwardingRules/csihome-kubernetes-hard-way-forwarding-rule].
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ gcloud -q compute target-pools delete csihome-kubernetes-hard-way-lb-pool
Deleted [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/regions/us-west1/targetPools/csihome-kubernetes-hard-way-lb-pool].
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ gcloud -q compute addresses delete csihome-kubernetes-hard-way-public-ip
Deleted [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/regions/us-west1/addresses/csihome-kubernetes-hard-way-public-ip].
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ gcloud -q compute firewall-rules delete csihome-kubernetes-hard-way-allow-nginx-service csihome-kubernetes-hard-way-allow-internal csihome-kubernetes-hard-way-allow-external
Deleted [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/global/firewalls/csihome-kubernetes-hard-way-allow-nginx-service].
Deleted [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/global/firewalls/csihome-kubernetes-hard-way-allow-internal].
Deleted [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/global/firewalls/csihome-kubernetes-hard-way-allow-external].
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ gcloud -q compute routes delete csihome-kubernetes-hard-way-route-10-200-0-0-24 csihome-kubernetes-hard-way-route-10-200-1-0-24 csihome-kubernetes-hard-way-route-10-200-2-0-24
Deleted [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/global/routes/csihome-kubernetes-hard-way-route-10-200-1-0-24].
Deleted [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/global/routes/csihome-kubernetes-hard-way-route-10-200-0-0-24].
Deleted [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/global/routes/csihome-kubernetes-hard-way-route-10-200-2-0-24].
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ gcloud -q compute networks subnets delete csihome-kubernetes-hard-way-vpc-subnet
Deleted [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/regions/us-west1/subnetworks/csihome-kubernetes-hard-way-vpc-subnet].
zhanming.cui@ITs-MacBook-Pro:~/Synchronoss/Kubernetes-on-GCP|
⇒ gcloud -q compute networks delete csihome-kubernetes-hard-way-vpc-network
Deleted [https://www.googleapis.com/compute/v1/projects/csihome-kubernetes-hard-way/global/networks/csihome-kubernetes-hard-way-vpc-network].
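The teardown steps above can be collected into a single re-runnable function — a sketch using the resource names and deletion order from this walkthrough (instances first, VPC network last); the `|| true` guards let a re-run skip resources that are already gone.

```shell
# Sketch: the whole teardown as one re-runnable function. Resource names
# match the transcript; `|| true` tolerates already-deleted resources.
teardown() {
  prefix=csihome-kubernetes-hard-way
  gcloud -q compute instances delete \
    controller-0 controller-1 controller-2 \
    worker-0 worker-1 worker-2 || true
  gcloud -q compute forwarding-rules delete "${prefix}-forwarding-rule" \
    --region "$(gcloud config get-value compute/region)" || true
  gcloud -q compute target-pools delete "${prefix}-lb-pool" || true
  gcloud -q compute addresses delete "${prefix}-public-ip" || true
  gcloud -q compute firewall-rules delete \
    "${prefix}-allow-nginx-service" \
    "${prefix}-allow-internal" \
    "${prefix}-allow-external" || true
  gcloud -q compute routes delete \
    "${prefix}-route-10-200-0-0-24" \
    "${prefix}-route-10-200-1-0-24" \
    "${prefix}-route-10-200-2-0-24" || true
  gcloud -q compute networks subnets delete "${prefix}-vpc-subnet" || true
  gcloud -q compute networks delete "${prefix}-vpc-network" || true
}
```

The order matters: the instances, load-balancer pieces, firewall rules, and routes must go before the subnet and network, or the final deletes fail with in-use errors.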