**Chapter 1 Introduction**

PaaS: PaaS is short for Platform-as-a-Service, a business model in which a server platform is offered as a service. Delivering applications over the network is called SaaS (Software as a Service); in the cloud-computing era, delivering the corresponding server platform or development environment as a service is called PaaS (Platform as a Service).

Docker: Docker is an open-source application container engine that lets developers package an application together with its dependencies into a portable container and publish it to any popular Linux machine; it also provides virtualization.

Zookeeper: ZooKeeper is a distributed, open-source coordination service for distributed applications. It exposes a simple set of primitives on top of which distributed applications can implement synchronization, configuration maintenance, naming services, and more.

MooseFS: MooseFS is a distributed file system.

Kubernetes: Kubernetes is Google's open-source container cluster management system. Built on top of Docker, it provides a complete set of features for containerized applications: resource scheduling, deployment and execution, service discovery, and scaling.

Etcd: etcd is a highly available, strongly consistent key-value store used for service discovery.

Keepalived: a web-service high-availability solution based on the VRRP protocol, used to avoid single points of failure.

Calico: a BGP-based virtual networking tool. Virtual machines, containers, and bare-metal machines in the data center (collectively called workloads) each need only one IP address to interconnect through Calico.

**Chapter 2 Preparing the Environment**

2.1 Image files
Prepare the CentOS1611 and docker-extra repository packages. The integration test environment uses the Alibaba Cloud yum mirrors, so this step can be skipped there.

2.2 Configure the local yum repositories
~~~
vi /etc/yum.repos.d/CentOS7_1611.repo
[CentOS7_1611-media]
name=CentOS7_1611-media
baseurl=http://10.255.224.27/CentOS1611/
gpgcheck=0
enabled=1

[1611-entry]
name=1611-entry
baseurl=http://10.255.224.95/docker-extra/
gpgcheck=0
enabled=1
~~~

**Chapter 3 Overall Architecture**

3.1 Cluster architecture

3.2 Version information
~~~
Service               Current production version   wlw version
kubernetes            v1.2.0                        v1.5.2
docker                1.9.1                         1.12.6
etcd                  2.2.5                         3.1.0
docker-distribution   docker-registry V1            Docker Registry V2
calico                0.18                          0.18
zookeeper             3.4.6                         3.4.6
~~~

3.3 kubernetes cluster architecture

**Chapter 4 Configuring the crt Communication Certificates**

4.1 Configure the apiserver crt certificate

1. Download easyrsa3:
~~~
curl -L -O https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz
# Note: easy-rsa.tar.gz has already been placed on the installation media
tar xzf easy-rsa.tar.gz
cd easy-rsa-master/easyrsa3
~~~

2. Initialize the PKI:
`./easyrsa init-pki`

3. Create the CA:
`./easyrsa --batch "--req-cn=${MASTER_IP}@$(date +%s)" build-ca nopass`
(If the default service address will be used to access the kubernetes cluster, use --req-cn=*; the IP address is the master node's IP.)

4. Generate the cert and key used by the services:
`./easyrsa --subject-alt-name="IP:${MASTER_IP}" build-server-full kubernetes-master nopass`
(If the default service address will be used to access the kubernetes cluster, change ${MASTER_IP} to * and append the service IP after the node IP.)

5. Move the generated files into the production directory:
~~~
mkdir -p /srv/kubernetes
cp pki/ca.crt /srv/kubernetes/
cp pki/issued/kubernetes-master.crt /srv/kubernetes/server.crt
cp pki/private/kubernetes-master.key /srv/kubernetes/server.key
~~~

6. Configure the ServiceAccount key:
~~~
openssl genrsa -out /srv/kubernetes/serviceaccount.key 2048
# Note: adjust the permissions of the certificate directory: chmod 777 -R /srv/kubernetes/
~~~
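Before distributing the files under /srv/kubernetes/, it can be worth confirming that the server certificate was really signed by the new CA and carries the expected subjectAltName. A minimal check, assuming the files were copied exactly as above:
~~~
# verify the server certificate against the CA generated by easyrsa
openssl verify -CAfile /srv/kubernetes/ca.crt /srv/kubernetes/server.crt

# inspect the subject, validity period and subjectAltName (the master IP should appear there)
openssl x509 -in /srv/kubernetes/server.crt -noout -subject -dates
openssl x509 -in /srv/kubernetes/server.crt -noout -text | grep -A1 "Subject Alternative Name"
~~~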
ETCD_INITIAL_CLUSTER="GPRSDX1=http://10.255.224.17:2380,GPRSDX2=http://10.255.224.19:2380,gprsdx3=http://10.255.224.91:2380" # ETCD_INITIAL_CLUSTER 定義集群成員 ETCD_INITIAL_CLUSTER_STATE="new" # 初始化狀態使用 new,建立之后改此值為 existing ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster" # etcd 集群名 ETCD_ADVERTISE_CLIENT_URLS=”http://10.255.224.17:2379” # 定義 client 廣播端口,此處必須填寫相應主機的 IP,不能填寫 0.0.0.0,否則 etcd client 獲取不了 etcd cluster 中的主機 #ETCD_DISCOVERY="" #ETCD_DISCOVERY_SRV="" #ETCD_DISCOVERY_FALLBACK="proxy" #ETCD_DISCOVERY_PROXY="" # #[proxy] #ETCD_PROXY="off" #ETCD_PROXY_FAILURE_WAIT="5000" #ETCD_PROXY_REFRESH_INTERVAL="30000" #ETCD_PROXY_DIAL_TIMEOUT="1000" #ETCD_PROXY_WRITE_TIMEOUT="5000" #ETCD_PROXY_READ_TIMEOUT="0" # #[security] #ETCD_CERT_FILE="" #ETCD_KEY_FILE="" #ETCD_CLIENT_CERT_AUTH="false" #ETCD_TRUSTED_CA_FILE="" #ETCD_PEER_CERT_FILE="" #ETCD_PEER_KEY_FILE="" #ETCD_PEER_CLIENT_CERT_AUTH="false" #ETCD_PEER_TRUSTED_CA_FILE="" # #[logging] #ETCD_DEBUG="false" # examples for -log-package-levels etcdserver=WARNING,security=DEBUG #ETCD_LOG_PACKAGE_LEVELS="" ~~~ 注意: 如果集群中的機器已經運行過etcd服務,只修改etcd的配置重啟服務是不能把機器加到etcd集群中的,需要刪除etcd的數據之后重啟服務。 Etcd的數據目錄見etcd配置文件的“ETCD_DATA_DIR” 檢查etcd服務的啟動文件: ~~~ vi /usr/lib/systemd/system/etcd.service [Unit] Description=Etcd Server After=network.target After=network-online.target Wants=network-online.target [Service] Type=notify WorkingDirectory=/var/lib/etcd/ EnvironmentFile=-/etc/etcd/etcd.conf User=etcd # set GOMAXPROCS to number of processors ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\"" Restart=on-failure LimitNOFILE=65536 [Install] WantedBy=multi-user.target ~~~ 5.3啟動Etcd systemctl start etcd.service 5.4檢查Etcd集群狀態 在任意一臺etcd主機上執行命令: ~~~ etcdctl member list 如果顯示結果和以下內容類似則集群安裝正常: 9e6e7b1fddbae15f: name=minion1 peerURLs=http://192.168.11.44:2380 clientURLs=http://192.168.11.44:2379 9ffcd715e2ce89d0: name=minion3 peerURLs=http://192.168.11.46:2380 clientURLs=http://192.168.11.46:2379 b4e5215afede02fd: name=minion2 peerURLs=http://192.168.11.45:2380 clientURLs=http://192.168.11.45:2379 ~~~ ~~~ 或者執行 etcdctl cluster-health member e5a6e1abedcc726 is healthy: got healthy result from http://10.255.224.91:2379 member 70994e5bc7e29cf9 is healthy: got healthy result from http://10.255.224.19:2379 member a463697b2fd4abb4 is healthy: got healthy result from http://10.255.224.17:2379 ~~~ cluster is healthy 5.5 ETCD對接Paas平臺 這里僅說明對接etcd集群對應的變更部分,Calico, kubernetes安裝配置見ECP安裝部署手冊。 5.5.1對接Calico 所有節點上執行: ? /etc/profile文件添加環境變量“ETCD_ENDPOINTS” 這個變量的值為ETCD集群里每個節點endpoint,以英文逗號隔開。如果之前對接單節etcd添加過環境變量“ETCD_ENDPOINTS”請注釋或刪除“ETCD_ENDPOINTS” `export ETCD_ENDPOINTS=http://192.168.11.44:2379, http://192.168.11.45:2379, http://192.168.11.46:2379` ? 修改/etc/systemd/calico-node.service 修改Environment參數,從單機版的“ETCD_AUTHORITY”變更為“ETCD_ENDPOINTS”參數值也做對應的變更。 `Environment=ETCD_ENDPOINTS= http://192.168.11.44:2379, http://192.168.11.45:2379, http://192.168.11.46:2379` Minion節點上執行: ? 修改/etc/cni/net.d/10-calico.conf配置文件。 修改對接單機版本的“etcd_authority”為"etcd_endpoints",同時修改對應的值。 ~~~ $ cat /etc/cni/net.d/10-calico.conf { "name" : "calico-k8s-network", "type" : "calico", " etcd_endpoints " : " ETCD_ENDPOINTS=http://192.168.11.44:2379, http://192.168.11.45:2379, http://192.168.11.46:2379", "log_level" : "info", "ipam" : { "type" : "calico-ipam" } } ~~~ 重新啟動calico服務. 
Run `calicoctl node` and check that the node status is normal.

5.5.2 Kubernetes
On the master node, edit the API Server configuration file /etc/kubernetes/apiserver:
`KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.11.44:2379,http://192.168.11.45:2379,http://192.168.11.46:2379"`
Then restart the kubernetes services.

**Chapter 6 Building the Zookeeper Cluster**

6.1 Install Zookeeper
1. Extract the downloaded Zookeeper package into the installation directory:
`tar -zxvf zookeeper-3.4.6.tar.gz -C /data01/wlwjf/app/zookeeper-3.4.6/`

6.2 Configure Zookeeper
~~~
vi zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data01/wlwjf/app/data/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=10.255.224.96:2888:3888    ### addresses of the hosts running zk
server.2=10.255.224.97:2888:3888
server.3=10.255.224.98:2888:3888
~~~

6.3 Check the service unit file
~~~
vi /usr/lib/systemd/system/zookeeper.service
[Unit]
Description=Zookeeper service
After=network.target

[Service]
User=wlwjf
Group=wlwjf
SyslogIdentifier=wlwjf
Environment=ZHOME=/data01/wlwjf/app/zookeeper-3.4.6
ExecStart=/usr/bin/java \
    -Dzookeeper.log.dir=${ZHOME}/logs/zookeeper.log \
    -Dzookeeper.root.logger=INFO,CONSOLE \
    -cp ${ZHOME}/zookeeper-3.4.6.jar:${ZHOME}/lib/* \
    -Dlog4j.configuration=file:${ZHOME}/conf/log4j.properties \
    -Dcom.sun.management.jmxremote \
    -Dcom.sun.management.jmxremote.local.only=false \
    org.apache.zookeeper.server.quorum.QuorumPeerMain \
    ${ZHOME}/conf/zoo.cfg

[Install]
WantedBy=multi-user.target
~~~

6.4 Start Zookeeper
`systemctl start zookeeper.service`

6.5 Check ZK
In the bin directory under the installation directory, run:
~~~
./zkServer.sh status
JMX enabled by default
Using config: /data01/wlwjf/app/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
~~~
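The zoo.cfg above declares server.1 to server.3; a stock ZooKeeper 3.4 ensemble also expects each host to carry a myid file under dataDir whose number matches its own server.N entry. That file is not spelled out in the steps above, so treat the first line below as an assumption about the intended layout; the loop afterwards is a simple liveness check with the built-in four-letter-word command ruok (a healthy node answers "imok"):
~~~
# on the host listed as server.1 (adjust the number per host)
echo 1 > /data01/wlwjf/app/data/zookeeper/myid

# ask each node whether it is serving
for h in 10.255.224.96 10.255.224.97 10.255.224.98; do
    echo -n "$h: "; echo ruok | nc "$h" 2181; echo
done
~~~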
**Chapter 7 Installing Docker**

Preparation: a separate VG named docker-vg must be created in advance.

7.1 Install docker
`yum -y install docker`

7.2 Modify the configuration files
~~~
# /etc/sysconfig/docker

# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi

# If you want to add your own registry to be used for docker search and docker
# pull use the ADD_REGISTRY option to list a set of registries, each prepended
# with --add-registry flag. The first registry added will be the first registry
# searched.
# ADD_REGISTRY='--add-registry registry.access.redhat.com:5000'

# If you want to block registries from being used, uncomment the BLOCK_REGISTRY
# option and give it a set of registries, each prepended with --block-registry
# flag. For example adding docker.io will stop users from downloading images
# from docker.io
# BLOCK_REGISTRY='--block-registry'

# If you have a registry secured with https but do not have proper certs
# distributed, you can tell docker to not look for full authorization by
# adding the registry to the INSECURE_REGISTRY line and uncommenting it.
INSECURE_REGISTRY='--insecure-registry registry.wlw.com:5000'
##### local registry address; add an entry for registry.wlw.com in /etc/hosts pointing at the registry host

# On an SELinux system, if you remove the --selinux-enabled option, you
# also need to turn on the docker_transition_unconfined boolean.
# setsebool -P docker_transition_unconfined 1

# Location used for temporary files, such as those created by
# docker load and build operations. Default is /var/lib/docker/tmp
# Can be overriden by setting the following environment variable.
# DOCKER_TMPDIR=/var/tmp

# Controls the /etc/cron.daily/docker-logrotate cron job status.
# To disable, uncomment the line below.
# LOGROTATE=false
#
# docker-latest daemon can be used by starting the docker-latest unitfile.
# To use docker-latest client, uncomment below lines
#DOCKERBINARY=/usr/bin/docker-latest
#DOCKERDBINARY=/usr/bin/dockerd-latest
#DOCKER_CONTAINERD_BINARY=/usr/bin/docker-containerd-latest
#DOCKER_CONTAINERD_SHIM_BINARY=/usr/bin/docker-containerd-shim-latest

vi /etc/sysconfig/docker-storage-setup
# Edit this file to override any configuration options specified in
# /usr/lib/docker-storage-setup/docker-storage-setup.
#
# For more details refer to "man docker-storage-setup"
VG=docker-vg                #### the VG created in advance
SETUP_LVM_THIN_POOL=yes
~~~

7.3 Start docker
`systemctl start docker`

7.4 Check the docker status
`systemctl status docker`

**Chapter 8 Installing the Calico Network Plugin**

calico assigns virtual IP addresses to containers and uses the host's production NIC for cross-host container communication.

8.1 Install on the master node (version v0.18.0)
Install calicoctl:
~~~
wget -O /usr/bin/calicoctl https://github.com/projectcalico/calico-containers/releases/download/v0.18.0/calicoctl
chmod +x /usr/bin/calicoctl
~~~

8.2 Install on the minion nodes
Install calicoctl as on the master node, then install the calico-cni plugin:
~~~
wget -N -P /opt/cni/bin https://github.com/projectcalico/calico-cni/releases/download/v1.1.0/calico
chmod +x /opt/cni/bin/calico
wget -N -P /opt/cni/bin https://github.com/projectcalico/calico-cni/releases/download/v1.1.0/calico-ipam
chmod +x /opt/cni/bin/calico-ipam
~~~

8.3 Configure the master node
Configure calico-node. Create the file /etc/systemd/calico-node.service and set the etcd cluster address:
~~~
[Unit]
Description=calicoctl node
After=docker.service
Requires=docker.service

[Service]
User=root
Environment=ETCD_ENDPOINTS=${etcd host management NIC IP}:2379
PermissionsStartOnly=true
ExecStart=/usr/bin/calicoctl node --ip=${host production NIC IP} --detach=false
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
~~~
The service uses the image calico/node:v0.18.0. Load the prepared image onto the host:
~~~
docker load -i calico.tar
docker images        # verify the image is present
~~~
Register calico-node as a boot-time service and start it. Add ETCD_ENDPOINTS=master:2379 to the environment variables.
~~~
systemctl enable /etc/systemd/calico-node.service
service calico-node restart
~~~
Load the required kernel modules:
~~~
modprobe ip_set
modprobe ip6_tables
modprobe ip_tables
~~~

8.4 Configure the node (minion) hosts
1. Configure calico-node as on the master.
2. Declare the cni network:
~~~
$ cat /etc/cni/net.d/10-calico.conf
{
    "name" : "calico-k8s-network",
    "type" : "calico",
    "etcd_endpoints" : "${etcd host management NIC IP}:4001",
    "log_level" : "info",
    "ipam" : {
        "type" : "calico-ipam"
    }
}
~~~

8.5 Configure the calico IP pool
Configure the IP pool used by calico on the master node.
Check whether an IP pool already exists:
`calicoctl pool show`
If a pool exists and its network range differs from the planned one, remove it and add the correct pool:
~~~
calicoctl pool remove ${CIDR}
calicoctl pool add 192.168.0.0/16
~~~
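After starting calico-node on each host, it can help to confirm that the calico/node container is actually running and that the chosen IP pool is in place before moving on to kubernetes. A small sketch, assuming the v0.18.0 image and the pool from this chapter:
~~~
# the calico/node:v0.18.0 container should be up on every host
docker ps | grep calico/node

# the pool added in 8.5 should be listed on the master
calicoctl pool show

# once workloads are scheduled, their routes appear in the host routing table installed by bird
ip route | grep bird
~~~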
**Chapter 9 Building the Kubernetes Cluster**

9.1 Install k8s 1.5.2 from the yum repository
`yum -y install kubernetes-1.5.2`

9.2 Master configuration
Copy the certificate files to all master node hosts:
~~~
cd /srv/kubernetes/
scp * root@10.255.224.19:/srv/kubernetes/
~~~

9.2.1 apiserver configuration
The API Server is the key process exposing the HTTP REST interface; it is the single entry point for create/read/update/delete operations on all k8s resources and the entry point for cluster control.
~~~
cat /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
## listen on all network interfaces
KUBE_ETCD_SERVERS="--etcd-servers=http://10.255.224.17:2379,http://10.255.224.19:2379,http://10.255.224.91:2379"
### etcd service list: the access addresses of the etcd service; for an etcd cluster, list all etcd endpoints separated by commas
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
### virtual IP pool for services; this range must not overlap the physical network
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
## NamespaceLifecycle: rejects requests that create resources in a namespace that does not exist; when a namespace is deleted, all objects in it (pods, services, etc.) are deleted as well.
## LimitRanger: quota management on pods and containers, ensuring pod/container quotas are not exceeded.
## SecurityContextDeny: denies OS-level security settings.
## ServiceAccount: enables ServiceAccount automation.
## ResourceQuota: quota management at namespace level; it observes all requests and ensures namespace quotas are not exceeded.
KUBE_API_ARGS="--client-ca-file=/srv/kubernetes/ca.crt            ## CA used to authenticate client certificates
--tls-cert-file=/srv/kubernetes/server.crt                        ### path to the x509 certificate used for https
--tls-private-key-file=/srv/kubernetes/server.key                 ### path to the private key matching tls-cert-file
--service_account_key_file=/srv/kubernetes/serviceaccount.key"    ### path to a PEM-encoded x509 RSA key used to verify service account tokens; if unset, the file from tls-private-key-file is used
~~~
Service unit file:
~~~
[root@GPRSDX1 ~]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=kube
ExecStart=/usr/bin/kube-apiserver \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_ETCD_SERVERS \
            $KUBE_API_ADDRESS \
            $KUBE_API_PORT \
            $KUBELET_PORT \
            $KUBE_ALLOW_PRIV \
            $KUBE_SERVICE_ADDRESSES \
            $KUBE_ADMISSION_CONTROL \
            $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
~~~

9.2.2 scheduler configuration
The Kubernetes Scheduler is the process responsible for resource scheduling (pod scheduling); it picks the best node for each pod to run on, much like the dispatch office of a bus company.
In /etc/kubernetes/scheduler:
~~~
# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true"
## leader election, used for highly available deployments with multiple master components
~~~
Service unit file:
~~~
[root@GPRSDX1 ~]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=kube
ExecStart=/usr/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
~~~

9.2.3 controller-manager configuration
The controller-manager is the automation control center for all resource objects in kubernetes; it can be thought of as the "general manager" of resource objects.
Add the following to /etc/kubernetes/controller-manager:
~~~
KUBE_CONTROLLER_MANAGER_ARGS="--root-ca-file=/srv/kubernetes/ca.crt
--service_account_private_key_file=/srv/kubernetes/server.key
--terminated-pod-gc-threshold=12500      ## limits the number of terminated pods retained before garbage collection
--leader-elect=true"                     ## leader election
~~~
Service unit file:
~~~
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
User=kube
ExecStart=/usr/bin/kube-controller-manager \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
~~~
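Once the three master components are up, they can be checked directly against the apiserver. A hedged sketch: KUBE_API_PORT is not overridden above, so the insecure port is assumed to be the default 8080, and 10.255.224.17 is used as an example master address:
~~~
# basic apiserver health and version (assumes default insecure port 8080)
curl http://10.255.224.17:8080/healthz     # expect: ok
curl http://10.255.224.17:8080/version     # expect: v1.5.2 build information

# scheduler, controller-manager and etcd health as seen by the apiserver
kubectl -s http://10.255.224.17:8080 get componentstatuses
~~~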
9.3 Node configuration

9.3.1 kubelet configuration
The kubelet is responsible for creating, starting and stopping the containers of each pod, and works closely with the master node to provide the basic cluster management functions.
~~~
[root@GPRSDX1 ~]# cat /etc/kubernetes/kubelet |grep -v "^#" |grep -v "^$"
KUBELET_ADDRESS="--address=0.0.0.0"
--- host IP address to bind to
KUBELET_HOSTNAME="--hostname-override=GPRSDX1"
--- this node's hostname in the cluster
KUBELET_API_SERVER="--api-servers=http://10.255.224.90:8089"
--- IP address and port of the api server
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
--- base pause image used to share the network namespace within a pod
KUBELET_ARGS="--network-plugin-dir=/etc/cni/net.d      ## directory scanned for network plugins
--network-plugin=cni                                   ## name of the network plugin to use
--cluster-dns=10.254.0.3                               ## IP address of the in-cluster DNS service
--cluster-domain=cluster.local"                        ## domain used by the in-cluster DNS service
~~~
Service unit file:
~~~
[root@GPRSDX1 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            --network-plugin-dir=/etc/cni/net.d \
            --network-plugin=cni \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
~~~

9.3.2 kube-proxy configuration
kube-proxy is the key component implementing communication and load balancing for kubernetes Services.
~~~
cat /etc/kubernetes/proxy |grep -v "^#" |grep -v "^$"
KUBE_PROXY_ARGS="--proxy-mode=iptables"
---- proxy mode
~~~
Service unit file:
~~~
[root@GPRSDX1 ~]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
~~~

9.4 Common parameter configuration
~~~
cat /etc/kubernetes/config |grep -v "^#" |grep -v "^$"
KUBE_LOGTOSTDERR="--logtostderr=true"
---- send logs to stderr
KUBE_LOG_LEVEL="--v=0"
---- log level
KUBE_ALLOW_PRIV="--allow-privileged=true"
---- allow pods to run privileged containers
KUBE_MASTER="--master=http://10.255.224.90:8089"
---- floating IP and port of the k8s master
~~~

9.5 Start and check the services
On the master node start:
~~~
systemctl start kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl start kube-scheduler.service
~~~
Check that they started:
~~~
systemctl status kube-apiserver.service
systemctl status kube-controller-manager.service
systemctl status kube-scheduler.service
~~~
or run:
`kubectl get componentstatuses`

On the node hosts start:
~~~
systemctl start kubelet.service
systemctl start kube-proxy.service
~~~
Check that they started:
~~~
systemctl status kube-proxy.service
systemctl status kubelet.service
~~~

9.6 Service overview
The master runs three components:

apiserver: the entry point of the kubernetes system; it wraps create/read/update/delete operations on the core objects and exposes them as a RESTful interface to external clients and internal components. The REST objects it maintains are persisted to etcd (a distributed, strongly consistent key/value store).

scheduler: responsible for cluster resource scheduling, assigning a machine to each newly created pod. Splitting this work into a separate component makes it easy to swap in a different scheduler.

controller-manager: runs the various controllers, currently of two kinds:
endpoint-controller: periodically associates services and pods (the association is maintained by endpoint objects), keeping the service-to-pod mapping up to date.
replication-controller: periodically associates replicationControllers and pods, keeping the number of running pods equal to the replica count defined by the replicationController.

The slave nodes (called minions) run two components:

kubelet: manages the docker containers, e.g. starting/stopping them and monitoring their state. It periodically fetches the pods assigned to the local machine from etcd and starts or stops containers accordingly. It also accepts HTTP requests from the apiserver and reports the pods' running state.

proxy: provides a proxy for pods. It periodically fetches all services from etcd and creates proxies based on the service information. When a client pod accesses another pod, the request is forwarded by the local proxy.
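As a final smoke test for Chapter 9, a small throwaway workload exercises scheduling, the calico CNI plugin and kube-proxy at once. A hedged sketch: the nginx image name is only an example and must be pullable by the nodes (for instance pushed to the local registry from Chapter 10 first), and the deployment name is arbitrary:
~~~
# create a two-replica test deployment
kubectl run nginx-test --image=nginx --replicas=2

# the pods should land on minion nodes and receive calico-assigned IPs
kubectl get pods -o wide

# expose them through a ClusterIP taken from 10.254.0.0/16 and check kube-proxy forwarding
kubectl expose deployment nginx-test --port=80
kubectl get svc nginx-test
~~~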
**Chapter 10 Installing the Local Image Registry**

The docker version in use is 1.13.0, and docker-registry has been replaced by docker-distribution:
`yum install docker-distribution`

Modify the docker configuration file /etc/sysconfig/docker (the INSECURE_REGISTRY entry is the part that needs to change):
~~~
# /etc/sysconfig/docker

# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi

# If you want to add your own registry to be used for docker search and docker
# pull use the ADD_REGISTRY option to list a set of registries, each prepended
# with --add-registry flag. The first registry added will be the first registry
# searched.
#ADD_REGISTRY='--add-registry registry.access.redhat.com'

# If you want to block registries from being used, uncomment the BLOCK_REGISTRY
# option and give it a set of registries, each prepended with --block-registry
# flag. For example adding docker.io will stop users from downloading images
# from docker.io
# BLOCK_REGISTRY='--block-registry'

# If you have a registry secured with https but do not have proper certs
# distributed, you can tell docker to not look for full authorization by
# adding the registry to the INSECURE_REGISTRY line and uncommenting it.
INSECURE_REGISTRY='--insecure-registry registry.jf.com:5000'

# On an SELinux system, if you remove the --selinux-enabled option, you
# also need to turn on the docker_transition_unconfined boolean.
# setsebool -P docker_transition_unconfined 1

# Location used for temporary files, such as those created by
# docker load and build operations. Default is /var/lib/docker/tmp
# Can be overriden by setting the following environment variable.
# DOCKER_TMPDIR=/var/tmp

# Controls the /etc/cron.daily/docker-logrotate cron job status.
# To disable, uncomment the line below.
# LOGROTATE=false
#
# docker-latest daemon can be used by starting the docker-latest unitfile.
# To use docker-latest client, uncomment below lines
#DOCKERBINARY=/usr/bin/docker-latest
#DOCKER_CONTAINERD_BINARY=/usr/bin/docker-containerd-latest
#DOCKER_CONTAINERD_SHIM_BINARY=/usr/bin/docker-containerd-shim-latest
~~~
Edit /etc/hosts and add an alias for the image registry, then restart the docker service and start docker-distribution:
~~~
systemctl restart docker
systemctl start docker-distribution
~~~

**Chapter 11 Installing Keepalived**

Keepalived is a web-service high-availability solution based on the VRRP protocol and is used to avoid single points of failure. Keepalived is installed on several nodes; the nodes provide the real service and present a single virtual IP to the outside. When the primary server goes down, a backup server takes over the virtual IP and continues to serve, which guarantees high availability.

11.1 Environment configuration
1. IP address of the primary Keepalived server
2. IP addresses of the backup Keepalived servers (currently one primary and two backups)
3. Keepalived virtual IP address (make sure the virtual IP does not conflict with any other host IP)

Software download: http://www.keepalived.org/software/keepalived-1.1.20.tar.gz

11.2 Installation steps

11.2.1 Upload Keepalived to the /home/ directory.

11.2.2 Extract the Keepalived package:
~~~
[root@localhost home]# tar -zxvf keepalived-1.1.20.tar.gz
[root@localhost home]# cd keepalived-1.1.20
[root@localhost keepalived-1.1.20]# ln -s /usr/src/kernels/2.6.9-78.EL-i686 /usr/src/linux
[root@localhost keepalived-1.1.20]# ./configure
~~~

11.2.3 Compile and install:
`[root@localhost keepalived-1.1.20]# make && make install`

11.2.4 Copy the configuration files into place:
~~~
[root@localhost keepalived-1.1.20]# cp /usr/local/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/
[root@localhost keepalived-1.1.20]# cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/
[root@localhost keepalived-1.1.20]# mkdir /etc/keepalived
[root@localhost keepalived-1.1.20]# cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/
[root@localhost keepalived-1.1.20]# cp /usr/local/sbin/keepalived /usr/sbin/
~~~

11.2.5 Register it as a service and enable it at boot.

11.2.6 Configure the primary keepalived
Edit the configuration file with `vi /etc/keepalived/keepalived.conf` (a minimal example configuration is sketched after section 11.3):
- state: either MASTER or BACKUP, in upper case; MASTER is the working state, BACKUP the standby state.
- interface: the NIC to bind to, according to the machine's network cards.
- virtual_router_id: virtual router identifier; the MASTER and BACKUP of the same vrrp_instance must use the same virtual_router_id.
- priority: within the same vrrp_instance, the MASTER's priority must be higher than the BACKUP's.
- advert_int 1: interval, in seconds, of the synchronization checks between the MASTER and BACKUP load balancers.
- authentication: authentication type and password; the main types are PASS and AH, with PASS being the one usually used.
- virtual_ipaddress: the virtual IP addresses; there can be several, one per line, without a subnet mask.

11.2.7 Configure the backup keepalived.

11.2.8 Start the service.

11.3 Verification
From another host, connect to the virtual IP over ssh and check that the connection reaches the primary Keepalived server's IP address.
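Section 11.2.6 lists the fields but does not reproduce a full keepalived.conf. The following is a minimal sketch of the MASTER instance, assuming eth0 as the bound NIC and using the floating IP 10.255.224.90 referenced in Chapter 9 as an example; interface name, router id, priority and password are placeholders to adapt:
~~~
vrrp_instance VI_1 {
    state MASTER                 # BACKUP on the standby nodes
    interface eth0               # NIC carrying the VIP (assumption)
    virtual_router_id 51         # must match on MASTER and BACKUP
    priority 100                 # higher than on the BACKUP nodes
    advert_int 1                 # sync check interval in seconds
    authentication {
        auth_type PASS
        auth_pass k8spass        # placeholder password
    }
    virtual_ipaddress {
        10.255.224.90            # floating IP used as the k8s master address
    }
}
~~~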
**Chapter 12 Problems Encountered During Setup**

1. After an etcd host fails and etcd is reinstalled on it, the service fails to start.
Remove the old member from one of the still-running members first, then add the new member. After installation, update the configuration and start etcd; if etcd was started before the old member had been removed, delete the data under data_dir. Set ETCD_INITIAL_CLUSTER_STATE="existing" in the configuration file.

2. calico fails to start with the following error:
~~~
Jun 5 19:58:10 WLWJFX7 systemd: Starting calicoctl node...
Jun 5 19:58:11 WLWJFX7 calicoctl: Invalid ETCD_AUTHORITY. Address must take the form <address>:<port>. Value
Jun 5 19:58:11 WLWJFX7 calicoctl: provided is
Jun 5 19:58:11 WLWJFX7 calicoctl: 'http://10.255.224.96:2379,http://10.255.224.97:2379,http://10.255.224.98:2379'
Jun 5 19:58:11 WLWJFX7 systemd: calico-node.service: main process exited, code=exited, status=1/FAILURE
~~~
Solution: run `find . -name "*calico*"`, remove the files that still contain ETCD_AUTHORITY, and restart calico.
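For problem 1, the member replacement can be done with the etcd v2 member commands. A hedged sketch, where the member ID and the GPRSDX1 name/peer URL are examples taken from the listings in Chapter 5:
~~~
# on a healthy member: find and remove the dead member's ID
etcdctl member list
etcdctl member remove 9e6e7b1fddbae15f          # example ID from the listing above

# register the reinstalled host before starting it
etcdctl member add GPRSDX1 http://10.255.224.17:2380

# on the reinstalled host: wipe stale data, mark the cluster as existing, then start
rm -rf /var/lib/etcd/default.etcd
sed -i 's/^ETCD_INITIAL_CLUSTER_STATE=.*/ETCD_INITIAL_CLUSTER_STATE="existing"/' /etc/etcd/etcd.conf
systemctl start etcd.service
~~~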