# Deploying a Kubernetes Cluster with Ansible

Environment:

| Host    | IP address      | Components                        |
| ------- | --------------- | --------------------------------- |
| ansible | 192.168.175.130 | ansible                           |
| master  | 192.168.175.140 | docker, kubectl, kubeadm, kubelet |
| node    | 192.168.175.141 | docker, kubectl, kubeadm, kubelet |
| node    | 192.168.175.142 | docker, kubectl, kubeadm, kubelet |

Check and debug commands:

```bash
$ ansible-playbook -v k8s-time-sync.yaml --syntax-check
$ ansible-playbook -v k8s-*.yaml -C
$ ansible-playbook -v k8s-yum-cfg.yaml -C --start-at-task="Clean origin dir" --step
$ ansible-playbook -v k8s-kernel-cfg.yaml --step
```

Inventory file: /root/ansible/hosts

```ini
[k8s_cluster]
master ansible_host=192.168.175.140
node1  ansible_host=192.168.175.141
node2  ansible_host=192.168.175.142

[k8s_cluster:vars]
ansible_port=22
ansible_user=root
ansible_password=hello123
```
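As a quick sanity check, the host lines can be parsed straight out of the inventory before running any playbook. This is only a sketch using a throwaway copy under `/tmp`; `ansible-inventory -i hosts --list` is the authoritative way to inspect an inventory.

```shell
# Throwaway copy standing in for /root/ansible/hosts shown above.
cat > /tmp/hosts.demo <<'EOF'
[k8s_cluster]
master ansible_host=192.168.175.140
node1  ansible_host=192.168.175.141
node2  ansible_host=192.168.175.142

[k8s_cluster:vars]
ansible_port=22
ansible_user=root
ansible_password=hello123
EOF

# Print "name ip" pairs from the host lines (those carrying ansible_host=).
awk 'match($2, /^ansible_host=/) { sub("ansible_host=", "", $2); print $1, $2 }' /tmp/hosts.demo
```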
  
### Network check: k8s-check.yaml

- Verify that every `k8s` host is reachable over the network;
- Verify that every `k8s` host runs an operating system version that meets the requirement;
  
```yml
- name: step01_check
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: check network
      shell:
        cmd: "ping -c 3 -m 2 {{ ansible_host }}"
      delegate_to: localhost

    - name: get system version
      shell: cat /etc/system-release
      register: system_release

    - name: check system version
      vars:
        system_version: "{{ system_release.stdout | regex_search('([7-9].[0-9]+).*?') }}"
        suitable_version: 7.5
      debug:
        msg: "{{ 'The version of the operating system is ' + system_version + ', suitable!' if (system_version | float >= suitable_version) else 'The version of the operating system is unsuitable' }}"
```
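The version check above hinges on `regex_search` plus a float comparison. The same logic can be traced in plain shell; the release string below is only a stand-in for the contents of /etc/system-release on a target host:

```shell
# Stand-in for /etc/system-release on a CentOS 7 host.
release="CentOS Linux release 7.9.2009 (Core)"

# Equivalent of the playbook's regex_search: grab the first major.minor token.
version=$(echo "$release" | grep -oE '[7-9]\.[0-9]+' | head -n1)
echo "$version"

# Equivalent of (system_version | float >= suitable_version);
# awk handles the floating-point comparison that [ ] cannot.
if awk -v v="$version" 'BEGIN { exit !(v >= 7.5) }'; then
    echo "suitable"
else
    echo "unsuitable"
fi
```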
  
Debug commands:

```bash
$ ansible-playbook --ssh-extra-args '-o StrictHostKeyChecking=no' -v -C k8s-check.yaml

$ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -v -C k8s-check.yaml

$ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -v k8s-check.yaml --start-at-task="get system version"
```

### Connection setup: k8s-conn-cfg.yaml

- Add k8s hostname entries to /etc/hosts on the Ansible server
- Generate a key pair and configure passwordless SSH from Ansible to every k8s host
```yml
- name: step02_conn_cfg
  hosts: k8s_cluster
  gather_facts: no
  vars_prompt:
    - name: RSA
      prompt: Generate RSA or not(Yes/No)?
      default: "no"
      private: no
    - name: password
      prompt: input your login password?
      default: "hello123"
  tasks:
    - name: Add DNS of k8s to ansible
      delegate_to: localhost
      lineinfile:
        path: /etc/hosts
        line: "{{ ansible_host }}  {{ inventory_hostname }}"
        backup: yes

    - name: Generate RSA
      run_once: true
      delegate_to: localhost
      shell:
        cmd: ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
        creates: /root/.ssh/id_rsa
      when: RSA | bool

    - name: Configure password free login
      delegate_to: localhost
      shell: |
        /usr/bin/ssh-keyscan {{ ansible_host }} >> /root/.ssh/known_hosts 2> /dev/null
        /usr/bin/ssh-keyscan {{ inventory_hostname }} >> /root/.ssh/known_hosts 2> /dev/null
        /usr/bin/sshpass -p'{{ password }}' ssh-copy-id root@{{ ansible_host }}
        #/usr/bin/sshpass -p'{{ password }}' ssh-copy-id root@{{ inventory_hostname }}

    - name: Test ssh
      shell: hostname
```
  
**Run:**
  
```bash  
$ ansible-playbook k8s-conn-cfg.yaml  
Generate RSA or not(Yes/No)? [no]: yes  
input your login password? [hello123]:  
  
PLAY [step02_conn_cfg] **********************************************************************************************************  
  
TASK [Add DNS of k8s to ansible] ************************************************************************************************  
ok: [master -> localhost]  
ok: [node1 -> localhost]  
ok: [node2 -> localhost]  
  
TASK [Generate RSA] *************************************************************************************************************  
changed: [master -> localhost]  
  
TASK [Configure password free login] ********************************************************************************************  
changed: [node1 -> localhost]  
changed: [master -> localhost]  
changed: [node2 -> localhost]  
  
TASK [Test ssh] *****************************************************************************************************************  
changed: [master]  
changed: [node1]  
changed: [node2]  
  
PLAY RECAP **********************************************************************************************************************  
master                     : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0  
node1                      : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0  
node2                      : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```

### Cluster DNS resolution: k8s-hosts-cfg.yaml

- Set each hostname
- Add resolution entries for every host to each other's /etc/hosts
```yml
- name: step03_cfg_host
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: set hostname
      hostname:
        name: "{{ inventory_hostname }}"
        use: systemd

    - name: Add dns to each other
      lineinfile:
        path: /etc/hosts
        backup: yes
        line: "{{ item.value.ansible_host }}  {{ item.key }}"
      loop: "{{ hostvars | dict2items }}"
      loop_control:
        label: "{{ item.key }} {{ item.value.ansible_host }}"
```
  
**Run:**
  
```bash  
$ ansible-playbook k8s-hosts-cfg.yaml  
  
PLAY [step03_cfg_host] **********************************************************************************************************  
  
TASK [set hostname] *************************************************************************************************************  
ok: [master]  
ok: [node1]  
ok: [node2]  
  
TASK [Add dns to each other] ****************************************************************************************************  
ok: [node2] => (item=node1 192.168.175.141)  
ok: [master] => (item=node1 192.168.175.141)  
ok: [node1] => (item=node1 192.168.175.141)  
ok: [node2] => (item=node2 192.168.175.142)  
ok: [master] => (item=node2 192.168.175.142)  
ok: [node1] => (item=node2 192.168.175.142)  
ok: [node2] => (item=master 192.168.175.140)  
ok: [master] => (item=master 192.168.175.140)  
ok: [node1] => (item=master 192.168.175.140)  
  
PLAY RECAP **********************************************************************************************************************  
master                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0  
node1                      : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0  
node2                      : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```

### Configure the yum repositories: k8s-yum-cfg.yaml

```yml
- name: step04_yum_cfg
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Create back-up directory
      file:
        path: /etc/yum.repos.d/org/
        state: directory

    - name: Back-up old Yum files
      shell:
        cmd: mv -f /etc/yum.repos.d/*.repo /etc/yum.repos.d/org/
        removes: /etc/yum.repos.d/org/

    - name: Add new Yum files
      copy:
        src: ./files_yum/
        dest: /etc/yum.repos.d/

    - name: Check yum.repos.d
      shell:
        cmd: ls /etc/yum.repos.d/*
```
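The `removes:` argument on the back-up task makes the shell command conditional: it runs only while the given path still exists. A shell sketch of the same guard, using /tmp stand-in paths instead of /etc/yum.repos.d:

```shell
# /tmp stand-ins for the org/ backup directory and a repo file.
backup=/tmp/yum-org.demo
mkdir -p "$backup"
touch /tmp/demo.repo

# `removes: /etc/yum.repos.d/org/` is equivalent to this existence guard:
# run the mv only when the backup directory is present.
[ -e "$backup" ] && mv -f /tmp/demo.repo "$backup"/
ls "$backup"
```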
  
### Time synchronization: k8s-time-sync.yaml
  
```yml
- name: step05_time_sync
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Start chronyd.service
      systemd:
        name: chronyd.service
        state: started
        enabled: yes

    - name: Modify time zone & clock
      shell: |
        cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
        clock -w
        hwclock -w

    - name: Check time now
      command: date
```
  
### Disable the iptables, firewalld, and NetworkManager services: k8s-net-service.yaml
  
```yml
- name: step06_net_service
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Stop some services for net
      systemd:
        name: "{{ item }}"
        state: stopped
        enabled: no
      loop:
        - firewalld
        - iptables
        - NetworkManager
```
  
**Run:**
  
```bash  
$ ansible-playbook -v k8s-net-service.yaml  
... ...  
failed: [master] (item=iptables) => {  
 "ansible_loop_var": "item", "changed": false, "item": "iptables"}  
  
MSG:  
  
Could not find the requested service iptables: host  
... ...  
  
PLAY RECAP **********************************************************************************************************************  
master                     : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0  
node1                      : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0  
node2                      : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
```

The failure is expected here: no `iptables` service unit exists on these hosts. Add `ignore_errors: yes` to the task if that case should be tolerated.

### Disable SELinux and swap: k8s-SE-swap-disable.yaml

```yml
- name: step07_SE_swap_disable
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: SElinux disabled
      lineinfile:
        path: /etc/selinux/config
        line: SELINUX=disabled
        regexp: ^SELINUX=
        state: present
        backup: yes

    - name: Swap disabled
      lineinfile:
        path: /etc/fstab
        line: '#\1'
        regexp: '(^/dev/mapper/centos-swap.*$)'
        backrefs: yes
        state: present
        backup: yes
```
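The `Swap disabled` task uses `backrefs` to comment out the swap entry in place. A shell sketch of the same edit against a throwaway copy of /etc/fstab (the two lines below are illustrative; on the real hosts the change takes effect only after `swapoff -a` or a reboot):

```shell
# Throwaway copy standing in for /etc/fstab on a target host.
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /        xfs   defaults 0 0
/dev/mapper/centos-swap swap     swap  defaults 0 0
EOF

# Same effect as the lineinfile task with backrefs:
# prefix the matching swap line with '#' ('&' is the whole match).
sed -i 's|^/dev/mapper/centos-swap.*$|#&|' /tmp/fstab.demo
grep 'centos-swap' /tmp/fstab.demo
```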
  
### Kernel settings: k8s-kernel-cfg.yaml
  
<!--After a reboot the modules may need to be reloaded; simply run this playbook again-->
  
```yml
- name: step08_kernel_cfg
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Create /etc/sysctl.d/kubernetes.conf
      copy:
        content: ''
        dest: /etc/sysctl.d/kubernetes.conf
        force: yes

    - name: Cfg bridge and ip_forward
      lineinfile:
        path: /etc/sysctl.d/kubernetes.conf
        line: "{{ item }}"
        state: present
      loop:
        - 'net.bridge.bridge-nf-call-ip6tables = 1'
        - 'net.bridge.bridge-nf-call-iptables = 1'
        - 'net.ipv4.ip_forward = 1'

    - name: Load cfg
      shell:
        # sysctl -p with no argument reads only /etc/sysctl.conf,
        # so pass the file explicitly
        cmd: |
          sysctl -p /etc/sysctl.d/kubernetes.conf
          modprobe br_netfilter
        removes: /etc/sysctl.d/kubernetes.conf

    - name: Check cfg
      shell:
        cmd: '[ $(lsmod | grep br_netfilter | wc -l) -ge 2 ] && exit 0 || exit 3'
```
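The `lineinfile` tasks above append each setting only if it is not already present. The same idempotent pattern in shell, against a /tmp stand-in for /etc/sysctl.d/kubernetes.conf:

```shell
# Stand-in path; writing the real /etc/sysctl.d/kubernetes.conf needs root.
conf=/tmp/kubernetes.conf.demo
: > "$conf"

add_lines() {
    for line in \
        'net.bridge.bridge-nf-call-ip6tables = 1' \
        'net.bridge.bridge-nf-call-iptables = 1' \
        'net.ipv4.ip_forward = 1'
    do
        # Append only when the exact line is missing (lineinfile's behaviour).
        grep -qxF "$line" "$conf" || echo "$line" >> "$conf"
    done
}

add_lines
add_lines   # second run is a no-op, like re-running the playbook
wc -l < "$conf"   # → 3
```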
  
**Run:**
  
```bash  
$ ansible-playbook -v k8s-kernel-cfg.yaml --step  
  
TASK [Check cfg] ****************************************************************************************************************  
changed: [master] => {  
 "changed": true, "cmd": "[ $(lsmod | grep br_netfilter | wc -l) -ge 2 ] && exit 0 || exit 3", "delta": "0:00:00.011574", "end": "2022-02-27 04:26:01.332896", "rc": 0, "start": "2022-02-27 04:26:01.321322"}  
changed: [node2] => {  
 "changed": true, "cmd": "[ $(lsmod | grep br_netfilter | wc -l) -ge 2 ] && exit 0 || exit 3", "delta": "0:00:00.016331", "end": "2022-02-27 04:26:01.351208", "rc": 0, "start": "2022-02-27 04:26:01.334877"}  
changed: [node1] => {  
 "changed": true, "cmd": "[ $(lsmod | grep br_netfilter | wc -l) -ge 2 ] && exit 0 || exit 3", "delta": "0:00:00.016923", "end": "2022-02-27 04:26:01.355983", "rc": 0, "start": "2022-02-27 04:26:01.339060"}  
  
PLAY RECAP **********************************************************************************************************************  
master                     : ok=4    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0  
node1                      : ok=4    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0  
node2                      : ok=4    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```

### Configure IPVS: k8s-ipvs-cfg.yaml

```yml
- name: step09_ipvs_cfg
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Install ipset and ipvsadm
      yum:
        name: "{{ item }}"
        state: present
      loop:
        - ipset
        - ipvsadm

    - name: Load modules
      shell: |
        modprobe -- ip_vs
        modprobe -- ip_vs_rr
        modprobe -- ip_vs_wrr
        modprobe -- ip_vs_sh
        modprobe -- nf_conntrack_ipv4

    - name: Check cfg
      shell:
        cmd: '[ $(lsmod | grep -e ip_vs -e nf_conntrack_ipv4 | wc -l) -ge 2 ] && exit 0 || exit 3'
```
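The `Check cfg` condition counts lines matched by several `grep -e` patterns. A self-contained trace against sample `lsmod` output (the module lines below are illustrative, not captured from a real host):

```shell
# Sample lsmod output standing in for a host where the modules are loaded.
lsmod_sample='ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4      15053  2
nf_conntrack          139264  7 ip_vs,nf_conntrack_ipv4'

# Each -e supplies one pattern; a line counts if it matches any of them.
count=$(echo "$lsmod_sample" | grep -c -e ip_vs -e nf_conntrack_ipv4)
echo "$count"

# The playbook's check passes when at least two matching lines are found.
[ "$count" -ge 2 ] && echo ok
```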
  
### Install Docker: k8s-docker-install.yaml
  
```yml
- name: step10_docker_install
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Install docker-ce
      yum:
        name: docker-ce-18.06.3.ce-3.el7
        state: present

    - name: Cfg docker
      copy:
        src: ./files_docker/daemon.json
        dest: /etc/docker/

    - name: Start docker
      systemd:
        name: docker.service
        state: started
        enabled: yes

    - name: Check docker version
      shell:
        cmd: docker --version
```
  
### Install the k8s components [kubeadm, kubelet, kubectl]: k8s-install-kubepkgs.yaml
  
```yml
- name: step11_k8s_install_kubepkgs
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Install k8s components
      yum:
        name: "{{ item }}"
        state: present
      loop:
        - kubeadm-1.17.4-0
        - kubelet-1.17.4-0
        - kubectl-1.17.4-0

    - name: Cfg k8s
      copy:
        src: ./files_k8s/kubelet
        dest: /etc/sysconfig/
        force: no
        backup: yes

    - name: Start kubelet
      systemd:
        name: kubelet.service
        state: started
        enabled: yes
```
  
### Pull the cluster images: k8s-apps-images.yaml
  
```yml
- name: step12_apps_images
  hosts: k8s_cluster
  gather_facts: no
  vars:
    apps:
      - kube-apiserver:v1.17.4
      - kube-controller-manager:v1.17.4
      - kube-scheduler:v1.17.4
      - kube-proxy:v1.17.4
      - pause:3.1
      - etcd:3.4.3-0
      - coredns:1.6.5
  vars_prompt:
    - name: cfg_python
      prompt: Do you need to install docker pkg for python(Yes/No)?
      default: "no"
      private: no
  tasks:
    - block:
        - name: Install python-pip
          yum:
            name: python-pip
            state: present

        - name: Install docker pkg for python
          shell:
            cmd: |
              pip install docker==4.4.4
              pip install websocket-client==0.32.0
            creates: /usr/lib/python2.7/site-packages/docker/
      when: cfg_python | bool

    - name: Pull images
      community.docker.docker_image:
        name: "registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}"
        source: pull
      loop: "{{ apps }}"

    - name: Tag images
      community.docker.docker_image:
        name: "registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}"
        repository: "k8s.gcr.io/{{ item }}"
        force_tag: yes
        source: local
      loop: "{{ apps }}"

    - name: Remove images for ali
      community.docker.docker_image:
        name: "registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}"
        state: absent
      loop: "{{ apps }}"
```
  
**Run:**
  
```bash  
$ ansible-playbook k8s-apps-images.yaml  
Do you need to install docker pkg for python(Yes/No)? [no]:  
  
PLAY [step12_apps_images] *******************************************************************************************************  
  
TASK [Install python-pip] *******************************************************************************************************  
skipping: [node1]  
skipping: [master]  
skipping: [node2]  
  
TASK [Install docker pkg for python] ********************************************************************************************  
skipping: [master]  
skipping: [node1]  
skipping: [node2]  
  
TASK [Pull images] **************************************************************************************************************  
changed: [node1] => (item=kube-apiserver:v1.17.4)  
changed: [node2] => (item=kube-apiserver:v1.17.4)  
changed: [master] => (item=kube-apiserver:v1.17.4)  
changed: [node1] => (item=kube-controller-manager:v1.17.4)  
changed: [master] => (item=kube-controller-manager:v1.17.4)  
changed: [node1] => (item=kube-scheduler:v1.17.4)  
changed: [master] => (item=kube-scheduler:v1.17.4)  
changed: [node1] => (item=kube-proxy:v1.17.4)  
changed: [node2] => (item=kube-controller-manager:v1.17.4)  
changed: [master] => (item=kube-proxy:v1.17.4)  
changed: [node1] => (item=pause:3.1)  
changed: [master] => (item=pause:3.1)  
changed: [node2] => (item=kube-scheduler:v1.17.4)  
changed: [node1] => (item=etcd:3.4.3-0)  
changed: [master] => (item=etcd:3.4.3-0)  
changed: [node2] => (item=kube-proxy:v1.17.4)  
changed: [node1] => (item=coredns:1.6.5)  
changed: [master] => (item=coredns:1.6.5)  
changed: [node2] => (item=pause:3.1)  
changed: [node2] => (item=etcd:3.4.3-0)  
changed: [node2] => (item=coredns:1.6.5)  
  
TASK [Tag images] ***************************************************************************************************************  
ok: [node1] => (item=kube-apiserver:v1.17.4)  
ok: [master] => (item=kube-apiserver:v1.17.4)  
ok: [node2] => (item=kube-apiserver:v1.17.4)  
ok: [node1] => (item=kube-controller-manager:v1.17.4)  
ok: [master] => (item=kube-controller-manager:v1.17.4)  
ok: [node2] => (item=kube-controller-manager:v1.17.4)  
ok: [master] => (item=kube-scheduler:v1.17.4)  
ok: [node1] => (item=kube-scheduler:v1.17.4)  
ok: [node2] => (item=kube-scheduler:v1.17.4)  
ok: [master] => (item=kube-proxy:v1.17.4)  
ok: [node1] => (item=kube-proxy:v1.17.4)  
ok: [node2] => (item=kube-proxy:v1.17.4)  
ok: [master] => (item=pause:3.1)  
ok: [node1] => (item=pause:3.1)  
ok: [node2] => (item=pause:3.1)  
ok: [master] => (item=etcd:3.4.3-0)  
ok: [node1] => (item=etcd:3.4.3-0)  
ok: [node2] => (item=etcd:3.4.3-0)  
ok: [master] => (item=coredns:1.6.5)  
ok: [node1] => (item=coredns:1.6.5)  
ok: [node2] => (item=coredns:1.6.5)  
  
TASK [Remove images for ali] ****************************************************************************************************  
changed: [master] => (item=kube-apiserver:v1.17.4)  
changed: [node2] => (item=kube-apiserver:v1.17.4)  
changed: [node1] => (item=kube-apiserver:v1.17.4)  
changed: [master] => (item=kube-controller-manager:v1.17.4)  
changed: [node1] => (item=kube-controller-manager:v1.17.4)  
changed: [node2] => (item=kube-controller-manager:v1.17.4)  
changed: [node1] => (item=kube-scheduler:v1.17.4)  
changed: [master] => (item=kube-scheduler:v1.17.4)  
changed: [node2] => (item=kube-scheduler:v1.17.4)  
changed: [master] => (item=kube-proxy:v1.17.4)  
changed: [node1] => (item=kube-proxy:v1.17.4)  
changed: [node2] => (item=kube-proxy:v1.17.4)  
changed: [node1] => (item=pause:3.1)  
changed: [master] => (item=pause:3.1)  
changed: [node2] => (item=pause:3.1)  
changed: [master] => (item=etcd:3.4.3-0)  
changed: [node1] => (item=etcd:3.4.3-0)  
changed: [node2] => (item=etcd:3.4.3-0)  
changed: [master] => (item=coredns:1.6.5)  
changed: [node1] => (item=coredns:1.6.5)  
changed: [node2] => (item=coredns:1.6.5)  
  
PLAY RECAP **********************************************************************************************************************  
master                     : ok=3    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0  
node1                      : ok=3    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0  
node2                      : ok=3    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
```

### Initialize the cluster: k8s-cluster-init.yaml

```yml
- name: step13_cluster_init
  hosts: master
  gather_facts: no
  tasks:
    - block:
        - name: Kubeadm init
          shell:
            cmd: kubeadm init --apiserver-advertise-address={{ ansible_host }} --kubernetes-version=v1.17.4 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers

        - name: Create /root/.kube
          file:
            path: /root/.kube/
            state: directory
            owner: root
            group: root

        - name: Copy /root/.kube/config
          copy:
            src: /etc/kubernetes/admin.conf
            dest: /root/.kube/config
            remote_src: yes
            backup: yes
            owner: root
            group: root

        - name: Copy kube-flannel
          copy:
            src: ./files_k8s/kube-flannel.yml
            dest: /root/
            backup: yes

        - name: Apply kube-flannel
          shell:
            cmd: kubectl apply -f /root/kube-flannel.yml

        - name: Get token
          shell:
            cmd: kubeadm token create --print-join-command
          register: join_token

        - name: debug join_token
          debug:
            var: join_token.stdout
```
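The registered `join_token.stdout` holds the full `kubeadm join` command for the worker nodes. If only the token itself is needed (for example, to template it into another play), it can be extracted like this; the token and hash below are made up for illustration:

```shell
# Illustrative join_token.stdout; the token and hash here are fabricated.
join_cmd='kubeadm join 192.168.175.140:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:1234567890abcdef'

# Print the field that follows --token.
token=$(echo "$join_cmd" | awk '{ for (i = 1; i <= NF; i++) if ($i == "--token") print $(i + 1) }')
echo "$token"   # → abcdef.0123456789abcdef
```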
Author: 子新
License: Unless otherwise stated, all articles on this site are licensed under CC BY-NC-SA 4.0. Please credit Subnew when reposting.