The company has received 8 new servers that will form a small cluster. One of them will act as the batch-management server, and the other 7 make up the business architecture. You are responsible for the initial server configuration work.
The requirements are as follows:
[x] Passwordless, key-based authentication when connecting over ssh from the management server to any other server, with the keys distributed in batch. (batch distribution implemented as a shell script)
[x] Since there is no DNS server, every server must resolve the other servers' addresses through hosts entries, so /etc/hosts has to be distributed to all hosts in batch. (batch file distribution via ansible)
[x] The new servers need some basic initial optimization (a server optimization script) and a yum repository set up (the epel.repo source). (batch script distribution and execution via ansible)
[root@m01 ~]# cat /etc/redhat-release
CentOS release 6.8 (Final)
[root@m01 ~]# uname -r
2.6.32-642.el6.x86_64
Host network configuration:

Hostname | NIC eth0 | NIC eth1 | Role |
---|---|---|---|
lb01 | 10.0.0.5/24 | 172.16.1.5/24 | A1 - nginx load balancer 01 |
lb02 | 10.0.0.6/24 | 172.16.1.6/24 | A2 - nginx load balancer 02 |
web02 | 10.0.0.7/24 | 172.16.1.7/24 | B1 - apache web server |
web01 | 10.0.0.8/24 | 172.16.1.8/24 | B2 - nginx web server |
db01 | 10.0.0.51/24 | 172.16.1.51/24 | C3 - mysql database server |
nfs01 | 10.0.0.31/24 | 172.16.1.31/24 | C1 - NFS storage server |
backup | 10.0.0.41/24 | 172.16.1.41/24 | C2 - rsync backup server |
m01 | 10.0.0.61/24 | 172.16.1.61/24 | X - management server |
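The /etc/hosts file that gets distributed later in this walkthrough is never shown. A minimal sketch of the entries it would need, built from the table above and assuming name resolution over the internal 172.16.1.0/24 network (the real file may differ), would be appended below the default localhost lines:

172.16.1.5   lb01
172.16.1.6   lb02
172.16.1.7   web02
172.16.1.8   web01
172.16.1.51  db01
172.16.1.31  nfs01
172.16.1.41  backup
172.16.1.61  m01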
Download the epel repo and rebuild the yum cache
[root@m01 ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-6.repo
[root@m01 ~]# yum -y clean all
[root@m01 ~]# yum makecache
Install the sshpass tool
[root@m01 ~]# yum -y install sshpass
Create the key pair non-interactively
[root@m01 ~]# ssh-keygen -t dsa -f ~/.ssh/id_dsa -P ""
Generating public/private dsa key pair.
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
4d:01:91:98:be:02:89:ab:ce:63:4f:81:e3:ab:0b:f8 root@m01
The key's randomart image is: (randomart omitted)
[root@m01 ~]# ls ~/.ssh/
authorized_keys  id_dsa  id_dsa.pub  known_hosts
Command notes:
ssh-keygen: the command that generates a key pair
-t: key type for the pair (rsa or dsa)
-f: path, including file name, where the key pair is written
-P (uppercase): passphrase for the key pair (empty here for non-interactive use)
[root@m01 ~]# sshpass -p "password" ssh-copy-id -i ~/.ssh/id_dsa.pub "-o StrictHostKeyChecking=no root@172.16.1.31"
Now try logging into the machine, with "ssh '-o StrictHostKeyChecking=no root@172.16.1.31'", and check in: .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
Command notes:
sshpass: a helper purpose-built for non-interactive ssh connections
-p: the login password to supply
ssh-copy-id: tool that installs the public key on the remote host automatically
-i: path to the public key
-o StrictHostKeyChecking=no: skip the interactive host-key confirmation that a first ssh connection would otherwise trigger (the host key still gets recorded in known_hosts)
[root@m01 ~]# ssh root@172.16.1.31    # test succeeded: passwordless ssh login
Last login: Tue Mar 14 21:49:58 2017 from 172.16.1.1
[root@nfs01 ~]#
#!/bin/bash
# author: Mr.chen
# 2017-3-14
# description: batch distribution of the SSH key

User=root
passWord=            # fill in the Linux login password of the target hosts here

function YumBuild(){
    echo "Installing the epel yum repository, please wait..."
    cd /etc/yum.repos.d/ &&\
    [ -d bak ] || mkdir bak
    [ `find ./*.* -type f | wc -l` -gt 0 ] && find ./*.* -type f | xargs -i mv {} bak/
    wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-6.repo &>/dev/null
    yum -y clean all &>/dev/null
    yum makecache &>/dev/null
}

echo "Running the network connectivity test, please wait..."
ping www.baidu.com -c2 >/dev/null || { echo "Cannot reach the Internet; this script must be run on a host with Internet access!"; exit 1; }
[ $# -eq 0 ] && echo "No arguments! Usage: sh $0 ip1 ... ipN" && exit 1
rpm -q sshpass &>/dev/null || yum -y install sshpass &>/dev/null
if [ $? -gt 0 ];then
    YumBuild
    yum -y install sshpass &>/dev/null || { echo "sshpass install error!"; exit 1; }
fi
[ -d ~/.ssh ] || mkdir ~/.ssh
chmod 700 ~/.ssh
echo "Creating the key pair...."
rm -f ~/.ssh/id_dsa ~/.ssh/id_dsa.pub
ssh-keygen -t dsa -f ~/.ssh/id_dsa -P "" &>/dev/null
for ip in $*
do
    ping $ip -c1 &>/dev/null
    if [ $? -gt 0 ];then
        echo "$ip cannot be pinged, please check the network"
        continue
    fi
    sshpass -p "$passWord" ssh-copy-id -i ~/.ssh/id_dsa.pub "-o StrictHostKeyChecking=no ${User}@$ip" &>/dev/null &&\
    echo "$ip key distributed successfully"
done
Special note:
The script above is only meant to open up ideas!
If you want to get good at shell or programming, just reading is useless;
1. Learn (the fundamentals)
2. Read (for ideas)
3. Imitate (the style)
4. Practice (in your own time)
Keep that in mind....
[root@m01 yum.repos.d]# sh /server/scripts/ssh_key.sh 172.16.1.5 172.16.1.6 172.16.1.7 172.16.1.8 172.16.1.51 172.16.1.31 172.16.1.41 172.16.1.61
Running the network connectivity test, please wait...
Creating the key pair....
172.16.1.5 cannot be pinged, please check the network
172.16.1.6 cannot be pinged, please check the network
172.16.1.7 key distributed successfully
172.16.1.8 key distributed successfully
172.16.1.51 cannot be pinged, please check the network
172.16.1.31 key distributed successfully
172.16.1.41 key distributed successfully
172.16.1.61 key distributed successfully
Note:
Three of the servers were deliberately left powered off; the script test succeeded.
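A quick way to confirm the distribution really works on the reachable hosts is to loop over them with ssh's BatchMode enabled, which makes ssh fail instead of falling back to a password prompt (the IP list below is just the hosts that were up in this test):

[root@m01 ~]# for ip in 172.16.1.7 172.16.1.8 172.16.1.31 172.16.1.41 172.16.1.61; do ssh -o BatchMode=yes root@$ip hostname; done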
Install ansible (the epel.repo source is required)
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-6.repo
yum -y install ansible
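A quick sanity check that the install worked (the exact version reported depends on the epel snapshot):

[root@m01 ~]# ansible --version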
Configure the /etc/ansible/hosts file
[root@m01 ~]# tail -8 /etc/ansible/hosts
[chensiqi]
172.16.1.31
172.16.1.41
172.16.1.51
172.16.1.5
172.16.1.6
172.16.1.7
172.16.1.8
Since passwordless key-based authentication has already been set up, the /etc/ansible/hosts inventory only needs to list the IP addresses of the managed hosts.
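Before pushing any files, it is worth confirming that ansible can reach every host in the group over the newly distributed keys; the ping module (an ansible connectivity test, not ICMP) returns "pong" for each reachable host:

[root@m01 ~]# ansible chensiqi -m ping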
[root@m01 ~]# ansible chensiqi -m command -a "w"
172.16.1.6 | SUCCESS | rc=0 >>
08:47:40 up 12 min, 1 user, load average: 0.00, 0.01, 0.01
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 m01 08:47 0.00s 0.27s 0.01s /bin/sh -c /usr
172.16.1.41 | SUCCESS | rc=0 >>
22:48:28 up 1 day, 3:37, 2 users, load average: 0.00, 0.01, 0.05
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root tty1 - Sat03 1:15m 0.15s 0.15s -bash
root pts/0 m01 22:48 1.00s 0.33s 0.00s /bin/sh -c /usr
172.16.1.51 | SUCCESS | rc=0 >>
08:47:41 up 13 min, 1 user, load average: 0.08, 0.03, 0.05
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 m01 08:47 1.00s 0.29s 0.00s /bin/sh -c /usr
172.16.1.31 | SUCCESS | rc=0 >>
10:27:47 up 15:47, 2 users, load average: 0.16, 0.05, 0.06
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root tty1 - Mon20 20:56m 0.15s 0.15s -bash
root pts/0 m01 10:27 0.00s 0.26s 0.00s /bin/sh -c /usr
172.16.1.5 | SUCCESS | rc=0 >>
08:47:41 up 12 min, 1 user, load average: 0.00, 0.01, 0.03
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 m01 08:47 0.00s 0.20s 0.00s /bin/sh -c /usr
172.16.1.7 | SUCCESS | rc=0 >>
21:03:00 up 10:03, 2 users, load average: 0.05, 0.05, 0.05
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root tty1 - 11:00 2:03m 0.14s 0.14s -bash
root pts/0 m01 21:02 1.00s 0.18s 0.00s /bin/sh -c /usr
172.16.1.8 | SUCCESS | rc=0 >>
10:27:48 up 14:31, 2 users, load average: 0.00, 0.01, 0.05
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root tty1 - Sat09 20:03m 0.10s 0.10s -bash
root pts/0 m01 10:27 1.00s 0.16s 0.00s /bin/sh -c /usr
[root@m01 ~]# ansible chensiqi -m copy -a "src=/etc/hosts dest=/etc/hosts backup=yes"    # backup=yes: if a file already exists at the destination, back it up before overwriting it
172.16.1.51 | SUCCESS => {
"changed": true,
"checksum": "dba0126bf49ea8d4cdc476828f9edb37085c6afe",
"dest": "/etc/hosts",
"gid": 0,
"group": "root",
"md5sum": "09bad48d0c62411850fd04b68f836335",
"mode": "0644",
"owner": "root",
"secontext": "system_u:object_r:net_conf_t:s0",
"size": 294,
"src": "/root/.ansible/tmp/ansible-tmp-1489446564.45-249855699288208/source",
"state": "file",
"uid": 0
}
172.16.1.31 | SUCCESS => {
"changed": true,
"checksum": "dba0126bf49ea8d4cdc476828f9edb37085c6afe",
"dest": "/etc/hosts",
"gid": 0,
"group": "root",
"md5sum": "09bad48d0c62411850fd04b68f836335",
"mode": "0644",
"owner": "root",
"secontext": "system_u:object_r:net_conf_t:s0",
"size": 294,
"src": "/root/.ansible/tmp/ansible-tmp-1489446564.26-6373581674916/source",
"state": "file",
"uid": 0
}
172.16.1.41 | SUCCESS => {
"changed": true,
"checksum": "dba0126bf49ea8d4cdc476828f9edb37085c6afe",
"dest": "/etc/hosts",
"gid": 0,
"group": "root",
"md5sum": "09bad48d0c62411850fd04b68f836335",
"mode": "0644",
"owner": "root",
"secontext": "system_u:object_r:net_conf_t:s0",
"size": 294,
"src": "/root/.ansible/tmp/ansible-tmp-1489446564.37-90309519963188/source",
"state": "file",
"uid": 0
}
172.16.1.5 | SUCCESS => {
"changed": true,
"checksum": "dba0126bf49ea8d4cdc476828f9edb37085c6afe",
"dest": "/etc/hosts",
"gid": 0,
"group": "root",
"md5sum": "09bad48d0c62411850fd04b68f836335",
"mode": "0644",
"owner": "root",
"secontext": "system_u:object_r:net_conf_t:s0",
"size": 294,
"src": "/root/.ansible/tmp/ansible-tmp-1489446564.91-218095487370821/source",
"state": "file",
"uid": 0
}
172.16.1.6 | SUCCESS => {
"changed": true,
"checksum": "dba0126bf49ea8d4cdc476828f9edb37085c6afe",
"dest": "/etc/hosts",
"gid": 0,
"group": "root",
"md5sum": "09bad48d0c62411850fd04b68f836335",
"mode": "0644",
"owner": "root",
"secontext": "system_u:object_r:net_conf_t:s0",
"size": 294,
"src": "/root/.ansible/tmp/ansible-tmp-1489446564.92-48667872204035/source",
"state": "file",
"uid": 0
}
172.16.1.8 | SUCCESS => {
"changed": true,
"checksum": "dba0126bf49ea8d4cdc476828f9edb37085c6afe",
"dest": "/etc/hosts",
"gid": 0,
"group": "root",
"md5sum": "09bad48d0c62411850fd04b68f836335",
"mode": "0644",
"owner": "root",
"secontext": "system_u:object_r:net_conf_t:s0",
"size": 294,
"src": "/root/.ansible/tmp/ansible-tmp-1489446566.37-188264096277764/source",
"state": "file",
"uid": 0
}
172.16.1.7 | SUCCESS => {
"changed": true,
"checksum": "dba0126bf49ea8d4cdc476828f9edb37085c6afe",
"dest": "/etc/hosts",
"gid": 0,
"group": "root",
"md5sum": "09bad48d0c62411850fd04b68f836335",
"mode": "0644",
"owner": "root",
"secontext": "system_u:object_r:net_conf_t:s0",
"size": 294,
"src": "/root/.ansible/tmp/ansible-tmp-1489446566.39-64165112131501/source",
"state": "file",
"uid": 0
}
Special note:
If a file already exists at the destination path and is identical to the file you are copying, ansible's copy module will not transfer it again; it simply reports changed=false.
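Two illustrative follow-up checks (not part of the original walkthrough): comparing checksums confirms every node received the same /etc/hosts, and listing /etc/hosts* shows any backup copies that backup=yes created where the file was overwritten:

[root@m01 ~]# ansible chensiqi -m command -a "md5sum /etc/hosts"
[root@m01 ~]# ansible chensiqi -m shell -a "ls -l /etc/hosts*"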
#!/bin/bash
# author: Mr.chen
# 2017-3-15
# description: initial server optimization script + epel yum repository setup

function ServerSystemOptimize(){
    echo "Starting some basic server optimization...." && sleep 2
    /etc/init.d/iptables stop &>/dev/null && echo "Firewall has been stopped!" && sleep 1
    setenforce 0 &>/dev/null && echo "SELinux has been turned off!" || echo "SELinux was not enabled!"
    chkconfig iptables off && echo "Firewall has been removed from boot startup!" && sleep 1
    sed -i '7 s/enforcing/disabled/g' /etc/selinux/config && echo "SELinux has been disabled at boot!" && sleep 1
    A=`awk '/id:/ {print NR,$0}' /etc/inittab | awk '{print $1}'`
    sed -i "$A s/5/3/g" /etc/inittab && echo "The default runlevel has been permanently set to 3!" && sleep 1
    chkconfig --list | egrep -v "rsyslog|network|crond|sysstat|sshd" | awk '{print "chkconfig",$1,"off"}' | bash &>/dev/null && echo "Unnecessary services have been removed from boot startup!" && sleep 1
}

function YumBuild(){
    echo "Installing the epel yum repository, please wait..."
    cd /etc/yum.repos.d/ &&\
    [ -d bak ] || mkdir bak
    [ `find ./*.* -type f | wc -l` -gt 0 ] && find ./*.* -type f | xargs -i mv {} bak/
    wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-6.repo &>/dev/null
    yum -y clean all &>/dev/null
    yum makecache &>/dev/null
}

echo "The script is running a network connectivity test, please wait..."
ping www.baidu.com -c2 &>/dev/null || { echo "Cannot reach the Internet, or DNS resolution is broken; this script must be run on a host with Internet access!"; exit 1; }
YumBuild
ServerSystemOptimize
[root@m01 ~]# sh /server/scripts/server_uptimize.sh
The script is running a network connectivity test, please wait...
Installing the epel yum repository, please wait...
*****************Starting some basic server optimization....**********************
Firewall has been stopped!
SELinux has been turned off!
Firewall has been removed from boot startup!
SELinux has been disabled at boot!
The default runlevel has been permanently set to 3!
Unnecessary services have been removed from boot startup!
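As an aside, ansible's script module can copy a local script to the managed hosts and run it there in a single step, which is an alternative to the copy-then-shell sequence shown below:

[root@m01 ~]# ansible chensiqi -m script -a "/server/scripts/server_uptimize.sh"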
[root@m01 ~]# ansible chensiqi -m copy -a "src=/server/scripts/server_uptimize.sh dest=/server/scripts/ backup=yes"
172.16.1.6 | SUCCESS => {
"changed": true,
"checksum": "9d508da8cce8830722ac38ad274361601d33f43e",
"dest": "/server/scripts/server_uptimize.sh",
"gid": 0,
"group": "root",
"md5sum": "efeaffe8266992c190c1055241458259",
"mode": "0644",
"owner": "root",
"secontext": "system_u:object_r:default_t:s0",
"size": 1600,
"src": "/root/.ansible/tmp/ansible-tmp-1489449184.22-105813674245985/source",
"state": "file",
"uid": 0
}
172.16.1.5 | SUCCESS => {
"changed": true,
"checksum": "9d508da8cce8830722ac38ad274361601d33f43e",
"dest": "/server/scripts/server_uptimize.sh",
"gid": 0,
"group": "root",
"md5sum": "efeaffe8266992c190c1055241458259",
"mode": "0644",
"owner": "root",
"secontext": "system_u:object_r:default_t:s0",
"size": 1600,
"src": "/root/.ansible/tmp/ansible-tmp-1489449184.22-102726815232979/source",
"state": "file",
"uid": 0
}
172.16.1.51 | SUCCESS => {
"changed": true,
"checksum": "9d508da8cce8830722ac38ad274361601d33f43e",
"dest": "/server/scripts/server_uptimize.sh",
"gid": 0,
"group": "root",
"md5sum": "efeaffe8266992c190c1055241458259",
"mode": "0644",
"owner": "root",
"secontext": "system_u:object_r:default_t:s0",
"size": 1600,
"src": "/root/.ansible/tmp/ansible-tmp-1489449184.26-180721242166387/source",
"state": "file",
"uid": 0
}
172.16.1.41 | SUCCESS => {
"changed": false,
"checksum": "9d508da8cce8830722ac38ad274361601d33f43e",
"dest": "/server/scripts/server_uptimize.sh",
"gid": 0,
"group": "root",
"mode": "0644",
"owner": "root",
"path": "/server/scripts/server_uptimize.sh",
"secontext": "system_u:object_r:default_t:s0",
"size": 1600,
"state": "file",
"uid": 0
}
172.16.1.31 | SUCCESS => {
"changed": false,
"checksum": "9d508da8cce8830722ac38ad274361601d33f43e",
"dest": "/server/scripts/server_uptimize.sh",
"gid": 0,
"group": "root",
"mode": "0644",
"owner": "root",
"path": "/server/scripts/server_uptimize.sh",
"secontext": "system_u:object_r:default_t:s0",
"size": 1600,
"state": "file",
"uid": 0
}
172.16.1.8 | SUCCESS => {
"changed": false,
"checksum": "9d508da8cce8830722ac38ad274361601d33f43e",
"dest": "/server/scripts/server_uptimize.sh",
"gid": 0,
"group": "root",
"mode": "0644",
"owner": "root",
"path": "/server/scripts/server_uptimize.sh",
"secontext": "system_u:object_r:default_t:s0",
"size": 1600,
"state": "file",
"uid": 0
}
172.16.1.7 | SUCCESS => {
"changed": false,
"checksum": "9d508da8cce8830722ac38ad274361601d33f43e",
"dest": "/server/scripts/server_uptimize.sh",
"gid": 0,
"group": "root",
"mode": "0644",
"owner": "root",
"path": "/server/scripts/server_uptimize.sh",
"secontext": "system_u:object_r:default_t:s0",
"size": 1600,
"state": "file",
"uid": 0
[root@m01 ~]# ansible chensiqi -m shell -a "sh /server/scripts/server_uptimize.sh"
172.16.1.5 | SUCCESS | rc=0 >>
The script is running a network connectivity test, please wait...
Installing the epel yum repository, please wait...
*****************Starting some basic server optimization....**********************
Firewall has been stopped!
SELinux has been turned off!
Firewall has been removed from boot startup!
SELinux has been disabled at boot!
The default runlevel has been permanently set to 3!
Unnecessary services have been removed from boot startup!
172.16.1.6 | SUCCESS | rc=0 >>
The script is running a network connectivity test, please wait...
Installing the epel yum repository, please wait...
*****************Starting some basic server optimization....**********************
Firewall has been stopped!
SELinux has been turned off!
Firewall has been removed from boot startup!
SELinux has been disabled at boot!
The default runlevel has been permanently set to 3!
Unnecessary services have been removed from boot startup!
172.16.1.31 | SUCCESS | rc=0 >>
The script is running a network connectivity test, please wait...
Installing the epel yum repository, please wait...
*****************Starting some basic server optimization....**********************
Firewall has been stopped!
SELinux has been turned off!
Firewall has been removed from boot startup!
SELinux has been disabled at boot!
The default runlevel has been permanently set to 3!
Unnecessary services have been removed from boot startup!
172.16.1.41 | SUCCESS | rc=0 >>
The script is running a network connectivity test, please wait...
Installing the epel yum repository, please wait...
*****************Starting some basic server optimization....**********************
Firewall has been stopped!
SELinux has been turned off!
Firewall has been removed from boot startup!
SELinux has been disabled at boot!
The default runlevel has been permanently set to 3!
Unnecessary services have been removed from boot startup!
172.16.1.51 | SUCCESS | rc=0 >>
The script is running a network connectivity test, please wait...
Installing the epel yum repository, please wait...
*****************Starting some basic server optimization....**********************
Firewall has been stopped!
SELinux has been turned off!
Firewall has been removed from boot startup!
SELinux has been disabled at boot!
The default runlevel has been permanently set to 3!
Unnecessary services have been removed from boot startup!
172.16.1.8 | SUCCESS | rc=0 >>
The script is running a network connectivity test, please wait...
Installing the epel yum repository, please wait...
*****************Starting some basic server optimization....**********************
Firewall has been stopped!
SELinux has been turned off!
Firewall has been removed from boot startup!
SELinux has been disabled at boot!
The default runlevel has been permanently set to 3!
Unnecessary services have been removed from boot startup!
172.16.1.7 | SUCCESS | rc=0 >>
The script is running a network connectivity test, please wait...
Installing the epel yum repository, please wait...
*****************Starting some basic server optimization....**********************
Firewall has been stopped!
SELinux has been turned off!
Firewall has been removed from boot startup!
SELinux has been disabled at boot!
The default runlevel has been permanently set to 3!
Unnecessary services have been removed from boot startup!
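As a final, purely illustrative spot check that the optimization actually took effect on every node (SELinux in permissive mode, default runlevel set to 3):

[root@m01 ~]# ansible chensiqi -m shell -a "getenforce; grep '^id:' /etc/inittab"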