**1. Error when adding monitors**
~~~
[wlwjfx25][DEBUG ] connected to host: WLWJFX32
[wlwjfx25][INFO ] Running command: ssh -CT -o BatchMode=yes wlwjfx25
[wlwjfx25][DEBUG ] connection detected need for sudo
sudo: sorry, you must have a tty to run sudo
[ceph_deploy][ERROR ] RuntimeError: connecting to host: wlwjfx25 resulted in errors: IOError cannot send (already closed?)
~~~
**Solution:**
When running scripts under a different (non-login) account, sudo often fails with "sudo: sorry, you must have a tty to run sudo". Adjusting the sudo configuration fixes this:
~~~
visudo            # safer than editing /etc/sudoers with plain vi
# Comment out the "Defaults requiretty" line:
#Defaults requiretty
~~~
This option makes sudo require a real tty by default; commenting it out lets sudo run from non-interactive (background) sessions.
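Disabling requiretty globally loosens the restriction for every account. A narrower alternative (a sketch; the account name `cephdeploy` is a placeholder for whatever deploy user you actually use) is to exempt only the deployment user in a drop-in file:

~~~
# /etc/sudoers.d/cephdeploy  (edit with: visudo -f /etc/sudoers.d/cephdeploy)
Defaults:cephdeploy !requiretty
cephdeploy ALL=(ALL) NOPASSWD: ALL
~~~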
**2. Error from ceph-deploy mon create-initial**
~~~
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
~~~
Add the following to the cluster configuration file (ceph.conf):
~~~
[osd]
osd max object name len = 256        # required, or creating the mon fails
osd max object namespace len = 64    # same as above
rbd default features = 1
~~~
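After editing ceph.conf on the admin node, the file has to reach the other hosts before retrying. A sketch of the usual ceph-deploy sequence (host names taken from the cluster logs above):

~~~
ceph-deploy --overwrite-conf config push WLWJFX23 WLWJFX24 WLWJFX25
ceph-deploy mon create-initial
~~~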
**3. Ceph status is HEALTH_WARN**
~~~
[root@WLWJFX62 ~]# ceph -s
cluster e062ce71-bfb3-4895-8373-6203de2fa793
health HEALTH_WARN
too few PGs per OSD (10 < min 30)
monmap e1: 3 mons at {WLWJFX23=10.255.213.133:6789/0,WLWJFX24=10.255.213.134:6789/0,WLWJFX25=10.255.213.135:6789/0}
election epoch 10, quorum 0,1,2 WLWJFX23,WLWJFX24,WLWJFX25
mdsmap e7: 1/1/1 up {0=WLWJFX34=up:active}
osdmap e611: 145 osds: 145 up, 145 in
pgmap v1283: 512 pgs, 3 pools, 11667 bytes data, 20 objects
742 GB used, 744 TB / 785 TB avail
512 active+clean
~~~
Running `ceph health` shows the reason:
~~~
[root@WLWJFX62 ~]# ceph health
HEALTH_WARN too few PGs per OSD (10 < min 30)
~~~
The pg_num and pgp_num of the pools need to be increased.
1. List the pools:
~~~
[root@WLWJFX23 ceph]# ceph osd pool stats
pool rbd id 0
nothing is going on
pool fs_data id 3
nothing is going on
pool fs_metadata id 4
nothing is going on
~~~
2. Get the current pg_num and pgp_num of each pool:
~~~
ceph osd pool get fs_data pg_num
ceph osd pool get fs_data pgp_num
ceph osd pool get fs_metadata pg_num
ceph osd pool get fs_metadata pgp_num
~~~
3. Increase pg_num and pgp_num for each pool:
~~~
ceph osd pool set fs_data pg_num 512
ceph osd pool set fs_data pgp_num 512
ceph osd pool set fs_metadata pg_num 512
ceph osd pool set fs_metadata pgp_num 512
~~~
Check again with `ceph -s`:
~~~
[root@WLWJFX23 ceph]# ceph -s
cluster e062ce71-bfb3-4895-8373-6203de2fa793
health HEALTH_WARN
too few PGs per OSD (26 < min 30)
monmap e1: 3 mons at {WLWJFX23=10.255.213.133:6789/0,WLWJFX24=10.255.213.134:6789/0,WLWJFX25=10.255.213.135:6789/0}
election epoch 10, quorum 0,1,2 WLWJFX23,WLWJFX24,WLWJFX25
mdsmap e7: 1/1/1 up {0=WLWJFX34=up:active}
osdmap e627: 145 osds: 145 up, 145 in
pgmap v1352: 1280 pgs, 3 pools, 11667 bytes data, 20 objects
742 GB used, 744 TB / 785 TB avail
1280 active+clean
~~~
If the warning `too few PGs per OSD (26 < min 30)` still appears, pg_num and pgp_num must be increased further; the values should be **powers of two**.
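The warning is driven by the ratio (total PGs × replica count) / (number of OSDs): 512 PGs × 3 replicas / 145 OSDs ≈ 10, and after the increase 1280 × 3 / 145 ≈ 26, matching the two warnings above. A minimal sketch of the common sizing rule (targeting about 100 PGs per OSD, rounded up to a power of two; the replica size of 3 and the function names are assumptions for illustration, not part of any Ceph tool):

```python
def pgs_per_osd(total_pgs, replica_size, num_osds):
    """The ratio Ceph checks for the 'too few PGs per OSD' warning."""
    return total_pgs * replica_size // num_osds

def recommended_pg_num(num_osds, num_pools, replica_size, target_per_osd=100):
    """Per-pool pg_num that lands the cluster near target_per_osd PGs per OSD."""
    raw = num_osds * target_per_osd / (replica_size * num_pools)
    power = 1
    while power < raw:          # round up to the next power of two
        power *= 2
    return power

# The cluster above: 145 OSDs, 3 pools, assumed replica size 3.
print(pgs_per_osd(512, 3, 145))       # 10 -> matches the first warning
print(pgs_per_osd(1280, 3, 145))      # 26 -> matches the second warning
print(recommended_pg_num(145, 3, 3))  # 2048 per pool
```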
Note that pg_num can only be increased, never decreased:
~~~
[root@mon1 ~]# ceph osd pool set rbd pg_num 64
Error EEXIST: specified pg_num 64 <= current 128
~~~
**4. Error when creating OSDs**
~~~
[ceph_deploy][ERROR ] RuntimeError: bootstrap-osd keyring not found; run 'gatherkeys'
~~~
Log in to the admin (jump) node and run:
~~~
ceph-deploy gatherkeys WLWJFX{64..72}
~~~
Things to watch out for:
1. Ceph 10 (Jewel) expects the glibc shipped with CentOS 7 (1611); with older libraries you will hit incompatibility errors.
2. Check that the clocks on all hosts are in sync.
3.
~~~
[root@xhw342 ~]# yum -y install yum-plugin-priorities
Loaded plugins: fastestmirror
CentOS7_1611-media | 3.6 kB 00:00:00
ZStack | 3.6 kB 00:00:00
ceph-jewel | 2.9 kB 00:00:00
ceph-jewel_deprpm | 2.9 kB 00:00:00
ceph-jewel_noarch | 2.9 kB 00:00:00
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Disable the repository, so yum won't use it by default. Yum will then
just ignore the repository until you permanently enable it again or use
--enablerepo for temporary usage:
yum-config-manager --disable <repoid>
4. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot retrieve metalink for repository: epel/x86_64. Please verify its path and try again
~~~
Solution:
Go to /etc/yum.repos.d and delete epel.repo and epel-testing.repo.
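Deleting the repo files is irreversible. A reversible alternative (a sketch, using the two workarounds the yum error message itself suggests) is to disable the repos, or keep them but skip them when unreachable:

~~~
yum-config-manager --disable epel epel-testing
# or, keep them enabled but skip when the mirror is unreachable:
yum-config-manager --save --setopt=epel.skip_if_unavailable=true
~~~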
4.
~~~
[xhw342][DEBUG ] Configure Yum priorities to include obsoletes
[xhw342][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[xhw342][INFO ] Running command: rpm --import https://download.ceph.com/keys/release.asc
[xhw342][WARNIN] curl: (6) Could not resolve host: download.ceph.com; Unknown error
[xhw342][WARNIN] error: https://download.ceph.com/keys/release.asc: import read failed(2).
[xhw342][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: rpm --import https://download.ceph.com/keys/release.asc
~~~
Here the node cannot resolve download.ceph.com ("Could not resolve host"), so importing the release key fails; check DNS resolution (/etc/resolv.conf) on the node, or import the key from a local copy.
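When the nodes have no route to download.ceph.com, one workaround (a sketch; the file paths are placeholders) is to fetch the release key on a machine that does have internet access, copy it over, and import it locally before re-running ceph-deploy:

~~~
# on a machine with internet access:
curl -o release.asc https://download.ceph.com/keys/release.asc
scp release.asc root@xhw342:/tmp/release.asc
# on the failing node:
rpm --import /tmp/release.asc
~~~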