A good memory is no match for written notes, so I am recording this here for future reference.

  • Overview
    • Replica Set: a cluster with automatic failover. Compared with a classic master-slave setup, the key difference is that there is no fixed primary node; when the primary fails, the cluster automatically elects a new one. A replica set always consists of one primary and one or more secondaries. Data is replicated asynchronously across the members to provide redundancy and failover: at any given moment only the primary accepts writes, while reads can be distributed to the secondaries.
    • Sharding: splitting the data and distributing it across different servers. MongoDB supports automatic sharding.
    • Goals:
        1. High availability of the data through replica sets
        2. Better write throughput through sharding
        3. High availability of the service through a keepalived VIP
  • Server environment:

    • CentOS 7.4
      1. mongodb01 172.30.0.81
      2. mongodb02 172.30.0.82
      3. mongodb03 172.30.0.83
    • VIP: 172.30.0.80

    • Software:

      • MongoDB 3.6.4
      • keepalived
  • Software installation:

    • Add the MongoDB yum repository, then install MongoDB and keepalived
     vi /etc/yum.repos.d/mongodb-3.6.repo
    [mongodb-org-3.6]
    name=MongoDB Repository
    baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.6/x86_64/
    gpgcheck=1
    enabled=1
    gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc
    yum clean expire-cache
    yum install -y mongodb-org-3.6.4 mongodb-org-server-3.6.4 mongodb-org-shell-3.6.4 mongodb-org-mongos-3.6.4 mongodb-org-tools-3.6.4
    yum install epel-release -y && yum install keepalived -y
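
    A quick sanity check after installation (nothing assumed here beyond the packages installed above):

    mongod --version               # should report db version v3.6.4
    mongos --version
    rpm -qa | grep mongodb-org     # list the installed MongoDB packages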
  • Configure the Replica Set

    • mongodb81
    • mkdir -p /data/replica1_1
    • mkdir -p /data/replica2_1
    • Create the keyfile
      • openssl rand -base64 756 > /data/keyfile
    • Copy the keyfile to /data on 82 and 83 (see the sketch after this list)
    • mongodb82
    • mkdir -p /data/replica1_2
    • mkdir -p /data/replica2_2
    • mongodb83
    • mkdir -p /data/replica1_3
    • mkdir -p /data/replica2_3
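
    A minimal sketch of generating and distributing the keyfile, run on mongodb81. The scp targets assume root SSH access to the other hosts, and mongod refuses to start if the keyfile is group- or world-readable, hence the chmod/chown:

      openssl rand -base64 756 > /data/keyfile
      chmod 600 /data/keyfile && chown mongod:mongod /data/keyfile
      scp /data/keyfile root@172.30.0.82:/data/
      scp /data/keyfile root@172.30.0.83:/data/
      # repeat the chmod/chown on 82 and 83 after copying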

    • Edit the configuration files

      • mongod.conf

        # where to write logging data.
        systemLog:
          destination: file
          logAppend: true
          path: /var/log/mongodb/mongod.log

        # Where and how to store data.
        storage:
          dbPath: /data/replica1_1
          journal:
            enabled: true
          directoryPerDB: true

        # how the process runs
        processManagement:
          fork: true  # fork and run in background
          pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
          timeZoneInfo: /usr/share/zoneinfo

        # network interfaces
        net:
          port: 20000
          bindIp: 0.0.0.0  # listen on all interfaces

        security:
          authorization: enabled
          keyFile: /data/keyfile

        #operationProfiling:

        replication:
          replSetName: shard1

        sharding:
          clusterRole: shardsvr
      • On 82 and 83, only the storage path needs to be changed (/data/replica1_2 and /data/replica1_3 respectively)

      • On all three hosts run: systemctl enable mongod && systemctl start mongod
      • mongod2.conf (the second shard instance; see the unit sketch after this list)
        • On 81, 82 and 83 change the port to 20001 and the paths (dbPath /data/replica2_x, its own log file and pid file), and set replSetName to shard2
        • On all three hosts run: systemctl enable mongod2 && systemctl start mongod2
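
      The stock packages only ship a service for mongod, so systemctl enable mongod2 assumes a second unit exists. A minimal sketch, assuming the second instance reads its settings from /etc/mongod2.conf; the same pattern can be reused for the configd and mongos services referenced later:

        # /etc/systemd/system/mongod2.service  (hypothetical unit for the second shard instance)
        [Unit]
        Description=MongoDB shard2 instance
        After=network.target

        [Service]
        Type=forking
        User=mongod
        Group=mongod
        ExecStart=/usr/bin/mongod -f /etc/mongod2.conf
        PIDFile=/var/run/mongodb/mongod2.pid
        LimitNOFILE=64000
        LimitNPROC=32000

        [Install]
        WantedBy=multi-user.target

      After creating the unit, run systemctl daemon-reload; the enable/start commands above then apply.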

Note: before creating the users and passwords, leave authentication disabled (authorization: disabled); enable it again and restart afterwards.
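
The post does not show the user-creation step itself. A minimal sketch, run against the primary of shard1 on port 20000 before authentication is turned back on; the user name, password and role here are placeholders, not the author's values:

mongo 172.30.0.81:20000/admin
db.createUser({
  user: "admin",
  pwd: "CHANGE_ME",
  roles: [ { role: "root", db: "admin" } ]
})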

- Initialize the replica set. The default priority is 1; give 81 a priority of 5 so it is preferred as the primary.

rs.initiate(
  {
    _id: "shard1",
    members: [
      { _id: 0, host: "172.30.0.81:20000", priority: 5 },
      { _id: 1, host: "172.30.0.82:20000" },
      { _id: 2, host: "172.30.0.83:20000" }
    ]
  }
)

# Check the cluster status
shard1:PRIMARY> rs.status()
{
"set" : "shard1",
"date" : ISODate("2018-05-23T04:21:42.444Z"),
"myState" : 1,
"term" : NumberLong(1),
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1527049302, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1527049302, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1527049302, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1527049302, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "172.30.0.81:20000",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 74604,
"optime" : {
"ts" : Timestamp(1527049302, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-05-23T04:21:42Z"),
"electionTime" : Timestamp(1526975667, 1),
"electionDate" : ISODate("2018-05-22T07:54:27Z"),
"configVersion" : 1,
"self" : true
},
{
"_id" : 1,
"name" : "172.30.0.82:20000",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 73645,
"optime" : {
"ts" : Timestamp(1527049292, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1527049292, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-05-23T04:21:32Z"),
"optimeDurableDate" : ISODate("2018-05-23T04:21:32Z"),
"lastHeartbeat" : ISODate("2018-05-23T04:21:41.041Z"),
"lastHeartbeatRecv" : ISODate("2018-05-23T04:21:41.039Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "172.16.0.81:20000",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "172.30.0.83:20000",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 68986,
"optime" : {
"ts" : Timestamp(1527049292, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1527049292, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-05-23T04:21:32Z"),
"optimeDurableDate" : ISODate("2018-05-23T04:21:32Z"),
"lastHeartbeat" : ISODate("2018-05-23T04:21:41.042Z"),
"lastHeartbeatRecv" : ISODate("2018-05-23T04:21:41.041Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "172.30.0.81:20000",
"configVersion" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1527049302, 1)
}

# Check the configuration
shard1:PRIMARY> rs.conf();
{
"_id" : "shard1",
"version" : 1,
"protocolVersion" : NumberLong(1),
"members" : [
{
"_id" : 0,
"host" : "172.30.0.81:20000",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 5,
"tags" : {

},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 1,
"host" : "172.30.0.82:20000",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {

},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 2,
"host" : "172.30.0.83:20000",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {

},
"slaveDelay" : NumberLong(0),
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 10000,
"catchUpTimeoutMillis" : -1,
"catchUpTakeoverDelayMillis" : 30000,
"getLastErrorModes" : {

},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
},
"replicaSetId" : ObjectId("5b03cca8ce9b562609a9269c")
}
}
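
If a member's priority needs to be changed after the set is already running, the standard rs.reconfig() flow works; a minimal sketch, run on the current primary (add credentials once authentication is enabled):

mongo 172.30.0.81:20000/admin
cfg = rs.conf()
cfg.members[0].priority = 5    // index into the members array, not the _id
rs.reconfig(cfg)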

Config servers

  • Add the following configuration on 81, 82 and 83
mkdir /data/config && chown -R mongod:mongod /data/config

vi configd.conf
# configd.conf
# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/configd.log

# Where and how to store data.
storage:
  dbPath: /data/config
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/configd.pid  # location of pidfile

# network interfaces
net:
  bindIp: 0.0.0.0
  port: 30000

security:
  authorization: enabled
  keyFile: /data/keyfile

sharding:
  clusterRole: configsvr

replication:
  replSetName: configd

# Initialize the config server replica set
rs.initiate(
  {
    _id: "configd",
    members: [
      { _id: 0, host: "172.30.0.81:30000", priority: 5 },
      { _id: 1, host: "172.30.0.82:30000" },
      { _id: 2, host: "172.30.0.83:30000" }
    ]
  }
)

# Check the status
rs.status()
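
The post does not show how the config server process is started. A minimal sketch, assuming the file above is saved as /etc/configd.conf; alternatively create a configd.service unit following the same pattern as the mongod2 unit sketched earlier:

sudo -u mongod mongod -f /etc/configd.conf   # start the config server instance
mongo 172.30.0.81:30000                      # connect to it and run the rs.initiate() shown above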

Mongos servers

  • Add the following configuration on 81, 82 and 83
    vi /etc/mongos.conf
    # mongos.conf

    # for documentation of all options, see:
    # http://docs.mongodb.org/manual/reference/configuration-options/

    # where to write logging data.
    systemLog:
      destination: file
      logAppend: true
      path: /var/log/mongodb/mongos.log

    # how the process runs
    processManagement:
      fork: true  # fork and run in background
      pidFilePath: /var/run/mongodb/mongos.pid  # location of pidfile

    # network interfaces
    net:
      port: 40000
      bindIp: 0.0.0.0  # listen on all interfaces

    security:
      keyFile: /data/keyfile
      clusterAuthMode: keyFile

    # configuration for the mongos router
    sharding:
      configDB: configd/172.30.0.81:30000,172.30.0.82:30000,172.30.0.83:30000

    systemctl enable mongos && systemctl start mongos

    # Log in to mongos and add the shards
    mongo 172.30.0.80:40000/admin

    sh.addShard("shard1/172.30.0.81:20000,172.30.0.82:20000,172.30.0.83:20000")
    sh.addShard("shard2/172.30.0.81:20001,172.30.0.82:20001,172.30.0.83:20001")
    # Check the status
    sh.status()
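
    The post stops after adding the shards. To actually spread writes across shard1 and shard2 (goal 2), sharding still has to be enabled per database and collection; a minimal sketch run through mongos, where the database, collection and shard key are placeholders:

    mongo 172.30.0.80:40000/admin
    sh.enableSharding("mydb")                             // placeholder database name
    sh.shardCollection("mydb.mycoll", { _id: "hashed" })  // a hashed shard key distributes inserts across shards
    sh.status()                                           // chunks should now appear on both shard1 and shard2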

Configure keepalived

A check script watches the mongos process; if it has stopped, the VIP fails over to another node.
Add the following configuration files on 81, 82 and 83 (note that keepalived only honours nopreempt when the initial state is BACKUP, so the MASTER instance on 81 will still take the VIP back once it recovers):
MongoDB81
vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id mongodb80
}

vrrp_script chk_mongos {
    script "/usr/local/bin/check_mongos.sh"
    interval 5
}

vrrp_instance MongoDB80 {
    state MASTER
    interface eth0
    virtual_router_id 80
    priority 150
    advert_int 5
    nopreempt

    authentication {
        auth_type PASS
        auth_pass eddiewen20180522
    }

    virtual_ipaddress {
        172.30.0.80
    }

    track_interface {
        eth0
    }

    track_script {
        chk_mongos
    }
}

MongoDB82
vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id mongodb80
}

vrrp_script chk_mongos {
    script "/usr/local/bin/check_mongos.sh"
    interval 5
}

vrrp_instance MongoDB80 {
    state BACKUP
    interface eth0
    virtual_router_id 80
    priority 100
    advert_int 5
    nopreempt

    authentication {
        auth_type PASS
        auth_pass eddiewen20180522
    }

    virtual_ipaddress {
        172.30.0.80
    }

    track_interface {
        eth0
    }

    track_script {
        chk_mongos
    }
}

MongoDB83
vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id mongodb80
}

vrrp_script chk_mongos {
    script "/usr/local/bin/check_mongos.sh"
    interval 5
}

vrrp_instance MongoDB80 {
    state BACKUP
    interface eth0
    virtual_router_id 80
    priority 90
    advert_int 5
    nopreempt

    authentication {
        auth_type PASS
        auth_pass eddiewen20180522
    }

    virtual_ipaddress {
        172.30.0.80
    }

    track_interface {
        eth0
    }

    track_script {
        chk_mongos
    }
}

vi /usr/local/bin/check_mongos.sh
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
# count running mongos processes; if none remain, stop keepalived so the VIP moves away
count=$(ps aux | grep -v grep | grep mongos | wc -l)
if [ "$count" -gt 0 ]; then
    exit 0
else
    pkill keepalived
fi
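
The check script has to be executable for the vrrp_script block to work, and note that once it kills keepalived, the daemon must be started again by hand after mongos is back:

chmod +x /usr/local/bin/check_mongos.sh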

systemctl enable keepalived && systemctl start keepalived

ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 7a:2b:82:b7:2b:d7 brd ff:ff:ff:ff:ff:ff
inet 172.30.0.82/24 brd 172.30.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet 172.30.0.80/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::782b:82ff:feb7:2bd7/64 scope link
valid_lft forever preferred_lft forever
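
A quick way to verify the failover path, run on the node currently holding the VIP (the stop/start commands assume the mongos service unit discussed earlier):

systemctl stop mongos            # the check script fails, keepalived exits, the VIP moves to a BACKUP node
ip addr show eth0                # 172.30.0.80 should no longer appear here
mongo 172.30.0.80:40000/admin    # connections through the VIP are now answered by another mongos
systemctl start mongos && systemctl start keepalived    # restore this node afterwards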

Additional settings

  • WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'
  • WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'. We suggest setting it to 'never'

  • To address these startup warnings, add the following to /etc/rc.local

    • echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
    • echo "never" > /sys/kernel/mm/transparent_hugepage/defrag
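
    On CentOS 7, /etc/rc.local only runs at boot if it is marked executable, so after adding the lines above:

    chmod +x /etc/rc.d/rc.local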
  • Adjust the file descriptor and process limits for the mongod user

    vim /etc/security/limits.conf
    mongod soft nofile 64000
    mongod hard nofile 64000
    mongod soft nproc 32000
    mongod hard nproc 32000
    systemctl restart mongod
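
    To confirm the new limits are actually in effect, check them on the running process rather than in a login shell (each host runs several mongod instances, so pidof returns multiple PIDs; the first one is checked here):

    cat /proc/$(pidof mongod | awk '{print $1}')/limits | grep -Ei 'open files|processes'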


Original link: https://blog.icanwen.com/2018/05/23/MongoDB-replicaset-sharding/