Linux Study Notes 085 - NoSQL: MongoDB sharding introduction, setup and testing; MongoDB backup and restore

Published on 2018-05-14


Introduction to MongoDB sharding

Sharding splits a database by distributing large collections across multiple servers. For example, 100 GB of data can be split into 10 pieces stored on 10 servers, so each machine holds only 10 GB.

Storage and access of the sharded data go through a mongos process (the router), which makes mongos the core of the sharding architecture. Clients never know whether sharding is in use; they simply send their reads and writes to mongos.

Although sharding spreads data across many servers, every node still needs standby members (a replica set) to keep the data highly available.

When the system needs more space or resources, sharding lets us scale out on demand: just add machines running mongodb to the shard cluster.

MongoDB sharding architecture diagram


MongoDB sharding concepts

mongos: the entry point for every request to the cluster. All requests are coordinated through mongos, so the application does not need its own routing layer; mongos is the dispatch center that forwards each data request to the right shard server. Production deployments usually run multiple mongos instances as entry points, so that if one dies the cluster remains reachable.

config server: the configuration servers store all of the cluster's metadata (routing and shard configuration). mongos does not persist the shard and routing information itself, it only caches it in memory; the config servers are where it actually lives. A mongos loads the configuration from the config servers on first start (or after a restart), and when the configuration changes the config servers notify every mongos to update its state, so routing stays accurate. Production deployments run multiple config servers, because the shard-routing metadata they hold must not be lost!

shard: a MongoDB instance that stores part of a collection's data. Each shard is a standalone mongod or a replica set; in production, every shard should be a replica set.

Setting up MongoDB sharding

Server plan

Three machines: A (240), B (242), C (243)

A runs: mongos, config server, replica set 1 primary, replica set 2 arbiter, replica set 3 secondary

B runs: mongos, config server, replica set 1 secondary, replica set 2 primary, replica set 3 arbiter

C runs: mongos, config server, replica set 1 arbiter, replica set 2 secondary, replica set 3 primary

Port allocation: mongos 20000, config server 21000, replica set 1 (shard1) 27001, replica set 2 (shard2) 27002, replica set 3 (shard3) 27003

On all three machines, stop the firewalld service and disable SELinux, or add firewall rules for the ports above; a sketch follows.
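For example, a minimal sketch on CentOS 7 (assuming bash and firewalld; setenforce 0 only lasts until reboot, edit /etc/selinux/config to make it permanent):

#option 1: turn both off
systemctl stop firewalld && systemctl disable firewalld
setenforce 0

#option 2: keep firewalld running and open only the cluster ports
firewall-cmd --permanent --add-port={20000,21000,27001,27002,27003}/tcp
firewall-cmd --reload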

Create the directories

[root@am-01:~#] mkdir -p /data/mongodb/mongos/log

[root@am-01:~#] mkdir -p /data/mongodb/config/{data,log}

[root@am-01:~#] mkdir -p /data/mongodb/shard1/{data,log}

[root@am-01:~#] mkdir -p /data/mongodb/shard2/{data,log}

[root@am-01:~#] mkdir -p /data/mongodb/shard3/{data,log}
[root@am-02:~#] mkdir -p /data/mongodb/mongos/log

[root@am-02:~#] mkdir -p /data/mongodb/config/{data,log}

[root@am-02:~#] mkdir -p /data/mongodb/shard1/{data,log}

[root@am-02:~#] mkdir -p /data/mongodb/shard2/{data,log}

[root@am-02:~#] mkdir -p /data/mongodb/shard3/{data,log}
[root@am-03:~#] mkdir -p /data/mongodb/mongos/log

[root@am-03:~#] mkdir -p /data/mongodb/config/{data,log}

[root@am-03:~#] mkdir -p /data/mongodb/shard1/{data,log}

[root@am-03:~#] mkdir -p /data/mongodb/shard2/{data,log}

[root@am-03:~#] mkdir -p /data/mongodb/shard3/{data,log}

Create the config file /etc/mongod/config.conf

Note: from MongoDB 3.4 onward, the config servers must run as a replica set

[root@am-01:~#] mkdir /etc/mongod/

[root@am-01:~#] vim /etc/mongod/config.conf

pidfilepath = /var/run/mongodb/configsrv.pid

dbpath = /data/mongodb/config/data

logpath = /data/mongodb/config/log/configsrv.log

logappend = true

bind_ip = 172.17.1.240

port = 21000

fork = true

configsvr = true

#marks this instance as a config server

replSet=configs

#replica set name

maxConns=20000

#maximum number of connections
[root@am-02:~#] mkdir /etc/mongod/

[root@am-02:~#] vim /etc/mongod/config.conf

pidfilepath = /var/run/mongodb/configsrv.pid

dbpath = /data/mongodb/config/data

logpath = /data/mongodb/config/log/configsrv.log

logappend = true

bind_ip = 172.17.1.242

port = 21000

fork = true

configsvr = true

replSet=configs

maxConns=20000
[root@am-03:~#] mkdir /etc/mongod/

[root@am-03:~#] vim /etc/mongod/config.conf

pidfilepath = /var/run/mongodb/configsrv.pid

dbpath = /data/mongodb/config/data

logpath = /data/mongodb/config/log/configsrv.log

logappend = true

bind_ip = 172.17.1.243

port = 21000

fork = true

configsvr = true

replSet=configs

maxConns=20000

Start the config server on all three machines

[root@am-01:~#] mongod -f /etc/mongod/config.conf

about to fork child process, waiting until server is ready for connections.

forked process: 87322

child process started successfully, parent exiting

[root@am-01:~#] ps aux | grep mongo

root      87322 14.1  4.6 1076212 46192 ?       Sl   19:38   0:01 mongod -f /etc/mongod/config.conf

[root@am-01:~#] netstat -lntp

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name   

tcp        0      0 172.17.1.240:21000      0.0.0.0:*               LISTEN      87322/mongod     
[root@am-02:~#] mongod -f /etc/mongod/config.conf

about to fork child process, waiting until server is ready for connections.

forked process: 12788

child process started successfully, parent exiting

[root@am-02:~#] ps aux | grep mongo

root      12788 25.0  5.6 1078404 57100 ?       Sl   19:40   0:02 mongod -f /etc/mongod/config.conf

[root@am-02:~#] netstat -lntp

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name   

tcp        0      0 172.17.1.242:21000      0.0.0.0:*               LISTEN      12788/mongod   
[root@am-03:~#] mongod -f /etc/mongod/config.conf

about to fork child process, waiting until server is ready for connections.

forked process: 12099

child process started successfully, parent exiting

[root@am-03:~#] ps aux | grep mongo

root      12099 22.0  4.5 1078396 46064 ?       Sl   19:41   0:01 mongod -f /etc/mongod/config.conf

[root@am-03:~#] netstat -lntp

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name   

tcp        0      0 172.17.1.243:21000      0.0.0.0:*               LISTEN      12099/mongod       

Log in to port 21000 on any one machine and initialize the replica set (mind the IP addresses here: replace them with your own!)

[root@am-01:~#] mongo --host 172.17.1.240 --port 21000

> config = { _id: "configs", members: [{_id : 0, host : "172.17.1.240:21000"},{_id : 1, host : "172.17.1.242:21000"},{_id : 2, host : "172.17.1.243:21000"}]}

{

       "_id" : "configs",

       "members" : [

              {

                     "_id" : 0,

                     "host" : "172.17.1.240:21000"

              },

              {

                     "_id" : 1,

                     "host" : "172.17.1.242:21000"

              },

              {

                     "_id" : 2,

                     "host" : "172.17.1.243:21000"

              }

       ]

}

> rs.initiate(config)

{

       "ok" : 1,

       "operationTime" : Timestamp(1526298372, 1),

       "$gleStats" : {

              "lastOpTime" : Timestamp(1526298372, 1),

              "electionId" : ObjectId("000000000000000000000000")

       },

       "$clusterTime" : {

              "clusterTime" : Timestamp(1526298372, 1),

              "signature" : {

                     "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

                     "keyId" : NumberLong(0)

              }

       }

}

configs:PRIMARY> rs.status()

{

       "set" : "configs",

       "date" : ISODate("2018-05-14T11:46:57.454Z"),

       "myState" : 1,

       "term" : NumberLong(1),

       "configsvr" : true,

       "heartbeatIntervalMillis" : NumberLong(2000),

       "optimes" : {

              "lastCommittedOpTime" : {

                     "ts" : Timestamp(1526298407, 1),

                     "t" : NumberLong(1)

              },

              "readConcernMajorityOpTime" : {

                     "ts" : Timestamp(1526298407, 1),

                     "t" : NumberLong(1)

              },

              "appliedOpTime" : {

                     "ts" : Timestamp(1526298407, 1),

                     "t" : NumberLong(1)

              },

              "durableOpTime" : {

                     "ts" : Timestamp(1526298407, 1),

                     "t" : NumberLong(1)

              }

       },

       "members" : [

              {

                     "_id" : 0,

                     "name" : "172.17.1.240:21000",

                     "health" : 1,

                     "state" : 1,

                     "stateStr" : "PRIMARY",

                     "uptime" : 492,

                     "optime" : {

                            "ts" : Timestamp(1526298407, 1),

                            "t" : NumberLong(1)

                     },

                     "optimeDate" : ISODate("2018-05-14T11:46:47Z"),

                     "infoMessage" : "could not find member to sync from",

                     "electionTime" : Timestamp(1526298383, 1),

                     "electionDate" : ISODate("2018-05-14T11:46:23Z"),

                     "configVersion" : 1,

                     "self" : true

              },

              {

                     "_id" : 1,

                     "name" : "172.17.1.242:21000",

                     "health" : 1,

                     "state" : 2,

                     "stateStr" : "SECONDARY",

                     "uptime" : 44,

                     "optime" : {

                            "ts" : Timestamp(1526298407, 1),

                            "t" : NumberLong(1)

                     },

                     "optimeDurable" : {

                            "ts" : Timestamp(1526298407, 1),

                            "t" : NumberLong(1)

                     },

                     "optimeDate" : ISODate("2018-05-14T11:46:47Z"),

                     "optimeDurableDate" : ISODate("2018-05-14T11:46:47Z"),

                     "lastHeartbeat" : ISODate("2018-05-14T11:46:57.331Z"),

                     "lastHeartbeatRecv" : ISODate("2018-05-14T11:46:56.250Z"),

                     "pingMs" : NumberLong(0),

                     "syncingTo" : "172.17.1.240:21000",

                     "configVersion" : 1

              },

              {

                     "_id" : 2,

                     "name" : "172.17.1.243:21000",

                     "health" : 1,

                     "state" : 2,

                     "stateStr" : "SECONDARY",

                     "uptime" : 44,

                     "optime" : {

                            "ts" : Timestamp(1526298407, 1),

                            "t" : NumberLong(1)

                     },

                     "optimeDurable" : {

                            "ts" : Timestamp(1526298407, 1),

                            "t" : NumberLong(1)

                     },

                     "optimeDate" : ISODate("2018-05-14T11:46:47Z"),

                     "optimeDurableDate" : ISODate("2018-05-14T11:46:47Z"),

                     "lastHeartbeat" : ISODate("2018-05-14T11:46:57.331Z"),

                     "lastHeartbeatRecv" : ISODate("2018-05-14T11:46:56.241Z"),

                     "pingMs" : NumberLong(0),

                     "syncingTo" : "172.17.1.240:21000",

                     "configVersion" : 1

              }

       ],

       "ok" : 1,

       "operationTime" : Timestamp(1526298407, 1),

       "$gleStats" : {

              "lastOpTime" : Timestamp(1526298372, 1),

              "electionId" : ObjectId("7fffffff0000000000000001")

       },

       "$clusterTime" : {

              "clusterTime" : Timestamp(1526298407, 1),

              "signature" : {

                     "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

                     "keyId" : NumberLong(0)

              }

       }

}

#As the output shows, the config server replica set is now configured; a quick cross-check is sketched below
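As an extra check, any member can be asked for its replica set state from the command line (a sketch; myState 1 means PRIMARY, 2 means SECONDARY):

mongo --host 172.17.1.242 --port 21000 --eval 'rs.status().myState'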

Shard configuration (needed on all three machines); here I wrote the files on one machine and distributed them with scp

[root@am-01:~#] vim /etc/mongod/shard1.conf

pidfilepath = /var/run/mongodb/shard1.pid

dbpath = /data/mongodb/shard1/data

logpath = /data/mongodb/shard1/log/shard1.log

logappend = true

bind_ip = 0.0.0.0

port = 27001

fork = true

replSet=shard1

#replica set name

shardsvr = true

maxConns=20000

#maximum number of connections

[root@am-01:~#] vim /etc/mongod/shard2.conf

pidfilepath = /var/run/mongodb/shard2.pid

dbpath = /data/mongodb/shard2/data

logpath = /data/mongodb/shard2/log/shard2.log

logappend = true

bind_ip = 0.0.0.0

port = 27002

fork = true

replSet=shard2

shardsvr = true

maxConns=20000

[root@am-01:~#] vim /etc/mongod/shard3.conf

pidfilepath = /var/run/mongodb/shard3.pid

dbpath = /data/mongodb/shard3/data

logpath = /data/mongodb/shard3/log/shard3.log

logappend = true

bind_ip = 0.0.0.0

port = 27003

fork = true

replSet=shard3

shardsvr = true

maxConns=20000

[root@am-01:~#] cd /etc/mongod/

[root@am-01:/etc/mongod#] scp shard1.conf shard2.conf shard3.conf 172.17.1.242:/etc/mongod/

root@172.17.1.242's password:

shard1.conf                                                                                100%  269     0.3KB/s   00:00   

shard2.conf                                                                                100%  231     0.2KB/s   00:00   

shard3.conf                                                                                100%  232     0.2KB/s   00:00  

[root@am-01:/etc/mongod#] scp shard1.conf shard2.conf shard3.conf 172.17.1.243:/etc/mongod/

root@172.17.1.243's password:

shard1.conf                                                                                100%  269     0.3KB/s   00:00   

shard2.conf                                                                                100%  231     0.2KB/s   00:00   

shard3.conf                                                                                100%  232     0.2KB/s   00:00
[root@am-02:~#] cd /etc/mongod/

[root@am-02:/etc/mongod#] ls

config.conf  shard1.conf  shard2.conf  shard3.conf
[root@am-03:~#] cd /etc/mongod/

[root@am-03:/etc/mongod#] ls

config.conf  shard1.conf  shard2.conf  shard3.conf

Start shard1 and configure its replica set

[root@am-01:~#] mongod -f /etc/mongod/shard1.conf

about to fork child process, waiting until server is ready for connections.

forked process: 88344

child process started successfully, parent exiting
[root@am-02:~#] mongod -f /etc/mongod/shard1.conf

about to fork child process, waiting until server is ready for connections.

forked process: 13558

child process started successfully, parent exiting
[root@am-03:~#] mongod -f /etc/mongod/shard1.conf

about to fork child process, waiting until server is ready for connections.

forked process: 12813

child process started successfully, parent exiting
[root@am-01:~#] mongo --port 27001

#Initialize the replica set from port 27001 on 240 or 242 (either works); 243 will not do, because for shard1 we made the 27001 instance on 243 the arbiter

> use admin

switched to db admin

> config = { _id: "shard1", members: [{_id : 0, host : "172.17.1.240:27001"}, {_id: 1,host : "172.17.1.242:27001"},{_id : 2, host : "172.17.1.243:27001",arbiterOnly:true}]}

{

       "_id" : "shard1",

       "members" : [

              {

                     "_id" : 0,

                     "host" : "172.17.1.240:27001"

              },

              {

                     "_id" : 1,

                     "host" : "172.17.1.242:27001"

              },

              {

                     "_id" : 2,

                     "host" : "172.17.1.243:27001",

                     "arbiterOnly" : true

              }

       ]

}

> rs.initiate(config)

{ "ok" : 1 }

shard1:PRIMARY> rs.status()

{

       "set" : "shard1",

       "date" : ISODate("2018-05-14T15:57:16.047Z"),

       "myState" : 1,

       "term" : NumberLong(1),

       "heartbeatIntervalMillis" : NumberLong(2000),

       "optimes" : {

              "lastCommittedOpTime" : {

                     "ts" : Timestamp(1526313435, 1),

                     "t" : NumberLong(1)

              },

              "readConcernMajorityOpTime" : {

                     "ts" : Timestamp(1526313435, 1),

                     "t" : NumberLong(1)

              },

              "appliedOpTime" : {

                     "ts" : Timestamp(1526313435, 1),

                     "t" : NumberLong(1)

              },

              "durableOpTime" : {

                     "ts" : Timestamp(1526313435, 1),

                     "t" : NumberLong(1)

              }

       },

       "members" : [

              {

                     "_id" : 0,

                     "name" : "172.17.1.240:27001",

                     "health" : 1,

                     "state" : 1,

                     "stateStr" : "PRIMARY",

                     "uptime" : 2310,

                     "optime" : {

                            "ts" : Timestamp(1526313435, 1),

                            "t" : NumberLong(1)

                     },

                     "optimeDate" : ISODate("2018-05-14T15:57:15Z"),

                     "electionTime" : Timestamp(1526313274, 1),

                     "electionDate" : ISODate("2018-05-14T15:54:34Z"),

                     "configVersion" : 1,

                     "self" : true

              },

              {

                     "_id" : 1,

                     "name" : "172.17.1.242:27001",

                     "health" : 1,

                     "state" : 2,

                     "stateStr" : "SECONDARY",

                     "uptime" : 173,

                     "optime" : {

                            "ts" : Timestamp(1526313425, 1),

                            "t" : NumberLong(1)

                     },

                     "optimeDurable" : {

                            "ts" : Timestamp(1526313425, 1),

                            "t" : NumberLong(1)

                     },

                     "optimeDate" : ISODate("2018-05-14T15:57:05Z"),

                     "optimeDurableDate" : ISODate("2018-05-14T15:57:05Z"),

                     "lastHeartbeat" : ISODate("2018-05-14T15:57:14.225Z"),

                     "lastHeartbeatRecv" : ISODate("2018-05-14T15:57:14.662Z"),

                     "pingMs" : NumberLong(0),

                     "syncingTo" : "172.17.1.240:27001",

                     "configVersion" : 1

              },

              {

                     "_id" : 2,

                     "name" : "172.17.1.243:27001",

                     "health" : 1,

                     "state" : 7,

                     "stateStr" : "ARBITER",

                     "uptime" : 173,

                     "lastHeartbeat" : ISODate("2018-05-14T15:57:14.225Z"),

                     "lastHeartbeatRecv" : ISODate("2018-05-14T15:57:14.536Z"),

                     "pingMs" : NumberLong(0),

                     "configVersion" : 1

              }

       ],

       "ok" : 1

}

Start shard2 and configure its replica set

[root@am-01:~#] mongod -f /etc/mongod/shard2.conf

about to fork child process, waiting until server is ready for connections.

forked process: 88672

child process started successfully, parent exiting
[root@am-02:~#] mongod -f /etc/mongod/shard2.conf

about to fork child process, waiting until server is ready for connections.

forked process: 13365

child process started successfully, parent exiting
[root@am-03:~#] mongod -f /etc/mongod/shard2.conf

about to fork child process, waiting until server is ready for connections.

forked process: 12853

child process started successfully, parent exiting
[root@am-02:~#] mongo --port 27002

#Initialize the replica set from port 27002 on 242 or 243 (either works); 240 will not do, because for shard2 we made the 27002 instance on 240 the arbiter

> use admin

switched to db admin

> config = { _id: "shard2", members: [{_id : 0, host : "172.17.1.240:27002" ,arbiterOnly:true},{_id : 1, host : "172.17.1.242:27002"},{_id : 2, host : "172.17.1.243:27002"}]}

{

       "_id" : "shard2",

       "members" : [

              {

                     "_id" : 0,

                     "host" : "172.17.1.240:27002",

                     "arbiterOnly" : true

              },

              {

                     "_id" : 1,

                     "host" : "172.17.1.242:27002"

              },

              {

                     "_id" : 2,

                     "host" : "172.17.1.243:27002"

              }

       ]

}

> rs.initiate(config)

{ "ok" : 1 }

shard2:PRIMARY> rs.status()

{

       "set" : "shard2",

       "date" : ISODate("2018-05-14T16:24:23.877Z"),

       "myState" : 1,

       "term" : NumberLong(1),

       "heartbeatIntervalMillis" : NumberLong(2000),

       "optimes" : {

              "lastCommittedOpTime" : {

                     "ts" : Timestamp(1526315054, 1),

                     "t" : NumberLong(1)

              },

              "readConcernMajorityOpTime" : {

                     "ts" : Timestamp(1526315054, 1),

                     "t" : NumberLong(1)

              },

              "appliedOpTime" : {

                     "ts" : Timestamp(1526315054, 1),

                     "t" : NumberLong(1)

              },

              "durableOpTime" : {

                     "ts" : Timestamp(1526315054, 1),

                     "t" : NumberLong(1)

              }

       },

       "members" : [

              {

                     "_id" : 0,

                     "name" : "172.17.1.240:27002",

                     "health" : 1,

                     "state" : 7,

                     "stateStr" : "ARBITER",

                     "uptime" : 41,

                     "lastHeartbeat" : ISODate("2018-05-14T16:24:23.150Z"),

                     "lastHeartbeatRecv" : ISODate("2018-05-14T16:24:22.633Z"),

                     "pingMs" : NumberLong(0),

                     "configVersion" : 1

              },

              {

                     "_id" : 1,

                     "name" : "172.17.1.242:27002",

                     "health" : 1,

                     "state" : 1,

                     "stateStr" : "PRIMARY",

                     "uptime" : 290,

                     "optime" : {

                            "ts" : Timestamp(1526315054, 1),

                            "t" : NumberLong(1)

                     },

                     "optimeDate" : ISODate("2018-05-14T16:24:14Z"),

                     "infoMessage" : "could not find member to sync from",

                     "electionTime" : Timestamp(1526315033, 1),

                     "electionDate" : ISODate("2018-05-14T16:23:53Z"),

                     "configVersion" : 1,

                     "self" : true

              },

              {

                     "_id" : 2,

                     "name" : "172.17.1.243:27002",

                     "health" : 1,

                     "state" : 2,

                     "stateStr" : "SECONDARY",

                     "uptime" : 41,

                     "optime" : {

                            "ts" : Timestamp(1526315054, 1),

                            "t" : NumberLong(1)

                     },

                     "optimeDurable" : {

                            "ts" : Timestamp(1526315054, 1),

                            "t" : NumberLong(1)

                     },

                     "optimeDate" : ISODate("2018-05-14T16:24:14Z"),

                     "optimeDurableDate" : ISODate("2018-05-14T16:24:14Z"),

                     "lastHeartbeat" : ISODate("2018-05-14T16:24:23.149Z"),

                     "lastHeartbeatRecv" : ISODate("2018-05-14T16:24:23.306Z"),

                     "pingMs" : NumberLong(0),

                     "syncingTo" : "172.17.1.242:27002",

                     "configVersion" : 1

              }

       ],

       "ok" : 1

}

Start shard3 and configure its replica set

[root@am-01:~#] mongod -f /etc/mongod/shard3.conf

about to fork child process, waiting until server is ready for connections.

forked process: 88845

child process started successfully, parent exiting
[root@am-02:~#] mongod -f /etc/mongod/shard3.conf

about to fork child process, waiting until server is ready for connections.

forked process: 14503

child process started successfully, parent exiting
[root@am-03:~#] mongod -f /etc/mongod/shard3.conf

about to fork child process, waiting until server is ready for connections.

forked process: 12652

child process started successfully, parent exiting
[root@am-03:~#] mongo --port 27003

#Initialize the replica set from port 27003 on 240 or 243 (either works); 242 will not do, because for shard3 we made the 27003 instance on 242 the arbiter

> use admin

switched to db admin

> config = { _id: "shard3", members: [ {_id : 0, host : "172.17.1.240:27003"},  {_id : 1, host : "172.17.1.242:27003", arbiterOnly:true}, {_id : 2, host : "172.17.1.243:27003"}] }

{

       "_id" : "shard3",

       "members" : [

              {

                     "_id" : 0,

                     "host" : "172.17.1.240:27003"

              },

              {

                     "_id" : 1,

                     "host" : "172.17.1.242:27003",

                     "arbiterOnly" : true

              },

              {

                     "_id" : 2,

                     "host" : "172.17.1.243:27003"

              }

       ]

}

> rs.initiate(config)

{ "ok" : 1 }

shard3:PRIMARY> rs.status()

{

       "set" : "shard3",

       "date" : ISODate("2018-05-14T16:31:00.846Z"),

       "myState" : 1,

       "term" : NumberLong(1),

       "heartbeatIntervalMillis" : NumberLong(2000),

       "optimes" : {

              "lastCommittedOpTime" : {

                     "ts" : Timestamp(1526315453, 2),

                     "t" : NumberLong(1)

              },

              "readConcernMajorityOpTime" : {

                     "ts" : Timestamp(1526315453, 2),

                     "t" : NumberLong(1)

              },

              "appliedOpTime" : {

                     "ts" : Timestamp(1526315453, 2),

                     "t" : NumberLong(1)

              },

              "durableOpTime" : {

                     "ts" : Timestamp(1526315453, 2),

                     "t" : NumberLong(1)

              }

       },

       "members" : [

              {

                     "_id" : 0,

                     "name" : "172.17.1.240:27003",

                     "health" : 1,

                     "state" : 2,

                     "stateStr" : "SECONDARY",

                     "uptime" : 19,

                     "optime" : {

                            "ts" : Timestamp(1526315453, 2),

                            "t" : NumberLong(1)

                     },

                     "optimeDurable" : {

                            "ts" : Timestamp(1526315453, 2),

                            "t" : NumberLong(1)

                     },

                     "optimeDate" : ISODate("2018-05-14T16:30:53Z"),

                     "optimeDurableDate" : ISODate("2018-05-14T16:30:53Z"),

                     "lastHeartbeat" : ISODate("2018-05-14T16:31:00.122Z"),

                     "lastHeartbeatRecv" : ISODate("2018-05-14T16:31:00.524Z"),

                     "pingMs" : NumberLong(0),

                     "syncingTo" : "172.17.1.243:27003",

                     "configVersion" : 1

              },

              {

                     "_id" : 1,

                     "name" : "172.17.1.242:27003",

                     "health" : 1,

                     "state" : 7,

                     "stateStr" : "ARBITER",

                     "uptime" : 19,

                     "lastHeartbeat" : ISODate("2018-05-14T16:31:00.122Z"),

                     "lastHeartbeatRecv" : ISODate("2018-05-14T16:31:00.685Z"),

                     "pingMs" : NumberLong(0),

                     "configVersion" : 1

              },

              {

                     "_id" : 2,

                     "name" : "172.17.1.243:27003",

                     "health" : 1,

                     "state" : 1,

                     "stateStr" : "PRIMARY",

                     "uptime" : 133,

                     "optime" : {

                            "ts" : Timestamp(1526315453, 2),

                            "t" : NumberLong(1)

                     },

                     "optimeDate" : ISODate("2018-05-14T16:30:53Z"),

                     "infoMessage" : "could not find member to sync from",

                     "electionTime" : Timestamp(1526315452, 1),

                     "electionDate" : ISODate("2018-05-14T16:30:52Z"),

                     "configVersion" : 1,

                     "self" : true

              }

       ],

       "ok" : 1

}

Configure the routing server (mongos) and start it; add the config file on all three machines

[root@am-01:~#] vim /etc/mongod/mongos.conf

pidfilepath = /var/run/mongodb/mongos.pid

logpath = /data/mongodb/mongos/log/mongos.log

logappend = true

bind_ip = 0.0.0.0

port = 20000

fork = true

configdb = configs/172.17.1.240:21000,172.17.1.242:21000,172.17.1.243:21000

#the config server replica set this mongos uses, in the form replicaSetName/host1,host2,host3; configs is the config server replica set's name

maxConns=20000

#maximum number of connections

[root@am-01:~#] scp /etc/mongod/mongos.conf 172.17.1.242:/etc/mongod/

root@172.17.1.242's password:

mongos.conf                                                                                100%  360     0.4KB/s   00:00   

[root@am-01:~#] scp /etc/mongod/mongos.conf 172.17.1.243:/etc/mongod/

root@172.17.1.243's password:

mongos.conf                                                                                100%  360     0.4KB/s   00:00

[root@am-01:~#] mongos -f /etc/mongod/mongos.conf

about to fork child process, waiting until server is ready for connections.

forked process: 89001

child process started successfully, parent exiting
[root@am-02:~#] mongos -f /etc/mongod/mongos.conf

about to fork child process, waiting until server is ready for connections.

forked process: 14582

child process started successfully, parent exiting
[root@am-03:~#] mongos -f /etc/mongod/mongos.conf

about to fork child process, waiting until server is ready for connections.

forked process: 13449

child process started successfully, parent exiting

Enable sharding: wire every shard up to the router

[root@am-01:~#] mongo --port 20000

#log in to port 20000 on any machine

mongos> sh.addShard("shard1/172.17.1.240:27001,172.17.1.242:27001,172.17.1.243:27001")

#command that adds the shard1 replica set to the cluster

{

       "shardAdded" : "shard1",

       "ok" : 1,

       "$clusterTime" : {

              "clusterTime" : Timestamp(1526316611, 6),

              "signature" : {

                     "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

                     "keyId" : NumberLong(0)

              }

       },

       "operationTime" : Timestamp(1526316611, 6)

}

mongos> sh.addShard("shard2/172.17.1.240:27002,172.17.1.242:27002,172.17.1.243:27002")

{

       "shardAdded" : "shard2",

       "ok" : 1,

       "$clusterTime" : {

              "clusterTime" : Timestamp(1526316628, 5),

              "signature" : {

                     "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

                     "keyId" : NumberLong(0)

              }

       },

       "operationTime" : Timestamp(1526316628, 5)

}

mongos> sh.addShard("shard3/172.17.1.240:27003,172.17.1.242:27003,172.17.1.243:27003")

{

       "shardAdded" : "shard3",

       "ok" : 1,

       "$clusterTime" : {

              "clusterTime" : Timestamp(1526316637, 2),

              "signature" : {

                     "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

                     "keyId" : NumberLong(0)

              }

       },

       "operationTime" : Timestamp(1526316637, 2)

}

mongos> sh.status()

#check the cluster status

--- Sharding Status ---

  sharding version: {

      "_id" : 1,

      "minCompatibleVersion" : 5,

      "currentVersion" : 6,

      "clusterId" : ObjectId("5af9771136ae86eb47ab3c27")

  }

  shards:

        {  "_id" : "shard1",  "host" : "shard1/172.17.1.240:27001,172.17.1.242:27001",  "state" : 1 }

        {  "_id" : "shard2",  "host" : "shard2/172.17.1.242:27002,172.17.1.243:27002",  "state" : 1 }

        {  "_id" : "shard3",  "host" : "shard3/172.17.1.240:27003,172.17.1.243:27003",  "state" : 1 }

  active mongoses:

        "3.6.4" : 3

  autosplit:

        Currently enabled: yes

  balancer:

        Currently enabled:  yes

        Currently running:  no

        Failed balancer rounds in last 5 attempts:  0

        Migration Results for the last 24 hours:

                No recent migrations

  databases:

        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }

Testing MongoDB sharding

Log in to port 20000 on any machine

[root@am-01:~#] mongo --port 20000

mongos> use admin

switched to db admin

mongos> sh.enableSharding("testdb")

#pick the database to shard (created if it does not exist); the equivalent command is db.runCommand({enablesharding:"testdb"})

{

       "ok" : 1,

       "$clusterTime" : {

              "clusterTime" : Timestamp(1526316850, 9),

              "signature" : {

                     "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

                     "keyId" : NumberLong(0)

              }

       },

       "operationTime" : Timestamp(1526316850, 9)

}

mongos> sh.shardCollection("testdb.table1",{"id":1})

#pick the collection to shard and its shard key; the equivalent command is db.runCommand({shardcollection:"testdb.table1",key:{id:1}})

{

       "collectionsharded" : "testdb.table1",

       "collectionUUID" : UUID("47e1ec7c-1570-4556-85a4-30b414c1586c"),

       "ok" : 1,

       "$clusterTime" : {

              "clusterTime" : Timestamp(1526316932, 6),

              "signature" : {

                     "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

                     "keyId" : NumberLong(0)

              }

       },

       "operationTime" : Timestamp(1526316932, 6)

}
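One caveat worth knowing: a monotonically increasing shard key like id sends every new insert to the chunk holding the highest values, which can bottleneck one shard. A hashed shard key spreads inserts more evenly; a sketch, using a hypothetical collection testdb.table2:

mongos> sh.shardCollection("testdb.table2",{"id":"hashed"})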

mongos> use  testdb

switched to db testdb

#switch to the database

mongos> for (var i = 1; i <= 10000; i++) db.table1.save({id:i,"test1":"testval1"})

WriteResult({ "nInserted" : 1 })

#insert test data; the sketch below shows how to check where it landed
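To see how the inserted documents are spread across the shards, the standard getShardDistribution() shell helper can be run against the collection:

mongos> db.table1.getShardDistribution()

#prints per-shard document counts and data sizes for testdb.table1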

mongos> sh.enableSharding("am")

mongos> sh.shardCollection("am.a",{"id":1})

mongos> sh.status()

#as sh.status() shows, each sharded database is assigned a primary shard when sharding is enabled for it, so different databases can land on different shards; with this little data every collection still has only one chunk (see the sketch after this output)

  databases:

        {  "_id" : "am",  "primary" : "shard1",  "partitioned" : true }

                am.a

                        shard key: { "id" : 1 }

                        unique: false

                        balancing: true

                        chunks:

                                shard1   1

                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)

                testdb.table1

                        shard key: { "id" : 1 }

                        unique: false

                        balancing: true

                        chunks:

                                shard2   1

                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0)

MongoDB backup and restore

Back up a specific database

[root@am-01:~#] mongodump --host 127.0.0.1 --port 20000 -d testdb -o /tmp/mongobak

[root@am-01:~#] ls /tmp/mongobak/testdb/

a.bson  am1.bson  am1.metadata.json  am2.bson  am2.metadata.json  a.metadata.json  table1.bson  table1.metadata.json

#the .bson files hold the actual data; the .json files hold each collection's metadata (one pair per collection). The sketch below shows how to peek inside a .bson file
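To inspect a .bson dump without restoring it, the bsondump tool that ships with MongoDB converts it to readable JSON; for example:

bsondump /tmp/mongobak/testdb/table1.bson | head -n 1

#prints the first document of the dump as JSON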

Back up all databases

[root@am-01:~#] mongodump --host 127.0.0.1 --port 20000 -o /tmp/mongobak

[root@am-01:~#] ls /tmp/mongobak/

admin  am  config  testdb
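A full dump like this is easy to schedule. A minimal crontab sketch (assuming a /data/backup directory exists; note that % must be escaped in crontab entries):

#crontab -e: dump the whole cluster to a dated directory every day at 03:00
0 3 * * * mongodump --host 127.0.0.1 --port 20000 -o /data/backup/$(date +\%F)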

Back up a specific collection

[root@am-01:~#] mongodump --host 127.0.0.1 --port 20000 -d testdb -c table1 -o /tmp/mongobak1/

2018-05-15T01:29:53.356+0800    writing testdb.table1 to

2018-05-15T01:29:53.484+0800    done dumping testdb.table1 (10000 documents)

[root@am-01:~#] ls /tmp/mongobak1/testdb/

table1.bson  table1.metadata.json

Export a collection to a JSON file

[root@am-01:~#] mongoexport --host 127.0.0.1 --port 20000 -d testdb -c table1 -o /tmp/mongobak1/1.json

2018-05-15T01:32:35.310+0800    connected to: 127.0.0.1:20000

2018-05-15T01:32:35.915+0800    exported 10000 records

[root@am-01:~#] ls /tmp/mongobak1/

1.json  testdb
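mongoexport can also write CSV if the fields are listed explicitly, e.g.:

mongoexport --host 127.0.0.1 --port 20000 -d testdb -c table1 --type=csv -f id,test1 -o /tmp/mongobak1/1.csv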

Drop all the databases

[root@am-01:~#] mongo --port 20000

mongos> use testdb

switched to db testdb

mongos> db.dropDatabase()

{

       "dropped" : "testdb",

       "ok" : 1,

       "$clusterTime" : {

              "clusterTime" : Timestamp(1526319346, 11),

              "signature" : {

                     "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

                     "keyId" : NumberLong(0)

              }

       },

       "operationTime" : Timestamp(1526319346, 11)

}

mongos> show databases

admin   0.000GB

am      0.000GB

config  0.001GB

mongos> use am

switched to db am

mongos> db.dropDatabase()

{

       "dropped" : "am",

       "ok" : 1,

       "$clusterTime" : {

              "clusterTime" : Timestamp(1526319428, 19),

              "signature" : {

                     "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

                     "keyId" : NumberLong(0)

              }

       },

       "operationTime" : Timestamp(1526319428, 19)

}

mongos> exit

bye

Restore all databases

[root@am-01:~#] rm -rf /tmp/mongobak/config

[root@am-01:~#] rm -rf /tmp/mongobak/admin

#the config and admin databases are critical system databases, so we do not restore them (the sketch below shows how to skip them without deleting the dump)
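Instead of deleting the dumped directories, mongorestore can be told to skip those namespaces; a sketch using --nsExclude (available since mongorestore 3.4):

mongorestore -h 127.0.0.1 --port 20000 --nsExclude 'admin.*' --nsExclude 'config.*' /tmp/mongobak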

[root@am-01:~#] mongorestore -h 127.0.0.1 --port 20000 --drop /tmp/mongobak

#give the dump directory after the options; --drop is optional and deletes the existing data before restoring, so it is not recommended unless that is exactly what you want (see the --dryRun sketch below)
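If you are unsure what a restore will touch, --dryRun (mongorestore 3.4+) reports what would be restored without writing anything:

mongorestore -h 127.0.0.1 --port 20000 --dryRun /tmp/mongobak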

[root@am-01:~#] mongo --port 20000

mongos> show databases

admin   0.000GB

am      0.000GB

config  0.001GB

testdb  0.000GB

Restore a specific database

[root@am-01:~#] mongorestore -h 127.0.0.1 --port 20000 -d testdb --drop /tmp/mongobak/testdb

#-d names the database to restore into; the last argument is the directory that database was dumped to

Restore a collection

[root@am-01:~#] mongorestore -h 127.0.0.1 --port 20000 -d testdb -c table1 /tmp/mongobak/testdb/table1.bson

#-c names the collection to restore; the last argument is the path to the .bson file produced when the testdb database was dumped

Import a collection

[root@am-01:~#] mongoimport -h 127.0.0.1 --port 20000 -d testdb -c table1 --file /tmp/mongobak1/1.json

#JSON exports are loaded back with mongoimport; mongorestore works with the .bson dumps
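Afterwards you can verify the import from the mongos shell, for example by counting the documents:

mongos> use testdb
mongos> db.table1.count()

#compare against the 10000 records exported earlier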