Category Archives: mongodb

MongoDB not preallocating journal files

While installing MongoDB 2.6.2 I ran into something odd: the journal files on a replica-set data node were not preallocated. Normally, when mongod is first started it
preallocates the journal files: 128 MB each when smallfiles is enabled, 1 GB each otherwise.

Here is the log from the arbiter node:

2014-06-17T11:50:09.842+0800 [initandlisten] MongoDB starting : pid=4749 port=27017 dbpath=/data/mongodb/data 64-bit host=vm-3-57
2014-06-17T11:50:09.844+0800 [initandlisten] db version v2.6.2
2014-06-17T11:50:09.844+0800 [initandlisten] git version: 4d06e27876697d67348a397955b46dabb8443827
2014-06-17T11:50:09.844+0800 [initandlisten] build info: Linux build10.nj1.10gen.cc 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 BOOST_LIB_VERSION=1_49
2014-06-17T11:50:09.844+0800 [initandlisten] allocator: tcmalloc
2014-06-17T11:50:09.844+0800 [initandlisten] options: { config: "/data/mongodb/mongod.cnf", net: { http: { enabled: false }, maxIncomingConnections: 5000, port: 27017, unixDomainSocket: { pathPrefix: "/data/mongodb/data" } }, operationProfiling: { mode: "slowOp", slowOpThresholdMs: 500 }, processManagement: { fork: true, pidFilePath: "/data/mongodb/data/mongod.pid" }, replication: { replSet: "rs1" }, security: { authorization: "enabled", keyFile: "/data/mongodb/data/rs1.keyfile" }, storage: { dbPath: "/data/mongodb/data", directoryPerDB: true, journal: { enabled: true }, repairPath: "/data/mongodb/data", syncPeriodSecs: 10.0 }, systemLog: { destination: "file", path: "/data/mongodb/log/mongod_data.log", quiet: true } }
2014-06-17T11:50:09.863+0800 [initandlisten] journal dir=/data/mongodb/data/journal
2014-06-17T11:50:09.864+0800 [initandlisten] recover : no journal files present, no recovery needed
2014-06-17T11:50:10.147+0800 [initandlisten] preallocateIsFaster=true 3.52
2014-06-17T11:50:10.378+0800 [initandlisten] preallocateIsFaster=true 3.4
2014-06-17T11:50:11.662+0800 [initandlisten] preallocateIsFaster=true 2.9
2014-06-17T11:50:11.662+0800 [initandlisten] preallocating a journal file /data/mongodb/data/journal/prealloc.0
2014-06-17T11:50:14.009+0800 [initandlisten]        File Preallocator Progress: 629145600/1073741824    58%
2014-06-17T11:50:26.266+0800 [initandlisten] preallocating a journal file /data/mongodb/data/journal/prealloc.1
2014-06-17T11:50:29.009+0800 [initandlisten]        File Preallocator Progress: 723517440/1073741824    67%
2014-06-17T11:50:40.751+0800 [initandlisten] preallocating a journal file /data/mongodb/data/journal/prealloc.2
2014-06-17T11:50:43.020+0800 [initandlisten]        File Preallocator Progress: 597688320/1073741824    55%
2014-06-17T11:50:55.830+0800 [FileAllocator] allocating new datafile /data/mongodb/data/local/local.ns, filling with zeroes...

As expected, mongod preallocated three 1 GB journal files.

Now look at the log from the replica-set data node:

2014-06-17T14:31:31.095+0800 [initandlisten] MongoDB starting : pid=8630 port=27017 dbpath=/storage/sas/mongodb/data 64-bit host=db-mysql-common01a
2014-06-17T14:31:31.096+0800 [initandlisten] db version v2.6.2
2014-06-17T14:31:31.096+0800 [initandlisten] git version: 4d06e27876697d67348a397955b46dabb8443827
2014-06-17T14:31:31.096+0800 [initandlisten] build info: Linux build10.nj1.10gen.cc 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 BOOST_LIB_VERSION=1_49
2014-06-17T14:31:31.096+0800 [initandlisten] allocator: tcmalloc
2014-06-17T14:31:31.096+0800 [initandlisten] options: { config: "/storage/sas/mongodb/mongod.cnf", net: { http: { enabled: false }, maxIncomingConnections: 5000, port: 27017, unixDomainSocket: { pathPrefix: "/storage/sas/mongodb/data" } }, operationProfiling: { mode: "slowOp", slowOpThresholdMs: 500 }, processManagement: { fork: true, pidFilePath: "/storage/sas/mongodb/data/mongod.pid" }, replication: { replSet: "rs1" }, security: { authorization: "enabled", keyFile: "/storage/sas/mongodb/data/rs1.keyfile" }, storage: { dbPath: "/storage/sas/mongodb/data", directoryPerDB: true, journal: { enabled: true }, repairPath: "/storage/sas/mongodb/data", syncPeriodSecs: 10.0 }, systemLog: { destination: "file", path: "/storage/sas/mongodb/log/mongod_data.log", quiet: true } }
2014-06-17T14:31:31.101+0800 [initandlisten] journal dir=/storage/sas/mongodb/data/journal
2014-06-17T14:31:31.102+0800 [initandlisten] recover : no journal files present, no recovery needed
2014-06-17T14:31:31.130+0800 [FileAllocator] allocating new datafile /storage/sas/mongodb/data/local/local.ns, filling with zeroes...
2014-06-17T14:31:31.130+0800 [FileAllocator] creating directory /storage/sas/mongodb/data/local/_tmp
2014-06-17T14:31:31.132+0800 [FileAllocator] done allocating datafile /storage/sas/mongodb/data/local/local.ns, size: 16MB,  took 0 secs
2014-06-17T14:31:31.137+0800 [FileAllocator] allocating new datafile /storage/sas/mongodb/data/local/local.0, filling with zeroes...
2014-06-17T14:31:31.138+0800 [FileAllocator] done allocating datafile /storage/sas/mongodb/data/local/local.0, size: 64MB,  took 0 secs
2014-06-17T14:31:31.141+0800 [initandlisten] build index on: local.startup_log properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }

No journal files were preallocated; mongod went straight to creating data files. This seemed strange. At first I suspected ext4, but after asking a friend I learned that mongod runs a check before deciding to preallocate the journal files. Part of the source:

// @file dur_journal.cpp  writing to the write-ahead logging journal

bool _preallocateIsFaster() {
    bool faster = false;
    boost::filesystem::path p = getJournalDir() / "tempLatencyTest";
    if (boost::filesystem::exists(p)) {
        try {
            remove(p);
        }
        catch(const std::exception& e) {
            log() << "Unable to remove temporary file due to: " << e.what() << endl;
        }
    }
    try {
        AlignedBuilder b(8192);
        int millis[2];
        const int N = 50;
        for( int pass = 0; pass < 2; pass++ ) {
            LogFile f(p.string());
            Timer t;
            for( int i = 0 ; i < N; i++ ) {
                f.synchronousAppend(b.buf(), 8192);
            }
            millis[pass] = t.millis();
            // second time through, file exists and is prealloc case
        }
        int diff = millis[0] - millis[1];
        if( diff > 2 * N ) {
            // at least 2ms faster for prealloc case?
            faster = true;
            log() << "preallocateIsFaster=true " << diff / (1.0*N) << endl;
        }
    }
    catch (const std::exception& e) {
        log() << "info preallocateIsFaster couldn't run due to: " << e.what()
              << "; returning false" << endl;
    }
    if (boost::filesystem::exists(p)) {
        try {
            remove(p);
        }
        catch(const std::exception& e) {
            log() << "Unable to remove temporary file due to: " << e.what() << endl;
        }
    }
    return faster;
}

bool preallocateIsFaster() {
    Timer t;
    bool res = false;
    if( _preallocateIsFaster() && _preallocateIsFaster() ) {
        // maybe system is just super busy at the moment? sleep a second to let it calm down.
        // deciding to to prealloc is a medium big decision:
        sleepsecs(1);
        res = _preallocateIsFaster();
    }
    if( t.millis() > 3000 )
        log() << "preallocateIsFaster check took " << t.millis()/1000.0 << " secs" << endl;
    return res;
}
        
int diff = millis[0] - millis[1];
if( diff > 2 * N ) {
    // at least 2ms faster for prealloc case?
    faster = true;
    log() << "preallocateIsFaster=true " << diff / (1.0*N) << endl;
}

If diff > 2*N, mongod concludes that preallocation is the better option and preallocates the journal files. The arbiter's log shows exactly this: every measured difference is above 2 ms per write:

2014-06-17T11:50:10.147+0800 [initandlisten] preallocateIsFaster=true 3.52
2014-06-17T11:50:10.378+0800 [initandlisten] preallocateIsFaster=true 3.4
2014-06-17T11:50:11.662+0800 [initandlisten] preallocateIsFaster=true 2.9
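The check can be reproduced outside mongod. Below is a rough Python sketch of the same idea (my own illustration, not MongoDB code): time N synchronous 8 KB appends to a fresh file, then again to the now-existing file, and consider preallocation worthwhile only if the existing-file pass is at least 2 ms per write faster.

```python
import os
import time
import tempfile

def preallocate_is_faster(path, n=50, block=8192):
    """Rough analogue of mongod's _preallocateIsFaster (illustrative only).

    Pass 0 appends to a brand-new file, so block allocation happens during
    the writes; pass 1 rewrites the same, already-allocated file.  Each
    write is fsync'ed, mirroring LogFile::synchronousAppend.
    """
    buf = b"\x00" * block
    millis = []
    for _ in range(2):
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
        start = time.monotonic()
        for _ in range(n):
            os.write(fd, buf)
            os.fsync(fd)
        millis.append((time.monotonic() - start) * 1000.0)
        os.close(fd)
    os.remove(path)
    diff = millis[0] - millis[1]
    # mongod's threshold: prealloc wins only if it saves > 2 ms per write
    return diff > 2 * n
```

On a filesystem/disk combination where appending and growing a file costs little more than overwriting (as on the data node above), this returns False and no journal files are preallocated.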

Personally I find this design questionable: nobody minds the little extra time journal initialization takes at startup, and if the journal files instead get allocated later, under peak load, that is one more hit on I/O.

Mongodb Awr

Source code: https://github.com/selectliu/Mongodb-Awr

The background for developing Mongodb Awr: one day MongoDB load shot up, and mongostat showed five or six times the usual number of queries. The developers, however, insisted that this many queries was normal, and I had no evidence to prove the query volume was abnormal at that moment. So I decided to build something like Oracle's AWR: a tool that keeps historical statistics so that problems can be compared against a baseline. Below is a brief introduction to the Mongodbrpt tool.

Mongodb Awr is written in Python.

It records several blocks of information: memory, locks, record stats, opcounters, profile data, and bad SQL (slow operations). It has been tested against MongoDB 2.4.5 and 2.4.9.
The tool's impact on MongoDB is small; it mainly stores some history in the local database. Mongodb Awr could also be built as a centralized tool, but I did not go that far here: everything is kept in each instance's own local database. No special MongoDB configuration is required for installation, except that capturing bad SQL requires the profiler to be enabled:

[root@bj3mem003 monitor]# mongo -port 17017
MongoDB shell version: 2.4.5
connecting to: 127.0.0.1:17017/test
>

> db.setProfilingLevel(1,500)
{ "was" : 0, "slowms" : 500, "ok" : 1 }
> db.getProfilingStatus()
{ "was" : 1, "slowms" : 500 }

Next, deployment and installation of Mongodb Awr.

All scripts are written in Python, so pymongo must be installed first (the scripts use it to connect to MongoDB); installing pymongo is not covered here.

Mongodb Awr consists of three scripts:

[root@bj3mem003 monitor]# ls -ltrh
total 516K
-rwxr--r-- 1 root root 2.1K Apr 8 16:54 mongodbserverstatus.py
-rwxr--r-- 1 root root  551 Apr 8 16:57 mongdel.py
-rwxr--r-- 1 root root  22K Apr 9 09:27 mongodbrpt.py

All three scripts connect through pymongo's Connection; edit the connect section of each script to match your environment:

if __name__ == '__main__':
    connection = pymongo.Connection('10.1.64.23', 17017)   # connection info
    dbadmin = connection.admin
    dbadmin.authenticate('mongodb', 'mongodb123')          # credentials; omit if auth is disabled

The first script, mongodbserverstatus.py, records MongoDB's server status and saves it into the local database; schedule it with crontab to run once a minute:

[root@bj3mem003 monitor]# crontab -l

*/1 * * * * su - mongodb -c "/home/mongodb/monitor/mongodbserverstatus.py >> /home/mongodb/monitor/mongodbserver.log"
0 1 * * * su - mongodb -c "/home/mongodb/monitor/mongdel.py >> /home/mongodb/monitor/mongdel.log"

[root@bj3mem003 monitor]# mongo -port 17017
MongoDB shell version: 2.4.5
connecting to: 127.0.0.1:17017/test
> use local
switched to db local
> show tables
serverstatus
startup_log
system.indexes
system.profile

> db.serverstatus.findOne()
{
    "_id" : ObjectId("5343b9604ead2d6541000000"),
    "mem" : {
        "resident" : 4701,
        "supported" : true,
        "virtual" : 18737,
        "mappedWithJournal" : 18394,
        "mapped" : 9197,
        "bits" : 64
    },
    "opcounter" : {
        "getmore" : 0,
        "insert" : 1136236,
        "update" : 833078557,
        "command" : 67112,
        "query" : 835074131,
        "delete" : 1078569
    },
    "indexCounters" : {
        "missRatio" : 0,
        "resets" : 0,
        "hits" : 460589161,
        "misses" : 0,
        "accesses" : 460589169
    },
    "recordStats" : {
        "admin" : {
            "pageFaultExceptionsThrown" : 0,
            "accessesNotInMemory" : 0
        },
        "pageFaultExceptionsThrown" : 0,
        "uudb" : {
            "pageFaultExceptionsThrown" : 0,
            "accessesNotInMemory" : 0
        },
        "uucun_baiduproxy" : {
            "pageFaultExceptionsThrown" : 0,
            "accessesNotInMemory" : 0
        },


        "test" : {
            "pageFaultExceptionsThrown" : 0,
            "accessesNotInMemory" : 0
        },
        "local" : {
            "pageFaultExceptionsThrown" : 0,
            "accessesNotInMemory" : 0
        },
        "accessesNotInMemory" : 0
    },
    "connections" : {
        "current" : 51,
        "available" : 768,
        "totalCreated" : 78007
    },
    "locks" : {
        "admin" : {
            "timeAcquiringMicros" : {
                "r" : 479915,
                "w" : 0
            },
            "timeLockedMicros" : {
                "r" : 23454820,
                "w" : 0
            }
        },
        "uudb" : {
            "timeAcquiringMicros" : {
                "r" : NumberLong("6718581544412"),
                "w" : NumberLong("4862305557749")
            },
            "timeLockedMicros" : {
                "r" : NumberLong("3497936412371"),
                "w" : NumberLong("2530517353265")
            }
        },
        "uucun_baiduproxy" : {
            "timeAcquiringMicros" : {
                "r" : NumberLong("25032187428"),
                "w" : 88974095
            },
            "timeLockedMicros" : {
                "r" : NumberLong("72133781064"),
                "w" : 848716611
            }
        },
        "test" : {
            "timeAcquiringMicros" : {
                "r" : 1061945,
                "w" : 0
            },
            "timeLockedMicros" : {
                "r" : 13773452,
                "w" : 0
            }
        },
        "local" : {
            "timeAcquiringMicros" : {
                "r" : 2919743,
                "w" : 0
            },
            "timeLockedMicros" : {
                "r" : 41814185,
                "w" : 0
            }
        }
    },
    "sertime" : ISODate("2014-04-08T16:54:01Z")
}

As you can see, mongodbserverstatus.py creates a serverstatus collection in local to hold the history.
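The core of such a script is simple: run the serverStatus command, keep the interesting sections, stamp the document, and insert it. A minimal sketch (my reconstruction, not the actual script; the section and field names follow the document shown above):

```python
from datetime import datetime, timezone

# serverStatus sections kept in the history document, matching the
# fields visible in local.serverstatus above.
SECTIONS = ("mem", "opcounters", "indexCounters", "recordStats",
            "connections", "locks")

def make_history_doc(status, now=None):
    """Reduce a full serverStatus result to the subset worth keeping,
    stamped with the sample time (sertime)."""
    doc = {k: status[k] for k in SECTIONS if k in status}
    doc["sertime"] = now or datetime.now(timezone.utc)
    return doc

# With a live connection (hypothetical names, as in the snippet earlier):
#   status = connection.admin.command("serverStatus")
#   connection.local.serverstatus.insert(make_history_doc(status))
```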

The second script, mongdel.py, deletes old history, keeping a configurable number of days (10 by default). It can be run by hand or scheduled with crontab.
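The deletion only needs a cutoff on the sertime field; a sketch of what mongdel.py plausibly does (hypothetical reconstruction; the 10-day default matches the post):

```python
from datetime import datetime, timedelta, timezone

def retention_filter(keep_days=10, now=None):
    """Query matching history documents older than the retention window."""
    now = now or datetime.now(timezone.utc)
    return {"sertime": {"$lt": now - timedelta(days=keep_days)}}

# With a live connection (hypothetical name; pymongo 2.x remove API):
#   connection.local.serverstatus.remove(retention_filter())
```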

The third script, mongodbrpt.py, is the one that generates the AWR report. Its features are still fairly simple and it does not accept many parameters:

[root@bj3mem003 monitor]# ./mongodbrpt.py
mongodbrpt.py -h or --help for detail
[root@bj3mem003 monitor]# ./mongodbrpt.py -h

===================================================
| Welcome to use the mongdbrpt tool !
Please modify you Connection configuration like this
connection = pymongo.Connection('10.1.69.157',17017)
dbadmin.authenticate('mongodb','mongodb123')
Usage :
Command line options :
-h,--help   Print Help Info.
-s,--since= the report start time.
-u,--until= the report end time.
-f,--file=  the report file path.

Sample :
shell>mongodbrpt.py --since="2014-03-31 18:01:50" --until="2014-03-31 18:01:52" --f=/home/mongodb/myawr.html
===================================================

Pls enter the following periods:
The Earliest Start time: 2014-04-08 16:54:01
The Latest Start time:   2014-04-09 11:05:01
If you find some question,you can contact me.
Mail:select.liu@hotmail.com Or Tele:13905699305 Or QQ:736053407
------------------------------------

Note that mongodbrpt.py -h prints, along with the help text, the time range for which a report can be generated.
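The --since/--until values are plain "YYYY-MM-DD HH:MM:SS" strings, so parsing and validating the window is straightforward. A sketch of that step (my illustration of the interface shown above, not the script's actual code):

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"

def parse_period(since, until):
    """Parse the --since/--until option values and check that they
    form a non-empty reporting window."""
    start = datetime.strptime(since, FMT)
    end = datetime.strptime(until, FMT)
    if start >= end:
        raise ValueError("--since must be earlier than --until")
    return start, end
```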

[root@bj3mem003 monitor]# ./mongodbrpt.py --since='2014-04-09 09:00:01' --until="2014-04-09 09:15:01" --f=/root/mong.html
[root@bj3mem003 monitor]# ls -ltrh /root/mong.html
-rw-r--r-- 1 root root 8.5K Apr 9 10:23 /root/mong.html

Below is a sample report:

Mongodb WorkLoad Report

Host Name   Port    Version   Pid    Starttime
xxxxx       17017   2.4.5     5629   Fri Aug 9 11:49:01.211

                                    Connect
Begin Time: 2014-04-09 09:00:01     51
End Time:   2014-04-09 09:15:01     50

Report Summary

Memory Sizes

            Begin Time   End Time
Res(M):     4692         4690
Mapped(M):  9277         9277
Vsize(M):   18897        18897

Index Hit(%)

Index Hit   100.00

Opcounter Profile

           Sum         Per Second
getmore    0.0         0.00
command    367.0       0.38
insert     8043.0      8.38
update     2818775.0   2936.22
query      2828608.0   2946.47
delete     7910.0      8.24

RecordStats Profile

dbname   accessesNotInMemory   pageFaultExceptionsThrown
local    0                     0
gggg     0                     0
admin    0                     0
xxxxx    0                     0
test     0                     0

LockStats Profile

dbname   Read Wait(ms)   Per Second   Write Wait(ms)   Per Second   Read Lock(ms)   Per Second   Write Lock(ms)   Per Second
local    0               0.00         0                0.00         1               0.00         4                0.00
xxxg     1634161         1702.00      1173462          1222.00      827958          862.00       572885           596.00
admin    0               0.00         0                0.00         2               0.00         0                0.00
gggxx    9               0.00         1                0.00         326             0.00         43               0.00
test     0               0.00         0                0.00         0               0.00         0                0.00

Parameter

Parameter Name   value
logpath          /var/log/mongodb/mongodb.log
logappend        true
config           /opt/mongodb/etc/mongodb.conf
dbpath           /app/mongodb/data
port             17017

SQL Statistics

Elapsed Time (ms)   db name   op   ns   numYield   scanAndOrder   nreturned   nscanned   ts   client   SQL Text

End of Report
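The Per Second columns in the Opcounter section are simply the difference between the first and last snapshot in the window divided by the interval length. A sketch of that calculation (illustrative; the field names follow the serverstatus documents above):

```python
def opcounter_rates(begin, end, seconds):
    """Compute per-second opcounter rates between two snapshots.

    begin/end are the opcounter sub-documents of the first and last
    serverstatus samples in the reporting window; seconds is the
    length of the window."""
    return {op: (end[op] - begin[op]) / float(seconds) for op in begin}
```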

Robomongo: a MongoDB management tool for DBAs

MongoDB does have a few admin tools (MongoVUE, Rockmongo, Mongoadmin), but they all feel like phpMyAdmin; there has been nothing as handy as SQLyog, Navicat, or HeidiSQL in the MySQL world. With Robomongo, MongoDB DBAs finally have a proper weapon.

Robomongo is only at version 0.8.4 RC2, but it is already powerful enough; the final release should be even better:

1. Integrated JS shell

Unlike other MongoDB management tools, which expose a developer-style db.runCommand() interface, Robomongo embeds a JS shell (thanks to Mozilla's SpiderMonkey engine), so DBAs can type shell-style commands such as "show dbs".

(screenshot)

2. Cross-platform

Thanks to Qt, Robomongo runs on all three major desktop platforms: Windows, Mac, and Linux.

Screenshot on Ubuntu:

(screenshot)

3. Auto-completion

Auto-completion makes typing commands a pleasure.

(screenshot)

4. Multiple tabs

Multiple tabs are standard equipment these days; Ctrl+T opens a new tab.

(screenshot)

How to change the oplog size in MongoDB

There are two main approaches:

1. Change the oplog size on each member in turn (from primary to secondary)

2. Re-initialize the replica set


With method 1 you resize the oplog on the secondaries first, then switch over the original primary. The concrete steps of method 1 are shown below; for the full procedure see the MongoDB oplog documentation.

1). Step down the current primary to a secondary

rs1:PRIMARY> rs.stepDown();

2). Shut down MongoDB

rs1:SECONDARY> db.shutdownServer();

3). Comment out the replSet option and restart in standalone mode, on a different port
4). Find the last sync point

> use local
> db.oplog.rs.find( { }, { ts: 1, h: 1 } ).sort( {$natural : -1} ).limit(1).next();
{ "ts" : Timestamp(1378716098, 2), "h" : NumberLong("-654971153597320397") }

5). Drop the old oplog

> db.oplog.rs.drop();

6). Create the new oplog, 30 GB in this example

> db.runCommand({create:"oplog.rs", capped:true, size:(30*1024*1024*1024)});

7). Write back the last sync point

> db.oplog.rs.save({ "ts" : Timestamp(1378716098, 2), "h" : NumberLong("-654971153597320397") });

8). Shut down MongoDB

> db.shutdownServer();

9). Restore the replSet option and restart in replica-set mode
10). Verify that replication catches up
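The create command from step 6 can also be built and issued programmatically; a small sketch (my illustration, to be run against the standalone instance from step 3):

```python
def oplog_create_command(size_gb):
    """Command document for recreating oplog.rs as a capped collection
    of the given size in bytes, as in step 6."""
    return {"create": "oplog.rs",
            "capped": True,
            "size": size_gb * 1024 ** 3}

# With pymongo against the standalone mongod (hypothetical connection):
#   connection.local.command(oplog_create_command(30))
```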

MongoDB replica set

Starting mongod with a config file
mongodb.conf:

dbpath = /data/db
rest = true
fork = true
logpath = /data/db/mongodb.log
replSet = tmp

/ABT/sys/mongodb-linux-x86_64-1.8.2/bin/mongod --config /ABT/sys/mongodb-linux-x86_64-1.8.2/mongodb.conf

Starting mongod from the command line

/ABT/sys/mongodb-linux-x86_64-1.8.2/bin/mongod --replSet tmp --port 27017 --dbpath /data/db

Setting up the replica set
Run on the master:

Method 1:
use admin
db.runCommand({"replSetInitiate": {"_id": "tmp", "members": [{"_id":0, "host":"10.0.2.1:27017"},
{"_id":1, "host":"10.0.2.2:27017"}]}})
Method 2:
use admin
config = {_id: 'tmp', members: [
    {_id: 0, host: '10.0.4.11:27017'},
    {_id: 1, host: '10.0.4.16:27017'},
    {_id: 2, host: '10.0.4.17:27017'}]
}

rs.initiate(config);

Check the status:

rs.status()
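The config document passed to rs.initiate() has a fixed shape, so it can be generated from a host list; a sketch (illustrative helper of mine, not part of MongoDB or pymongo):

```python
def make_rs_config(name, hosts):
    """Build the replica-set config document used by rs.initiate();
    member _ids are assigned in host-list order."""
    return {"_id": name,
            "members": [{"_id": i, "host": h} for i, h in enumerate(hosts)]}
```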