
Notes from setting up a MongoDB replica set in a Docker cluster

1. Create three Docker containers running MongoDB. I use docker-compose with the following configuration:

    mongo:
      image: mongo
      command: mongod -f /etc/mongo.conf
      volumes:
        - ${DATA_PATH_HOST}/mongo:/data/db
        - ${CONF_PATH}/mongo/mongo_yaml.conf:/etc/mongo.conf
        - ${CONF_PATH}/mongo/access.key:/etc/access.key
      expose:
        - 27017
      ports:
        - 27017:27017
      networks:
        - backend
    mongo2:
      image: mongo
      command: mongod -f /etc/mongo.conf
      volumes:
        - ${DATA_PATH_HOST}/mongo2:/data/db
        - ${CONF_PATH}/mongo/mongo_yaml.conf:/etc/mongo.conf
        - ${CONF_PATH}/mongo/access.key:/etc/access.key
      expose:
        - 27017
      ports:
        - 27018:27017
      networks:
        - backend
    mongo3:
      image: mongo
      command: mongod -f /etc/mongo.conf
      volumes:
        - ${DATA_PATH_HOST}/mongo3:/data/db
        - ${CONF_PATH}/mongo/mongo_yaml.conf:/etc/mongo.conf
        - ${CONF_PATH}/mongo/access.key:/etc/access.key
      expose:
        - 27017
      ports:
        - 27019:27017
      networks:
        - backend

Here mongo.conf (mounted from mongo_yaml.conf) is a YAML-format mongod configuration file:

    processManagement:
      fork: false
    net:
      bindIp: 127.0.0.1
      port: 27017
    storage:
      dbPath: /data/db
      journal:
        enabled: true
    systemLog:
      #destination: file
      #path: log/mongo27017.log
      logAppend: true
    replication:
      oplogSizeMB: 500
      replSetName: "r1"
      secondaryIndexPrefetch: "all"

Run docker-compose up -d mongo mongo2 mongo3 to create the three mongo containers, all assigned to the replica set r1 via replSetName.

2. Log in to MongoDB on one of the nodes and initialize the replica set. For a brand-new replica set any node will do; if exactly one node already holds data, the initialization must be run on that node; if more than one node holds data, the set cannot be initialized this way. In my case the mongo container holds data while mongo2 and mongo3 are empty, so I initialize the replica set from mongo.

Run docker-compose exec mongo bash to enter the container, run mongo to open a shell inside it, and initialize the replica set:

    > use admin
    switched to db admin
    > config = { "_id": "r1", "members": [{ "_id": 0, "host": "mongo:27017", "priority": 1 }, { "_id": 1, "host": "mongo2:27017", "priority": 1 }, { "_id": 2, "host": "mongo3:27017", "priority": 1 }] }
    {
      "_id" : "r1",
      "members" : [
        { "_id" : 0, "host" : "mongo:27017", "priority" : 1 },
        { "_id" : 1, "host" : "mongo2:27017", "priority" : 1 },
        { "_id" : 2, "host" : "mongo3:27017", "priority" : 1 }
      ]
    }
    > rs.initiate(config)
    {
      "ok" : 1,
      "operationTime" : Timestamp(1539830924, 1),
      "$clusterTime" : {
        "clusterTime" : Timestamp(1539830924, 1),
        "signature" : {
          "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
          "keyId" : NumberLong(0)
        }
      }
    }

The replica set is now initialized; rs.status() shows its current state. With that, the replica set itself is set up.

3. Add authentication. If the server side needs auth enabled, each node is started with a keyFile so that the three members can authorize the traffic between themselves. Generate the key file with:

    openssl rand -base64 745 > /docker/conf/mongo/access.key

If mongod is started with --keyFile /etc/access.key (the path the key file is mounted at inside the containers), auth is enabled automatically as well, so the account has to be created first. I created the account root with password pass on the admin auth database (steps omitted), then stopped all nodes and restarted them with the --keyFile parameter. The restart failed with:

    mongo3_1 | 2018-10-24T06:13:06.323+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
    mongo3_1 | 2018-10-24T06:13:06.331+0000 I ACCESS [main] permissions on /etc/access.key are too open
    mongo2_1 | 2018-10-24T06:13:06.591+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
    mongo2_1 | 2018-10-24T06:13:06.605+0000 I ACCESS [main] permissions on /etc/access.key are too open
    mongo_1  | 2018-10-24T06:13:06.609+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
    mongo_1  | 2018-10-24T06:13:06.614+0000 I ACCESS [main] permissions on /etc/access.key are too open

This is a key-file permission problem; restrict the file to mode 600:

    chmod 600 /docker/conf/mongo/access.key

Start the containers again and everything comes up. Connecting to the replica set from inside one of the containers now succeeds.
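The account-creation step is omitted above. As a minimal sketch of what it might look like, run against the PRIMARY before the restart with the key file (the root/pass credentials are the ones mentioned above; the exact role grant is an assumption, not taken from the original post):

    docker-compose exec mongo mongo admin --eval '
      db.createUser({
        user: "root",                                // account named in the post
        pwd: "pass",                                 // password named in the post
        roles: [ { role: "root", db: "admin" } ]     // assumed role grant
      })
    '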
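Once auth is in place, a client on the same backend network can connect straight to the replica set; a sketch (service names, credentials, and set name come from the steps above, everything else is assumed):

    # replica-set connection string listing all three members
    mongo "mongodb://root:pass@mongo:27017,mongo2:27017,mongo3:27017/admin?replicaSet=r1"

    # quick sanity check of member states once authenticated
    docker-compose exec mongo mongo -u root -p pass --authenticationDatabase admin \
        --eval 'rs.status().members.forEach(function (m) { print(m.name, m.stateStr) })'

From the host, the published ports 27017/27018/27019 can be used instead, as long as the member host names are resolvable to the client.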


Installing a Spark 2.1 + Hadoop 2.7.3 cluster on Ubuntu

0: System login setup

On the Master, set up key-based SSH login:

    cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

If you work as the root user, allow root login over SSH:

    sed -ri 's/^(PermitRootLogin).*$/\1 yes/' /etc/ssh/sshd_config

Edit /etc/hosts:

    127.0.0.1       localhost   # do not put spark1 on this line
    192.168.100.25  spark1      # spark1 is the Master
    192.168.100.26  spark2
    192.168.100.27  spark3
    127.0.1.1       ubuntu

    # The following lines are desirable for IPv6 capable hosts
    ::1     localhost ip6-localhost ip6-loopback
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters

If spark1 is placed on the first line of /etc/hosts (mapped to 127.0.0.1), the slaves report errors like:

    org.apache.hadoop.ipc.Client: Retrying connect to server: spark1/192.168.100.25:9000. Already tried 0 time(s)

and running the following on spark1 shows that port 9000 is only listening locally:

    ss -lnt
    LISTEN     0      128     localhost:9000

Remove the offending entry from /etc/hosts and restart Hadoop to fix the problem.

1: Install Java

Either install it directly with apt-get:

    apt-get install python-software-properties -y
    add-apt-repository ppa:webupd8team/java
    apt-get update
    apt-get install oracle-java7-installer

or download and unpack it manually:

    wget http://download.oracle.com/otn-pub/java/jdk/7u80-b15/jdk-7u80-linux-x64.tar.gz
    mkdir /usr/lib/jvm
    tar xvf jdk-7u80-linux-x64.tar.gz
    mv jdk1.7.0_80 /usr/lib/jvm
    # register the alternatives
    update-alternatives --install "/usr/bin/java"   "java"   "/usr/lib/jvm/jdk1.7.0_80/bin/java"   1
    update-alternatives --install "/usr/bin/javac"  "javac"  "/usr/lib/jvm/jdk1.7.0_80/bin/javac"  1
    update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/lib/jvm/jdk1.7.0_80/bin/javaws" 1
    update-alternatives --config java
    # verify
    java -version
    javac -version
    javaws -version

Add the environment variables:

    cat >> /etc/profile <<'EOF'
    export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_80
    export JRE_HOME=/usr/lib/jvm/jdk1.7.0_80/jre
    export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
    export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
    EOF

2: Install Hadoop

    tar xvf hadoop-2.7.3.tar.gz
    mv hadoop-2.7.3 /usr/local/hadoop
    cd /usr/local/hadoop
    mkdir -p hdfs/{data,name,tmp}

Add the environment variables:

    cat >> /etc/profile <<'EOF'
    export HADOOP_HOME=/usr/local/hadoop
    export PATH=$PATH:$HADOOP_HOME/bin
    EOF

Edit hadoop-env.sh (this is the only line changed):

    export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_80

Edit core-site.xml:

    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://spark1:9000</value>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/hdfs/tmp</value>
      </property>
    </configuration>

Edit hdfs-site.xml:

    <configuration>
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop/hdfs/name</value>
      </property>
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop/hdfs/data</value>
      </property>
      <property>
        <name>dfs.replication</name>
        <value>3</value>
      </property>
    </configuration>

Edit mapred-site.xml:

    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
    </configuration>

Edit yarn-site.xml:

    <configuration>
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
      <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>spark1</value>
      </property>
      <!-- Do not add the following property; with it you may hit:
           Problem binding to [spark1:0] java.net.BindException: Cannot assign requested address
      <property>
        <name>yarn.nodemanager.hostname</name>
        <value>spark1</value>
      </property>
      -->
    </configuration>

The full list of properties and values for these files is documented at https://hadoop.apache.org/docs/r2.7.3/

Edit the masters file:

    echo spark1 > masters

Edit the slaves file:

    spark1
    spark2
    spark3

Once everything is installed, use rsync to copy the relevant directories and /etc/profile to the other nodes (see the sketch below).
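A minimal sketch of that sync step (not from the original post; it assumes the passwordless root SSH set up in step 0 and the install paths used above):

    # push the configured Hadoop tree and the profile additions to both slaves
    for host in spark2 spark3; do
        rsync -a /usr/local/hadoop "$host":/usr/local/    # whole Hadoop directory, configs included
        rsync -a /etc/profile      "$host":/etc/profile   # JAVA_HOME/HADOOP_HOME exports added above
    done

The JDK (and later the Scala and Spark trees) can be pushed the same way if they were installed manually rather than via apt.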
Start the HDFS daemons:

    ./sbin/start-dfs.sh

Format the file system:

    hadoop namenode -format

Start YARN:

    ./sbin/start-yarn.sh

Check the processes on spark1:

    root@spark1:/usr/local/spark/conf# jps
    1699 NameNode
    8856 Jps
    2023 SecondaryNameNode
    2344 NodeManager
    1828 DataNode
    2212 ResourceManager

spark2 and spark3 should show something similar to:

    root@spark2:/tmp# jps
    3238 Jps
    1507 DataNode
    1645 NodeManager

The web UI is available at:

    http://192.168.100.25:50070

Test Hadoop:

    hadoop fs -mkdir /testin
    hadoop fs -put ~/str.txt /testin
    cd /usr/local/hadoop
    hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /testin/str.txt testout

The output looks like this:

    hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /testin/str.txt testout
    17/02/24 11:20:59 INFO client.RMProxy: Connecting to ResourceManager at spark1/192.168.100.25:8032
    17/02/24 11:21:01 INFO input.FileInputFormat: Total input paths to process : 1
    17/02/24 11:21:01 INFO mapreduce.JobSubmitter: number of splits:1
    17/02/24 11:21:02 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1487839487040_0002
    17/02/24 11:21:06 INFO impl.YarnClientImpl: Submitted application application_1487839487040_0002
    17/02/24 11:21:06 INFO mapreduce.Job: The url to track the job: http://spark1:8088/proxy/application_1487839487040_0002/
    17/02/24 11:21:06 INFO mapreduce.Job: Running job: job_1487839487040_0002
    17/02/24 11:21:28 INFO mapreduce.Job: Job job_1487839487040_0002 running in uber mode : false
    17/02/24 11:21:28 INFO mapreduce.Job:  map 0% reduce 0%
    17/02/24 11:22:00 INFO mapreduce.Job:  map 100% reduce 0%
    17/02/24 11:22:15 INFO mapreduce.Job:  map 100% reduce 100%
    17/02/24 11:22:17 INFO mapreduce.Job: Job job_1487839487040_0002 completed successfully
    17/02/24 11:22:17 INFO mapreduce.Job: Counters: 49
        File System Counters
            FILE: Number of bytes read=212115
            FILE: Number of bytes written=661449
            FILE: Number of read operations=0
            FILE: Number of large read operations=0
            FILE: Number of write operations=0
            HDFS: Number of bytes read=377966
            HDFS: Number of bytes written=154893
            HDFS: Number of read operations=6
            HDFS: Number of large read operations=0
            HDFS: Number of write operations=2
        Job Counters
            Launched map tasks=1
            Launched reduce tasks=1
            Data-local map tasks=1
            Total time spent by all maps in occupied slots (ms)=23275
            Total time spent by all reduces in occupied slots (ms)=11670
            Total time spent by all map tasks (ms)=23275
            Total time spent by all reduce tasks (ms)=11670
            Total vcore-milliseconds taken by all map tasks=23275
            Total vcore-milliseconds taken by all reduce tasks=11670
            Total megabyte-milliseconds taken by all map tasks=23833600
            Total megabyte-milliseconds taken by all reduce tasks=11950080
        Map-Reduce Framework
            Map input records=1635
            Map output records=63958
            Map output bytes=633105
            Map output materialized bytes=212115
            Input split bytes=98
            Combine input records=63958
            Combine output records=14478
            Reduce input groups=14478
            Reduce shuffle bytes=212115
            Reduce input records=14478
            Reduce output records=14478
            Spilled Records=28956
            Shuffled Maps=1
            Failed Shuffles=0
            Merged Map outputs=1
            GC time elapsed (ms)=429
            CPU time spent (ms)=10770
            Physical memory (bytes) snapshot=455565312
            Virtual memory (bytes) snapshot=1391718400
            Total committed heap usage (bytes)=277348352
        Shuffle Errors
            BAD_ID=0
            CONNECTION=0
            IO_ERROR=0
            WRONG_LENGTH=0
            WRONG_MAP=0
            WRONG_REDUCE=0
        File Input Format Counters
            Bytes Read=377868
        File Output Format Counters
            Bytes Written=154893
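To read back what the job produced, the output directory can be listed and printed from HDFS; a small sketch (the relative path testout from the command above resolves under the running user's HDFS home directory):

    hadoop fs -ls testout                     # a _SUCCESS marker plus one part-r-* file per reducer
    hadoop fs -cat 'testout/part-*' | head    # first few "word <tab> count" lines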
3: Install Scala

    tar xvf scala-2.11.8.tgz
    mv scala-2.11.8 /usr/local/scala

Add the environment variables:

    cat >> /etc/profile <<'EOF'
    export SCALA_HOME=/usr/local/scala
    export PATH=$PATH:$SCALA_HOME/bin
    EOF

Test it:

    source /etc/profile
    scala -version
    Scala code runner version 2.11.8 -- Copyright 2002-2016, LAMP/EPFL

4: Install Spark

    tar xvf spark-2.1.0-bin-hadoop2.7.tgz
    mv spark-2.1.0-bin-hadoop2.7 /usr/local/spark

Add the environment variables:

    cat >> /etc/profile <<'EOF'
    export SPARK_HOME=/usr/local/spark
    export PATH=$PATH:$SPARK_HOME/bin
    export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native
    EOF

If the LD_LIBRARY_PATH line is left out, running spark-shell produces this warning:

    NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Edit spark-env.sh:

    SPARK_MASTER_HOST=spark1
    HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop

Edit slaves:

    spark1
    spark2
    spark3

Start Spark:

    ./sbin/start-all.sh

Now jps on spark1 should look like the following, with Master and Worker added:

    root@spark1:/usr/local/spark/conf# jps
    1699 NameNode
    8856 Jps
    7774 Master
    2023 SecondaryNameNode
    7871 Worker
    2344 NodeManager
    1828 DataNode
    2212 ResourceManager

spark2 and spark3 each gain a Worker:

    root@spark2:/tmp# jps
    3238 Jps
    1507 DataNode
    1645 NodeManager
    3123 Worker

The web UI is available at:

    http://192.168.100.25:8080/

Run spark-shell:

    root@spark1:/usr/local/spark/conf# spark-shell
    Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    17/02/24 11:55:46 WARN SparkContext: Support for Java 7 is deprecated as of Spark 2.0.0
    17/02/24 11:56:17 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
    Spark context Web UI available at http://192.168.100.25:4040
    Spark context available as 'sc' (master = local[*], app id = local-1487908553475).
    Spark session available as 'spark'.
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/  '_/
       /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
          /_/

    Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_80)
    Type in expressions to have them evaluated.
    Type :help for more information.

    scala> :help

The Spark environment page is now available at:

    http://192.168.100.25:4040/environment/

Test Spark:

    run-example org.apache.spark.examples.SparkPi
    17/02/28 11:17:20 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:38, took 3.491241 s
    Pi is roughly 3.1373756868784346

That completes the setup.

This article was reposted from nonono11's 51CTO blog; original link: http://blog.51cto.com/abian/1900868. Please contact the original author before republishing.
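One addendum to the walkthrough above: the spark-shell session reports master = local[*], so it did not actually attach to the standalone cluster. A sketch of submitting the same SparkPi example to the cluster instead (port 7077 is the Spark standalone default, and the examples jar path is assumed from the spark-2.1.0-bin-hadoop2.7 layout):

    spark-submit \
        --master spark://spark1:7077 \
        --class org.apache.spark.examples.SparkPi \
        /usr/local/spark/examples/jars/spark-examples_2.11-2.1.0.jar 100

The same --master flag also attaches spark-shell to the cluster instead of running it in local mode.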
