

Deploying Docker on CentOS 7

1. Install Docker

CentOS 7:

yum install epel-release -y
yum install docker -y

[root@Docker ~]# docker version
Client:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-61.git85d7426.el7.centos.x86_64
 Go version:      go1.8.3
 Git commit:      85d7426/1.12.6
 Built:           Tue Oct 24 15:40:21 2017
 OS/Arch:         linux/amd64

Server:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-61.git85d7426.el7.centos.x86_64
 Go version:      go1.8.3
 Git commit:      85d7426/1.12.6
 Built:           Tue Oct 24 15:40:21 2017
 OS/Arch:         linux/amd64

[root@Docker ~]# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 1.12.6
Storage Driver: devicemapper
 Pool Name: docker-253:0-270443527-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 11.8 MB
 Data Space Total: 107.4 GB
 Data Space Available: 102.8 GB
 Metadata Space Used: 581.6 kB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.147 GB
 Thin Pool Minimum Free Space: 10.74 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.107-RHEL7 (2015-10-14)
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
 Volume: local
 Network: host bridge overlay null
Swarm: inactive
Runtimes: docker-runc runc
Default Runtime: docker-runc
Security Options: seccomp
Kernel Version: 3.10.0-327.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 3
CPUs: 2
Total Memory: 1.954 GiB
Name: localhost.localdomain
ID: 7NNL:RVYC:M6QY:CP2P:5SNV:3N25:U45I:TUWG:Y4NK:7H4R:CN2B:3E67
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Insecure Registries:
 127.0.0.0/8
Registries: docker.io (secure)

CentOS 6:

yum install epel-release -y
yum install lxc libcgroup device-map* -y
yum install docker-io -y

[root@Docker ~]# docker version
Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 786b29d/1.7.1
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d/1.7.1
OS/Arch (server): linux/amd64

[root@Docker ~]# docker info
Containers: 0
Images: 0
Storage Driver: devicemapper
 Pool Name: docker-253:0-130626-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: extfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 305.7 MB
 Data Space Total: 107.4 GB
 Data Space Available: 11.1 GB
 Metadata Space Used: 729.1 kB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.147 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.117-RHEL6 (2016-12-13)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 2.6.32-431.el6.x86_64
Operating System: <unknown>
CPUs: 1
Total Memory: 1.834 GiB
Name: localhost.localdomain
ID: SKZZ:TYST:LUEG:N66O:364P:7YRG:GQ3W:ODYR:G476:JSHB:I2HF:3A5W

Note: Docker's default storage driver here is devicemapper; docker-ce defaults to overlay2.

2. Search for and pull the nginx image

[root@Docker ~]# docker search nginx
[root@Docker ~]# docker pull docker.io/nginx

3. Map the container's port 80 to port 8080 on the host

[root@Docker ~]# docker run --name=nginx -itd -p 8080:80 docker.io/nginx bash
or
[root@Docker ~]# docker run --name nginx -itd -p 8080:80 docker.io/nginx /bin/bash
or
[root@Docker ~]# docker run --name=nginx -itd -p 8080:80 docker.io/nginx

Note: --privileged grants the container administrator privileges, and --restart=always makes the container start whenever the Docker host starts. On CentOS 7, if you install a service in the container and want to manage it with systemctl, the container has to be created running /usr/sbin/init or /sbin/init.

4. List Docker containers

[root@Docker ~]# docker ps -a
CONTAINER ID   IMAGE             COMMAND   CREATED         STATUS         PORTS                  NAMES
26ae21c8bddd   docker.io/nginx   "bash"    5 seconds ago   Up 4 seconds   0.0.0.0:8080->80/tcp   nginx

5. Look up the container's IP address

[root@Docker ~]# docker inspect 0a9db4be695b | grep -i ip
        "HostIp": "",
        "IpcMode": "",
        "LinkLocalIPv6Address": "",
        "LinkLocalIPv6PrefixLen": 0,
        "HostIp": "0.0.0.0",
        "SecondaryIPAddresses": null,
        "SecondaryIPv6Addresses": null,
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAddress": "172.17.0.2",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "IPAMConfig": null,
        "IPAddress": "172.17.0.2",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
[root@Docker ~]#

6. Test in a browser: http://IP:8080

7. Log in to the container and change the default test page

[root@Docker ~]# docker exec -it 0a9db4be695b /bin/bash
root@0a9db4be695b:/# cd /usr/share/nginx/html/
root@0a9db4be695b:/usr/share/nginx/html# ls
50x.html  index.html
root@0a9db4be695b:/usr/share/nginx/html# echo "<h1>www.hello.com</h1>" > index.html
root@0a9db4be695b:/usr/share/nginx/html# exit
exit
[root@Docker ~]#

8. Test in a browser again: http://IP:8080

9. Force-remove all containers

[root@Docker ~]# docker rm -f `docker ps -aq`

Reposted from dengaosky's 51CTO blog; original: http://blog.51cto.com/dengaosky/2045168. Please contact the original author before reprinting.
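Step 5 pipes `docker inspect` through `grep -i ip`, which also matches unrelated fields such as IpcMode and HostIp. A tighter filter can be built with grep and sed; the sketch below runs against a small JSON sample modeled on the inspect output above, so it works without a Docker daemon:

```shell
#!/bin/sh
# Extract the container IP from docker-inspect-style JSON.
# On a real host you would pipe `docker inspect <id>` in; here we use
# a small sample modeled on the output shown in step 5.
sample='{
    "NetworkSettings": {
        "IPAddress": "172.17.0.2",
        "IPPrefixLen": 16
    }
}'

# Keep only the first "IPAddress" value: isolate the line, then strip
# everything except the quoted address.
ip=$(printf '%s\n' "$sample" \
    | grep '"IPAddress"' \
    | head -n 1 \
    | sed 's/.*"IPAddress": *"\([^"]*\)".*/\1/')

echo "$ip"   # prints 172.17.0.2
```

On a live host, `docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container>` prints the address directly, with no grepping at all.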


Elasticsearch 5.6.3 Plugin Deployment

Note that plugins changed a great deal between 2.x and 5.x; see https://www.elastic.co/blog/running-site-plugins-with-elasticsearch-5-0. For security reasons, site plugins such as head, bigdesk, and kopf can no longer be unpacked straight into the plugin directory; a new security and monitoring plugin, X-Pack, now covers what Marvel and friends used to do. Below I install a selection of plugins.

1. elasticsearch
Installation is the same as for 2.x, so nothing to add. You will hit a series of problems on first startup; see http://www.cnblogs.com/jiu0821/p/7683322.html

2. kibana 5.6.3
Installation is the same as for 2.x; the UI is much improved and the Sense plugin is now built in.
During installation I hit one problem: Kibana's logs looked normal, but other hosts could not reach it through a browser even though the port was open. netstat -an | grep LISTEN showed port 5601 bound to 127.0.0.1; because of a local hosts-file misconfiguration, the machine's real IP could not be used. Two fixes: correct the local hosts file, or edit Kibana's configuration file and set the host to "0.0.0.0" (0.0.0.0 means listen on all addresses).

3. cerebro (kopf)
Release downloads: https://github.com/lmenezes/cerebro/releases
Unpack the archive into a directory of your choice (not the elasticsearch or kibana plugin directory), then run bin/cerebro (or bin/cerebro.bat on Windows). The UI asks for the ES host; enter it and connect.

4. analysis-ik
Download: https://github.com/medcl/elasticsearch-analysis-ik/releases
Unpack and copy it into the plugin directory: mv elasticsearch-analysis-ik-5.6.3 <elasticsearch install dir>/plugins/analysis-ik
Restart elasticsearch for the plugin to take effect (you can also wait until the dictionaries are configured and restart then).

5. bigdesk
Download: https://github.com/hlstudio/bigdesk
Edit the ES configuration file and add:
http.cors.enabled: true
http.cors.allow-origin: "*"
Unpack the archive into a directory of your choice (not the elasticsearch or kibana plugin directory). This plugin has to run on a web server; ES does not provide one, so we set one up ourselves.
Install node, then go into the _site directory under the bigdesk directory, where you will find an index.html. For now, use the simple web server that ships with Python: python -m SimpleHTTPServer [port]
Note: port is optional; you can also pick your own port number.
Then open http://ip:port in a browser.

Reposted from jiu~'s cnblogs blog; original: http://www.cnblogs.com/jiu0821/p/7695149.html. Please contact the original author before reprinting.
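The bigdesk step above edits the ES config by hand. The same change can be scripted with a heredoc. This is only a sketch: ES_CONF defaults to a scratch temp file rather than a real elasticsearch.yml, so point it at your actual config path before using it.

```shell
#!/bin/sh
# Append the CORS settings bigdesk needs to elasticsearch.yml.
# ES_CONF defaults to a throwaway temp file for this demonstration;
# set ES_CONF to your real config path to use it for real.
ES_CONF="${ES_CONF:-$(mktemp)}"

# Only append if the settings are not already present, so the script
# is safe to run more than once.
if ! grep -q '^http.cors.enabled' "$ES_CONF"; then
    cat >> "$ES_CONF" <<'EOF'
http.cors.enabled: true
http.cors.allow-origin: "*"
EOF
fi

grep -c '^http.cors' "$ES_CONF"   # 2 once both lines are in place
```

Elasticsearch has to be restarted afterwards for the CORS settings to take effect, just as with the analysis-ik step.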


Hadoop Deployment and Usage

1. Base environment

[hadoop@master ~]$ cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[hadoop@master ~]$ getenforce
Disabled
[hadoop@master ~]$ systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
[hadoop@master ~]$

2. IPs and their nodes

IP              Hostname  Hadoop node  Hadoop processes
192.168.56.100  master    master       namenode, jobtracker
192.168.56.101  slave1    slave        datanode, tasktracker
192.168.56.102  slave2    slave        datanode, tasktracker
192.168.56.103  slave3    slave        datanode, tasktracker

[hadoop@master ~]$ cat /etc/hosts
192.168.56.100 Master
192.168.56.101 slave1
192.168.56.102 slave2
192.168.56.103 slave3
[hadoop@master ~]$

3. Create the hadoop user (all nodes)

useradd hadoop
echo hadoop | passwd --stdin hadoop

4. JDK

[hadoop@slave1 application]$ ll
total 4
lrwxrwxrwx 1 root root   24 Jul 10 01:35 jdk -> /application/jdk1.8.0_60
drwxr-xr-x 8 root root 4096 Aug  5  2015 jdk1.8.0_60
[hadoop@slave1 application]$ pwd
/application
[hadoop@master ~]$ java -version
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)

5. Make sure the hadoop user on master (192.168.56.100) can ssh to the hadoop user on every slave node.

6. Set the Hadoop install path and environment variables (all nodes)

su - hadoop
tar xf hadoop-2.7.0.tar.gz    # unpack to /home/hadoop/hadoop-2.7.0
vi /etc/profile               # add the Hadoop environment variables:
export HADOOP_HOME=/home/hadoop/hadoop-2.7.0
export PATH=$PATH:$HADOOP_HOME/bin
source /etc/profile

7. Set the Java environment variable for Hadoop

cd /home/hadoop/hadoop-2.7.0/etc/hadoop
vi hadoop-env.sh              # add:
### JAVA_HOME
export JAVA_HOME=/application/jdk/

8. Edit the Hadoop configuration files

cd /home/hadoop/hadoop-2.7.0/etc/hadoop

[hadoop@master hadoop]$ cat core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
  </property>
</configuration>

mapred-site.xml does not exist by default; copy the template first.

[hadoop@master hadoop]$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- license header as in core-site.xml -->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/home/hadoop/tmp</value>
  </property>
</configuration>

[hadoop@master hadoop]$ cat hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- license header as in core-site.xml -->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/name1,/home/hadoop/name2,/home/hadoop/name3</value>
    <description></description>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/data1,/home/hadoop/data2,/home/hadoop/data3</value>
    <description></description>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>

[hadoop@master hadoop]$ cat masters
master
[hadoop@master hadoop]$ cat slaves
slave1
slave2
slave3
[hadoop@master hadoop]$

9. Distribute to the slave nodes

scp -r /home/hadoop/hadoop-2.7.0 slave1:/home/hadoop/
scp -r /home/hadoop/hadoop-2.7.0 slave2:/home/hadoop/
scp -r /home/hadoop/hadoop-2.7.0 slave3:/home/hadoop/

10. Test on the master node

Do not create /home/hadoop/name1, /home/hadoop/name2, or /home/hadoop/name3 in advance; if they already exist, the format step will prompt for confirmation to re-format them.

cd /home/hadoop/hadoop-2.7.0
[hadoop@master hadoop-2.7.0]$ ./bin/hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
17/07/10 02:57:34 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = Master/192.168.56.100
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.0
STARTUP_MSG:   classpath = /home/hadoop/hadoop-2.7.0/etc/hadoop:... (long jar classpath omitted)
STARTUP_MSG:   build = Unknown -r Unknown; compiled by 'root' on 2015-05-27T13:56Z
STARTUP_MSG:   java = 1.8.0_60
************************************************************/
17/07/10 02:57:34 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/07/10 02:57:34 INFO namenode.NameNode: createNameNode [-format]
17/07/10 02:57:35 WARN common.Util: Path /home/hadoop/name1 should be specified as a URI in configuration files. Please update hdfs configuration. (same warning repeated for name2 and name3)
Formatting using clusterid: CID-77e0896d-bda2-49f1-8127-c5343f1c52c9
... (FSNamesystem, BlockManager, and GSet initialization INFO lines omitted)
17/07/10 02:57:36 INFO namenode.FSImage: Allocated new BlockPoolId: BP-467031090-192.168.56.100-1499626656612
17/07/10 02:57:36 INFO common.Storage: Storage directory /home/hadoop/name1 has been successfully formatted.
17/07/10 02:57:36 INFO common.Storage: Storage directory /home/hadoop/name2 has been successfully formatted.
17/07/10 02:57:36 INFO common.Storage: Storage directory /home/hadoop/name3 has been successfully formatted.
17/07/10 02:57:36 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/07/10 02:57:36 INFO util.ExitUtil: Exiting with status 0
17/07/10 02:57:37 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Master/192.168.56.100
************************************************************/
[hadoop@master hadoop-2.7.0]$

11. Start the services

[hadoop@master sbin]$ pwd
/home/hadoop/hadoop-2.7.0/sbin
[hadoop@master sbin]$
[hadoop@master sbin]$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /home/hadoop/hadoop-2.7.0/logs/hadoop-hadoop-namenode-master.out
slave3: starting datanode, logging to /home/hadoop/hadoop-2.7.0/logs/hadoop-hadoop-datanode-slave3.out
slave2: starting datanode, logging to /home/hadoop/hadoop-2.7.0/logs/hadoop-hadoop-datanode-slave2.out
slave1: starting datanode, logging to /home/hadoop/hadoop-2.7.0/logs/hadoop-hadoop-datanode-slave1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.7.0/logs/hadoop-hadoop-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.7.0/logs/yarn-hadoop-resourcemanager-master.out
slave3: starting nodemanager, logging to /home/hadoop/hadoop-2.7.0/logs/yarn-hadoop-nodemanager-slave3.out
slave2: starting nodemanager, logging to /home/hadoop/hadoop-2.7.0/logs/yarn-hadoop-nodemanager-slave2.out
slave1: starting nodemanager, logging to /home/hadoop/hadoop-2.7.0/logs/yarn-hadoop-nodemanager-slave1.out

[hadoop@master sbin]$ netstat -lntup
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address  State   PID/Program name
tcp        0      0 192.168.56.100:9000     0.0.0.0:*        LISTEN  4405/java
tcp        0      0 0.0.0.0:50090           0.0.0.0:*        LISTEN  4606/java
tcp        0      0 0.0.0.0:50070           0.0.0.0:*        LISTEN  4405/java
tcp        0      0 0.0.0.0:22              0.0.0.0:*        LISTEN  -
tcp        0      0 127.0.0.1:25            0.0.0.0:*        LISTEN  -
tcp6       0      0 :::22                   :::*             LISTEN  -
tcp6       0      0 :::8088                 :::*             LISTEN  4757/java
tcp6       0      0 ::1:25                  :::*             LISTEN  -
tcp6       0      0 :::8030                 :::*             LISTEN  4757/java
tcp6       0      0 :::8031                 :::*             LISTEN  4757/java
tcp6       0      0 :::8032                 :::*             LISTEN  4757/java
tcp6       0      0 :::8033                 :::*             LISTEN  4757/java
[hadoop@master sbin]$

http://192.168.56.100:50070/dfshealth.html#tab-overview
http://192.168.56.103:8042/node/allApplications
http://192.168.56.100:50090/status.html

Reposted from 小小三郎1's 51CTO blog. Original: http://blog.51cto.com/wsxxsl/1945709. Please contact the original author before reposting.
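The netstat listing above is how you confirm each daemon owns its expected port. A small helper (hypothetical, not part of the original post) makes the check scriptable: field 4 of `netstat -lntup` output is the local address and the last field is PID/Program, so the owning PID for a port can be extracted with awk.

```shell
# pid_for_port is an illustrative helper name: reads netstat -lntup output on
# stdin and prints the PID listening on the given port.
pid_for_port() {
  awk -v port=":$1" '$6 == "LISTEN" && $4 ~ port"$" { split($NF, a, "/"); print a[1] }'
}

# Example against a captured line from the output above:
printf '%s\n' 'tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN 4405/java' | pid_for_port 50070   # prints 4405
```

On a live master you would pipe `netstat -lntup 2>/dev/null | pid_for_port 50070` and compare the PID against `jps` output.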


Deploying Kubernetes on CentOS 7

一、Installing Docker

yum install -y docker
systemctl enable docker && systemctl start docker

二、Installing kubeadm, kubelet and kubectl

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

Notes:
1. Disable SELinux (the setenforce 0 above).
2. Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Reference: https://kubernetes.io/docs/setup/independent/install-kubeadm/

Reposted from 不要超过24个字符's 51CTO blog. Original: http://blog.51cto.com/cstsncv/2060277. Please contact the original author before reposting.
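The repo-file step above can be rehearsed without root by writing to a scratch directory first. A sketch (DEST is illustrative; on a real host it would be /etc/yum.repos.d):

```shell
# Write the kubernetes.repo file from the article into a temp directory and
# sanity-check it before pointing yum at it.
DEST="$(mktemp -d)"
cat <<'EOF' > "$DEST/kubernetes.repo"
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# Verify the section header survived the heredoc intact.
grep -q '^\[kubernetes\]$' "$DEST/kubernetes.repo" && echo "repo file looks sane"
```

Quoting the heredoc delimiter (`<<'EOF'`) keeps the shell from expanding anything inside the file body, which matters if a repo file ever contains `$`-prefixed yum variables.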


Installing and Deploying HBase and ZooKeeper (Part 2)

一、HBase basics

1. Concepts
HBase is a distributed, column-oriented database built on HDFS, designed for real-time random access to very large datasets. It is a column-family-oriented store: since tuning and storage decisions are made at the column-family level, all members of a column family should ideally share the same access pattern and size characteristics.

2. Regions
HBase automatically partitions tables horizontally into "regions". Each region holds a subset of a table's rows and is identified by the table it belongs to, its first row, and its last row (exclusive). Regions are the smallest unit of data distribution in an HBase cluster: a table too large for a single server is spread across the cluster, with each node managing a subset of the table's regions.

3. Implementation
The HBase master bootstraps a fresh install, assigns regions to registered regionservers, and recovers from regionserver failures. The master's load is light.
Regionservers manage zero or more regions and serve client read and write requests. They also handle region splits, notifying the HBase master of new daughter regions so the master can take the parent region offline and replace it with the daughters.
HBase depends on ZooKeeper. By default it manages a ZooKeeper instance that acts as the cluster's "authority". HBase keeps important state in ZooKeeper, such as the location of the root catalog table and the address of the current cluster master. Keeping assignment-transaction state in ZooKeeper also lets recovery resume assignment from wherever a crashed server left off.
When a client connects to an HBase cluster, it must at minimum be given the location of the cluster's ZooKeeper ensemble. The client can then navigate the ZooKeeper hierarchy to learn cluster attributes such as server locations.

4. Filesystem interfaces
HBase persists data through Hadoop's filesystem API. Several implementations of that interface exist: one for the local filesystem, plus others for KFS, Amazon S3, and HDFS. Most deployments, ours included, run HBase on HDFS.

5. HBase in operation
HBase keeps special catalog tables named -ROOT- and .META., which maintain the list, state, and location of all regions in the cluster.
The -ROOT- table holds the list of .META. regions; the .META. table holds the list of all user-space regions. Entries are keyed by region name, where a region name is composed of the table name, the start row, and a hash of the creation timestamp.

6. Interacting with regionservers
A client connected to the ZooKeeper ensemble first looks up the location of -ROOT-, then uses -ROOT- to find the .META. region covering the requested row, then queries that .META. region to locate the node hosting the user-space region. The client can then talk directly to the regionserver managing that region.

7. Caching lookups
Each row operation could otherwise cost three round-trips to remote nodes, so clients cache what they learn while traversing -ROOT- and .META.: the locations plus the user-space regions' start and end rows. Afterward they can find region locations without consulting .META.. When an error shows a region has moved, the client rechecks .META. for the new location; if the .META. region itself has moved, the client goes back to -ROOT-.

8. Regionserver writes
A write arriving at a regionserver is first appended to a commit log and then added to an in-memory memstore. When the memstore fills, its contents are flushed to the filesystem.

9. Regionserver failure recovery
The commit log lives on HDFS, so it survives a regionserver crash. The master splits the dead regionserver's commit log by region. On reassignment, before the dead server's regions are reopened, each region picks up its portion of the split commit log, containing updates not yet persisted, and replays ("redoes") them to restore the region to its pre-failure state.

10. Regionserver reads
A read first checks the region's memstore; if the required versions are found there, the query ends. Otherwise the flush files are checked in order, newest to oldest, until the versions satisfying the query are found or all flush files have been examined.

11. Regionserver background processes
One background process compacts flush files when their count reaches a threshold. A separate process monitors region size and splits a region once it exceeds the configured maximum.

二、ZooKeeper basics

1. Concepts
ZooKeeper is Hadoop's distributed coordination service.

2. Characteristics
Simple and expressive: ZooKeeper's basic operations form a rich set of "building blocks" that can be used to implement many coordination data structures and protocols, such as distributed queues, distributed locks, and leader election among a group of nodes.
Highly available: it runs on a collection of machines and is designed for high availability, so applications can depend on it fully. ZooKeeper helps a system avoid single points of failure and can therefore be used to build reliable applications.
Loosely coupled: it supports interactions whose participants need not know about one another.
A library: it provides an open-source, shared repository of implementations of common coordination patterns.
Fast: for write-dominated workloads ZooKeeper's benchmark throughput exceeds 10,000 operations per second; for read-dominated workloads it is several times higher.

3. Group membership in ZooKeeper
ZooKeeper can be viewed as a highly available filesystem. It has no files or directories; instead there is a unified notion of a node, called a znode, which acts both as a container of data (like a file) and as a container of other znodes (like a directory). Group membership is modeled as a parent znode named after the group, with child znodes named after the group members (servers).

4. Data model
ZooKeeper maintains a hierarchical tree of znodes; each znode stores data and has an associated ACL. ZooKeeper is designed for coordination services, which typically use small data files, not for bulk storage: a znode may store at most 1 MB of data.
Data access is atomic. A client reading a znode receives either all of its data or a failure, never a partial read; a write replaces all of the znode's data, and ZooKeeper guarantees it either fully succeeds or fully fails, so a partial write can never leave only part of a client's data stored. ZooKeeper does not support appends. These properties differ from HDFS, which is designed for high-volume storage with streaming access and append support.

5. Znode types
Znodes come in two types, ephemeral and persistent, fixed at creation time and unchangeable afterward. An ephemeral znode is deleted by ZooKeeper when the creating client's session ends. A persistent znode is independent of the client session and is removed only when a client (not necessarily its creator) explicitly deletes it. Ephemeral znodes may not have children, not even ephemeral ones.

6. Sequence numbers
A sequential znode is one whose name includes a ZooKeeper-assigned sequence number. If the sequential flag is set at creation, a value from a monotonically increasing counter maintained by the parent znode is appended to the name, e.g. /a/b-3.

7. Watches
Watches let clients be notified when a znode changes in some way. A watch is set by one operation on the ZooKeeper service and triggered by other operations on it.

二、Basic environment preparation (see the previous chapter)

1. Machines

IP address   Hostname  Role
10.1.2.208   vm13      master
10.1.2.215   vm7       slave
10.1.2.216   vm8       slave

2. OS version: CentOS release 6.5
3. Synchronize clocks
4. Disable the firewall
5. Create the hadoop user and hadoop group
6. Edit /etc/hosts:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.1.2.214  master
10.1.2.215  slave-one
10.1.2.216  slave-two
10.1.2.208  vm13
10.1.2.197  vm7
10.1.2.198  vm8

7. Raise the file-descriptor limit
8. Prepare the JVM environment

三、HBase installation and configuration

1. Software

hbase-0.94.16.tar.gz
jdk1.7.0_25.tar.gz

2. Unpack

[hadoop@vm13 local]$ ls -ld /usr/local/hbase-0.94.16/
drwxr-xr-x. 11 hadoop hadoop 4096 Jan 13 13:11 /usr/local/hbase-0.94.16/

3. Configuration changes

3.1 hbase-env.sh

# JDK install directory
export JAVA_HOME=/usr/local/jdk1.7.0_25
# Hadoop configuration directory, if needed
export HBASE_CLASSPATH=/usr/local/hadoop-1.0.4/conf
# Heap size
export HBASE_HEAPSIZE=2000
export HBASE_OPTS="-XX:ThreadStackSize=2048 -XX:+UseConcMarkSweepGC"
# Extra ssh options; port 22 by default
export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR -p 22"
# For now let HBase manage its bundled ZooKeeper (the default); HBase depends on ZooKeeper at startup
export HBASE_MANAGES_ZK=true

3.2 hbase-site.xml

hbase.rootdir: where HBase stores its data
hbase.cluster.distributed: enables HBase distributed mode
hbase.master: the HBase master node
hbase.zookeeper.quorum: the ZooKeeper ensemble nodes (reportedly must be an odd number)
hbase.zookeeper.property.dataDir: ZooKeeper's data directory

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
 * Copyright 2010 The Apache Software Foundation
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-->
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
    <description>The directory shared by RegionServers.</description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>
      The mode the cluster will be in. Possible values are
      false: standalone and pseudo-distributed setups with managed Zookeeper
      true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <!-- <value>dcnamenode1,dchadoop1,dchadoop3,dchbase1,dchbase2</value> -->
    <value>vm13,vm7,vm8</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/data0/zookeeper</value>
  </property>
  <property>
    <name>hbase.regionserver.handler.count</name>
    <value>32</value>
    <description>Default: 10.
      Count of RPC Listener instances spun up on RegionServers.
      Same property is used by the Master for count of master handlers.
    </description>
  </property>
  <!-- memstore flush policy: non-blocking writes -->
  <property>
    <name>hbase.hregion.memstore.flush.size</name>
    <value>134217728</value>
    <description>Default: 134217728 (128MB)
      Memstore will be flushed to disk if size of the memstore
      exceeds this number of bytes. Value is checked by a thread that runs
      every hbase.server.thread.wakefrequency.
    </description>
  </property>
  <property>
    <name>hbase.regionserver.maxlogs</name>
    <value>64</value>
    <description>Default: 32.</description>
  </property>
  <!--
  <property>
    <name>hbase.regionserver.hlog.blocksize</name>
    <value>67108864</value>
    <description>Default is hdfs block size, e.g. 64MB/128MB</description>
  </property>
  -->
  <property>
    <name>hbase.regionserver.optionalcacheflushinterval</name>
    <value>7200000</value>
    <description>Default: 3600000 (1 hour).
      Maximum amount of time an edit lives in memory before being automatically flushed.
      Default 1 hour. Set it to 0 to disable automatic flushing.
    </description>
  </property>
  <!-- memstore flush policy: blocking writes -->
  <property>
    <name>hbase.hregion.memstore.block.multiplier</name>
    <value>3</value>
    <description>Default: 2.
      Block updates if memstore has hbase.hregion.block.memstore
      time hbase.hregion.flush.size bytes. Useful preventing
      runaway memstore during spikes in update traffic. Without an
      upper-bound, memstore fills such that when it flushes the
      resultant flush files take a long time to compact or split, or
      worse, we OOME.
    </description>
  </property>
  <property>
    <name>hbase.regionserver.global.memstore.lowerLimit</name>
    <value>0.35</value>
    <description>Default: 0.35.
      When memstores are being forced to flush to make room in
      memory, keep flushing until we hit this mark. Defaults to 35% of heap.
      This value equal to hbase.regionserver.global.memstore.upperLimit causes
      the minimum possible flushing to occur when updates are blocked due to
      memstore limiting.
    </description>
  </property>
  <property>
    <name>hbase.regionserver.global.memstore.upperLimit</name>
    <value>0.4</value>
    <description>Default: 0.4.
      Maximum size of all memstores in a region server before new
      updates are blocked and flushes are forced. Defaults to 40% of heap.
    </description>
  </property>
  <property>
    <name>hbase.hstore.blockingStoreFiles</name>
    <value>256</value>
    <description>Default: 7.
      If more than this number of StoreFiles in any one Store
      (one StoreFile is written per flush of MemStore) then updates are
      blocked for this HRegion until a compaction is completed, or
      until hbase.hstore.blockingWaitTime has been exceeded.
    </description>
  </property>
  <!-- split policy -->
  <property>
    <name>hbase.hregion.max.filesize</name>
    <value>4294967296</value>
    <description>Default: 10737418240 (10G).
      Maximum HStoreFile size. If any one of a column families' HStoreFiles has
      grown to exceed this value, the hosting HRegion is split in two.
    </description>
  </property>
  <!-- compact policy -->
  <property>
    <name>hbase.hregion.majorcompaction</name>
    <value>0</value>
    <description>Default: 1 day.
      The time (in miliseconds) between 'major' compactions of all
      HStoreFiles in a region. Default: 1 day.
      Set to 0 to disable automated major compactions.
    </description>
  </property>
  <property>
    <name>hbase.hstore.compactionThreshold</name>
    <value>5</value>
    <description>Default: 3.
      If more than this number of HStoreFiles in any one HStore
      (one HStoreFile is written per flush of memstore) then a compaction
      is run to rewrite all HStoreFiles files as one. Larger numbers
      put off compaction but when it runs, it takes longer to complete.
    </description>
  </property>
</configuration>

3.3 regionservers

vm7
vm8

3.4 Copy the modified /usr/local/hbase-0.94.16 directory as a whole to vm7 and vm8, making sure its owner and group are set to hadoop.

3.5 Create /data0 in advance, owned by hadoop:hadoop.

3.6 Start HBase

[hadoop@vm13 conf]$ start-hbase.sh

3.7 Verify the processes

[hadoop@vm13 conf]$ jps
8146 Jps
7988 HMaster

[hadoop@vm7 conf]$ jps
4798 Jps
4571 HQuorumPeer
4656 HRegionServer

[hadoop@vm8 conf]$ jps
4022 HQuorumPeer
4251 Jps
4111 HRegionServer

四、ZooKeeper

1. Software

zookeeper-3.4.5.tar.gz

2. Unpack

[hadoop@vm13 local]$ pwd
/usr/local
[hadoop@vm13 local]$ ls -l
drwxr-xr-x. 10 hadoop hadoop 4096 Apr 22 2014 zookeeper

3. Configuration files

3.1 log4j.properties

3.1.1 Configuring the root logger

Syntax: log4j.rootLogger = [level], appenderName1, appenderName2, …
level is the logging priority: OFF, FATAL, ERROR, WARN, INFO, DEBUG, ALL, or a custom level. Log4j recommends using only four levels; from highest to lowest priority they are ERROR, WARN, INFO, DEBUG. The level defined here gates the corresponding levels throughout the application: with INFO configured, for example, no DEBUG-level messages are printed. appenderName names a destination for log output; several destinations may be listed at once.
Example: log4j.rootLogger=info,A1,B2,C3

3.1.2 Configuring log destinations

Syntax: log4j.appender.appenderName = fully.qualified.name.of.appender.class
where "fully.qualified.name.of.appender.class" can be one of five destinations:
1. org.apache.log4j.ConsoleAppender (console)
2. org.apache.log4j.FileAppender (file)
3. org.apache.log4j.DailyRollingFileAppender (one log file per day)
4. org.apache.log4j.RollingFileAppender (a new file whenever the file reaches a given size)
5. org.apache.log4j.WriterAppender (stream log output to an arbitrary destination)

1. ConsoleAppender options
Threshold=WARN: minimum level to output.
ImmediateFlush=true: the default; every message is flushed immediately.
Target=System.err: output target; System.out (the console) by default.

2. FileAppender options
Threshold=WARN: minimum level to output.
ImmediateFlush=true: the default; every message is flushed immediately.
File=mylog.txt: write messages to mylog.txt.
Append=false: the default is true, meaning append to the file; false overwrites the file's contents.

3. DailyRollingFileAppender options
Threshold, ImmediateFlush, File, Append: as above.
DatePattern=''.''yyyy-ww: roll the file weekly, i.e. produce a new file each week. Rolling can also be monthly, daily, twice daily, hourly, or per minute:
1) ''.''yyyy-MM: monthly
2) ''.''yyyy-ww: weekly
3) ''.''yyyy-MM-dd: daily
4) ''.''yyyy-MM-dd-a: twice a day
5) ''.''yyyy-MM-dd-HH: hourly
6) ''.''yyyy-MM-dd-HH-mm: per minute

4. RollingFileAppender options
Threshold, ImmediateFlush, File, Append: as above.
MaxFileSize=100KB: suffix may be KB, MB, or GB. When the log file reaches this size it rolls automatically, moving the old contents to mylog.log.1.
MaxBackupIndex=2: maximum number of rolled files to keep.

3.1.3 Configuring the log format

Syntax:
1) log4j.appender.appenderName.layout = fully.qualified.name.of.layout.class
where "fully.qualified.name.of.layout.class" can be one of four layouts:
1. org.apache.log4j.HTMLLayout (HTML table layout)
2. org.apache.log4j.PatternLayout (flexible pattern-based layout)
3. org.apache.log4j.SimpleLayout (level and message string)
4. org.apache.log4j.TTCCLayout (time, thread, category, and so on)

1. HTMLLayout options
LocationInfo=true: default false; include the Java file name and line number.
Title=my app file: defaults to "Log4J Log Messages".
2. PatternLayout options
ConversionPattern=%m%n: how to format the message.
3. XMLLayout options
LocationInfo=true: default false; include the Java file and line number.

2) log4j.appender.A1.layout.ConversionPattern=%-4r %-5p %d{yyyy-MM-dd HH:mm:ssS} %c %m%n

The conversion characters mean:
-X: left-align the output of X;
%p: priority of the message (DEBUG, INFO, WARN, ERROR, FATAL);
%d: date/time of the logging event, ISO8601 by default; a format may follow, e.g. %d{yyy MMM dd HH:mm:ss,SSS} gives output like 2002年10月18日 22:10:28,921;
%r: milliseconds elapsed since the application started;
%c: category of the logging event, usually the fully qualified class name;
%t: name of the thread that generated the event;
%l: location of the event, equivalent to %C.%M(%F:%L): category, method, file, and line, e.g. Testlog4.main(TestLog4.java:10);
%x: the NDC (nested diagnostic context) associated with the current thread, useful in multi-client, multi-threaded applications such as java servlets;
%%: a literal "%" character;
%F: file name where the message was issued;
%L: line number in the code;
%m: the message given in the code;
%n: a line break ("\r\n" on Windows, "\n" on Unix).

A modifier between % and the conversion character controls minimum width, maximum width, and alignment:
1) %20c: print the category name with a minimum width of 20, right-aligned (the default) if shorter;
2) %-20c: minimum width 20, with "-" meaning left-aligned;
3) %.30c: maximum width 30; longer names are truncated from the left, shorter ones are not padded;
4) %20.30c: pad with spaces and right-align if shorter than 20; truncate from the left if longer than 30.

# Define some default values that can be overridden by system properties
zookeeper.root.logger=INFO,ROLLINGFILE
zookeeper.console.threshold=INFO
#zookeeper.log.dir=.
zookeeper.log.dir=/data0/zookeeper/logs
zookeeper.log.file=zookeeper.log
zookeeper.log.threshold=INFO
#zookeeper.tracelog.dir=.
zookeeper.tracelog.dir=/data0/zookeeper/logs
zookeeper.tracelog.file=zookeeper_trace.log

#
# ZooKeeper Logging Configuration
#

# Format is "<default threshold> (, <appender>)+
# DEFAULT: console appender only
log4j.rootLogger=${zookeeper.root.logger}

# Example with rolling log file
#log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE

# Example with rolling log file and tracing
#log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE

#
# Log INFO level and above messages to the console
#
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n

#
# Add ROLLINGFILE to rootLogger to get log file output
#    Log DEBUG level and above messages to a log file
log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=${zookeeper.log.threshold}
log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/${zookeeper.log.file}

# Max log file size of 10MB
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
# uncomment the next line to limit number of backup files
#log4j.appender.ROLLINGFILE.MaxBackupIndex=10

log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n

#
# Add TRACEFILE to rootLogger to get log file output
#    Log DEBUG level and above messages to a log file
log4j.appender.TRACEFILE=org.apache.log4j.FileAppender
log4j.appender.TRACEFILE.Threshold=TRACE
log4j.appender.TRACEFILE.File=${zookeeper.tracelog.dir}/${zookeeper.tracelog.file}
log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout
### Notice we are including log4j's NDC here (%x)
log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L][%x] - %m%n

3.2 zoo.cfg

In a ZooKeeper ensemble, every server has a numeric ID that is unique within the ensemble and lies between 1 and 255. The server ID is set in a plain-text file named myid, stored in the directory given by the dataDir parameter.
Each server is added in the form:
server.n=hostname:port:port
where n is the server's ID. Two ports are configured: the first is the port followers use to connect to the leader; the second is used for leader election. With server.2=10.1.2.198:12888:13888, for example, the server listens on three ports: 2181 for client connections; 12888, when it is the leader, for follower connections; and 13888 for connections from other servers during the leader-election phase. When a ZooKeeper server starts, it reads the myid file to determine its own server ID, then reads the configuration file to determine which ports to listen on and to learn the network addresses of the other servers in the ensemble.
initLimit: the time allowed for all followers to connect to and sync with the leader. If more than half of the followers fail to sync within this period, the leader renounces its leadership and another leader election takes place. If this happens often (which the logs will show), the value is too small.
syncLimit: the time allowed for a follower to sync with the leader. A follower that fails to sync within this period restarts itself, and the clients attached to it connect to another follower.
tickTime: ZooKeeper's basic unit of time, in milliseconds.

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data0/zookeeper/data
# the port at which the clients will connect
clientPort=2181
server.1=vm13:12888:13888
server.2=vm7:12888:13888
server.3=vm8:12888:13888
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
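The server.n lines and the per-host myid files described above are easy to get wrong, and a bad myid is exactly what causes the "serverid null is not a number" startup failure shown later in this post. A sketch of a pre-start sanity check (the variable names are illustrative; it runs against scratch files here, but CFG and DATADIR would point at the real zoo.cfg and dataDir on a node):

```shell
# Build scratch copies of zoo.cfg and myid, then validate them the way a
# ZooKeeper node would need them: well-formed server.N lines and a numeric
# myid that matches one of those lines.
CFG="$(mktemp)"; DATADIR="$(mktemp -d)"
printf '%s\n' 'tickTime=2000' \
  'server.1=vm13:12888:13888' \
  'server.2=vm7:12888:13888' \
  'server.3=vm8:12888:13888' > "$CFG"
echo 1 > "$DATADIR/myid"

# Every server line should look like server.<id>=<host>:<peerport>:<electionport>.
servers=$(grep -c '^server\.[0-9][0-9]*=[^:][^:]*:[0-9][0-9]*:[0-9][0-9]*$' "$CFG")

# myid must be a bare number listed in zoo.cfg; anything else will fail at startup.
myid=$(cat "$DATADIR/myid" 2>/dev/null)
case "$myid" in
  ''|*[!0-9]*) echo "bad myid" ;;
  *) grep -q "^server\.$myid=" "$CFG" && echo "myid $myid OK" ;;
esac
```

With the three-node layout above this prints "myid 1 OK"; on a node whose myid file is missing or empty it prints "bad myid" instead of failing later with a stack trace.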
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

3.3 zookeeper-env.sh

ZOO_LOG_DIR=/data0/zookeeper/logs
ZOO_LOG4J_PROP=INFO,ROLLINGFILE

五、Startup

1. Edit hbase-env.sh so that HBase no longer manages its bundled ZooKeeper:

export HBASE_MANAGES_ZK=false

2. Start ZooKeeper and check the log

[hadoop@vm13 data]$ zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@vm13 data]$ tail /data0/zookeeper/logs/zookeeper.log -n 30
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing /usr/local/zookeeper/bin/../conf/zoo.cfg
    at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:121)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:101)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)
Caused by: java.lang.IllegalArgumentException: serverid null is not a number
    at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:358)
    at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:117)
    ... 2 more
2016-01-14 13:19:27,482 [myid:] - INFO [main:QuorumPeerConfig@101] - Reading configuration from: /usr/local/zookeeper/bin/../conf/zoo.cfg
2016-01-14 13:19:27,488 [myid:] - INFO [main:QuorumPeerConfig@334] - Defaulting to majority quorums
2016-01-14 13:19:27,489 [myid:] - ERROR [main:QuorumPeerMain@85] - Invalid config, exiting abnormally
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing /usr/local/zookeeper/bin/../conf/zoo.cfg
    at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:121)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:101)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)
Caused by: java.lang.IllegalArgumentException: serverid null is not a number
    at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:358)
    at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:117)
    ... 2 more
2016-01-14 13:19:36,661 [myid:] - INFO [main:QuorumPeerConfig@101] - Reading configuration from: /usr/local/zookeeper/bin/../conf/zoo.cfg
2016-01-14 13:19:36,666 [myid:] - INFO [main:QuorumPeerConfig@334] - Defaulting to majority quorums
2016-01-14 13:19:36,668 [myid:] - ERROR [main:QuorumPeerMain@85] - Invalid config, exiting abnormally
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing /usr/local/zookeeper/bin/../conf/zoo.cfg
    at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:121)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:101)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)
Caused by: java.lang.IllegalArgumentException: serverid null is not a number
    at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:358)
    at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:117)
    ... 2 more

The log shows ZooKeeper's startup entry point: org.apache.zookeeper.server.quorum.QuorumPeerMain. Its main method drives the startup sequence, which begins by parsing the configuration, i.e. zoo.cfg and myid.

3. Create the server IDs

[hadoop@vm13 data]$ echo 1 > myid
[hadoop@vm7 zookeeper]$ echo 2 > /data0/zookeeper/data/myid
[hadoop@vm8 zookeeper]$ echo 3 > /data0/zookeeper/data/myid

4. Start again; the ZooKeeper service must be started separately on each of the three machines

[hadoop@vm13 data]$ zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@vm13 data]$ tail /data0/zookeeper/logs/zookeeper.log -n 30
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:354)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:327)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:393)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:365)
    at java.lang.Thread.run(Thread.java:724)
2016-01-14 13:29:27,154 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:QuorumPeer@738] - FOLLOWING
2016-01-14 13:29:27,177 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:Learner@85] - TCP NoDelay set to: true
2016-01-14 13:29:27,184 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:Environment@100] - Server environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
2016-01-14 13:29:27,184 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:Environment@100] - Server environment:host.name=vm13
2016-01-14 13:29:27,184 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:Environment@100] - Server environment:java.version=1.7.0_25
2016-01-14 13:29:27,184 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:Environment@100] - Server environment:java.vendor=Oracle Corporation
2016-01-14 13:29:27,185 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:Environment@100] - Server environment:java.home=/usr/local/jdk1.7.0_25/jre
2016-01-14 13:29:27,185 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:Environment@100] - Server environment:java.class.path=/usr/local/zookeeper/bin/../build/classes:/usr/local/zookeeper/bin/../build/lib/*.jar:/usr/local/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/local/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/usr/local/zookeeper/bin/../lib/netty-3.2.2.Final.jar:/usr/local/zookeeper/bin/../lib/log4j-1.2.15.jar:/usr/local/zookeeper/bin/../lib/jline-0.9.94.jar:/usr/local/zookeeper/bin/../zookeeper-3.4.5.jar:/usr/local/zookeeper/bin/../src/java/lib/*.jar:/usr/local/zookeeper/bin/../conf:
2016-01-14 13:29:27,185 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2016-01-14 13:29:27,185 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:Environment@100] - Server environment:java.io.tmpdir=/tmp
2016-01-14 13:29:27,186 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:Environment@100] - Server environment:java.compiler=<NA>
2016-01-14 13:29:27,186 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:Environment@100] - Server environment:os.name=Linux
2016-01-14 13:29:27,186 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:Environment@100] - Server environment:os.arch=amd64
2016-01-14 13:29:27,189 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:Environment@100] - Server environment:os.version=2.6.32-431.el6.x86_64
2016-01-14 13:29:27,189 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:Environment@100] - Server environment:user.name=hadoop
2016-01-14 13:29:27,190 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:Environment@100] - Server environment:user.home=/home/hadoop
2016-01-14 13:29:27,190 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:Environment@100] - Server environment:user.dir=/data0/zookeeper/data
2016-01-14 13:29:27,191 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:ZooKeeperServer@162] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /data0/zookeeper/data/version-2 snapdir /data0/zookeeper/data/version-2
2016-01-14 13:29:27,194 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:Follower@63] - FOLLOWING - LEADER ELECTION TOOK - 283
2016-01-14 13:29:27,223 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:Learner@322] - Getting a diff from the leader 0x100000002
2016-01-14 13:29:27,228 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:FileTxnSnapLog@240] - Snapshotting: 0x100000002 to /data0/zookeeper/data/version-2/snapshot.100000002
2016-01-14 13:29:27,230 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:12181:FileTxnSnapLog@240] - Snapshotting: 0x100000002 to /data0/zookeeper/data/version-2/snapshot.100000002

5. Verification

Use nc (telnet also works) to send the ruok command ("Are you OK?") to the client port and check that ZooKeeper is running:

[hadoop@vm13 hbase]$ echo "ruok" | nc localhost 2181
imok

Process check:

[hadoop@vm13 data]$ jps
9134 Jps
9110 QuorumPeerMain

[hadoop@vm7 zookeeper]$ jps
5312 Jps
5287 QuorumPeerMain

[hadoop@vm8 zookeeper]$ jps
4797 Jps
4732 QuorumPeerMain

6. Start HBase and check the log

[hadoop@vm13 data]$ start-hbase.sh start
starting master, logging to /usr/local/hbase-0.94.16//logs/hbase-hadoop-master-vm13.out
vm7: starting regionserver, logging to /usr/local/hbase-0.94.16//logs/hbase-hadoop-regionserver-vm7.out
vm8: starting regionserver, logging to /usr/local/hbase-0.94.16//logs/hbase-hadoop-regionserver-vm8.out

[hadoop@vm13 hbase-0.94.16]$ tail /usr/local/hbase-0.94.16/logs/hbase-hadoop-master-vm13.log -n 30
    at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:152)
    at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:104)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
    at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2120)
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041)
    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1069)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:199)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:1109)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:1099)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:1083)
    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.createBaseZNodes(ZooKeeperWatcher.java:162)
    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:155)
    at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:347)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2101)
    ... 5 more
2016-01-14 13:32:33,200 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server vm8/10.1.2.198:2181. Will not attempt to authenticate using SASL (unknown error)
2016-01-14 13:32:33,201 WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:708)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)

The key line is org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase. The HBase-managed ZooKeeper had earlier created a /hbase directory on Hadoop, so go back to the Hadoop cluster master and check:

[root@master hadoop1.0]# hadoop fs -ls /
Found 2 items
drwxr-xr-x - hadoop supergroup 0 2016-01-11 11:40 /hadoop
drwxr-xr-x - hadoop supergroup 0 2016-01-14 10:42 /hbase

Delete /hbase:

[hadoop@master hadoop1.0]$ hadoop fs -rmr /hbase
Moved to trash: hdfs://master:9000/hbase

In addition, if ZooKeeper is not defined on the default 2181 port, for example on 12181, the following must be added to hbase-site.xml:

<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>12181</value>
  <description>
    Property from ZooKeeper's config zoo.cfg.
    The port at which the clients will connect.
  </description>
</property>

Start again and check:

[hadoop@vm13 conf]$ start-hbase.sh
starting master, logging to /usr/local/hbase-0.94.16//logs/hbase-hadoop-master-vm13.out
vm8: starting regionserver, logging to /usr/local/hbase-0.94.16//logs/hbase-hadoop-regionserver-vm8.out
vm7: starting regionserver, logging to /usr/local/hbase-0.94.16//logs/hbase-hadoop-regionserver-vm7.out

Log:

[hadoop@vm13 conf]$ tail ../logs/hbase-hadoop-master-vm13.log -n 50
2016-01-14 16:34:19,901 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Looked up root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@4c627d22; serverName=vm7,60020,1452760452808
2016-01-14 16:34:19,904 INFO org.apache.hadoop.hbase.catalog.CatalogTracker: Failed verification of .META.,,1 at address=vm7,60020,1452760130913; org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: .META.,,1
2016-01-14 16:34:19,957
DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Looked up root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@4c627d22; serverName=vm7,60020,1452760452808
2016-01-14 16:34:19,958 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Looked up root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@4c627d22; serverName=vm7,60020,1452760452808
2016-01-14 16:34:19,967 INFO org.apache.hadoop.hbase.catalog.CatalogTracker: Failed verification of .META.,,1 at address=vm7,60020,1452760130913; org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: .META.,,1
2016-01-14 16:34:19,973 DEBUG org.apache.hadoop.hbase.master.AssignmentManager: Handling transition=RS_ZK_REGION_OPENING, server=vm8,60020,1452760452737, region=1028785192/.META.
2016-01-14 16:34:20,019 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Looked up root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@4c627d22; serverName=vm7,60020,1452760452808
2016-01-14 16:34:20,021 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Looked up root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@4c627d22; serverName=vm7,60020,1452760452808
2016-01-14 16:34:20,025 INFO org.apache.hadoop.hbase.catalog.CatalogTracker: Failed verification of .META.,,1 at address=vm7,60020,1452760130913; org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: .META.,,1
2016-01-14 16:34:20,030
DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Looked up root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@4c627d22; serverName=vm7,60020,1452760452808
2016-01-14 16:34:20,032 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Looked up root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@4c627d22; serverName=vm7,60020,1452760452808
2016-01-14 16:34:20,038 INFO org.apache.hadoop.hbase.catalog.CatalogTracker: Failed verification of .META.,,1 at address=vm7,60020,1452760130913; org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: .META.,,1
2016-01-14 16:34:20,093 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Looked up root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@4c627d22; serverName=vm7,60020,1452760452808
2016-01-14 16:34:20,094 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Looked up root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@4c627d22; serverName=vm7,60020,1452760452808
2016-01-14 16:34
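The ruok probe from the verification step can be scripted across all three ZooKeeper nodes at once. This is a minimal sketch, assuming the hostnames vm13/vm7/vm8 and port 12181 used in this article and that nc is installed; classify is a hypothetical helper name, not something from the article:

```shell
#!/bin/sh
# Probe each ZooKeeper node with the four-letter "ruok" command.
# Hostnames and port are the ones used in this article; adjust as needed.
ZK_HOSTS="vm13 vm7 vm8"
ZK_PORT=12181

classify() {
  # A healthy ZooKeeper answers exactly "imok"; anything else means trouble.
  [ "$1" = "imok" ] && echo up || echo down
}

for h in $ZK_HOSTS; do
  resp=$(echo ruok | nc -w 2 "$h" "$ZK_PORT" 2>/dev/null)
  printf '%s:%s %s\n' "$h" "$ZK_PORT" "$(classify "$resp")"
done
```

A node reported as down either is not running QuorumPeerMain or is listening on a different port, which is exactly the mismatch that caused the ConnectionLoss error above.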


Deploying Hadoop is easier with Cloudera

Note: the following applies to rhel5/centos5.

1. Get the CDH3 yum repository:

wget -c http://archive.cloudera.com/redhat/cdh/cdh3-repository-1.0-1.noarch.rpm

2. Install the downloaded rpm package:

yum --nogpgcheck localinstall cdh3-repository-1.0-1.noarch.rpm

# installing it drops a cloudera-cdh3.repo file
[root@namenode ~]# ll /etc/yum.repos.d/
total 40
-rw-r--r-- 1 root root 1926 Aug 29 2011 CentOS-Base.repo
-rw-r--r-- 1 root root  631 Aug 29 2011 CentOS-Debuginfo.repo
-rw-r--r-- 1 root root  626 Aug 29 2011 CentOS-Media.repo
-rw-r--r-- 1 root root 5390 Aug 29 2011 CentOS-Vault.repo
-rw-r--r-- 1 root root  201 Jul 14 2011 cloudera-cdh3.repo

3. Import the rpm key:

rpm --import http://archive.cloudera.com/redhat/cdh/RPM-GPG-KEY-cloudera

4. Install the main hadoop package:

yum install hadoop-0.20

5. Install the daemons as hadoop-0.20-<daemon type>, where the daemon type is one of namenode, datanode, secondarynamenode, jobtracker, tasktracker. For example, installing a datanode is "yum install hadoop-0.20-datanode"; each role installs its own service.

6. Directories after installation:

# hadoop configuration directory
[root@namenode ~]# ll /etc/hadoop/
total 8
lrwxrwxrwx 1 root root   34 Feb 17 02:56 conf -> /etc/alternatives/hadoop-0.20-conf
drwxr-xr-x 2 root root 4096 Feb 28 10:13 conf.empty
drwxr-xr-x 2 root root 4096 Feb 28 10:15 conf.pseudo

# hadoop log directory
[root@namenode ~]# ll /var/log/hadoop
lrwxrwxrwx 1 root root 28 Feb 17 02:56 /var/log/hadoop -> /etc/alternatives/hadoop-log

# hadoop init scripts
[root@namenode ~]# ll /etc/init.d/ | grep hadoop
-rwxr-xr-x 1 root root 3041 Feb 17 02:26 hadoop-0.20-datanode
-rwxr-xr-x 1 root root 3067 Feb 17 02:26 hadoop-0.20-jobtracker
-rwxr-xr-x 1 root root 3041 Feb 17 02:26 hadoop-0.20-namenode
-rwxr-xr-x 1 root root 3158 Feb 17 02:26 hadoop-0.20-secondarynamenode
-rwxr-xr-x 1 root root 3080 Feb 17 02:26 hadoop-0.20-tasktracker

7. Edit the configuration files (HDFS side):

# the slaves file only needs to be configured on the namenode
cat /etc/hadoop/conf/slaves
datanode1
datanode2

# hdfs-site.xml
cat /etc/hadoop/conf/hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <!-- Immediately exit safemode as soon as one DataNode checks in.
       On a multi-node cluster, these configurations must be removed. -->
  <property>
    <name>dfs.safemode.extension</name>
    <value>0</value>
  </property>
  <property>
    <name>dfs.safemode.min.datanodes</name>
    <value>1</value>
  </property>
  <!--
  specify this so that running 'hadoop namenode -format' formats the right dir
  <property>
    <name>dfs.name.dir</name>
    <value>/var/lib/hadoop-0.20/cache/hadoop/dfs/name</value>
  </property>
  -->
  <!-- add by dongnan -->
  <property>
    <name>dfs.data.dir</name>
    <value>/data/dfs/data</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/dfs/tmp</value>
  </property>
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>200000</value>
  </property>
</configuration>

# core-site.xml
cat /etc/hadoop/conf/core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode:8020</value>
  </property>
</configuration>

8. Java environment:

# download and install the jdk
chmod +x jdk-6u26-linux-x64-rpm.bin
./jdk-6u26-linux-x64-rpm.bin

# edit profile
vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.6.0_26
export PATH=$JAVA_HOME/bin:$PATH

# load the environment variables
source /etc/profile

9. Start the corresponding hadoop daemons:

[root@namenode ~]# /etc/init.d/hadoop-0.20-namenode start
[root@namenode ~]# jps
5599 NameNode
12889 Jps

Originally posted by dongnan on his 51CTO blog: http://blog.51cto.com/dngood/791719
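The jps check in step 9 can be generalized into a small status report over the daemon roles listed in step 5. This is a minimal sketch, assuming jps is on the PATH; daemon_status is a hypothetical helper name, not part of the CDH packages:

```shell
#!/bin/sh
# Report whether the expected Hadoop daemons appear in jps output.
daemon_status() {
  # $1 = jps output, $2 = daemon class name (e.g. NameNode, DataNode).
  # -w matches whole words, so "NameNode" does not match "SecondaryNameNode".
  printf '%s\n' "$1" | grep -qw "$2" && echo running || echo missing
}

out=$(jps 2>/dev/null || true)
for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
  printf '%s: %s\n' "$d" "$(daemon_status "$out" "$d")"
done
```

On the namenode from step 9 only NameNode would report running; which daemons you expect depends on which hadoop-0.20-<daemon type> packages that host installed.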


Installing and deploying Elasticsearch 1.7.0

1. Download the 1.7.0 package from the Elasticsearch website:

# wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.0.tar.gz

2. Unpack elasticsearch-1.7.0.tar.gz:

# tar zxf elasticsearch-1.7.0.tar.gz

3. The configuration parameters explained (in practice you will not need all of them):

# The cluster name identifies your cluster; auto-discovery uses it.
# If you run multiple clusters on the same network, make sure the name is unique.
cluster.name: test-elasticsearch

# The node name is generated automatically at startup, so you can leave it
# unconfigured, or give the node a specific name.
node.name: "elsearch1"

# Allow this node to be elected master (allowed by default)
#node.master: true

# Allow this node to store data (allowed by default)
#node.data: true

# You can exploit these settings to design advanced cluster topologies.
#
# 1. You want this node to never become a master node, only to hold data.
#    This will be the "workhorse" of your cluster.
#node.master: false
#node.data: true
#
# 2. You want this node to only serve as a master: to not store any data and
#    to have free resources. This will be the "coordinator" of your cluster.
#node.master: true
#node.data: false

# Use the Cluster Health API [http://localhost:9200/_cluster/health], the
# Node Info API [http://localhost:9200/_nodes] or GUI tools.

# A node can have generic attributes associated with it, which can later be used
# for customized shard allocation filtering, or allocation awareness.
# An attribute is a simple key value pair, similar to node.key: value, for example:
#node.rack: rack314

# By default, multiple nodes are allowed to start from the same installation
# location; to disable it, set the following:
#node.max_local_storage_nodes: 1

# Set the number of shards (splits) of an index (5 by default):
#index.number_of_shards: 5

# Set the number of replicas (additional copies) of an index (1 by default):
#index.number_of_replicas: 1

# Note that for development on a local machine, with small indices, it usually
# makes sense to "disable" the distributed features:
#index.number_of_shards: 1
#index.number_of_replicas: 0

# Path to directory containing configuration (this file and logging.yml):
#path.conf: /path/to/conf

# Path to directory where to store index data allocated for this node.
#path.data: /path/to/data
# Can optionally include more than one location, causing data to be striped across
# the locations (a la RAID 0) on a file level, favouring locations with most free
# space on creation. For example:
#path.data: /path/to/data1,/path/to/data2

# Path to temporary files:
#path.work: /path/to/work

# Path to log files:
#path.logs: /path/to/logs

# Path to where plugins are installed:
#path.plugins: /path/to/plugins

# If a plugin listed here is not installed for the current node, the node will not start.
#plugin.mandatory: mapper-attachments,lang-groovy

# Elasticsearch performs poorly when the JVM starts swapping: you should ensure
# that it never swaps. Set this property to true to lock the memory:
#bootstrap.mlockall: true

# Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set
# to the same value, and that the machine has enough memory to allocate for
# Elasticsearch, leaving enough memory for the operating system itself.
# You should also make sure that the Elasticsearch process is allowed to lock
# the memory, e.g. by using `ulimit -l unlimited`.

# Elasticsearch binds itself to the 0.0.0.0 address by default, and listens on
# port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node
# communication (the range means that if the port is busy, the next port is
# tried automatically).

# Set the bind address specifically (IPv4 or IPv6):
#network.bind_host: 192.168.0.1

# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
#network.publish_host: 192.168.0.1

# Set both 'bind_host' and 'publish_host':
#network.host: 192.168.0.1

# Set a custom port for node to node communication (9300 by default):
#transport.tcp.port: 9300

# Enable compression for all communication between nodes (disabled by default):
#transport.tcp.compress: true

# Set a custom port to listen for HTTP traffic:
#http.port: 9200

# Set a custom allowed content length:
#http.max_content_length: 100mb

# Disable HTTP completely:
#http.enabled: false

4. Operating system configuration

1) File descriptors. Add to /etc/security/limits.conf:

* soft nofile 655350
* hard nofile 655350

Log out and log back in for this to take effect; verify with ulimit -n.

2) Maximum number of memory map areas, and keep swapping to a minimum. Add to /etc/sysctl.conf:

vm.max_map_count=262144
vm.swappiness=1

then apply the changes with sysctl -p.

3) JVM parameters. The bin directory under ES_HOME contains an elasticsearch.in.sh file; change

ES_MIN_MEM=256m
ES_MAX_MEM=1g

to values appropriate for your machine.

5. Installing plugins

Marvel is the management and monitoring tool for Elasticsearch, free for development use. It ships with an interactive console called Sense that lets you talk to Elasticsearch directly from the browser. Marvel is a plugin; download and install it by running the following in the Elasticsearch directory:

# ./plugin -i elasticsearch/marvel/latest

elasticsearch-head is a cluster management tool for Elasticsearch, a standalone web application written entirely in HTML5 that you can integrate into es as a plugin:

# ./plugin -install mobz/elasticsearch-head

URL: http://172.16.2.24:25556/_plugin/head/

bigdesk is a cluster monitoring tool for Elasticsearch that shows the state of the cluster: CPU and memory usage, index data, search activity, number of HTTP connections, and so on. From the bin directory of the installation, run:

# ./plugin -install lukas-vlcek/bigdesk

Open http://172.16.2.24:25556/_plugin/bigdesk in a browser to see the result.

Note on the ik analysis plugin: without it, indices could not be created here at all; http://172.16.2.24:25556/_plugin/head/ showed an empty cluster, and creating an index from the web page had no effect.

Note: the GitHub page https://github.com/medcl/elasticsearch-analysis-ik lists which ik version matches which es version; es 1.7.0 pairs with ik 1.2.6. Installing ik 1.8 at first made index creation fail, with ik errors in the log.

ik 1.2.6 download: https://github.com/medcl/elasticsearch-analysis-ik/releases?after=v1.6.1

Installation steps. Download the zip and unpack it:

# unzip elasticsearch-analysis-ik-master.zip

Install the maven environment (download the package from the Apache site) and set the environment variable:

# export PATH=$PATH:/usr/local/maven/bin

Since this is source code, build it with maven. Enter the unpacked directory and run:

# cd elasticsearch-analysis-ik-master
# mvn clean package

Create an ik directory under the es plugins directory and copy target/elasticsearch-analysis-ik-1.2.6.jar into it:

[root@localhost target]# cd /data/elasticsearch-1.7.0
[root@localhost elasticsearch-1.7.0]# ls
bin config data lib LICENSE.txt logs NOTICE.txt plugins README.textile
[root@localhost elasticsearch-1.7.0]# cd plugins/
[root@localhost plugins]# ls
bigdesk head ik marvel
[root@localhost plugins]# cd ik/
[root@localhost ik]# ls
elasticsearch-analysis-ik-1.2.6.jar

Note: on a cluster, copy the jar to the other machines as well.

Add the following to the es configuration file:

index:
  analysis:
    analyzer:
      ik:
        alias: [ik_analyzer]
        type: org.elasticsearch.index.analysis.IkAnalyzerProvider
      ik_max_word:
        type: ik
        use_smart: false
      ik_smart:
        type: ik
        use_smart: true
marvel.agent.enabled: false

The complete es configuration file, identical on all three machines except for the host IP and node.name:

# cat elasticsearch.yml
cluster.name: test-es-cluster
network.host: 172.16.2.24
node.name: "node24"
discovery.zen.ping.unicast.hosts: ["172.16.2.24:25555","172.16.2.21:25555","172.16.2.23:25555"]
index.number_of_shards: 5
discovery.zen.minimum_master_nodes: 2
script.groovy.sandbox.enabled: false
transport.tcp.port: 25555
http.port: 25556
script.inline: off
script.indexed: off
script.file: off
index:
  analysis:
    analyzer:
      ik:
        alias: [ik_analyzer]
        type: org.elasticsearch.index.analysis.IkAnalyzerProvider
      ik_max_word:
        type: ik
        use_smart: false
      ik_smart:
        type: ik
        use_smart: true
marvel.agent.enabled: false

Start the es service in the background:

[root@localhost bin]# pwd
/data/elasticsearch-1.7.0/bin
[root@localhost bin]# ./elasticsearch -d

Create an index from any one of the three cluster machines:

curl -XPUT 'http://172.16.2.24:25556/index'
{"acknowledged":true}

Note: a response of "acknowledged":true means success. Check the effect in a browser at http://172.16.2.24:25556/_plugin/head/.

Originally posted by 青衫解衣 on his 51CTO blog: http://blog.51cto.com/215687833/1927264
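For reference, an index created this way can also carry analyzer settings in the request body. This is a minimal sketch only: wiring ik_max_word in as the index's default analyzer is an assumption for illustration (the analyzer names come from the configuration above, but the article itself creates the index without a body):

```shell
#!/bin/sh
# Settings body that would make ik_max_word the default analyzer of a new index.
# ASSUMPTION: ik_max_word is defined in elasticsearch.yml as shown above.
BODY='{"settings":{"analysis":{"analyzer":{"default":{"type":"ik_max_word"}}}}}'

# Sanity-check the payload locally before sending it.
printf '%s\n' "$BODY" | grep -q '"type":"ik_max_word"' && echo "payload ok"

# On a live node (host/port from this article):
# curl -XPUT 'http://172.16.2.24:25556/index' -d "$BODY"
```

As with the plain PUT, a reply containing "acknowledged":true indicates the index was created.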

Resource downloads

Tencent Cloud software repository


To address slow access to official upstream repositories when installing software dependencies, Tencent Cloud hosts caching services for a number of software sources. You can use the Tencent Cloud mirror site to speed up dependency installation. To make it easy to build your own service architecture, the mirror site currently supports both public-network and private-network access.
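As a concrete illustration of pointing a package manager at such a mirror, here is a pip sketch. The mirror URL below is an assumption for illustration; check the mirror site's own documentation for the current endpoint for your package manager:

```shell
#!/bin/sh
# Point pip at a mirror index instead of the default upstream.
# NOTE: the URL is illustrative; confirm the real endpoint in the mirror docs.
MIRROR_INDEX="https://mirrors.cloud.tencent.com/pypi/simple"

# One-off use:
echo "pip install -i $MIRROR_INDEX requests"

# Or persist it in pip's user configuration:
# pip config set global.index-url "$MIRROR_INDEX"
```

The same idea applies to yum, apt, npm, and other managers: replace the default repository URL with the mirror's endpoint in the corresponding configuration file.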

Nacos


Nacos /nɑ:kəʊs/ is short for Dynamic Naming and Configuration Service: a dynamic service discovery, configuration management, and AI-agent management platform that makes it easy to build AI agent applications. Nacos helps you discover, configure, and manage microservices and AI-agent applications. It provides a simple, easy-to-use feature set for dynamic service discovery, service configuration, service metadata, and traffic management, so you can build, deliver, and manage microservice platforms with more agility and less effort.

Spring


The Spring Framework is an open-source Java enterprise application framework created by Rod Johnson in 2002 to reduce the complexity of enterprise development by using JavaBeans in place of the traditional EJB approach. Designed around simplicity, testability, and loose coupling, it provides a core container, application context, data-access integration, and other modules, and supports integration with third-party frameworks such as Hibernate and Struts. Its reach extends well beyond server-side development: most Java applications can benefit from it.

Rocky Linux


Rocky Linux is an enterprise Linux distribution launched by Gregory Kurtzer in December 2020 as a community-owned, fully RHEL-compatible (Red Hat Enterprise Linux) open-source replacement after the stable CentOS releases were discontinued. It supports x86_64, aarch64, and other architectures, provides long-term stability by rebuilding RHEL source code, uses modular packaging and the SELinux security architecture, ships the GNOME desktop environment and XFS file system by default, and follows a ten-year update life cycle.
