

An excellent personal blog: 低调大师

Setting Up the Eclipse and MyEclipse Development Environment

Main steps: download and configure Eclipse; create and run a simple Java SE project; download and crack MyEclipse; integrate Eclipse and MyEclipse; hook the development environment up to Tomcat. For this setup you can also refer to https://www.cnblogs.com/kangjianwei101/p/5621730.html, which is well written.

Download and configure Eclipse
1. Download from the official site: https://www.eclipse.org/downloads/ Here I use my own network drive. Link: https://pan.baidu.com/s/1boZmhFaBrlO55tkjEkQN-Q password: v723
2. After unpacking, double-click the eclipse.exe file to install.
3. Set the workspace: E:\java\code\EclipseEEWorkspace (create the directory yourself).

Create and run a simple Java SE project
1. Create a new Project.
2. Create a new Class under src.
3. In the main method write: System.out.println("say hello world."); then click the Run button.
4. The Console window shows: say hello world.

Download and crack MyEclipse
MyEclipse is a powerful enterprise-grade integrated development environment built on top of Eclipse with its own plugins, used mainly for Java, Java EE and mobile development. It is very feature-rich and broadly supported, with particularly good support for all kinds of open-source products.
1. Download MyEclipse. You can get it from the official site: http://www.myeclipsecn.com/ I use my own network drive. Link: https://pan.baidu.com/s/1UOEZPhyzsl0dlc8gMzrgxA password: vsc8. This is the 2015 edition and includes the cracking tool.
2. Install and crack MyEclipse. Click through the installer; just pay attention to whether your system is 32-bit or 64-bit, and at the last step make sure you do not launch it yet.
3. Register MyEclipse. Unpack the keygen and run crack.bat in the myeclipse2015_keygen folder: enter a username, choose "blue", click SystemId, then click Active. Now you can open MyEclipse.
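For readers working outside the IDE, the same hello-world step can be reproduced from the command line with just a JDK on the PATH. A minimal sketch (the file and class name `Hello` are my own choice, not from the article):

```shell
#!/bin/sh
# Create the same source file the IDE steps above would produce.
mkdir -p src
cat > src/Hello.java <<'EOF'
public class Hello {
    public static void main(String[] args) {
        System.out.println("say hello world.");
    }
}
EOF

# Compile and run it (skipped when no JDK is installed).
if command -v javac >/dev/null 2>&1; then
    javac -d . src/Hello.java
    java Hello          # prints: say hello world.
fi
```

If the console prints "say hello world.", the JDK side of the environment is working.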


CentOS 7.x Hadoop Cluster Setup

1. Preparation
I have a host whose IP is 192.168.27.166, and I will extend it with three more hosts.
Set the host names in /etc/hosts:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.27.166 s166
192.168.27.167 s167
192.168.27.168 s168
192.168.27.169 s169

Here s166 will act as the name node (NameNode) and the other three as data nodes (DataNode). Also set each machine's own name in /etc/hostname.

2. Clone the virtual machines
(Omitted.)

3. Change the hostname and IP address of each data node
Edit /etc/sysconfig/network-scripts/ifcfg-eno33, edit /etc/hostname, then restart the network service:

service network restart

Verify that the hosts can reach one another:

[root@s166 etc]# ping s167
PING s167 (192.168.27.167) 56(84) bytes of data.
64 bytes from s167 (192.168.27.167): icmp_seq=1 ttl=64 time=0.245 ms
[root@s166 etc]# ping s168
PING s168 (192.168.27.168) 56(84) bytes of data.
64 bytes from s168 (192.168.27.168): icmp_seq=1 ttl=64 time=0.203 ms
[root@s166 etc]# ping s169
PING s169 (192.168.27.169) 56(84) bytes of data.
64 bytes from s169 (192.168.27.169): icmp_seq=1 ttl=64 time=0.178 ms

4. Passwordless communication between the hosts
Set up passwordless SSH login by copying s166's public key file id_rsa.pub to hosts s167-s169, then test the configuration:

[root@s166 etc]# ssh s167
Last failed login: Wed Jul 25 15:50:22 EDT 2018 on tty1
There was 1 failed login attempt since the last successful login.
Last login: Wed Jul 25 10:36:57 2018 from s166
[root@s167 ~]# ...

Hadoop's 4+1 configuration files
Hadoop's configuration files all live under /hadoop/etc/hadoop.

1. core-site.xml: point fs.defaultFS at hdfs://s166.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://s166/</value>
</property>
</configuration>

2. hdfs-site.xml: set the replication factor to 3.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
</configuration>

3. mapred-site.xml: first copy mapred-site.xml.template to mapred-site.xml, then edit mapred-site.xml to set the execution framework to Hadoop YARN.

<?xml version="1.0"?>
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>

4.
yarn-site.xml: set the ResourceManager hostname to s166.

<?xml version="1.0"?>
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>s166</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>

5. slaves: list the data nodes.

s167
s168
s169

Edit the etc/hadoop/hadoop-env.sh configuration
Change the export JAVA_HOME line in that file to the absolute path of JAVA_HOME:

export JAVA_HOME=/home/fantj/jdk

Distribute the configuration
Go into the hadoop/etc directory and copy its hadoop directory to each of the other hosts:

scp -r hadoop/ root@s167:/home/fantj/hadoop/etc/
scp -r hadoop/ root@s168:/home/fantj/hadoop/etc/
scp -r hadoop/ root@s169:/home/fantj/hadoop/etc/

Format the file system

hadoop namenode -format

[root@s166 etc]# hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
18/07/27 04:17:36 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = s166/192.168.27.166
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.0
STARTUP_MSG: classpath = /home/fantj/download/hadoop-2.7.0/etc/hadoop:... (long list of bundled jars omitted)
STARTUP_MSG: build = Unknown -r Unknown; compiled by 'root' on 2015-05-21T03:49Z
STARTUP_MSG: java = 1.8.0_171
************************************************************/
18/07/27 04:17:36 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
18/07/27 04:17:36 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-75fa7946-8fa3-4b09-83c4-00ec91f5f0b6
18/07/27 04:17:39 INFO namenode.FSNamesystem: No KeyProvider found.
18/07/27 04:17:39 INFO namenode.FSNamesystem: fsLock is fair:true
18/07/27 04:17:39 INFO blockmanagement.BlockManager: defaultReplication = 3
... (further INFO lines about BlockManager, GSet and FSNamesystem defaults omitted)
18/07/27 04:17:40 INFO namenode.FSImage: Allocated new BlockPoolId: BP-703568763-192.168.27.166-1532679460786
18/07/27 04:17:40 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.
18/07/27 04:17:41 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/07/27 04:17:41 INFO util.ExitUtil: Exiting with status 0
18/07/27 04:17:41 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at s166/192.168.27.166
************************************************************/

Start the Hadoop processes

start-all.sh

[root@s166 etc]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [s166]
The authenticity of host 's166 (192.168.27.166)' can't be established.
ECDSA key fingerprint is SHA256:3BePLPmiwUOy025LcIJJNvQJlKUJ3uo9T03op0XC5ws.
ECDSA key fingerprint is MD5:24:ee:b2:a8:cf:df:9d:f7:cc:6c:1f:73:c5:ad:b5:b0.
Are you sure you want to continue connecting (yes/no)? yes
s166: Warning: Permanently added 's166,192.168.27.166' (ECDSA) to the list of known hosts.
root@s166's password:
s166: starting namenode, logging to /home/fantj/download/hadoop-2.7.0/logs/hadoop-root-namenode-s166.out
s167: starting datanode, logging to /home/fantj/download/hadoop-2.7.0/logs/hadoop-root-datanode-s167.out
s169: starting datanode, logging to /home/fantj/download/hadoop-2.7.0/logs/hadoop-root-datanode-s169.out
s168: starting datanode, logging to /home/fantj/download/hadoop-2.7.0/logs/hadoop-root-datanode-s168.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is SHA256:3BePLPmiwUOy025LcIJJNvQJlKUJ3uo9T03op0XC5ws.
ECDSA key fingerprint is MD5:24:ee:b2:a8:cf:df:9d:f7:cc:6c:1f:73:c5:ad:b5:b0.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
root@0.0.0.0's password:
0.0.0.0: starting secondarynamenode, logging to /home/fantj/download/hadoop-2.7.0/logs/hadoop-root-secondarynamenode-s166.out
starting yarn daemons
starting resourcemanager, logging to /home/fantj/download/hadoop-2.7.0/logs/yarn-root-resourcemanager-s166.out
s168: starting nodemanager, logging to /home/fantj/download/hadoop-2.7.0/logs/yarn-root-nodemanager-s168.out
s167: starting nodemanager, logging to /home/fantj/download/hadoop-2.7.0/logs/yarn-root-nodemanager-s167.out
s169: starting nodemanager, logging to /home/fantj/download/hadoop-2.7.0/logs/yarn-root-nodemanager-s169.out

Verifying the setup
Check the messages printed to the console above: the NameNode starts on s166, a DataNode on each of s167-s169, the ResourceManager on s166 and a NodeManager on each of s167-s169. If anything reports an error, go read the corresponding log file to diagnose it.

NameNode host check:

[root@s166 etc]# jps
1397 NameNode
1559 SecondaryNameNode
1815 Jps
1727 ResourceManager

DataNode host check:

[root@s168 ~]# jps
11281 Jps
1815 NodeManager
1756 DataNode
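The host-by-host `jps` check above can be scripted. A sketch that compares each host's `jps` output against the role assignment used in this article (s166 as NameNode, s167-s169 as DataNodes); the `expected_for` helper and the `RUN_CHECK` switch are my own additions, so the script only contacts the hosts when you ask it to:

```shell
#!/bin/sh
# Map each host to the daemons it should be running after start-all.sh.
expected_for() {
    case "$1" in
        s166) echo "NameNode SecondaryNameNode ResourceManager" ;;
        s167|s168|s169) echo "DataNode NodeManager" ;;
    esac
}

# Run jps on a host over SSH and report any daemon that is missing.
check_host() {
    host=$1
    procs=$(ssh "root@$host" jps 2>/dev/null)
    for p in $(expected_for "$host"); do
        echo "$procs" | grep -q "$p" || echo "$host: missing $p"
    done
}

# Set RUN_CHECK=1 on the real cluster; off by default so the sketch is safe
# to run anywhere.
if [ "${RUN_CHECK:-0}" = 1 ]; then
    for h in s166 s167 s168 s169; do
        check_host "$h"
    done
fi
```

No output means every expected daemon was found on every host.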


Notes from the Trenches: Setting Up a Personal Server on CentOS

Environment: VMware, CentOS 7 64-bit. Note: this is a summary of my own trial-and-error, written down both as a record and to help anyone who needs it. It quotes some notes from other experts; many thanks, and I hope they don't mind.

How to inspect ports on Linux

// check the usage of a given port, e.g. port 8000
lsof -i:8000
// check which process holds a given port, e.g. port 8000
netstat -tunlp|grep 8000
netstat -anp|grep 8000
// to dig further into which program holds it, or directly use ps -aux | grep pid
ps -aux | grep java

netstat -ntlp             // list all current TCP ports
netstat -ntulp |grep 80   // all usage of port 80
netstat -an | grep 3306   // all usage of port 3306

// find the process of a given service
ps -ef | grep tomcat
// list all Java processes on this machine
jps

CentOS 7 firewall commands

systemctl start/stop/restart firewalld.service   // start/stop/restart the firewall
systemctl status firewalld.service               // show firewall status
systemctl enable/disable firewalld.service       // enable/disable the firewall at boot
systemctl is-enabled firewalld.service           // check whether the firewall starts at boot
systemctl list-unit-files|grep enabled           // list services enabled at boot

firewall-cmd --state                                        // firewall status
firewall-cmd --list-ports                                   // ports already open
firewall-cmd --permanent --zone=public --add-port=8080/tcp  // open a port; --permanent makes it survive reboots
firewall-cmd --reload                                       // restart/reload the firewall

Firewall commands before CentOS 7

service iptables status   // firewall status
service iptables stop     // stop the firewall temporarily
chkconfig iptables off    // disable the firewall permanently

Installing the JDK

mkdir silence                            // create a directory under the root directory
tar -zxvf jdk-7u55-linux-i586.tar.gz     // unpack
ll                                       // list the current directory
find / -name profile                     // locate the profile file
cat /etc/profile
vim /etc/profile
// configure JAVA_HOME: press i for insert mode and append at the end:
JAVA_HOME=/silence/jdk1.7.0_55/
export PATH=$JAVA_HOME/bin:$PATH
// :wq to save and quit
source /etc/profile                      // reload the file so it takes effect
java -version                            // check the version information

Installing Tomcat

Same idea: just unpack it, no environment variables needed.
cd into the tomcat/bin directory:
./startup.sh    // start
./shutdown.sh   // stop

Installing MySQL: see https://www.cnblogs.com/bigbrotherer/p/7241845.html
If MySQL refuses connections after installation: I used the grant method, see https://blog.csdn.net/ly_dengle/article/details/77835882

yum list mysql*   // list all available mysql versions

File operations

mv test.war /silence   // move test.war into the silence directory
rm -f filename         // delete without prompting

Tomcat unreachable from an outside browser: see https://blog.csdn.net/danruoshui315/article/details/76615388
Causes:
1. A 32-bit program installed on a 64-bit system. Error message: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory. (A JDK version problem.)
2. The firewall blocks the port. CentOS 7 uses firewalld rather than iptables, so this class of problem is solved by adding the needed port to firewalld, opening it for our use.
Solution:
1. Check the firewall state with firewall-cmd --state; the result is running or not running.
2. While it is running, add the port that needs opening: firewall-cmd --permanent --zone=public --add-port=8080/tcp adds the port permanently; drop --permanent for a temporary rule.
3. firewall-cmd --reload to load the configuration so the change takes effect.
4. firewall-cmd --permanent --zone=public --list-ports to list the open ports; seeing 8080/tcp means it was opened correctly.
5. Visit again from the outside browser; the Tomcat welcome page appears.
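The firewall fix above (steps 1-4) can be wrapped in one small helper. A sketch; the `open_port` function and the `AUTO_APPLY` guard are my own additions, since the real commands need root and a running firewalld:

```shell
#!/bin/sh
# Open a TCP port in firewalld permanently and confirm it took effect.
# Port 8080 (Tomcat) is the article's example; pass another port as $1.
PORT=${1:-8080}

open_port() {
    firewall-cmd --permanent --zone=public --add-port="$1"/tcp
    firewall-cmd --reload
    # Confirm the port now shows up in the open-port list.
    firewall-cmd --permanent --zone=public --list-ports | grep -q "$1/tcp"
}

# Only touch the firewall when explicitly asked (AUTO_APPLY=1).
if [ "${AUTO_APPLY:-0}" = 1 ]; then
    open_port "$PORT"
else
    echo "would open port $PORT/tcp (set AUTO_APPLY=1 to apply)"
fi
```

Run it as root on the server with AUTO_APPLY=1; a zero exit status means the port is listed as open.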


Building a Personal WordPress Blog on CentOS

Prepare the LNMP environment

Estimated time: 30 min ~ 60 min

LNMP is short for Linux, Nginx, MySQL and PHP — the base runtime stack that the WordPress blog system depends on. Let's prepare the LNMP environment first.

Install Nginx

Install Nginx with yum:

yum install nginx -y

Edit /etc/nginx/conf.d/default.conf and remove the listener on the IPv6 address, as in the example below:

Example code: /etc/nginx/conf.d/default.conf

server {
    listen       80 default_server;
    # listen       [::]:80 default_server;
    server_name  _;
    root         /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}

When you are done, start Nginx:

nginx

You can now open the machine's public HTTP service (http://<your CVM IP address>) to confirm the installation succeeded.

Set Nginx to start on boot:

chkconfig nginx on

Note: CentOS 6 does not support IPv6, so the IPv6 listener must be removed; otherwise Nginx will fail to start.

Install MySQL

Install MySQL with yum:

yum install mysql-server -y

After installation, start the MySQL service:

service mysqld restart

Set the password of the MySQL root account:

/usr/bin/mysqladmin -u root password 'MyPas$word4Word_Press'

Set MySQL to start on boot:

chkconfig mysqld on

Note: the password in the command above was generated for you by the tutorial. To keep the walkthrough simple, using a different password is not recommended; if you do set your own, write it down — it is needed in later steps.

Install PHP

Install PHP with yum:

yum install php-fpm php-mysql -y

After installation, start the PHP-FPM process:

service php-fpm start

Once it is running, you can check which port PHP-FPM listens on:

netstat -nlpt | grep php-fpm

Set PHP-FPM to start on boot as well:

chkconfig php-fpm on

Note: CentOS 6 ships with PHP-FPM and PHP-MySQL preinstalled, so the commands above may report that the packages are already installed. PHP-FPM listens on port 9000 by default.

Install and configure WordPress

Estimated time: 30 min ~ 60 min

Install WordPress

With the LNMP environment in place, install WordPress with yum as well:

yum install wordpress -y

After installation you will find the WordPress source code under /usr/share/wordpress.

Configure the database

Enter MySQL:

mysql -uroot --password='MyPas$word4Word_Press'

Create a database for WordPress:

CREATE DATABASE wordpress;

That is all on the MySQL side; leave the MySQL shell:

exit

Now carry the database settings over into the WordPress configuration file, as below:

Example code: /etc/wordpress/wp-config.php

<?php
/**
 * The base configuration for WordPress
 *
 * The wp-config.php creation script uses this file during the
 * installation. You don't have to use the web site, you can
 * copy this file to "wp-config.php" and fill in the values.
 *
 * This file contains the following configurations:
 *
 * * MySQL settings
 * * Secret keys
 * * Database table prefix
 * * ABSPATH
 *
 * @link https://codex.wordpress.org/Editing_wp-config.php
 *
 * @package WordPress
 */

// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'wordpress');

/** MySQL database username */
define('DB_USER', 'root');

/** MySQL database password */
define('DB_PASSWORD', 'MyPas$word4Word_Press');

/** MySQL hostname */
define('DB_HOST', 'localhost');

/** Database Charset to use in creating database tables. */
define('DB_CHARSET', 'utf8');

/** The Database Collate type. Don't change this if in doubt. */
define('DB_COLLATE', '');

/**#@+
 * Authentication Unique Keys and Salts.
 *
 * Change these to different unique phrases!
 * You can generate these using the {@link https://api.wordpress.org/secret-key/1.1/salt/ WordPress.org secret-key service}
 * You can change these at any point in time to invalidate all existing
 * cookies. This will force all users to have to log in again.
 *
 * @since 2.6.0
 */
define('AUTH_KEY',         'put your unique phrase here');
define('SECURE_AUTH_KEY',  'put your unique phrase here');
define('LOGGED_IN_KEY',    'put your unique phrase here');
define('NONCE_KEY',        'put your unique phrase here');
define('AUTH_SALT',        'put your unique phrase here');
define('SECURE_AUTH_SALT', 'put your unique phrase here');
define('LOGGED_IN_SALT',   'put your unique phrase here');
define('NONCE_SALT',       'put your unique phrase here');
/**#@-*/

/**
 * WordPress Database Table prefix.
 *
 * You can have multiple installations in one database if you give each
 * a unique prefix. Only numbers, letters, and underscores please!
 */
$table_prefix = 'wp_';

/**
 * See http://make.wordpress.org/core/2013/10/25/the-definitive-guide-to-disabling-auto-updates-in-wordpress-3-7
 */
/* Disable all file change, as RPM base installation are read-only */
define('DISALLOW_FILE_MODS', true);

/* Disable automatic updater, in case you want to allow above FILE_MODS for plugins, themes, ... */
define('AUTOMATIC_UPDATER_DISABLED', true);

/* Core update is always disabled, WP_AUTO_UPDATE_CORE value is ignored */

/**
 * For developers: WordPress debugging mode.
 *
 * Change this to true to enable the display of notices during development.
 * It is strongly recommended that plugin and theme developers use WP_DEBUG
 * in their development environments.
 *
 * For information on other constants that can be used for debugging,
 * visit the Codex.
 *
 * @link https://codex.wordpress.org/Debugging_in_WordPress
 */
define('WP_DEBUG', false);

/* That's all, stop editing! Happy blogging. */

/** Absolute path to the WordPress directory. */
if ( !defined('ABSPATH') )
    define('ABSPATH', '/usr/share/wordpress');

/** Sets up WordPress vars and included files. */
require_once(ABSPATH . 'wp-settings.php');

Note: if you did not use the tutorial-generated password in the steps above, adjust the password in the configuration and login commands accordingly.

Configure Nginx

WordPress is installed; now configure Nginx to forward requests to PHP-FPM.

First, rename the default configuration file:

cd /etc/nginx/conf.d/
mv default.conf default.conf.bak

Then create wordpress.conf under /etc/nginx/conf.d with content like the following:

Example code: /etc/nginx/conf.d/wordpress.conf

server {
    listen 80;
    root /usr/share/wordpress;

    location / {
        index index.php index.html index.htm;
        try_files $uri $uri/ /index.php;
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

After that, tell the Nginx process to reload its configuration:

nginx -s reload

Note: the default server also listens on port 80, which conflicts with the WordPress service port; that is why it was renamed with a .bak suffix to disable it.

Prepare a domain and DNS resolution

Estimated time: 15 min ~ 30 min

Register a domain

If you do not have a domain yet, you can purchase one on Tencent Cloud; the process is covered in the video below.

Video - Purchasing a domain on Tencent Cloud

Resolve the domain

Once the purchase is complete, the domain needs to be resolved to the lab CVM, whose IP is: <your CVM IP address>

For domains bought on Tencent Cloud, add a resolution record in the console; see the video below:

Video - Resolving a domain on Tencent Cloud

DNS changes take a while to propagate. Check whether the record is live with ping, e.g.:

ping www.yourdomain.com

If the output contains the IP address you configured, the resolution works. Remember to replace www.yourdomain.com in the command with your own registered domain.

Done!

Congratulations — your WordPress blog is deployed. You can now open it in a browser to see the result.

Blog address: http://<your domain>/wp-admin/install.php
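The hand-off from Nginx to PHP-FPM is the step that most often goes wrong in this setup. As a quick sanity check, the proxy block can be written to a scratch file and inspected before copying it into place. The /tmp path here is illustrative; in the deployment above this block lives inside /etc/nginx/conf.d/wordpress.conf:

```shell
# Write just the PHP-FPM proxy block to a scratch file for inspection.
# (Illustrative path; the real config belongs in /etc/nginx/conf.d/wordpress.conf.)
cat > /tmp/wp-php-proxy.conf <<'EOF'
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
EOF

# Two things worth eyeballing: the dot in the location regex is escaped
# (\.php$), and the fastcgi_pass target matches the port PHP-FPM listens on.
grep -n 'fastcgi_pass' /tmp/wp-php-proxy.conf
```

If `netstat -nlpt | grep php-fpm` shows a port other than 9000, adjust fastcgi_pass to match before reloading Nginx.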


Setting up a server environment on Linux

1. Java runtime environment

To run Java programs on Linux you need a Java runtime. Download: http://www.oracle.com/technetwork/java/javase/downloads/index.html

Create the target directory: mkdir /usr/local/java
Extract: tar -zxvf jdk-7u65-linux-i586.tar.gz -C /usr/local/java/
Check the JDK bundled with the system: java -version
Remove the bundled JDK: rpm -qa | grep jdk; rpm -e --nodeps ...

Configure the environment variables:

vi /etc/profile
export JAVA_HOME=/usr/local/java/jdk1.7.0_65
export PATH=$PATH:$JAVA_HOME/bin
source /etc/profile
java -version

Seeing the Java version information means the configuration succeeded.

2. MySQL (binary tarball install)

Download: wget https://dev.mysql.com/downloads/file/?id=468980
Extract: tar -xvf mysql-5.6.36-linux-glibc2.5-x86_64.tar.gz -C /usr/local/
Rename the directory: mv mysql-5.6.36-linux-glibc2.5-x86_64/ /usr/local/mysql
Create the data directory: mkdir /usr/local/mysql/data/mysql
Copy the service script: cp support-files/mysql.server /etc/init.d/mysqld
Make the service script executable: chmod 755 /etc/init.d/mysqld
Copy the config file: cp support-files/my-default.cnf /etc/my.cnf
Edit the service script: vim /etc/init.d/mysqld
  set basedir=/usr/local/mysql/
  set datadir=/usr/local/mysql/data/mysql
Start the service: service mysqld start
Set the root user's password: update user set password=PASSWORD('root') where user='root';
Log in with the password: mysql -u root -p
Create a user allowed to log in remotely: grant select,update,insert,delete on '<database>'.'<table>' to '<user>'@localhost identified by '<password>' with grant option;

3. Tomcat

Extract the package: tar -zxvf apache-tomcat-7.0.67.tar.gz -C /usr/tomcat/
Start: run ./startup.sh in the bin directory
Stop: run ./shutdown.sh in the bin directory
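The two export lines in /etc/profile are easy to get wrong — a missing /bin, or overwriting PATH instead of appending to it. A minimal sketch of what the profile additions should produce, assuming the JDK was unpacked to /usr/local/java/jdk1.7.0_65 as in the steps above:

```shell
# Reproduce the /etc/profile additions and show the resulting values.
export JAVA_HOME=/usr/local/java/jdk1.7.0_65
export PATH="$PATH:$JAVA_HOME/bin"   # append to PATH, never overwrite it

echo "JAVA_HOME=$JAVA_HOME"
echo "last PATH entry: ${PATH##*:}"
```

After `source /etc/profile`, the last PATH entry should be the JDK's bin directory; if `java -version` still shows the old bundled JDK, the removal step above was probably skipped.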


Learning Docker: setting up a MySQL container service

Overview

A MySQL 5.6 SQL database server Docker image. The container image provides a MySQL 5.6 server for OpenShift as well as for general use. Both RHEL-based and CentOS-based images are available; the CentOS image can be pulled from Docker Hub as centos/mysql-56-centos7.

Usage

Search for the image:

docker search mysql

Pull the image:

docker pull docker.io/centos/mysql-56-centos7

If you only want to set the required environment variables and do not need to persist the database to a host directory, run:

docker run -d --name app_mysql -p 3307:3306 -e MYSQL_ROOT_PASSWORD=123456 docker.io/centos/mysql-56-centos7

If you want the database to survive container restarts, run:

# create the data directory and the config directory
mkdir -p /home/mysql/data /home/mysql/cnf.d
# grant read/write permission on both
chmod 766 /home/mysql/data /home/mysql/cnf.d
# create and run the container
docker run -d --name app_mysql -p 3307:3306 -v /home/mysql/cnf.d:/etc/my.cnf.d -v /home/mysql/data:/var/lib/mysql/data -e MYSQL_ROOT_PASSWORD=123456 docker.io/centos/mysql-56-centos7

Flag reference:
-p 3307:3306: map container port 3306 to host port 3307
-v /home/mysql/cnf.d:/etc/my.cnf.d: host directory : container directory
-v /home/mysql/data:/var/lib/mysql/data: host directory : container directory
-e MYSQL_ROOT_PASSWORD=123456: initial password of the root user

Check that the container is running:

docker ps

Enter the container:

docker exec -it app_mysql bash

Flag reference:
-d: detached mode, run in the background
-i: keep STDIN open even when not attached
-t: allocate a pseudo-terminal

Author: Xiao Qi (小柒). Source: https://blog.52itstyle.com. Sharing is a joy and a record of personal growth; most of these articles are summaries of work experience and day-to-day learning. Gaps in my own understanding are inevitable — corrections are welcome, so we can improve together.
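The host paths must match exactly between the mkdir/chmod step and the -v flags, or Docker will silently create fresh empty directories at the mount source and the data will not land where you expect. One way to keep them from drifting apart is to define them once as variables and assemble the command from those — a sketch of the convention, not part of the image's documentation:

```shell
# Keep host paths and the port in variables so the mkdir/chmod step and the
# -v/-p flags can never disagree with each other.
DATA_DIR=/home/mysql/data
CNF_DIR=/home/mysql/cnf.d
HOST_PORT=3307

run_cmd="docker run -d --name app_mysql -p ${HOST_PORT}:3306 \
  -v ${CNF_DIR}:/etc/my.cnf.d -v ${DATA_DIR}:/var/lib/mysql/data \
  -e MYSQL_ROOT_PASSWORD=123456 docker.io/centos/mysql-56-centos7"

# Print the assembled command for review before running it.
echo "$run_cmd"
```

Reviewing the echoed command before executing it (or wrapping the whole thing in a small start script) makes the mapping between host and container paths explicit.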


Some principles for setting up a bastion host

To sum up several years of experience (and lessons learned) with bastion hosts, here is a set of principles I want to share. Whether you build your own bastion or adopt a commercial product, the general principles do not change. I hope this helps; additions and corrections are welcome.

Principle 1: Establish the notion of individual accounts — strictly one account per person. Never let multiple people share a personal account, and never allow shared accounts to log into the bastion at all.

Principle 2: Every line of defense between your workstation and the servers should be at the same security level. Do not end up with strict controls on the jump host while everyone is root (or has equivalent sudo rights) on the business servers behind it.

Principle 3: Operation logs are mandatory — record every command, or all terminal output after logging into the bastion. Dangerous operations should not only be blocked outright but must also raise an alert.

Principle 4: Authentication — eliminate password logins; use a personal token plus a one-time password instead. Physically verify the machines used to log in, and verify identity with a mobile dynamic code.

Principle 5: Authorization — tie permissions to your internal CMDB so that roles map to permissions one-to-one. Do not maintain permissions by hand; manual maintenance always falls behind.

Principle 6: Network isolation — the bastion itself should only be reachable from the corporate intranet. Go further and isolate environments (production from test) as well as business lines, so machines of different lines cannot reach each other.

Principle 7: High availability — the bastion's own availability deserves special attention: scheduled backups, an emergency recovery plan, mandatory alerting, and a dedicated ops owner to maintain it.
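As a concrete illustration of Principle 4, password logins can be switched off in the bastion's own SSH daemon. This is a hedged sketch: the directives are standard OpenSSH sshd_config options, but the exact file path, the OTP/PAM wiring for the dynamic-code step, and the reload command all depend on your distribution and token product:

```shell
# Disable password logins on the bastion itself (Principle 4).
# Assumes a standard /etc/ssh/sshd_config; back it up before editing.
cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak

# Force key-based authentication and turn password authentication off.
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/'   /etc/ssh/sshd_config
sed -i 's/^#\?PubkeyAuthentication.*/PubkeyAuthentication yes/'      /etc/ssh/sshd_config

# Keep challenge-response on if a PAM-based one-time-password module
# supplies the "dynamic code" factor (product-specific, shown as an assumption).
sed -i 's/^#\?ChallengeResponseAuthentication.*/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config

service sshd reload
```

The same idea applies behind the bastion (Principle 2): the business servers should carry an equally strict sshd policy, not a looser one.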
