Building the Hadoop installation files from source
Recompiling the Hadoop 2.4.0 native libraries for 64-bit
Original author: 大鹏鸟 Date: 2014-07-28
Environment: VirtualBox virtual machine running 64-bit CentOS 6.4
Download the packages required for the rebuild:
apache-ant-1.9.4-bin.tar.gz
findbugs-3.0.0.tar.gz
protobuf-2.5.0.tar.gz
apache-maven-3.0.5-bin.tar.gz
Download the Hadoop 2.4.0 source package:
hadoop-2.4.0-src.tar.gz
Extract the source package
[grid@hadoopMaster01 ~]$ tar -zxvf hadoop-2.4.0-src.tar.gz
Install the software required for the build
Install MAVEN
Extract apache-maven-3.0.5-bin.tar.gz to the /opt/ directory
[root@hadoopMaster01 grid]# tar -zxvf apache-maven-3.0.5-bin.tar.gz -C /opt/
Edit /etc/profile and add the Maven environment settings M2_HOME and PATH.
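A minimal sketch of the lines to append, assuming Maven was extracted to /opt/apache-maven-3.0.5 as above (adjust the path if yours differs):
export M2_HOME=/opt/apache-maven-3.0.5
export PATH=$PATH:$M2_HOME/bin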
After saving, run source /etc/profile so that the change takes effect immediately.
[root@hadoopMaster01 apache-maven-3.0.5]# source /etc/profile
Verify with the mvn -v command; if the Maven version information is printed, the installation and configuration succeeded.
Install ANT
Extract apache-ant-1.9.4-bin.tar.gz to the /opt/ directory
[root@hadoopMaster01 grid]# tar -zxvf apache-ant-1.9.4-bin.tar.gz -C /opt/
Edit /etc/profile and add the Ant environment settings ANT_HOME and PATH.
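A minimal sketch of the corresponding additions, assuming Ant was extracted to /opt/apache-ant-1.9.4:
export ANT_HOME=/opt/apache-ant-1.9.4
export PATH=$PATH:$ANT_HOME/bin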
After saving, run source /etc/profile so that the change takes effect immediately.
[root@hadoopMaster01 apache-ant-1.9.4]# source /etc/profile
Verify with the ant -version command; if the Ant version information is printed, the installation and configuration succeeded.
Install FINDBUGS
Extract findbugs-3.0.0.tar.gz to the /opt/ directory
[root@hadoopMaster01 grid]# tar -zxvf findbugs-3.0.0.tar.gz -C /opt/
Edit /etc/profile and add the FindBugs environment settings.
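A minimal sketch of the additions, assuming FindBugs was extracted to /opt/findbugs-3.0.0 (FINDBUGS_HOME is an illustrative variable name; what matters is that its bin directory ends up on PATH):
export FINDBUGS_HOME=/opt/findbugs-3.0.0
export PATH=$PATH:$FINDBUGS_HOME/bin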
After saving, run source /etc/profile so that the change takes effect immediately.
[root@hadoopMaster01 apache-ant-1.9.4]# source /etc/profile
Verify with the findbugs -version command; if the FindBugs version is printed, the installation and configuration succeeded.
Install PROTOBUF
Building Hadoop 2.4.0 requires the protobuf compiler protoc, and it must be protobuf 2.5.0 or later.
Extract protobuf-2.5.0.tar.gz directly (in the current directory):
[root@hadoopMaster01 grid]# tar -zxvf protobuf-2.5.0.tar.gz
Install protobuf by running the following commands in order:
[root@hadoopMaster01 grid]# cd protobuf-2.5.0
[root@hadoopMaster01 protobuf-2.5.0]# ls
aclocal.m4 config.guess configure COPYING.txt examples
install-sh ltmain.sh Makefile.in protobuf.pc.in src
autogen.sh config.h.in configure.ac depcomp generate_descriptor_proto.sh
INSTALL.txt m4 missing python vsprojects
CHANGES.txt config.sub CONTRIBUTORS.txt editors gtest
java Makefile.am protobuf-lite.pc.in README.txt
[root@hadoopMaster01 protobuf-2.5.0]# ./configure
[root@hadoopMaster01 protobuf-2.5.0]# make
[root@hadoopMaster01 protobuf-2.5.0]# make check
[root@hadoopMaster01 protobuf-2.5.0]# make install
Verify with the protoc --version command; if the protobuf version is printed, the installation and configuration succeeded.
Install dependency packages
Install the cmake, openssl-devel, and ncurses-devel dependencies (as the root user, with internet access):
[root@hadoopMaster01 ~]# yum install cmake
The output should show the package installed successfully.
[root@hadoopMaster01 ~]# yum install openssl-devel
The output should show the package installed successfully.
[root@hadoopMaster01 ~]# yum install ncurses-devel
yum may also report that a package is already installed and at its latest version, which is fine.
Build the 64-bit native libraries
Change into the extracted Hadoop source directory:
[grid@hadoopMaster01 ~]$ cd hadoop-2.4.0-src
[grid@hadoopMaster01 hadoop-2.4.0-src]$ pwd
/home/grid/hadoop-2.4.0-src
Run the mvn clean install -DskipTests command and wait for it to finish (it will download many dependencies from the internet automatically):
[grid@hadoopMaster01 hadoop-2.4.0-src]$ mvn clean install -DskipTests
Run the mvn package -Pdist,native -DskipTests -Dtar command to start the build, and wait for it to finish:
[grid@hadoopMaster01 hadoop-2.4.0-src]$ mvn package -Pdist,native -DskipTests -Dtar
The following output appears:
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................ SUCCESS [6.304s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [26.555s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [2.757s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.216s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [19.592s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [2.715s]
[INFO] Apache Hadoop MiniKDC ............................. SUCCESS [2.360s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [2.950s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [2.119s]
[INFO] Apache Hadoop Common .............................. SUCCESS [1:22.302s]
[INFO] Apache Hadoop NFS ................................. SUCCESS [5.095s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [0.026s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [2:06.178s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [1:09.142s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [14.457s]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [2.859s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.030s]
[INFO] hadoop-yarn ....................................... SUCCESS [0.029s]
[INFO] hadoop-yarn-api ................................... SUCCESS [59.010s]
[INFO] hadoop-yarn-common ................................ SUCCESS [20.743s]
[INFO] hadoop-yarn-server ................................ SUCCESS [0.026s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [7.344s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [11.726s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [2.508s]
[INFO] hadoop-yarn-server-applicationhistoryservice ...... SUCCESS [4.041s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [10.370s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [0.374s]
[INFO] hadoop-yarn-client ................................ SUCCESS [4.791s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [0.025s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [2.242s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [1.553s]
[INFO] hadoop-yarn-site .................................. SUCCESS [0.024s]
[INFO] hadoop-yarn-project ............................... SUCCESS [3.261s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [0.082s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [18.549s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [13.772s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [2.441s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [6.866s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [6.280s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [3.510s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [1.725s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [4.641s]
[INFO] hadoop-mapreduce .................................. SUCCESS [3.002s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [3.497s]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [5.847s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [1.791s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [4.693s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [3.235s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [2.349s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [2.488s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [5.863s]
[INFO] Apache Hadoop OpenStack support ................... SUCCESS [3.776s]
[INFO] Apache Hadoop Client .............................. SUCCESS [5.235s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [0.070s]
[INFO] Apache Hadoop Scheduler Load Simulator ............ SUCCESS [3.935s]
[INFO] Apache Hadoop Tools Dist .......................... SUCCESS [4.392s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [0.022s]
[INFO] Apache Hadoop Distribution ........................ SUCCESS [21.274s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 10:25.147s
[INFO] Finished at: Mon Jul 28 16:09:56 CST 2014
[INFO] Final Memory: 75M/241M
[INFO] ------------------------------------------------------------------------
This indicates the build succeeded.
Go into /home/grid/hadoop-2.4.0-src/hadoop-dist/target/hadoop-2.4.0/lib/native and check with the file * command; the output should confirm that the 64-bit native libraries were built.
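A quick sketch of the check, using the build output path above:
[grid@hadoopMaster01 hadoop-2.4.0-src]$ cd hadoop-dist/target/hadoop-2.4.0/lib/native
[grid@hadoopMaster01 native]$ file *
Each library (for example libhadoop.so.1.0.0) should be reported as an ELF 64-bit LSB shared object, x86-64.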
Finally, replace the original 32-bit native folder in your Hadoop installation with this newly built 64-bit native folder.
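A minimal sketch of the replacement, assuming an existing Hadoop installation at /home/grid/hadoop-2.4.0 (this installation path is hypothetical; substitute your own Hadoop home). The old folder is backed up first:
[grid@hadoopMaster01 ~]$ mv /home/grid/hadoop-2.4.0/lib/native /home/grid/hadoop-2.4.0/lib/native_32bit_bak
[grid@hadoopMaster01 ~]$ cp -r /home/grid/hadoop-2.4.0-src/hadoop-dist/target/hadoop-2.4.0/lib/native /home/grid/hadoop-2.4.0/lib/
Note that with -Dtar the build also leaves a complete distribution tarball (hadoop-2.4.0.tar.gz) under hadoop-dist/target, which can be deployed directly instead of patching an existing installation.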
