
Installing, Configuring, and Using an FTP Service on an ECS Cloud Server

Introduction: FTP is short for File Transfer Protocol, a protocol for two-way file transfer over the Internet. The name also refers to the application programs (clients and servers) that implement it: different operating systems ship different FTP applications, but all of them follow the same protocol to transfer files. Computers on the Internet that provide file storage and access services according to the FTP protocol are FTP servers. The protocol defines two basic operations, download and upload: downloading copies a file from a remote host to your own computer, and uploading copies a file from your own computer to a remote host. In other words, a user runs a client program to upload files to, or download files from, a remote host. How it works: FTP uses a client/server (C/S) working model...
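To make the upload and download terminology concrete, here is a minimal sketch using Python's standard ftplib module. The host, credentials, and file names are placeholders for illustration, not values from the article, and error handling is omitted:

```python
from ftplib import FTP

def upload_file(host, user, password, local_path, remote_name):
    """Copy a local file to the remote FTP server (an "upload")."""
    with FTP(host) as ftp:  # control connection, port 21 by default
        ftp.login(user, password)
        with open(local_path, "rb") as f:
            ftp.storbinary(f"STOR {remote_name}", f)

def download_file(host, user, password, remote_name, local_path):
    """Copy a file from the remote FTP server to this machine (a "download")."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        with open(local_path, "wb") as f:
            ftp.retrbinary(f"RETR {remote_name}", f.write)
```

Both helpers speak to the server over the control connection, with the file bytes carried on a separate data connection, which is the client/server division of labor described above.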


Hadoop 2.7 in Practice v1.0: Installing Hive-2.0.0 with a Local MySQL Metastore

Existing environment: Hadoop-2.7.2 + zookeeper-3.4.6 fully distributed cluster (HDFS and YARN HA).
Active namenode: sht-sgmhadoopnn-01
The Hive server/client and the MySQL metastore database are deployed on the active namenode machine.
User: hive
Database: hive_local_meta

1. Install MySQL 5.6.23

2. Create the database and user

sht-sgmhadoopnn-01:mysqladmin:/usr/local/mysql:>mysql -uroot -p
mysql> create database hive_local_meta;
Query OK, 1 row affected (0.04 sec)
mysql> create user 'hive' identified by 'hive';
Query OK, 0 rows affected (0.05 sec)
mysql> grant all privileges on hive_local_meta.* to 'hive'@'%';
Query OK, 0 rows affected (0.03 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)

3. Install hive-2.0.0

[root@sht-sgmhadoopnn-01 tmp]# wget http://apache.communilink.net/hive/hive-2.0.0/apache-hive-2.0.0-bin.tar.gz
[root@sht-sgmhadoopnn-01 tmp]# tar zxvf apache-hive-2.0.0-bin.tar.gz
[root@sht-sgmhadoopnn-01 tmp]# mv apache-hive-2.0.0-bin /hadoop/hive
[root@sht-sgmhadoopnn-01 tmp]# cd /hadoop/hive
[root@sht-sgmhadoopnn-01 hive]# ll
total 588
drwxr-xr-x 3 root root   4096 Mar 29 23:19 bin
drwxr-xr-x 2 root root   4096 Mar 29 23:19 conf
drwxr-xr-x 4 root root   4096 Mar 29 23:19 examples
drwxr-xr-x 7 root root   4096 Mar 29 23:19 hcatalog
drwxr-xr-x 4 root root  12288 Mar 29 23:19 lib
-rw-r--r-- 1 root root  26335 Jan 22 12:28 LICENSE
-rw-r--r-- 1 root root    513 Jan 22 12:28 NOTICE
-rw-r--r-- 1 root root   4348 Feb 10 09:50 README.txt
-rw-r--r-- 1 root root 527063 Feb 10 09:56 RELEASE_NOTES.txt
drwxr-xr-x 4 root root   4096 Mar 29 23:19 scripts
[root@sht-sgmhadoopnn-01 hive]#

4. Configure the shell profile

[root@sht-sgmhadoopnn-01 ~]# vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
export HADOOP_HOME=/hadoop/hadoop-2.7.2
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export HBASE_HOME=/hadoop/hbase-1.2.0
export ZOOKEEPER_HOME=/hadoop/zookeeper
export HIVE_HOME=/hadoop/hive
export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$HBASE_HOME/bin:$HIVE_HOME/bin:$PATH
[root@sht-sgmhadoopnn-01 ~]# source /etc/profile

5. Install the JDBC driver jar

[root@sht-sgmhadoopnn-01 tmp]# wget http://ftp.nchu.edu.tw/Unix/Database/MySQL/Downloads/Connector-J/mysql-connector-java-5.1.36.tar.gz
[root@sht-sgmhadoopnn-01 tmp]# tar zxvf mysql-connector-java-5.1.36.tar.gz
[root@sht-sgmhadoopnn-01 tmp]# cd mysql-connector-java-5.1.36
[root@sht-sgmhadoopnn-01 mysql-connector-java-5.1.36]# ll
total 1428
-rw-r--r-- 1 root root  90430 Jun 20  2015 build.xml
-rw-r--r-- 1 root root 235082 Jun 20  2015 CHANGES
-rw-r--r-- 1 root root  18122 Jun 20  2015 COPYING
drwxr-xr-x 2 root root   4096 Mar 29 23:35 docs
-rw-r--r-- 1 root root 972009 Jun 20  2015 mysql-connector-java-5.1.36-bin.jar
-rw-r--r-- 1 root root  61423 Jun 20  2015 README
-rw-r--r-- 1 root root  63674 Jun 20  2015 README.txt
drwxr-xr-x 8 root root   4096 Jun 20  2015 src
[root@sht-sgmhadoopnn-01 mysql-connector-java-5.1.36]# cp mysql-connector-java-5.1.36-bin.jar $HIVE_HOME/lib/

6. Configure hive-site.xml

[root@sht-sgmhadoopnn-01 ~]# cd $HIVE_HOME/conf
[root@sht-sgmhadoopnn-01 conf]# cp hive-default.xml.template hive-default.xml
[root@sht-sgmhadoopnn-01 conf]# vi hive-site.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive_local/warehouse</value>
  </property>
  <property>
    <name>hive.metastore.local</name>
    <value>true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive_local_meta?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
    <description>password to use against metastore database</description>
  </property>
</configuration>

7. Run the bin/hive client for the first time

[root@sht-sgmhadoopnn-01 hive-2.0.0]# cd bin
[root@sht-sgmhadoopnn-01 bin]# hive
Logging initialized using configuration in jar:file:/hadoop/hive-2.0.0/lib/hive-common-2.0.0.jar!/hive-log4j2.properties
Exception in thread "main" java.lang.RuntimeException: Hive metastore database is not initialized. Please use schematool (e.g. ./schematool -initSchema -dbType ...) to create the schema. If needed, don't forget to include the option to auto-create the underlying database in your JDBC connection string (e.g. ?createDatabaseIfNotExist=true for mysql)
[root@sht-sgmhadoopnn-01 bin]# ll
total 64
-rwxr-xr-x 1 root root 1434 Feb 10 09:50 beeline
-rwxr-xr-x 1 root root 2553 Dec  1 05:54 beeline.cmd
drwxr-xr-x 3 root root 4096 Mar 29 23:19 ext
-rwxr-xr-x 1 root root 8494 Feb 10 09:56 hive
-rwxr-xr-x 1 root root 8713 Dec  1 05:54 hive.cmd
-rwxr-xr-x 1 root root 1584 Apr 23  2015 hive-config.cmd
-rwxr-xr-x 1 root root 1900 Apr 23  2015 hive-config.sh
-rwxr-xr-x 1 root root  885 Apr 23  2015 hiveserver2
-rwxr-xr-x 1 root root 1030 Jan 22 12:28 hplsql
-rwxr-xr-x 1 root root 2278 Jan 22 12:28 hplsql.cmd
-rwxr-xr-x 1 root root  832 Apr 23  2015 metatool
-rwxr-xr-x 1 root root  884 Apr 23  2015 schematool

8. Initialize the metastore database

[root@sht-sgmhadoopnn-01 bin]# schematool -initSchema -dbType mysql
Metastore connection URL: jdbc:mysql://localhost/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
Starting metastore schema initialization to 2.0.0
Initialization script hive-schema-2.0.0.mysql.sql
Initialization script completed
schemaTool completed
[root@sht-sgmhadoopnn-01 bin]#

9. Inspect the MySQL metadata with show tables
sht-sgmhadoopnn-01.telenav.cn:mysqladmin:/usr/local/mysql:>mysql -uhive -p
mysql> use hive;
Database changed
mysql> show tables;
+---------------------------+
| Tables_in_hive            |
+---------------------------+
| aux_table                 |
| bucketing_cols            |
...............
...............
| tbls                      |
| txn_components            |
| txns                      |
| type_fields               |
| types                     |
| version                   |
+---------------------------+
55 rows in set (0.00 sec)

10. Test: create a table and load data

## Fields are separated by tab characters
[root@sht-sgmhadoopnn-01 conf]# vi /tmp/studentInfo.txt
1 a 26 110
2 b 29 120

[root@sht-sgmhadoopnn-01 bin]# hive
Logging initialized using configuration in jar:file:/hadoop/hive-2.0.0/lib/hive-common-2.0.0.jar!/hive-log4j2.properties
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. tez, spark) or using Hive 1.X releases.
hive> create table studentinfo (id int, name string, age int, tel string)
    > row format delimited fields terminated by '\t'
    > stored as textfile;
OK
Time taken: 2.347 seconds
hive> load data local inpath '/tmp/studentInfo.txt' into table studentinfo;
Loading data to table default.studentinfo
OK
Time taken: 1.383 seconds
hive> select * from studentinfo;
OK
1 a 26 110
2 b 29 120
Time taken: 0.173 seconds, Fetched: 2 row(s)
hive> exit();

11. Modify two parameters

# "hdfs://mycluster" is the value of fs.defaultFS in $HADOOP_HOME/etc/hadoop/core-site.xml (the NameNode HA URI)
[root@sht-sgmhadoopnn-01 conf]# vi hive-site.xml
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>hdfs://mycluster/user/hive_local/warehouse</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://sht-sgmhadoopnn-01:3306/hive_local_meta?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

12. Verify the changed settings by creating the classInfo table

[root@sht-sgmhadoopnn-01 bin]# hive
Logging initialized using configuration in jar:file:/hadoop/hive-2.0.0/lib/hive-common-2.0.0.jar!/hive-log4j2.properties
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. tez, spark) or using Hive 1.X releases.
hive> create table classInfo (id int, classname string, stucount int)
    > row format delimited fields terminated by '\t'
    > stored as textfile;
OK
Time taken: 2.257 seconds
hive>
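Because these tables are declared with row format delimited fields terminated by '\t', the input files must contain literal tab characters; space-separated rows will load as NULL columns. Rather than typing tabs by hand in vi, the data file from step 10 can be generated with a short script. This is a minimal sketch, not part of the original walkthrough; the path and rows mirror /tmp/studentInfo.txt:

```python
# Sample rows matching the studentinfo table: (id, name, age, tel).
rows = [
    (1, "a", 26, "110"),
    (2, "b", 29, "120"),
]

def write_tsv(path, rows):
    """Write one record per line, fields joined by the '\t' delimiter
    declared in the Hive DDL, ready for "load data local inpath"."""
    with open(path, "w") as f:
        for row in rows:
            f.write("\t".join(str(col) for col in row) + "\n")

write_tsv("/tmp/studentInfo.txt", rows)
```

Loading the resulting file with the same "load data local inpath" statement as in step 10 should then return both rows from "select * from studentinfo".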

Resource Downloads

Premium Sharing App

After nearly a month of development and optimization, this site's first app is now live. The app is aggressively compressed, with a binary of only 4.36 MB, and includes extensive data-access and caching optimizations so that articles are easy to read on a phone. A HarmonyOS build will follow.

Nacos

Nacos /nɑ:kəʊs/, short for Dynamic Naming and Configuration Service, is a platform for dynamic service discovery, configuration management, and AI agent management that makes it easy to build AI agent applications. Nacos helps you discover, configure, and manage microservices and AI agent applications. It provides a simple, easy-to-use feature set for dynamic service discovery, service configuration, service metadata, and traffic management, helping you build, deliver, and manage microservice platforms more quickly and flexibly.

Spring

The Spring Framework is an open-source Java enterprise application framework introduced by Rod Johnson in 2002, designed to reduce the complexity of enterprise development by using JavaBeans in place of the traditional EJB approach. Built around simplicity, testability, and loose coupling, it provides modules such as the core container, application context, and data-access integration, and supports integrating third-party frameworks such as Hibernate and Struts. Its use is not limited to server-side development; most Java applications can benefit from it.

Rocky Linux

Rocky Linux (Chinese name: 洛基) is an enterprise-grade Linux distribution launched by Gregory Kurtzer in December 2020 as a community-owned and community-managed open-source replacement, fully compatible with RHEL (Red Hat Enterprise Linux), after the stable CentOS releases were discontinued. It supports architectures including x86_64 and aarch64, provides long-term stability by rebuilding the RHEL source code, uses modular packaging and the SELinux security architecture, ships the GNOME desktop environment and the XFS file system by default, and offers a ten-year update life cycle.
