
An excellent personal blog: 低调大师

Hue installation error

When Hue opens a session, HiveServer2 rejects it with "User: hadoop is not allowed to impersonate hue", for example:

Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: hadoop is not allowed to impersonate hue

Bad status for request TOpenSessionReq(username='hadoop', password=None, client_protocol=6, configuration={}): TOpenSessionResp(status=TStatus(errorCode=0, errorMessage='Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: hadoop is not allowed to impersonate hue', sqlState=None, statusCode=3), sessionHandle=None, configuration=None, serverProtocolVersion=8)

The server-side trace returned in the response shows the chain of causes (abridged):

org.apache.hive.service.cli.HiveSQLException: Failed to open new session
    at org.apache.hive.service.cli.session.SessionManager.createSession(SessionManager.java:336)
    at org.apache.hive.service.cli.CLIService.openSessionWithImpersonation(CLIService.java:189)
    at org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:312)
    ...
Caused by: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: hadoop is not allowed to impersonate hue
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:591)
    at org.apache.hive.service.cli.session.HiveSessionImpl.open(HiveSessionImpl.java:168)
    ...
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: hadoop is not allowed to impersonate hue
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1493)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1523)
    at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:689)
    ...
Fix: when configuring core-site.xml and httpfs-site.xml, first determine which user needs proxy (impersonation) privileges. Test the connection with beeline:

$HIVE_HOME/bin/beeline -u 'jdbc:hive2://localhost:10000/db_hive_test' -n hadoop -p hadoop

If this connects normally, the proxy user to configure is hadoop, so set hadoop.proxyuser.hadoop.hosts and hadoop.proxyuser.hadoop.groups:

<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
</property>

After editing, restart the affected services (or refresh the NameNode with hdfs dfsadmin -refreshSuperUserGroupsConfiguration) so the change takes effect.
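If you maintain this stanza for several users or clusters, it can be generated rather than typed by hand. A minimal Python sketch; the proxyuser_properties helper is my own, not part of Hadoop, and only the two property names above come from the fix:

```python
import xml.etree.ElementTree as ET

def proxyuser_properties(user):
    """Build the two Hadoop proxyuser <property> entries for `user`
    as a core-site.xml-style <configuration> fragment."""
    conf = ET.Element("configuration")
    for suffix in ("hosts", "groups"):
        prop = ET.SubElement(conf, "property")
        # e.g. hadoop.proxyuser.hadoop.hosts / hadoop.proxyuser.hadoop.groups
        ET.SubElement(prop, "name").text = "hadoop.proxyuser.%s.%s" % (user, suffix)
        ET.SubElement(prop, "value").text = "*"
    return ET.tostring(conf, encoding="unicode")

if __name__ == "__main__":
    print(proxyuser_properties("hadoop"))
```

Paste the emitted properties inside the existing <configuration> element of core-site.xml; using "*" for hosts and groups is permissive, so tighten it in production.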


Cassandra installation and deployment

Environment: Python 2.7, JDK 1.8, Cassandra 3.11.2

Official site: http://cassandra.apache.org/
Download: wget http://mirrors.tuna.tsinghua.edu.cn/apache/cassandra/3.11.2/apache-cassandra-3.11.2-bin.tar.gz

1. Extract:
tar xvzf apache-cassandra-3.11.2-bin.tar.gz -C ../app/

2. Rename the directory:
mv apache-cassandra-3.11.2/ cassandra-3.11.2/

3. Configure environment variables (vim ~/.bash_profile):
export CASSANDRA_HOME=/home/hadoop/app/cassandra-3.11.2
export PATH=$CASSANDRA_HOME/bin:$PATH
source ~/.bash_profile

4. Edit the configuration file (vim $CASSANDRA_HOME/conf/cassandra.yaml) and change the following entries to the corresponding hostname:
rpc_address: online101
listen_address: online101
- seeds: "online101"
cluster_name: 'online_01'    # optional

5. Optionally edit the cqlsh startup script (vim $CASSANDRA_HOME/bin/cqlsh.py):
DEFAULT_HOST = 'online101'
then install the bundled Python driver:
cd $CASSANDRA_HOME/pylib
python setup.py install

6. Start Cassandra in the foreground:
cassandra -f -R

7. Enter cqlsh:
cqlsh
or
cqlsh -ucassandra -pcassandra
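The cassandra.yaml edits in step 4 can also be scripted instead of done by hand in vim. A minimal sketch, assuming a stock 3.11-style yaml layout; the patch_cassandra_yaml helper and the sample text are my own, not part of Cassandra:

```python
def patch_cassandra_yaml(text, host, cluster=None):
    """Rewrite the address/seed settings in cassandra.yaml content to
    point at `host`, mirroring the manual edits described above."""
    out = []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("rpc_address:"):
            line = "rpc_address: %s" % host
        elif stripped.startswith("listen_address:"):
            line = "listen_address: %s" % host
        elif stripped.startswith("- seeds:"):
            # seeds live under seed_provider/parameters, hence the indent
            line = '          - seeds: "%s"' % host
        elif cluster and stripped.startswith("cluster_name:"):
            line = "cluster_name: '%s'" % cluster
        out.append(line)
    return "\n".join(out)

# Small in-memory sample standing in for $CASSANDRA_HOME/conf/cassandra.yaml
sample = """cluster_name: 'Test Cluster'
listen_address: localhost
rpc_address: localhost
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "127.0.0.1"
"""
patched = patch_cassandra_yaml(sample, "online101", cluster="online_01")
print(patched)
```

Plain line rewriting avoids a YAML dependency and keeps comments intact; for anything more involved, a real YAML parser would be safer.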


Installing Python with Docker

Method 1: build with a Dockerfile

First, create a python directory to hold the related files; the myapp directory will be mapped into the python container as the application directory:

runoob@runoob:~$ mkdir -p ~/python ~/python/myapp

Enter the newly created python directory and create a Dockerfile:

FROM buildpack-deps:jessie

# remove several traces of debian python
RUN apt-get purge -y python.*

# http://bugs.python.org/issue19846
# > At the moment, setting "LANG=C" on a Linux system *fundamentally breaks Python 3*, and that's not OK.
ENV LANG C.UTF-8

# gpg: key F73C700D: public key "Larry Hastings <larry@hastings.org>" imported
ENV GPG_KEY 97FC712E4C024BBEA48A61ED3A5CA953F73C700D

ENV PYTHON_VERSION 3.5.1

# if this is called "PIP_VERSION", pip explodes with "ValueError: invalid truth value '<VERSION>'"
ENV PYTHON_PIP_VERSION 8.1.2

RUN set -ex \
    && curl -fSL "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz" -o python.tar.xz \
    && curl -fSL "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz.asc" -o python.tar.xz.asc \
    && export GNUPGHOME="$(mktemp -d)" \
    && gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$GPG_KEY" \
    && gpg --batch --verify python.tar.xz.asc python.tar.xz \
    && rm -r "$GNUPGHOME" python.tar.xz.asc \
    && mkdir -p /usr/src/python \
    && tar -xJC /usr/src/python --strip-components=1 -f python.tar.xz \
    && rm python.tar.xz \
    \
    && cd /usr/src/python \
    && ./configure --enable-shared --enable-unicode=ucs4 \
    && make -j$(nproc) \
    && make install \
    && ldconfig \
    && pip3 install --no-cache-dir --upgrade --ignore-installed pip==$PYTHON_PIP_VERSION \
    && find /usr/local -depth \
        \( \
            \( -type d -a -name test -o -name tests \) \
            -o \
            \( -type f -a -name '*.pyc' -o -name '*.pyo' \) \
        \) -exec rm -rf '{}' + \
    && rm -rf /usr/src/python ~/.cache

# make some useful symlinks that are expected to exist
RUN cd /usr/local/bin \
    && ln -s easy_install-3.5 easy_install \
    && ln -s idle3 idle \
    && ln -s pydoc3 pydoc \
    && ln -s python3 python \
    && ln -s python3-config python-config

CMD ["python3"]

Build an image from the Dockerfile (replace the tag with a name of your own):

runoob@runoob:~/python$ docker build -t python:3.5 .

Once the build completes, the new image shows up in the local image list:

runoob@runoob:~/python$ docker images python:3.5
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
python              3.5                 045767ddf24a        9 days ago          684.1 MB

Method 2: docker pull python:3.5

Search Docker Hub for python images:

runoob@runoob:~/python$ docker search python
NAME                           DESCRIPTION                          STARS   OFFICIAL   AUTOMATED
python                         Python is an interpreted,...         982     [OK]
kaggle/python                  Docker image for Python...           33                 [OK]
azukiapp/python                Docker image to run Python ...       3                  [OK]
vimagick/python                mini python                          2                  [OK]
tsuru/python                   Image for the Python ...             2                  [OK]
pandada8/alpine-python         An alpine based python image         1                  [OK]
1science/python                Python Docker images based on ...    1                  [OK]
lucidfrontier45/python-uwsgi   Python with uWSGI                    1                  [OK]
orbweb/python                  Python image                         1                  [OK]
pathwar/python                 Python template for Pathwar levels   1                  [OK]
rounds/10m-python              Python, setuptools and pip.          0                  [OK]
ruimashita/python              ubuntu 14.04 python                  0                  [OK]
tnanba/python                  Python on CentOS-7 image.            0                  [OK]

Here we pull the official image with tag 3.5:

runoob@runoob:~/python$ docker pull python:3.5

After the download completes, the local image list contains an image with REPOSITORY python and tag 3.5.

Using the python image

In the ~/python/myapp directory, create a helloworld.py file with the following code:

#!/usr/bin/python
print("Hello, World!")

Run the container:

runoob@runoob:~/python$ docker run -v $PWD/myapp:/usr/src/myapp -w /usr/src/myapp python:3.5 python helloworld.py

Command notes:
-v $PWD/myapp:/usr/src/myapp : mounts myapp from the current host directory at /usr/src/myapp inside the container
-w /usr/src/myapp : makes /usr/src/myapp the container's working directory
python helloworld.py : runs helloworld.py from the working directory with the container's python

Output:

Hello, World!
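Beyond a bare hello-world, a slightly richer script is handy for confirming which interpreter a mounted container actually runs. The file name checkenv.py and the describe helper are my own choices; the script can be placed in ~/python/myapp and run with the same docker run command as helloworld.py:

```python
#!/usr/bin/python
import platform
import sys

def describe():
    """Return a one-line summary of the interpreter running this script."""
    return "Hello from Python %s on %s" % (platform.python_version(), sys.platform)

if __name__ == "__main__":
    print(describe())
```

Inside the python:3.5 container this should report a 3.5.x version on linux, which is a quick sanity check that the volume mount and working directory are wired up correctly.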


Hive installation and usage

Documentation and download sites

Official site: http://hive.apache.org
Documentation: https://cwiki.apache.org/confluence/display/Hive/GettingStarted and https://cwiki.apache.org/confluence/display/Hive/Home
Download: http://archive.apache.org/dist/hive/

Requirements

Java 1.7. Note: Hive versions 1.2 onward require Java 1.7 or newer. Hive versions 0.14 to 1.1 work with Java 1.6 as well. Users are strongly advised to start moving to Java 1.8 (see HIVE-8607).
Hadoop 2.x (preferred), 1.x (not supported by Hive 2.0.0 onward). Hive versions up to 0.13 also supported Hadoop 0.20.x and 0.23.x.
Hive is commonly used in production Linux and Windows environments. Mac is a commonly used development environment. The instructions in this document apply to Linux and Mac; using Hive on Windows would require slightly different steps.

Upload the Hive and MySQL packages to the Linux system.

Start HDFS, YARN, and the MapReduce history server:

sbin/start-dfs.sh
sbin/start-yarn.sh
sbin/mr-jobhistory-daemon.sh start historyserver

Extract the package and edit the configuration (Hive depends on Hadoop):

tar -zxf apache-hive-0.13.1-bin.tar.gz -C /opt/modules/    (-C changes the extraction directory)
mv apache-hive-0.13.1-bin hive-0.13.1    (rename apache-hive-0.13.1-bin to hive-0.13.1)

Rename hive-env.sh.template to hive-env.sh and set the Hadoop directory in it:

HADOOP_HOME=/opt/modules/hadoop-2.5.0

The Hive configuration directory is /opt/modules/hive-0.13.1/conf.

Run Hive

Create the required directories in HDFS and set their permissions (they must be made group-writable before you can create a table in Hive):

bin/hdfs dfs -mkdir -p /user/hive/warehouse
bin/hdfs dfs -mkdir -p /tmp
bin/hdfs dfs -chmod 777 /tmp
bin/hdfs dfs -chmod g+w /user/hive/warehouse

Run bin/hive. The first run is slow because it creates the metastore; by default an empty default database is created, with no tables. Create a table and run a count query; the query executes a MapReduce job, whose progress can be followed at the cluster's web address.
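The HDFS preparation steps above are easy to mistype (the original even had a broken chmod). A minimal sketch that just assembles the command lines for review before running them; the hive_hdfs_setup_commands helper is my own, and only the paths and flags come from the text:

```python
def hive_hdfs_setup_commands(warehouse="/user/hive/warehouse", tmp="/tmp"):
    """Build the `hdfs dfs` commands Hive needs before the first table
    can be created: make the directories, then open up permissions."""
    return [
        ["bin/hdfs", "dfs", "-mkdir", "-p", warehouse],
        ["bin/hdfs", "dfs", "-mkdir", "-p", tmp],
        ["bin/hdfs", "dfs", "-chmod", "777", tmp],
        ["bin/hdfs", "dfs", "-chmod", "g+w", warehouse],
    ]

# Print the commands; to actually run them from $HADOOP_HOME you could
# pass each list to subprocess.run on a host with a running cluster.
for cmd in hive_hdfs_setup_commands():
    print(" ".join(cmd))
```

Keeping the commands as argument lists (rather than one shell string) avoids quoting bugs if the warehouse path ever contains unusual characters.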

