
Featured Articles

Search results for "单机" (standalone): 4,013 articles found
An excellent personal blog: 低调大师

Installing Flink in Standalone Mode and Running WordCount

Flink is a big-data processing framework in the same space as Hadoop and Spark. Like Spark, it computes in memory, and it is known for low latency and high fault tolerance on large distributed clusters; its core strength is real-time stream processing. The big-data ecosystem has gained another member. A detailed write-up will follow in a later post; this is just a quick tour.

How Flink runs: on startup, Flink launches one JobManager and one or more TaskManagers. The JobManager plays roughly the role of the Spark driver, and a TaskManager plays roughly the role of a Spark worker.

Flink source: http://www.apache.org/dyn/closer.lua/flink/flink-0.10.1/flink-0.10.1-src.tgz
Binary build compatible with Hadoop 2.6: http://apache.dataguru.cn/flink/flink-0.10.1/flink-0.10.1-bin-hadoop26-scala_2.10.tgz

After downloading, confirm that a JDK is configured:

```
java -version
```

Start local mode (conf defaults to localhost; no other parameters need to be set for now):

```
bin/start-local.sh
tail log/flink-*-jobmanager-*.log
```

You can then import the project into IDEA and run a WordCount test. The code below uses the official example package; remember to add it as a dependency.

```scala
package test

import org.apache.flink.api.scala._
import org.apache.flink.examples.java.wordcount.util.WordCountData

/**
 * Created by root on 12/15/15.
 */
object WordCount {

  private var fileOutput = false
  private var textPath: String = null
  private var outputPath: String = null

  def main(args: Array[String]) {
    if (!parseParameters(args)) {
      return
    }

    val env = ExecutionEnvironment.getExecutionEnvironment
    val text = getTextDataSet(env)

    val counts = text.flatMap { _.toLowerCase.split("\\W+") filter { _.nonEmpty } }
      .map { (_, 1) }
      .groupBy(0)
      .sum(1)

    if (fileOutput) {
      counts.writeAsCsv(outputPath, "\n", " ")
      env.execute("Scala WordCount Example")
    } else {
      counts.print()
    }
  }

  private def parseParameters(args: Array[String]): Boolean = {
    if (args.length > 0) {
      fileOutput = true
      if (args.length == 2) {
        textPath = args(0)
        outputPath = args(1)
        true
      } else {
        System.err.println("Usage: WordCount <text path> <result path>")
        false
      }
    } else {
      System.out.println("Executing WordCount example with built-in default data.")
      System.out.println("  Provide parameters to read input data from a file.")
      System.out.println("  Usage: WordCount <text path> <result path>")
      true
    }
  }

  private def getTextDataSet(env: ExecutionEnvironment): DataSet[String] = {
    if (fileOutput) {
      env.readTextFile(textPath)
    } else {
      env.fromCollection(WordCountData.WORDS)
    }
  }
}
```

Give it a run:
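To see what each operator in the Flink pipeline does, here is a minimal plain-Python sketch of the same flatMap / map / groupBy(0).sum(1) logic. This is an illustration of the semantics only, not Flink code; the sample lines are hypothetical stand-ins for WordCountData.WORDS.

```python
import re
from collections import Counter

# Hypothetical sample input standing in for WordCountData.WORDS.
lines = ["To be, or not to be", "that is the question"]

# flatMap: lowercase each line, split on non-word characters, drop empty tokens
tokens = [tok for line in lines for tok in re.split(r"\W+", line.lower()) if tok]

# map -> (word, 1), then groupBy(0).sum(1): Counter collapses the pairs
# into per-word totals in one step
counts = Counter(tokens)

print(counts["to"])  # "to" appears twice in the sample
```

The Counter does in one call what Flink spreads over map/groupBy/sum, because Flink's version must also work on partitioned data across TaskManagers.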


Installing Redis on CentOS 7 (Standalone)

1. Download Redis 4.0.9

Go to the download page of the Redis Chinese site: http://www.redis.cn/download.html

2. Upload the tarball to the Linux system and build

```
cd /usr/local/java
rz                      # pick the local Redis tarball to upload
tar -zxvf redis-4.0.9.tar.gz
```

Redis's make test needs Tcl; without it, the test step fails:

```
wget http://downloads.sourceforge.net/tcl/tcl8.6.1-src.tar.gz
tar -xzvf tcl8.6.1-src.tar.gz
cd /usr/local/tcl8.6.1/unix/
./configure
make && make install
```

Then build and install Redis:

```
cd redis-4.0.9/
make && make test && make install
```

That completes the installation.

3. Verify the installation

```
cd /usr/local/java/redis-4.0.9/src
./redis-server
```

When the Redis startup banner appears, Redis is installed correctly.

4. Run Redis as a background service

(1) The redis utils directory contains a redis_init_script script.
(2) Copy redis_init_script into /etc/init.d and rename it redis_6379; 6379 is the port we want this Redis instance to listen on.
(3) Edit line 6 of the redis_6379 script and set REDISPORT to the same port number (the default is already 6379).
(4) Create two directories: /etc/redis (for the Redis config file) and /var/redis/6379 (for the persistence files).
(5) Copy the Redis config file (redis.conf, in the source root by default) into /etc/redis and rename it 6379.conf.
(6) Adjust the following settings in the config for production:

```
daemonize yes                        # run redis as a daemon
pidfile /var/run/redis_6379.pid      # pid file location
port 6379                            # listen port
dir /var/redis/6379                  # persistence file directory
```

(7) Start Redis:

```
cd /etc/init.d
chmod 777 redis_6379
./redis_6379 start
```

(8) Confirm the process is running: ps -ef | grep redis
(9) Make Redis start with the system. At the top of the redis_6379 script, add two comment lines; they mean the Redis service must be started or stopped in runlevels 2, 3, 4 and 5, with start priority 90 and stop priority 10:

```
# chkconfig: 2345 90 10
# description: Redis is a persistent key-value database
```

Finally run:

```
chkconfig redis_6379 on
```

5. Using redis-cli

```
redis-cli SHUTDOWN                        # stop the redis instance on the local port 6379
redis-cli -h 127.0.0.1 -p 6379 SHUTDOWN   # specify the host and port to connect to
redis-cli PING                            # ping the redis port to check it is healthy
redis-cli                                 # enter the interactive command line
```

6. Allowing access from a Windows client

1) In 6379.conf, comment out the line "bind 127.0.0.1" and set protected-mode to no.
2) Turn off the Linux firewall. (On CentOS 7, stopping the firewall may report "Unit iptables.service failed to load"; use firewalld instead.) Then restart Redis.

Happy coding.
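The production overrides in step (6) are simple key-value directives, so they are easy to generate from code when provisioning several instances on different ports. A minimal sketch, where render_overrides is a hypothetical helper (not part of Redis) and the values mirror the article:

```python
def render_overrides(port: int) -> str:
    """Return the redis.conf directives the article changes for production."""
    return "\n".join([
        "daemonize yes",                       # run redis as a daemon
        f"pidfile /var/run/redis_{port}.pid",  # pid file location
        f"port {port}",                        # listen port
        f"dir /var/redis/{port}",              # persistence directory
    ])

print(render_overrides(6379))
```

The same helper would produce a consistent 6380.conf, 6381.conf, and so on if more instances were added later.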


Steps for Installing Hadoop on Ubuntu (Standalone Mode)

1. Install the JDK:

```
sudo apt-get install openjdk-6-jdk
```

2. Configure SSH. Install the server:

```
apt-get install openssh-server
```

Generate an SSH key for the user that will run Hadoop:

```
ssh-keygen -t rsa -P ""
```

Allow logging in to the local machine with the newly generated key:

```
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
```

3. Install Hadoop: download the hadoop tar.gz package and unpack it:

```
tar -zxvf hadoop-2.2.0.tar.gz
```

4. Configuration.

Add the following to ~/.bashrc; after saving, log in again and the environment variables take effect:

```
export HADOOP_HOME=/usr/local/hadoop
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-amd64
export PATH=$PATH:$HADOOP_HOME/bin
```

Configure hadoop-env.sh:

```
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-amd64
```

Configure core-site.xml:

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
  <description>The name of the default file system. A URI whose scheme
  and authority determine the FileSystem implementation. The uri's scheme
  determines the config property (fs.SCHEME.impl) naming the FileSystem
  implementation class. The uri's authority is used to determine the host,
  port, etc. for a filesystem.</description>
</property>
```

Configure mapred-site.xml:

```xml
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
  <description>The host and port that the MapReduce job tracker runs at.
  If "local", then jobs are run in-process as a single map and reduce
  task.</description>
</property>
```

Configure hdfs-site.xml:

```xml
<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication. The actual number of replications
  can be specified when the file is created. The default is used if
  replication is not specified in create time.</description>
</property>
```

5. Format the HDFS filesystem via the NameNode:

```
/usr/local/hadoop/bin/hadoop namenode -format
```

6. Run Hadoop:

```
/usr/local/hadoop/sbin/start-all.sh
```

7. Check that Hadoop is running, using jps:

```
jps
```

or using netstat:

```
sudo netstat -plten | grep java
```

8. Stop Hadoop:

```
/usr/local/hadoop/sbin/stop-all.sh
```
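All of the Hadoop config files in step 4 share the same `<property>` element shape (name, value, optional description), so a small generator keeps hand-edited XML consistent. A minimal sketch; make_property is a hypothetical helper, not a Hadoop API, and the values mirror the article's settings:

```python
from xml.sax.saxutils import escape

def make_property(name: str, value: str, description: str = "") -> str:
    """Render one Hadoop <property> block, XML-escaping the contents."""
    parts = [f"  <name>{escape(name)}</name>",
             f"  <value>{escape(value)}</value>"]
    if description:
        parts.append(f"  <description>{escape(description)}</description>")
    return "<property>\n" + "\n".join(parts) + "\n</property>"

print(make_property("fs.default.name", "hdfs://localhost:9000"))
print(make_property("dfs.replication", "1", "Default block replication."))
```

Escaping matters here because values such as URIs or descriptions containing `&` would otherwise produce invalid XML that Hadoop refuses to parse.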


Hands-on: Standalone Deployment of the JumpServer 2.1 Bastion Host (2021)

Install JumpServer

Basic requirements

Environment: CentOS 7.7 + Python 3.6
Hardware: 2 CPU cores, 4 GB RAM, 50 GB disk (minimum)
OS: a Linux distribution, x86_64
Python = 3.6.x
MariaDB Server >= 5.5.56
Redis
Nginx

Basic server initialization

```
yum install wget
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum makecache
systemctl stop firewalld
systemctl disable firewalld
vi /etc/selinux/config      # disable SELinux permanently
setenforce 0
yum install python3 ntpdate lrzsz mariadb-devel python36-devel gcc openldap-devel
ntpdate ntp1.aliyun.com
echo '*/1 * * * * /usr/sbin/ntpdate ntp1.aliyun.com &>/dev/null' >> /var/spool/cron/root
cat > pip.conf << EOF
[global]
index-url = http://pypi.douban.com/simple
[install]
use-mirrors = true
mirrors = http://pypi.douban.com/simple/
trusted-host = pypi.douban.com
EOF
pip3 install --upgrade pip
```

Install nginx

```
yum install nginx
systemctl start nginx
systemctl enable nginx
```

Install the database (related port: 3306)

```
yum install mariadb-server
systemctl start mariadb
mysqladmin -u root -p password 123456
mysql -uroot -p123456
```

```sql
create database jumpserver default charset 'utf8' collate 'utf8_bin';
grant all on jumpserver.* to jumpserver@127.0.0.1 identified by 'jumpserver';
flush privileges;
```

Install Redis (related port: 6379)

```
yum install epel-release
yum install redis
vi /etc/redis.conf          # set: requirepass 123456
systemctl start redis
systemctl enable redis
```

Create a Python virtual environment (any directory works, e.g. /data/soft/py3):

```
python3.6 -m venv /opt/py3
```

Load the virtual environment; every JumpServer operation must be run with the py3 environment loaded:

```
source /opt/py3/bin/activate
```

Fetch the JumpServer code

```
cd /opt && \
wget https://github.com/jumpserver/jumpserver/releases/download/v2.1.0/jumpserver-v2.1.0.tar.gz
tar xf jumpserver-v2.1.0.tar.gz
mv jumpserver-v2.1.0 jumpserver
```

Install the build dependencies

```
cd /opt/jumpserver/requirements && \
pip3 install --upgrade pip && \
pip install pyasn1==0.1.2 && \
pip install six==1.5.0 && \
pip install cffi && \
pip install pbr && \
pip install wheel && \
pip3 install --upgrade setuptools && \
pip install -r requirements.txt
```
Edit the config file

```
cd /opt/jumpserver && \
cp config_example.yml config.yml && \
vi config.yml
```

```yaml
SECRET_KEY: tgvAPABVkCO2xCwYz1h3gUrhiGtW2yX33Cz2Q9C0M64S2U93V
BOOTSTRAP_TOKEN: tSQ1yPvs0UPeKSaG
DEBUG: false
LOG_LEVEL: ERROR
DB_ENGINE: mysql
DB_HOST: 127.0.0.1
DB_PORT: 3306
DB_USER: jumpserver
DB_PASSWORD: jumpserver
DB_NAME: jumpserver
HTTP_BIND_HOST: 0.0.0.0
HTTP_LISTEN_PORT: 8080
WS_LISTEN_PORT: 8070
REDIS_HOST: 127.0.0.1
REDIS_PORT: 6379
REDIS_PASSWORD: 123456
```

Start JumpServer (related port: 8080)

```
cd /opt/jumpserver
./jms start        # foreground
./jms start -d     # run in the background
```

Deploy the KoKo component (an SSH client written in Go)

```
cd /opt && \
wget https://github.com/jumpserver/koko/releases/download/v2.1.0/koko-v2.1.0-linux-amd64.tar.gz
tar -xf koko-v2.1.0-linux-amd64.tar.gz && \
mv koko-v2.1.0-linux-amd64 koko && \
chown -R root:root koko && \
cd koko && \
cp config_example.yml config.yml
vi config.yml
```

```yaml
CORE_HOST: http://127.0.0.1:8080
BOOTSTRAP_TOKEN: tSQ1yPvs0UPeKSaG   # must be taken from jumpserver/config.yml and kept identical
LOG_LEVEL: ERROR
SHARE_ROOM_TYPE: redis
REDIS_HOST: 127.0.0.1
REDIS_PORT: 6379
REDIS_PASSWORD: 123456
REDIS_DB_ROOM: 6
```

```
./koko -d
```

Related ports: SSHD_PORT 2222, HTTPD_PORT 5000.

Deploy the Guacamole component (comparable to a remote desktop protocol gateway)

```
rpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro
rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-1.el7.nux.noarch.rpm
yum -y install ffmpeg-devel freerdp-devel pango-devel libssh2-devel libtelnet-devel libvncserver-devel libwebsockets-devel pulseaudio-libs-devel openssl-devel libvorbis-devel libwebp-devel
cd /opt && \
wget -O /opt/guacamole.tar.gz https://github.com/jumpserver/docker-guacamole/archive/v2.1.0.tar.gz
tar -xf guacamole.tar.gz && \
mv docker-guacamole-2.1.0 guacamole && \
cd /opt/guacamole && \
tar -xf guacamole-server-1.2.0.tar.gz && \
tar -xf ssh-forward.tar.gz -C /bin/ && \
chmod +x /bin/ssh-forward
cd /opt/guacamole/guacamole-server-1.2.0
./configure --with-init-dir=/etc/init.d && \
make && \
make install
```

Install Java

```
yum install -y java-1.8.0-openjdk
```
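KoKo's BOOTSTRAP_TOKEN must be identical to the one in jumpserver/config.yml, and a mismatch is a common reason components fail to register. A minimal sketch of an automated consistency check; read_token is a hypothetical helper that handles the simple "KEY: value" lines used in both files (it is not a full YAML parser):

```python
def read_token(text: str, key: str = "BOOTSTRAP_TOKEN") -> str:
    """Return the value of a simple 'KEY: value' line, ignoring trailing comments."""
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop trailing comments
        if line.startswith(key + ":"):
            return line.split(":", 1)[1].strip()
    raise KeyError(key)

# Hypothetical file contents mirroring the article's configs.
jumpserver_cfg = "SECRET_KEY: xxx\nBOOTSTRAP_TOKEN: tSQ1yPvs0UPeKSaG\n"
koko_cfg = "CORE_HOST: http://127.0.0.1:8080\nBOOTSTRAP_TOKEN: tSQ1yPvs0UPeKSaG\n"

assert read_token(jumpserver_cfg) == read_token(koko_cfg)
print("tokens match")
```

In a real deployment, the two strings would be read from /opt/jumpserver/config.yml and /opt/koko/config.yml before starting the services.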
Create the working directories

```
mkdir -p /config/guacamole /config/guacamole/extensions /config/guacamole/record /config/guacamole/drive && \
chown daemon:daemon /config/guacamole/record /config/guacamole/drive && \
cd /config
```

Install Tomcat 9

```
wget http://mirrors.tuna.tsinghua.edu.cn/apache/tomcat/tomcat-9/v9.0.36/bin/apache-tomcat-9.0.36.tar.gz
tar -xf apache-tomcat-9.0.36.tar.gz && \
mv apache-tomcat-9.0.36 tomcat9 && \
rm -rf /config/tomcat9/webapps/* && \
sed -i 's/Connector port="8080"/Connector port="8081"/g' /config/tomcat9/conf/server.xml && \
echo "java.util.logging.ConsoleHandler.encoding = UTF-8" >> /config/tomcat9/conf/logging.properties && \
ln -sf /opt/guacamole/guacamole-1.0.0.war /config/tomcat9/webapps/ROOT.war && \
ln -sf /opt/guacamole/guacamole-auth-jumpserver-1.0.0.jar /config/guacamole/extensions/guacamole-auth-jumpserver-1.0.0.jar && \
ln -sf /opt/guacamole/root/app/guacamole/guacamole.properties /config/guacamole/guacamole.properties
```

Set up the Guacamole environment

```
export JUMPSERVER_SERVER=http://127.0.0.1:8080
echo "export JUMPSERVER_SERVER=http://127.0.0.1:8080" >> ~/.bashrc
export BOOTSTRAP_TOKEN=zxffNymGjP79j6BN
echo "export BOOTSTRAP_TOKEN=zxffNymGjP79j6BN" >> ~/.bashrc
export JUMPSERVER_KEY_DIR=/config/guacamole/keys
echo "export JUMPSERVER_KEY_DIR=/config/guacamole/keys" >> ~/.bashrc
export GUACAMOLE_HOME=/config/guacamole
echo "export GUACAMOLE_HOME=/config/guacamole" >> ~/.bashrc
export GUACAMOLE_LOG_LEVEL=ERROR
echo "export GUACAMOLE_LOG_LEVEL=ERROR" >> ~/.bashrc
export JUMPSERVER_ENABLE_DRIVE=true
echo "export JUMPSERVER_ENABLE_DRIVE=true" >> ~/.bashrc
```

What the Guacamole environment variables mean:

JUMPSERVER_SERVER: the address of the core service
BOOTSTRAP_TOKEN: the BOOTSTRAP_TOKEN value from jumpserver/config.yml
JUMPSERVER_KEY_DIR: where keys are stored after successful authentication
GUACAMOLE_HOME: the directory containing guacamole.properties
GUACAMOLE_LOG_LEVEL: the log verbosity
JUMPSERVER_ENABLE_DRIVE: mount a shared drive for the RDP protocol

Start Guacamole

```
/etc/init.d/guacd start
sh /config/tomcat9/bin/startup.sh
```

Download the Lina component

```
cd /opt
wget https://github.com/jumpserver/lina/releases/download/v2.1.0/lina-v2.1.0.tar.gz
tar -xf lina-v2.1.0.tar.gz
mv lina-v2.1.0 lina
chown -R nginx:nginx lina
```

Download the Luna component

```
cd /opt
wget https://github.com/jumpserver/luna/releases/download/v2.1.0/luna-v2.1.0.tar.gz
tar -xf luna-v2.1.0.tar.gz
mv luna-v2.1.0 luna
chown -R nginx:nginx luna
```

Configure Nginx to tie the components together

```
echo > /etc/nginx/conf.d/default.conf
vi nginx.conf                          # remove the server block inside
vi /etc/nginx/conf.d/jumpserver.conf
```

```nginx
server {
    listen 80;
    client_max_body_size 100m;  # size limit for recordings and file uploads

    location /ui/ {
        try_files $uri / /index.html;
        alias /opt/lina/;
    }
    location /luna/ {
        try_files $uri / /index.html;
        alias /opt/luna/;  # luna path; change this if the install directory changes
    }
    location /media/ {
        add_header Content-Encoding gzip;
        root /opt/jumpserver/data/;  # recordings; change this if the install directory changes
    }
    location /static/ {
        root /opt/jumpserver/data/;  # static assets; change this if the install directory changes
    }
    location /koko/ {
        proxy_pass http://localhost:5000;
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        access_log off;
    }
    location /guacamole/ {
        proxy_pass http://localhost:8081/;
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        access_log off;
    }
    location /ws/ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:8070;
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    location /api/ {
        proxy_pass http://localhost:8080;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    location /core/ {
        proxy_pass http://localhost:8080;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    location / {
        rewrite ^/(.*)$ /ui/$1 last;
    }
}
```

```
nginx -t
nginx -s reload
```

Log in at http://192.168.4.246 with the default user/password admin/admin.
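The Nginx config above is essentially a prefix-based router: each location block forwards one URL prefix to one component. A minimal Python sketch of that fan-out, where route is a hypothetical helper and the port map mirrors the proxy_pass targets in jumpserver.conf:

```python
# Prefix -> upstream, mirroring the proxy_pass lines in jumpserver.conf.
UPSTREAMS = {
    "/koko/":      "http://localhost:5000",  # koko web terminal
    "/guacamole/": "http://localhost:8081",  # guacamole via tomcat9
    "/ws/":        "http://localhost:8070",  # core websocket
    "/api/":       "http://localhost:8080",  # core HTTP API
    "/core/":      "http://localhost:8080",
}

def route(path: str) -> str:
    """Return the proxy target for a path, or 'static' for assets served from disk."""
    for prefix, target in UPSTREAMS.items():
        if path.startswith(prefix):
            return target
    return "static"  # /ui/, /luna/, /media/, /static/ come from the filesystem

print(route("/api/v1/users"))
```

This makes the failure modes easier to reason about: if /koko/ pages hang, the suspect is the service on port 5000, not the core on 8080.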


Standalone Install: CentOS 5 + hadoop-0.20.0

This installation method is only suitable for experiments, i.e. standing up a Hadoop environment quickly; it is not suitable for production.

On Ubuntu, the JRE can be installed with:

```
sudo apt-get install openjdk-7-jre
```

Procedure 141.1. Master configuration

Download and install the software:

```
cd /usr/local/src/
wget http://apache.etoak.com/hadoop/core/hadoop-0.20.0/hadoop-0.20.0.tar.gz
tar zxvf hadoop-0.20.0.tar.gz
sudo cp -r hadoop-0.20.0 ..
sudo ln -s hadoop-0.20.0 hadoop
cd hadoop
```

Configuration

hadoop-env.sh:

```
vim conf/hadoop-env.sh
export JAVA_HOME=/usr
```

conf/core-site.xml:

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

conf/hdfs-site.xml:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```

conf/mapred-site.xml:

```xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
```

Setup passphraseless ssh

Now check that you can ssh to the localhost without a passphrase:

```
ssh localhost
```

If you cannot ssh to localhost without a passphrase, execute the following commands:

```
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
```

Execution

Format a new distributed filesystem:

```
bin/hadoop namenode -format
```

Start the hadoop daemons:

```
bin/start-all.sh
```

When you're done, stop the daemons with:

```
bin/stop-all.sh
```

Monitor

Browse the web interfaces for the NameNode and the JobTracker; by default they are available at:

NameNode - http://localhost:50070/
JobTracker - http://localhost:50030/

Test

```
$ bin/hadoop dfs -mkdir test
$ echo helloworld > testfile
$ bin/hadoop dfs -copyFromLocal testfile test/
$ bin/hadoop dfs -ls
Found 1 items
drwxr-xr-x - neo supergroup 0 2009-07-10 14:18 /user/neo/test
$ bin/hadoop dfs -ls test
$ bin/hadoop dfs -cat test/testfile
```

Procedure 141.2. Slave configuration

SSH:

```
scp neo@master:~/.ssh/id_dsa.pub .ssh/master.pub
cat .ssh/master.pub >> .ssh/authorized_keys
```

Hadoop:

```
scp -r neo@master:/usr/local/hadoop /usr/local/hadoop
```
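The Monitor step can be scripted: poll the NameNode and JobTracker web UI ports listed above and report which ones respond. A minimal sketch, assuming the default URLs from the article; it only attempts a TCP connect, so it degrades cleanly to "down" when no cluster is running:

```python
import socket
from urllib.parse import urlparse

# Default web UI addresses from the article.
UIS = {
    "NameNode":   "http://localhost:50070/",
    "JobTracker": "http://localhost:50030/",
}

def is_listening(url: str, timeout: float = 1.0) -> bool:
    """True if something accepts TCP connections at the URL's host:port."""
    parsed = urlparse(url)
    try:
        with socket.create_connection((parsed.hostname, parsed.port or 80), timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

for name, url in UIS.items():
    state = "up" if is_listening(url) else "down"
    print(f"{name}: {state}")
```

Running this right after bin/start-all.sh gives a quicker sanity check than opening both pages in a browser.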
Original source: the Netkiller series of handbooks. Author: 陈景峯. Please contact the author before reprinting, and always credit the original source, the author, and this notice.
