
Featured Articles


Building an Enterprise Subversion Setup: Dual-Machine Hot Standby

1. Preface
The author's company needed an SVN server, but unfortunately only provided a worn-out machine (used locally; all other services run on Alibaba Cloud). Worried that the box would die before long, the author decided to set up dual-machine hot standby for SVN.

2. Practice
2.1 Environment
svnSer: ipaddress=10.168.0.176, hostname=svnSer
svn-slaveSer: ipaddress=10.168.0.179, hostname=svn-slaveSer

2.2 Installing and configuring svnSer (Master)
In svnSer: install and configure SVN as described in http://cmdschool.blog.51cto.com/2420395/1715856, then add the following configuration.
1) Define the username and password of the SVN backup administrator. Edit /var/local/svn/conf/passwd with vim and add:
bkuser = bkpwd
2) Define the backup administrator's group, its members, and the group's permissions on the repository root. Edit /var/local/svn/conf/authz with vim:
[groups]
admin_rw = bkuser
[/]
@admin_rw = rw
3) Restart the service:
/etc/init.d/svnserve restart

2.3 Installing and configuring svn-slaveSer (Slave)
2.3.1 Step 1: basic environment
In svn-slaveSer:
1) Install via yum:
yum -y install subversion
2) Create the repository directory:
svnadmin create /var/local/svn
3) Start the service and enable it at boot:
/etc/init.d/svnserve start
chkconfig svnserve on
4) Open the firewall. Edit /etc/sysconfig/iptables with vim and add:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 3690 -j ACCEPT
5) Restart the firewall:
/etc/init.d/iptables restart

2.3.2 Step 2: allow revision-property changes
In svn-slaveSer:
cd /var/local/svn/hooks/
cp pre-revprop-change.tmpl pre-revprop-change
chmod 744 pre-revprop-change
ll pre-revprop-change
Edit /var/local/svn/hooks/pre-revprop-change with vim and change the last line from:
exit 1
to:
exit 0
Note: this allows revision properties (commit comments) to be modified, which svnsync requires.

2.3.3 Step 3: copy the Master's configuration files
In svn-slaveSer:
1) Copy the configuration files:
scp 10.168.0.176:/var/local/svn/conf/* /var/local/svn/conf/
2) Restart the service:
/etc/init.d/svnserve restart

2.3.4 Step 4: initialize the sync relationship
In svn-slaveSer:
svnsync init file:///var/local/svn/ svn://10.168.0.176/var/local/svn
This step asks for an SVN account and password; enter the ones defined earlier:
Username: bkuser
Password: bkpwd

2.3.5 Step 5: test the synchronization
In svn-slaveSer:
svnsync sync file:///var/local/svn/
This step again asks for the SVN account and password (bkuser / bkpwd). The result of a successful sync was shown in a screenshot (omitted here).

2.3.6 Step 6: synchronize automatically on every commit
In svnSer:
cd /var/local/svn/hooks/
cp post-commit.tmpl post-commit
chmod 700 post-commit
ll post-commit
Edit /var/local/svn/hooks/post-commit with vim. Delete the following lines at the end:
REPOS="$1"
REV="$2"
mailer.py commit "$REPOS" "$REV" /path/to/mailer.conf
and add:
svnsync sync --non-interactive svn://10.168.0.179/var/local/svn --username bkuser --password bkpwd
It is recommended to run this command once by hand and confirm it works before putting it into post-commit.

2.3.7 Step 7: verify the synchronization
1) Create two folders on the desktop, projectA and projectB.
2) Check out into each of them, pointing projectA at svnSer and projectB at svn-slaveSer.
3) Click OK on both checkouts and note the current revision number.
4) Modify the contents of projectA, commit, and note the new revision number.
5) Update projectB and check whether it now shows the revision just committed from projectA.

Reprinted from tanzhenchao's 51CTO blog. Original link: http://blog.51cto.com/cmdschool/1716033. Please contact the original author if you need to reprint.
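As a small addition to step 2.3.6 above, the complete post-commit hook on the master could look roughly like this (a minimal sketch based on the steps in the article; the shebang and the log redirection are illustrative additions, not part of the original):

#!/bin/sh
# post-commit hook on svnSer (10.168.0.176): push every new revision to the slave.
# svnsync replays the whole repository state, so the REPOS/REV arguments are not needed here.
svnsync sync --non-interactive \
  svn://10.168.0.179/var/local/svn \
  --username bkuser --password bkpwd \
  >> /var/log/svnsync.log 2>&1

Keep the hook executable (chmod 700 post-commit, as above), otherwise svnserve will silently skip it.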


Building a Test Log System with Kafka and ELK (1)

This article is only for my own study and is not suitable for reprinting. It is the first of two parts.

1. Installing ELK
1.1 Installing Elasticsearch
Create an Ubuntu 16.04 VM (2 cores, 4 GB RAM) on HNA Cloud.
(1) Update the system:
sudo apt-get update -y
sudo apt-get upgrade -y
(2) Install Java:
sudo add-apt-repository -y ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer -y
(3) Install ES:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.2.2.deb
sudo dpkg -i elasticsearch-5.2.2.deb
Edit /etc/elasticsearch/elasticsearch.yml:
set network.host to the local IP, 192.168.10.102
set http.port to 9200
set cluster.name to elk-test
Restart ES and check the service status:
root@elk:/home/ubuntu# service elasticsearch restart
root@elk:/home/ubuntu# service elasticsearch status
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: enab
Active: active (running) since Sat 2017-09-30 11:23:17 CST; 3s ago
Docs: http://www.elastic.co
Process: 3861 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=
Main PID: 3864 (java)
Tasks: 15
Memory: 2.1G
CPU: 4.511s

1.2 Installing Nginx and Logstash
Create another VM and install Nginx and Logstash on it.
1.2.1 Installing Nginx
apt-get install nginx
On Ubuntu the resulting file layout is roughly:
all configuration files live under /etc/nginx, and each virtual host is kept under /etc/nginx/sites-available
the program binary is /usr/sbin/nginx
logs go to /var/log/nginx
a startup script named nginx has been created under /etc/init.d/
the default virtual host root is /var/www/nginx-default (on some versions it is /var/www; check the configuration in /etc/nginx/sites-available)
Start it and check the service status:
root@elk:/home/ubuntu# /etc/init.d/nginx start
[ ok ] Starting nginx (via systemctl): nginx.service.
root@elk:/home/ubuntu# /etc/init.d/nginx status
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2017-09-30 11:40:59 CST; 1min 8s ago
Main PID: 4320 (nginx)
CGroup: /system.slice/nginx.service
├─4320 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
├─4321 nginx: worker process
└─4322 nginx: worker process
Sep 30 11:40:59 elk systemd[1]: Starting A high performance web server and a reverse pro...r...
Sep 30 11:40:59 elk systemd[1]: Started A high performance web server and a reverse prox...ver.
Sep 30 11:42:06 elk systemd[1]: Started A high performance web server and a reverse prox...ver.
Hint: Some lines were ellipsized, use -l to show in full.
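Before moving on to the Logstash configuration, both services can be sanity-checked from the shell (a quick sketch using the addresses from this article):

# Elasticsearch should answer on port 9200 with its cluster name elk-test
curl http://192.168.10.102:9200
# Nginx on the second VM should return the default welcome page
curl -I http://localhost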
For testing purposes, change the port to 88: edit /etc/nginx/sites-available/default and restart the Nginx service:
server {
listen 88 default_server;
listen [::]:88 default_server;
Since this server has no public IP, a port-forwarding rule is configured on its router so that the Nginx service can be reached through port 88 of the router's EIP. Testing in a browser shows that Nginx is reachable.

1.2.2 Installing and configuring Logstash
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.2.2.tar.gz
tar zxvf logstash-5.2.2.tar.gz
ln -s logstash-5.2.2 logstash
Create a file named nginxlog2es.conf with the content below. It ships the Nginx log file /var/log/nginx/access.log_json to the ES server at 192.168.10.102:9200:
input {
  file {
    path => "/var/log/nginx/access.log_json"
    codec => "json"
  }
}
filter {
  mutate { split => [ "upstreamtime", "," ] }
  mutate { convert => [ "upstreamtime", "float" ] }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["192.168.10.102:9200"]
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
    document_type => "%{type}"
    flush_size => 20000
    idle_flush_time => 10
    sniffing => true
    template_overwrite => true
  }
}
Edit /etc/nginx/nginx.conf and add:
##
# Logging Settings
##
log_format json '{"@timestamp":"$time_iso8601",'
  '"host":"$server_addr",'
  '"clientip":"$remote_addr",'
  '"size":$body_bytes_sent,'
  '"responsetime":$request_time,'
  '"upstreamtime":"$upstream_response_time",'
  '"upstreamhost":"$upstream_addr",'
  '"http_host":"$host",'
  '"url":"$uri",'
  '"xff":"$http_x_forwarded_for",'
  '"referer":"$http_referer",'
  '"agent":"$http_user_agent",'
  '"status":"$status"}';
access_log /var/log/nginx/access.log_json json;
Restart Nginx, refresh the page in the browser, and inspect the Nginx log:
{"@timestamp":"2017-09-30T12:44:19+08:00","host":"192.168.10.104","clientip":"140.206.84.10","size":0,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"120.132.124.103","url":"/index.nginx-debian.html","xff":"-","referer":"-","agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36","status":"304"}
Start Logstash:
nohup logstash/bin/logstash -f nginxlog2es.conf > /tmp/logstash.log 2>&1 &
Refresh the Nginx page again; Logstash now prints the collected Nginx log entry:
{
  "referer" => "-",
  "agent" => "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36",
  "http_host" => "120.132.124.103",
  "url" => "/index.nginx-debian.html",
  "path" => "/var/log/nginx/access.log_json",
  "upstreamhost" => "-",
  "@timestamp" => 2017-09-30T04:48:23.000Z,
  "size" => 0,
  "clientip" => "140.206.84.10",
  "host" => "192.168.10.104",
  "@version" => "1",
  "responsetime" => 0.0,
  "xff" => "-",
  "upstreamtime" => [ [0] 0.0 ],
  "status" => "304"
}

1.3 Installing Kibana
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.2.2-linux-x86_64.tar.gz
ln -s kibana-5.2.2-linux-x86_64 kibana
Edit the configuration file kibana/config/kibana.yml:
server.host: "192.168.10.102"
elasticsearch.url: "http://192.168.10.102:9200"
Start Kibana:
nohup kibana/bin/kibana > /tmp/kibana.log 2>&1 &
Open http://120.132.124.103:5601/ in a browser to reach the Kibana UI, where the Nginx logs can be seen.

1.4 Summary
As the steps above show, the structure of ELK is fairly simple:
Logstash collects the logs and pushes them into ES
ES stores them
Kibana presents them in a web UI
The overall ELK architecture was shown in a diagram (omitted here). However, this architecture has quite a few problems, one of which is processing capacity. bol.com went through the following evolution of its ELK architecture:
(1) Initial architecture (2013): the single Logstash instance was a performance bottleneck.
(2) A Redis buffer plus multiple Logstash instances (2014): Redis serves as a message buffer and multiple Logstash instances increase processing throughput.
References:
https://devops.profitbricks.com/tutorials/install-and-configure-apache-kafka-on-ubuntu-1604-1/
http://www.cnblogs.com/xiaoqi/p/elk-part1.html
https://www.slideshare.net/TinLe1/elk-atlinked-in
https://www.slideshare.net/renzotoma39/scaling-an-elk-stack-at-bolcom-39412550
https://www.elastic.co/blog/logstash-kafka-intro
https://www.elastic.co/blog/just-enough-kafka-for-the-elastic-stack-part2
https://www.elastic.co/blog/just-enough-kafka-for-the-elastic-stack-part1
Reprinted from SammyLiu's cnblogs blog. Original link: http://www.cnblogs.com/sammyliu/p/7614209.html. Please contact the original author if you need to reprint.
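One small addition to the Logstash step above: the pipeline file can be syntax-checked before it is started with nohup (a hedged sketch; -t / --config.test_and_exit is the config-test flag in Logstash 5.x):

# validate nginxlog2es.conf and exit without starting the pipeline
logstash/bin/logstash -f nginxlog2es.conf -t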


Setting Up Hadoop 2.x (2.6.2) on Ubuntu

Prerequisites
(1) The Ubuntu operating system (this tutorial uses Ubuntu 14.04).
(2) Install the JDK:
$ sudo apt-get install openjdk-7-jdk
$ java -version
java version "1.7.0_25"
OpenJDK Runtime Environment (IcedTea 2.3.12) (7u25-2.3.12-4ubuntu3)
OpenJDK 64-Bit Server VM (build 23.7-b01, mixed mode)
$ cd /usr/lib/jvm
$ ln -s java-7-openjdk-amd64 jdk
(3) Install ssh:
$ sudo apt-get install openssh-server

Add a Hadoop group and user (optional)
$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hduser
$ sudo adduser hduser sudo
After creating the user, log back in to Ubuntu as hduser.

Set up the SSH keys
$ ssh-keygen -t rsa -P ''
...
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
...
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh localhost

Download Hadoop 2.6.2
$ cd ~
$ wget http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.6.2/hadoop-2.6.2.tar.gz
$ sudo tar vxzf hadoop-2.6.2.tar.gz -C /home/hduser
$ cd /home/hduser
$ sudo mv hadoop-2.6.2 hadoop
$ sudo chown -R hduser:hadoop hadoop

Configure the Hadoop environment variables
(1) Edit the system environment variables:
$ cd ~
$ vi .bashrc
Copy the following to the end of .bashrc (skip JAVA_HOME if it is already configured):
#Hadoop variables
#begin of paste
export JAVA_HOME=/usr/lib/jvm/jdk/
export HADOOP_INSTALL=/home/hduser/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
#end of paste
(2) Edit the Hadoop environment script:
$ cd /home/hduser/hadoop/etc/hadoop
$ vi hadoop-env.sh
#only JAVA_HOME has to be changed; everything else can stay as-is
export JAVA_HOME=/usr/lib/jvm/jdk/
After finishing the configuration, log in to Ubuntu again (close the terminal and reopen it), then check whether the installation succeeded:
$ hadoop version
Hadoop 2.6.2
...

Configure Hadoop
(1) core-site.xml
$ cd /home/hduser/hadoop/etc/hadoop
$ vi core-site.xml
#copy the following between <configuration> and </configuration>
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>
(2) yarn-site.xml
$ vi yarn-site.xml
#copy the following between <configuration> and </configuration>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
(3) mapred-site.xml
$ mv mapred-site.xml.template mapred-site.xml
$ vi mapred-site.xml
#copy the following between <configuration> and </configuration>
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
(4) hdfs-site.xml
$ cd ~
$ mkdir -p mydata/hdfs/namenode
$ mkdir -p mydata/hdfs/datanode
$ cd /home/hduser/hadoop/etc/hadoop
$ vi hdfs-site.xml
#copy the following between <configuration> and </configuration>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/hduser/mydata/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/hduser/mydata/hdfs/datanode</value>
</property>

Format a new distributed file system:
$ cd ~
$ hdfs namenode -format

Start the Hadoop services
$ start-dfs.sh
....
$ start-yarn.sh
....
$ jps
#if everything is configured correctly you will see something like this
2583 DataNode
2970 ResourceManager
3461 Jps
3177 NodeManager
2361 NameNode
2840 SecondaryNameNode

Run a Hadoop example
hduser@ubuntu: cd /home/hduser/hadoop
hduser@ubuntu:/home/hduser/hadoop$ hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.2.jar pi 2 5
#you will then see something like this
Number of Maps = 2
Samples per Map = 5
15/10/21 18:41:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Starting Job
15/10/21 18:41:04 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/10/21 18:41:04 INFO input.FileInputFormat: Total input paths to process : 2
15/10/21 18:41:04 INFO mapreduce.JobSubmitter: number of splits:2
15/10/21 18:41:04 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
...
Reprinted from ZH奶酪's cnblogs blog. Original link: http://www.cnblogs.com/CheeseZH/p/5051135.html. Please contact the original author if you need to reprint.
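Besides the pi example, HDFS itself can be exercised quickly from the command line (a small sketch; the test path and file are only illustrations):

# create a directory in HDFS, upload a local file, and read it back
$ hdfs dfs -mkdir -p /user/hduser/test
$ hdfs dfs -put ~/.bashrc /user/hduser/test/
$ hdfs dfs -ls /user/hduser/test
$ hdfs dfs -cat /user/hduser/test/.bashrc | head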


How to Build a Test Environment Before Deploying OpenStack

1. Installing the virtual machines
1) Create the virtual machines according to the plan.
2) Install Ubuntu 14.04.
2. Network configuration
1) Create the corresponding virtual networks in VMware Workstation according to the plan.
2) Configure the IP address of each virtual machine.
3) Add the hostname-to-IP mappings of all nodes to the hosts file of every virtual machine. Besides its own IP address, each node must be able to resolve the names of the other nodes; for example, the name controller1 must resolve to 10.0.0.11, the IP address of the management network interface on the controller node.
4) Enable the CPU VT option for the virtual machines used as compute nodes (a quick check is sketched below).
5) Install KVM on the virtual machines used as compute nodes.
6) The block storage node and the object storage node each have two 100 GB disks; configure them as LVM.
Reprinted from TtrToby's 51CTO blog. Original link: http://blog.51cto.com/freshair/1883300
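For points 4 and 5 above, a quick way to confirm that a compute-node VM actually exposes hardware virtualization before installing KVM (a hedged sketch; the package names are the usual ones on Ubuntu 14.04):

# a non-zero count means VT-x/AMD-V is visible inside the VM
egrep -c '(vmx|svm)' /proc/cpuinfo
# install KVM plus the checker tool and verify accelerated guests can run
sudo apt-get install -y qemu-kvm libvirt-bin cpu-checker
sudo kvm-ok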


Hadoop for Beginners (1): Setting Up a Single-Node Hadoop

This is basically the same as the official tutorial; I just want to write it down so I don't forget it — a dull pen beats a good memory.

First, confirm that java, ssh, and rsync are installed; if not, simply apt-get install them. The Java does not have to be Sun's; OpenJDK is fine and convenient. Not the main point — use whichever you like.

Next is passwordless SSH login. Hadoop starts the daemons on each host in the slave list over SSH; because the cluster is managed in a distributed way, this step is required regardless of whether you run pseudo-distributed or fully distributed. It is simple, just two commands:
/opt/hadoop# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

#a different procedure I learned later, recorded here for reference
On source host:
cd ~
mkdir .ssh
chmod 700 .ssh
cd .ssh
ssh-keygen -t rsa -b 1024
cat id_rsa.pub
copy contents of id_rsa.pub
On destination host:
cd ~
mkdir .ssh
chmod 700 .ssh
cd .ssh
vi authorized_keys
paste contents of id_rsa.pub from evnoltp9 into authorized_keys
:wq!
#chmod 600 authorized_keys
chmod og-rxw $HOME/.ssh $HOME/.ssh/authorized_keys

Also note that on the server ~/.ssh must be set to mode 700 and ~/.ssh/authorized_keys to mode 600.
Try whether passwordless login works:
ssh localhost
That completes the first step.

echo $JAVA_HOME
Check where the JDK is installed and copy the path; it is needed in a moment.
vim ./conf/hadoop-env.sh
Around line nine, set JAVA_HOME:
export JAVA_HOME=/usr/lib/jdk/jdk1.7.0_07
##save

Configure the Hadoop core configuration file:
vim ./conf/core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value> ##HDFS entry point
</property>
</configuration>

Configure the Hadoop HDFS configuration file:
vim ./conf/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value> ##number of file replicas; one is enough for a single node
</property>
</configuration>

Configure the Hadoop MapReduce configuration file:
vim ./conf/mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>

With that, a single-node Hadoop is configured. All that is left is to format the file system and start it up.
Format the file system first:
hadoop namenode -format
Then start everything:
start-all.sh
Of course, you can also start only HDFS (start-dfs.sh) or only MapReduce (start-mapred.sh).

Verify it works by opening a browser:
localhost:50030 ##MapReduce web management UI
localhost:50070 ##HDFS web management UI

Test uploading a file into the HDFS file system:
hadoop fs -put <filename> hdfs://localhost:9000/<some folder>, or just put it in the root directory.
Refresh the HDFS management page and you will see the file.

Test a MapReduce example, using the one bundled with Hadoop that counts the words in shell scripts. First upload the .sh files to be counted:
hadoop fs -mkdir /input ##create an input folder
hadoop fs -put *.sh /input/ ##upload all *.sh files in the current folder to the input folder in HDFS
hadoop jar hadoop-examples-1.0.3.jar wordcount /input /output
This starts the computation; the process output is omitted, since this is only a test.
When it finishes, an output folder appears in the file browser at localhost:50070; inside it, part-r-00000 records the result — each line is a word followed by its number of occurrences. At localhost:50030 you can see running jobs and completed jobs (job statistics); click through for more details and explore on your own.
Taking screenshots in Ubuntu is a hassle, so no pictures; follow the steps one by one — tested and working. I used Ubuntu 12.04.
Note: I later read in the book "Hadoop in Action" (China Machine Press edition) that OpenJDK supposedly cannot be used, but a recorded video I watched earlier said it works. I used Oracle's JDK myself; if OpenJDK does not work for you, use Oracle's — I have not tested it.
Reprinted from 拖鞋崽's 51CTO blog. Original link: http://blog.51cto.com/1992mrwang/1011844
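The wordcount result can also be read straight from the command line instead of the web UI (a small sketch, assuming the /output path used above):

# list the job output and print the word counts
hadoop fs -ls /output
hadoop fs -cat /output/part-r-00000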


Building a Highly Available MCollective for a More Secure and Stable Puppet Architecture

I. Configuring RabbitMQ
Installation is omitted; see http://kisspuppet.com/2013/11/10/mcollective-middleware/ or http://rsyslog.org/2013/11/10/mcollective-middleware/.

1. Enable the rabbitmq_stomp plugin
[root@linuxmaster1poc ~]# rabbitmq-plugins enable rabbitmq_stomp
The following plugins have been enabled:
rabbitmq_stomp
Plugin configuration has changed. Restart RabbitMQ for changes to take effect.

2. Add the STOMP TCP listener port
[root@linuxmaster1poc ~]# vim /etc/rabbitmq/rabbitmq.config
[
{rabbitmq_stomp, [{tcp_listeners, [61613]}]}
].
Note: see http://www.rabbitmq.com/stomp.html

3. Create accounts and set permissions
If you have configured RabbitMQ before, it is recommended to reset the configuration first:
[root@linuxmaster1poc ~]# rabbitmqctl stop_app
Stopping node rabbit@linuxmaster1poc ...
...done.
[root@linuxmaster1poc ~]# rabbitmqctl reset
Resetting node rabbit@linuxmaster1poc ...
...done.
[root@linuxmaster1poc ~]# rabbitmqctl start_app
Starting node rabbit@linuxmaster1poc ...
...done.
Delete the default user guest and add three users (web_admin for HTTP access, admin as administrator, mc_rabbitmq for the MCollective connections):
[root@linuxmaster1poc ~]# rabbitmqctl list_users
Listing users ...
guest [administrator]
...done.
[root@linuxmaster1poc ~]# rabbitmqctl delete_user guest
Deleting user "guest" ...
...done.
[root@linuxmaster1poc ~]# rabbitmqctl add_user mc_rabbitmq 123.com
Creating user "mc_rabbitmq" ...
...done.
[root@linuxmaster1poc ~]# rabbitmqctl add_user admin 123.com
Creating user "admin" ...
...done.
[root@linuxmaster1poc ~]# rabbitmqctl add_user web_admin 123.com
Creating user "web_admin" ...
...done.
Set the users' roles:
[root@linuxmaster1poc ~]# rabbitmqctl set_user_tags admin administrator
Setting tags for user "admin" to [administrator] ...
...done.
[root@linuxmaster1poc ~]# rabbitmqctl set_user_tags web_admin monitoring
Setting tags for user "web_admin" to [monitoring] ...
...done.
Create the virtual host:
[root@linuxmaster1poc ~]# rabbitmqctl add_vhost /mcollective
Creating vhost "/mcollective" ...
...done.
Grant the users access to the virtual host:
[root@linuxmaster1poc ~]# rabbitmqctl set_permissions -p "/mcollective" mc_rabbitmq ".*" ".*" ".*"
Setting permissions for user "mc_rabbitmq" in vhost "/mcollective" ...
...done.
[root@linuxmaster1poc ~]# rabbitmqctl set_permissions -p "/mcollective" admin ".*" ".*" ".*"
Setting permissions for user "admin" in vhost "/mcollective" ...
...done.
[root@linuxmaster1poc ~]# rabbitmqctl set_permissions -p "/mcollective" web_admin ".*" ".*" ".*"
Setting permissions for user "web_admin" in vhost "/mcollective" ...
...done.
Restart the rabbitmq-server service:
[root@linuxmaster1poc ~]# /etc/init.d/rabbitmq-server restart
Restarting rabbitmq-server: SUCCESS
rabbitmq-server.
Check that the users and their roles were created successfully:
[root@linuxmaster1poc ~]# rabbitmqctl list_users
Listing users ...
admin [administrator]
mc_rabbitmq []
web_admin [monitoring]
...done.
List the permissions of all users on the virtual host "/mcollective":
[root@linuxmaster1poc ~]# rabbitmqctl list_permissions -p "/mcollective"
Listing permissions in vhost "/mcollective" ...
admin .* .* .*
mc_rabbitmq .* .* .*
web_admin .* .* .*
...done.

4. Log in to http://192.168.100.120:15672/ and configure the exchanges of the virtual host "/mcollective"
Default configuration:
[root@linuxmaster1poc ~]# rabbitmqctl list_exchanges -p "/mcollective"
Listing exchanges ...
 direct
amq.direct direct
amq.fanout fanout
amq.headers headers
amq.match headers
amq.rabbitmq.trace topic
amq.topic topic
...done.
Configuration after the change:
[root@linuxmaster1poc ~]# rabbitmqctl list_exchanges -p "/mcollective"
Listing exchanges ...
 direct
amq.direct direct
amq.fanout fanout
amq.headers headers
amq.match headers
amq.rabbitmq.trace topic
amq.topic topic
mcollective_broadcast topic
mcollective_directed direct
...done.
Note: see the official documentation at https://www.rabbitmq.com/man/rabbitmqctl.1.man.html

II. Configuring MCollective
1. Configure the MCollective client:
[root@linuxmaster1poc testing]# cat /etc/mcollective/client.cfg
topicprefix = /topic/
main_collective = mcollective
collectives = mcollective
libdir = /usr/libexec/mcollective
logger_type = console
#loglevel = debug
loglevel = warn
# Plugins
securityprovider = psk
plugin.psk = a36cd839414370e10fd281b8a38a4f48
direct_addressing = 1
connector = rabbitmq
plugin.rabbitmq.vhost = /mcollective            # virtual host
plugin.rabbitmq.pool.size = 2                   # two MQs in the connection pool
plugin.rabbitmq.initial_reconnect_delay = 0.01
plugin.rabbitmq.max_reconnect_delay = 30.0      # maximum reconnect delay
plugin.rabbitmq.use_exponential_back_off = true
plugin.rabbitmq.back_off_multiplier = 2
plugin.rabbitmq.max_reconnect_attempts = 0
plugin.rabbitmq.randomize = false
plugin.rabbitmq.timeout = -1
plugin.rabbitmq.pool.1.host = 192.168.100.120
plugin.rabbitmq.pool.1.port = 61613
plugin.rabbitmq.pool.1.user = mc_rabbitmq
plugin.rabbitmq.pool.1.password = 123.com
plugin.rabbitmq.pool.1.ssl = false
plugin.rabbitmq.pool.2.host = 192.168.100.121
plugin.rabbitmq.pool.2.port = 61613
plugin.rabbitmq.pool.2.user = mc_rabbitmq
plugin.rabbitmq.pool.2.password = 123.com
plugin.rabbitmq.pool.2.ssl = false
# Facts
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml

2. Configure the MCollective server:
[root@linux57poc tmp]# cat /etc/mcollective/server.cfg
# --Global--
topicprefix = /topic/
main_collective = mcollective
collectives = mcollective
libdir = /usr/libexec/mcollective
logfile = /var/log/puppet/mcollective.log
loglevel = info
daemonize = 1
# --rabbitmq Plugins--
securityprovider = psk
plugin.psk = a36cd839414370e10fd281b8a38a4f48
direct_addressing = 1
connector = rabbitmq
plugin.rabbitmq.vhost = /mcollective
plugin.rabbitmq.pool.size = 2
plugin.rabbitmq.initial_reconnect_delay = 0.01
plugin.rabbitmq.max_reconnect_delay = 30.0
plugin.rabbitmq.use_exponential_back_off = true
plugin.rabbitmq.back_off_multiplier = 2
plugin.rabbitmq.max_reconnect_attempts = 0
plugin.rabbitmq.randomize = false
plugin.rabbitmq.timeout = -1
plugin.rabbitmq.pool.1.host = 192.168.100.120
plugin.rabbitmq.pool.1.port = 61613
plugin.rabbitmq.pool.1.user = mc_rabbitmq
plugin.rabbitmq.pool.1.password = 123.com
plugin.rabbitmq.pool.1.ssl = false
plugin.rabbitmq.pool.2.host = 192.168.100.121
plugin.rabbitmq.pool.2.port = 61613
plugin.rabbitmq.pool.2.user = mc_rabbitmq
plugin.rabbitmq.pool.2.password = 123.com
plugin.rabbitmq.pool.2.ssl = false
# --Puppet provider specific options--
plugin.service.provider = puppet
plugin.service.puppet.hasstatus = true
plugin.service.puppet.hasrestart = true
plugin.puppet.command = puppet agent
plugin.puppet.splay = true
plugin.puppet.splaylimit = 30
plugin.puppet.config = /etc/puppet/puppet.conf
# --Facts--
factsource = yaml
##factsource = facter
plugin.yaml = /etc/mcollective/facts.yaml

III. High-availability test
Special note: the pool entries in the nodes' MCollective server.cfg have a priority; by default the entry with the smaller number wins. This matters: when all nodes are connected to MQ2 and MQ1 is then started, the mco command cannot be used, because the client connects to MQ1 while all the nodes are still connected to MQ2.

1. Stop MQ1 and watch the failover
1.1 First look at the current node connections:
[root@linuxmaster1poc ~]# mco ping   #list the connected nodes
linux57poc time=69.46 ms
linux58poc time=70.05 ms
linux64poc time=70.59 ms
---- ping statistics ----
3 replies max: 70.59 min: 69.46 avg: 70.03
[root@linuxmaster1poc ~]# mco shell "lsof -i:61613"   #check which broker each node is connected to; right now they are all connected to linuxmaster1poc
Do you really want to send this command unfiltered? (y/n): y
Discovering hosts using the mc method for 2 second(s) .... 3
Host: linux64poc Statuscode: 0 Output:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ruby 36625 root 6u IPv4 27771 0t0 TCP linux64poc:40493->linuxmaster1poc:61613 (ESTABLISHED)
Host: linux58poc Statuscode: 0 Output:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ruby 11060 root 6u IPv4 34046 0t0 TCP linux58poc:36295->linuxmaster1poc:61613 (ESTABLISHED)
Host: linux57poc Statuscode: 0 Output:
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
ruby 18076 root 6u IPv4 1351365 TCP linux57poc:24698->linuxmaster1poc:61613 (ESTABLISHED)
[root@linuxmaster1poc ~]# /etc/init.d/rabbitmq-server stop
Stopping rabbitmq-server: rabbitmq-server.
1.2 Run mco again and check the failover:
[root@linuxmaster1poc ~]# mco ping
linux58poc time=73.54 ms
linux64poc time=74.61 ms
linux57poc time=75.39 ms
---- ping statistics ----
3 replies max: 75.39 min: 73.54 avg: 74.51
[root@linuxmaster1poc ~]# mco shell "lsof -i:61613"
Do you really want to send this command unfiltered? (y/n): y
Discovering hosts using the mc method for 2 second(s) .... 3
Host: linux58poc Statuscode: 0 Output:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ruby 11060 root 6u IPv4 34046 0t0 TCP linux58poc:36295->linuxmaster1poc:61613 (CLOSE_WAIT)
ruby 11060 root 9u IPv4 34137 0t0 TCP linux58poc:47200->linuxmaster2poc:61613 (ESTABLISHED)
Host: linux64poc Statuscode: 0 Output:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ruby 36625 root 6u IPv4 27771 0t0 TCP linux64poc:40493->linuxmaster1poc:61613 (CLOSE_WAIT)
ruby 36625 root 8u IPv4 27877 0t0 TCP linux64poc:37472->linuxmaster2poc:61613 (ESTABLISHED)
Host: linux57poc Statuscode: 0 Output:
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
ruby 18076 root 9u IPv4 1351484 TCP linux57poc:9309->linuxmaster2poc:61613 (ESTABLISHED)
Checking again via the logs:
[root@linuxmaster1poc ~]# mco shell "lsof -i:61613"
Do you really want to send this command unfiltered? (y/n): y
Discovering hosts using the mc method for 2 second(s) .... 3
Host: linux58poc Statuscode: 0 Output:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ruby 11428 root 6u IPv4 34283 0t0 TCP linux58poc:36300->linuxmaster1poc:61613 (CLOSE_WAIT)
ruby 11428 root 8u IPv4 34338 0t0 TCP linux58poc:47205->linuxmaster2poc:61613 (ESTABLISHED)
Host: linux57poc Statuscode: 0 Output:
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
ruby 18447 root 6u IPv4 1351559 TCP linux57poc:59343->linuxmaster1poc:61613 (CLOSE_WAIT)
ruby 18447 root 8u IPv4 1351622 TCP linux57poc:29757->linuxmaster2poc:61613 (ESTABLISHED)
Host: linux64poc Statuscode: 0 Output:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ruby 37054 root 4u IPv4 28036 0t0 TCP linux64poc:37476->linuxmaster2poc:61613 (ESTABLISHED)
ruby 37054 root 6u IPv4 27990 0t0 TCP linux64poc:40497->linuxmaster1poc:61613 (CLOSE_WAIT)
Summary: the old connections have gone into CLOSE_WAIT and new connections have been established.

2. Stop MQ2 and start MQ1, then watch the failover
[root@linuxmaster2poc rabbitmq]# /etc/init.d/rabbitmq-server stop
Stopping rabbitmq-server: rabbitmq-server.
[root@linux57poc service]# lsof -i:61613
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
ruby 18447 root 6u IPv4 1351559 TCP linux57poc:59343->linuxmaster1poc:61613 (CLOSE_WAIT)
ruby 18447 root 8u IPv4 1351622 TCP linux57poc:29757->linuxmaster2poc:61613 (CLOSE_WAIT)
[root@linux58poc ~]# lsof -i:61613
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ruby 11428 root 6u IPv4 34283 0t0 TCP linux58poc:36300->linuxmaster1poc:61613 (CLOSE_WAIT)
ruby 11428 root 8u IPv4 34338 0t0 TCP linux58poc:47205->linuxmaster2poc:61613 (CLOSE_WAIT)
[root@linux64poc ~]# lsof -i:61613
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ruby 37054 root 4u IPv4 28036 0t0 TCP linux64poc:37476->linuxmaster2poc:61613 (CLOSE_WAIT)
ruby 37054 root 6u IPv4 27990 0t0 TCP linux64poc:40497->linuxmaster1poc:61613 (CLOSE_WAIT)
[root@linuxmaster1poc ~]# /etc/init.d/rabbitmq-server start
Starting rabbitmq-server: SUCCESS
rabbitmq-server.
Because of plugin.rabbitmq.max_reconnect_delay = 30.0, it takes at most about 30 seconds before the MCollective servers re-establish their connections:
[root@linuxmaster1poc ~]# tailf /var/log/rabbitmq/rabbit@linuxmaster1poc.log
=INFO REPORT==== 24-Dec-2013::11:00:45 ===
accepting STOMP connection <0.332.0> (192.168.100.126:36316 -> 192.168.100.120:61613)
=INFO REPORT==== 24-Dec-2013::11:00:45 ===
accepting STOMP connection <0.348.0> (192.168.100.125:18945 -> 192.168.100.120:61613)
=INFO REPORT==== 24-Dec-2013::11:00:45 ===
accepting STOMP connection <0.382.0> (192.168.100.127:40513 -> 192.168.100.120:61613)
[root@linuxmaster1poc ~]# mco ping
linux58poc time=70.60 ms
linux57poc time=71.32 ms
linux64poc time=111.56 ms
---- ping statistics ----
3 replies max: 111.56 min: 70.60 avg: 84.49
[root@linuxmaster1poc ~]# mco shell "lsof -i:61613"
Do you really want to send this command unfiltered? (y/n): y
Discovering hosts using the mc method for 2 second(s) .... 3
Host: linux58poc Statuscode: 0 Output:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ruby 11428 root 6u IPv4 34283 0t0 TCP linux58poc:36300->linuxmaster1poc:61613 (CLOSE_WAIT)
ruby 11428 root 8u IPv4 34338 0t0 TCP linux58poc:47205->linuxmaster2poc:61613 (CLOSE_WAIT)
ruby 11428 root 10u IPv4 34444 0t0 TCP linux58poc:36316->linuxmaster1poc:61613 (ESTABLISHED)
Host: linux57poc Statuscode: 0 Output:
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
ruby 18447 root 10u IPv4 1351723 TCP linux57poc:18945->linuxmaster1poc:61613 (ESTABLISHED)
Host: linux64poc Statuscode: 0 Output:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ruby 37054 root 4u IPv4 28036 0t0 TCP linux64poc:37476->linuxmaster2poc:61613 (CLOSE_WAIT)
ruby 37054 root 6u IPv4 27990 0t0 TCP linux64poc:40497->linuxmaster1poc:61613 (CLOSE_WAIT)
ruby 37054 root 9u IPv4 28206 0t0 TCP linux64poc:40513->linuxmaster1poc:61613 (ESTABLISHED)
Reprinted from 凌激冰's 51CTO blog. Original link: http://blog.51cto.com/dreamfire/1344492. Please contact the original author if you need to reprint.
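The two exchanges added through the management UI in step I.4 can also be created from the command line with rabbitmqadmin, the CLI that ships with the management plugin (a hedged sketch; the flags follow rabbitmqadmin's usual syntax and the credentials are the ones created above):

# declare the two exchanges MCollective expects inside the /mcollective vhost
rabbitmqadmin --vhost=/mcollective --username=admin --password=123.com declare exchange name=mcollective_broadcast type=topic
rabbitmqadmin --vhost=/mcollective --username=admin --password=123.com declare exchange name=mcollective_directed type=direct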


Building a Simple Push Platform with Pomelo

Preface
Honestly, in my personal opinion the protocols used by the two default connectors pomelo currently ships, sioconnector and hybridconnector, are not really suitable for a mobile push platform. A public pomelo slide deck mentions that NetEase's message push platform was built on pomelo (one frontend held 300,000 long-lived connections and consumed 3 GB of memory, if I remember the figures correctly), but the frontend used there is an implementation based on the MQTT protocol, and I suspect that MQTT-based frontend will basically never be open-sourced. The point is only that the default frontends are not suited to building a large push platform (C10M scale); in general (C10K scale) they are, in my view, good enough.
To show off more pomelo features, the business logic here may differ from a real deployment. Please keep that in mind.

Architecture of the push platform
Overall architecture of the application (diagram omitted):
Backend: pomelo@0.4.3
Frontend: Android, web browser

Development conventions
Client request object:
{
  "role": "client/server",
  "apikey": "String",
  "clientId": "String"
}
Server response objects.
Sent to the web management client:
{
  "code": "Int httpCode ex: 200",
  "msg": "String",
  "users": "Array of client clientId values, ex: [\"android1\"]"
}
Sent to the Android client:
{
  "code": "Int httpCode ex: 200",
  "msg": "String"
}
Routes used by the clients:
android: connector route = sio-connector.entryHandler.enter, used to add the current client to the push channel
WebManagement: connector route = hybrid-connector.entryHandler.enter, used to connect to the server; backend route = pushserver.pushHandler.pushAll, used to push a message to all connected clients

Server-side code
Pomelo follows convention over configuration: much of it is agreed-upon configuration. The upside is a clear, readable architecture; the downside is that it needs a lot of documentation, and right now the official docs are scattered, poorly categorized, and lag behind development — sometimes you cannot tell which parameters an API takes without reading its source.

sioconnector / hybridconnector
Pomelo 0.3 added a new connector, hybridconnector, which supports plain sockets and WebSocket and uses a binary protocol. However, apart from the browser JS client and the C client, the other clients have not implemented it yet, so we also need a connector the Android client can use: sioconnector. A detailed comparison of the two will have to wait for a future revision of this article; for now it is enough to know that one is based on socket.io and the other on socket/WebSocket.

app.js: because we use two different connectors, app.js needs:

// support socket.io
app.configure('production|development', 'sio-connector', function(){
  app.set('connectorConfig', {
    connector: pomelo.connectors.sioconnector
  });
});
// support websocket and socket
app.configure('production|development', 'hybrid-connector', function(){
  app.set('connectorConfig', {
    connector: pomelo.connectors.hybridconnector,
    heartbeat: 300,
    useDict: true,
    useProtobuf: true
  });
});

With this configuration we can use the two different connectors.

Implementing the push
Pushing messages with pomelo is very convenient, and since we only care about pushing to all clients it becomes very simple.
Push flow:
1) Android clients are added to their push channel according to a uuid.
2) The web side pushes messages to all online clients according to the uuid.
For the sake of this tutorial the uuid is hard-coded as: xxx-xx--xx-xx

Adding a client to the corresponding channel:

// add the client to the push list
PushRemote.prototype.add = function(uid, role, sid, channelName, cb){
  var channel = this.channelService.getChannel(channelName, true);
  if (role === 'server'){
    // the web server side just gets the user list back
    cb(null, this.getUsers(channelName));
  } else {
    if (!!channel){
      channel.add(uid, sid);
    }
    // notify the onAdd event with the uuid
    // [{uid: userId, sid: frontendServerId}]
    var server = [{uid: channelName, sid: sid}];
    this.channelService.pushMessageByUids('onAdd', {msg: "add ok", users: this.getUsers(channelName)}, server, function(err){
      if (err){
        console.log(err);
        return;
      }
    });
  }
};

The frontend uses RPC to call the pushserver method that adds the client to its channel:

// sid is always the frontend server the web management client is on
this.app.rpc.pushserver.pushRemote.add(session, uid, role, 'connector-server-client', uuid, function(err, users){
  if (err){
    console.log(err);
    return;
  }
  if (users){
    next(null, {code: 200, msg: 'push server is ok.', users: users});
  } else {
    next(null, {code: 200, msg: "add ok", users: users});
  }
});

The web management side triggers the push:

Handler.prototype.pushAll = function(msg, session, next){
  var pushMsg = this.channelService.getChannel(msg.apikey, false);
  pushMsg.pushMessage('onMsg', {msg: msg.msg}, function(err){
    if (err){
      console.log(err);
    } else {
      console.log('push ok');
      next(null, {code: 200, msg: 'push is ok.'});
    }
  });
};

That is the main code for adding clients to the push queue and for pushing messages from the web management side — simple, isn't it?
The complete code is on my GitHub: https://github.com/youxiachai
One thing to note: if the pomelo project is deployed to a LAN or the public network, the frontend's host must be set to the IP address of the machine it runs on. For example:
"connector": [
  {"id": "connector-server-1", "host": "127.0.0.1", "port": 3150, "clientPort": 3010, "frontend": true}
]
when deployed to a particular server needs to become:
"connector": [
  {"id": "connector-server-1", "host": "192.168.1.107", "port": 3150, "clientPort": 3010, "frontend": true}
]
and the clients then connect to that host address.
Client and server GitHub repository: https://github.com/youxiachai/pomelo-pushServer-Demo

Appendix
If you are interested in pomelo, you can read my pomelo tutorial series (still unfinished, so for now it is only published on my blog), four articles so far covering most of the pomelo basics:
http://blog.gfdsa.net/tags/pomelo/
Is anyone in Guangzhou hiring a Node.js programmer (with two years of Android development experience)? An interview opportunity would be appreciated; contact: youxiachai@gmail.com
Communities I participate in:
github: https://github.com/youxiachai
cnodejs (No. 14 on the points leaderboard...): http://cnodejs.org/user/youxiachai
Copyright notice: original work; please credit the source when reprinting, otherwise legal responsibility will be pursued.
Reprinted from youxiachai's blog. Original link: http://blog.51cto.com/youxilua/1223909. Please contact the original author if you need to reprint.
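For readers who want to try the demo, bringing up a pomelo project from the repository above typically looks like this (a hedged sketch: the game-server directory layout and the pomelo CLI commands follow the usual pomelo conventions and are not spelled out in the article):

# install the pomelo CLI, fetch the demo, and start the servers
npm install -g pomelo
git clone https://github.com/youxiachai/pomelo-pushServer-Demo.git
cd pomelo-pushServer-Demo/game-server && npm install
pomelo start    # starts the servers defined in config/servers.json
pomelo list     # lists the running servers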
