
精选列表


Android -- NDK开发入门

第一步,建立一个普通的Android项目HelloNDK,然后在与src同级的目录下新建一个jni目录。

第二步,在jni目录下新建一个hello_ndk.c文件,代码如下:

#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <jni.h>
#include <android/log.h>

/******** 获取字符串 ********/
jstring Java_com_example_hellondk_MainActivity_readJNIString(JNIEnv* env, jobject obj)
{
    return (*env)->NewStringUTF(env, "Hello from JNI --- 22222 !");
}

说明如下:
Java_com_example_hellondk_MainActivity_readJNIString // 函数名由三部分组成:Java + com_example_hellondk_MainActivity(包名+Activity名) + readJNIString(在Java代码中调用的方法名)
(*env)->NewStringUTF(env, "Hello from JNI --- 22222 !"); // 返回给Java的字符串不能直接return C字符串,必须通过(*env)->NewStringUTF()构造后返回

第三步,在jni目录下新建一个Android.mk文件,代码如下:

LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE := hello_ndk
LOCAL_SRC_FILES := hello_ndk.c
LOCAL_LDLIBS := -llog
include $(BUILD_SHARED_LIBRARY)

说明如下:
LOCAL_MODULE := hello_ndk // 决定生成的.so文件名,也是后面Add Native Support时要填写的名字
LOCAL_SRC_FILES := hello_ndk.c // 需要编译的C源文件名

第四步,为Android项目添加Native Support:选中项目右键 Android Tools -- Add Native Support,在打开的Add Android Native Support对话框中输入第三步中配置的LOCAL_MODULE的值(即hello_ndk)。

第五步,修改MainActivity.java文件,代码如下:

package com.example.hellondk;

import android.os.Bundle;
import android.app.Activity;
import android.view.Menu;
import android.widget.TextView;

public class MainActivity extends Activity {
    TextView mTestTv;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        mTestTv = (TextView) findViewById(R.id.test_tv);
        mTestTv.setText(readJNIString());
    }

    private native String readJNIString();

    static {
        System.loadLibrary("hello_ndk"); // 加载libhello_ndk.so
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        // Inflate the menu; this adds items to the action bar if it is present.
        getMenuInflater().inflate(R.menu.main, menu);
        return true;
    }
}

说明如下:项目运行时首先会执行static块中的代码,加载hello_ndk.so;然后执行onCreate里的方法,调用C函数获取返回值并显示在TextView控件上。

第六步,运行Android Application项目,程序会首先编译生成.so文件,成功后继续运行Android项目。

我是天王盖地虎的分割线

本文转自我爱物联网博客园博客,原文链接:http://www.cnblogs.com/yydcdut/p/3908734.html,如需转载请自行联系原作者
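
在上例基础上,如果还想从Java向C传递参数,只需在Java侧再声明一个带参数的native方法即可。下面是一个假设性的示意(readJNIStringWithName这个方法名是举例,并非原文代码;对应的C函数名应为 Java_com_example_hellondk_MainActivity_readJNIStringWithName,C侧可用 (*env)->GetStringUTFChars 读取参数):

package com.example.hellondk;

import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

// 示意代码:演示带参数的native方法声明与调用方式
public class MainActivity extends Activity {

    // 对应的C函数签名应为:
    // jstring Java_com_example_hellondk_MainActivity_readJNIStringWithName(JNIEnv*, jobject, jstring)
    private native String readJNIStringWithName(String name);

    static {
        System.loadLibrary("hello_ndk"); // 与LOCAL_MODULE保持一致
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        TextView tv = new TextView(this);
        tv.setText(readJNIStringWithName("NDK")); // 把字符串传给C层并显示返回值
        setContentView(tv);
    }
}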


物联网入门指南

智能门锁、智能温控、智能汽车……你可能最近经常听到这些名词,今后他们会出现的更频繁。但是这些到底是什么呢?是什么让他们那么智能呢? 这些设备被称为物联网(loT),简单来说,物联网是让日常物品连接到互联网并且互相通讯,目的是让用户有更智能、高效的体验。最近的一些例子包括NEST烟雾探测器和August门锁。 但是与任何新技术相比,物联网可能会给消费者带来疑惑和恐惧,尤其是越来越多的公司加入到这场标准、安全、隐私之战的漩涡中。我编写了一个常见问题清单,让大家更容易明白物联网怎么工作,这些设备在现实生活中怎么用,它带来的一些问题和面临的一些挑战。 跟我对话的人物来自于物联网产品行业和行业标准企业,包括苹果,SmartThings,物联网联盟,AllSeen联盟,the Open Interconnect联盟和the Thread Group。 到底什么是物联网? “物联网”这个流行词背后的思路是让无生命的物体带有内置的无线连接,让他们可以监控,控制,并通过移动应用程序连接到互联网上。 连接对象的类型涵盖范围广泛,从穿戴物到灯泡家电(如咖啡机,洗衣机,甚至你的车),真的是任何东西。它也被应用到像医疗保健行业和交通系统这种垂直市场上。 行,我觉得我懂了,但是你能给我个例子吗,它现在是怎么被用在现实生活中,又是怎么让我的生活更便利的呢? 其中一个广为人知的例子是NEST温控器。这个通过wifi连接的温控器让你可以通过你的手机远程调节温度,并且有自己的记忆模式去创造一个温度设定表。 这个的潜在价值就在于帮你节省费用,如果出门前忘记关空调的话,你可以远程关闭它。这也是一个便利的因素,NEST可以记住你喜欢在睡觉时调低温度,可以在你设定的时间自动调低。 另一家公司是在八月份被三星收购的SmartThings,提供各种传感器和智能家居套件,可以监测谁进出你的房子,可以提醒你潜在的漏水危机,让业主安心。 随着物联网的扩大,产品日益成熟,我们可以设想这样一个场景:你的健身追踪器发现你已经睡着了,然后自动关闭电视和灯光。或者,出门前,你的车可以查看工作日程,并自动提供最佳路线去赴会,或者在你要迟到的时候发送一条简讯给有关各方。 从大的方面来说,城市可以利用它来监测数不胜数的停车位、空气、水质和交通。 物联网是怎么工作的? 首先,基础上来说,各种无线电让这些设备之间的连接和联网成为可能。包括我们更熟悉的wifi、低功耗蓝牙、NFC、RFID,还有一些你可能没有听说过的,像 ZigBee, Z-Wave和6LoWPAN。 然后是这些设备本身,不管是运动监测器、门锁或是灯泡,还有可能是中心轮毂,会允许不同的设备彼此连接。 最后,还有云服务,它使数据得以收集和分析,所以我们才能看到到底发生了什么,也才能通过手机应用程序来控制。 正在研究物联网的是些什么公司? 在这一点上,更简单的问题应该是说谁没有在研究物联网产品。知名的大公司像是三星、LG、苹果、google、Lowe’s、飞利浦都致力于这方面,和一些小公司、初创公司一样。Gartner预测,49亿物联网设备将在今年投入使用,到2020年这个数字将达到250亿。 那么,是不是所有的设备都能相互通讯呢? 这就有一点复杂了。因为很多公司都在研发不同的产品、技术、平台,让所有设备彼此都能通讯就不再那么简单了,无缝整体兼容应该是不会实现的。 一些组织正在努力搭建一个开放式标准,让不同产品保持互通性。其中包括the AllSeen Alliance,它的成员有高通、LG、微软、松下、索尼;还有 the Open Interconnect Consortium,包括英特尔、思科、通用、三星、惠普。 虽然他们的最终目标是一致的,但是依然有一些分歧需要克服。例如说,OIC表示 the AllSeen Alliance在安全和知识产权保护方面做的还不够。但是the AllSeen Alliance说因为他们有超过110家的成员,所以这不会是一个问题。 现在我们还不清楚这场标准之战最后会如何结束,但是我们能想到的是,最后一定会有三到四个不同的标准,而不是一家独大的局面(想想看IOS和Android)。 同时,对于消费者来说,解决这些问题的办法最好是选择一个支持多种无线电技术的设备,像是SmartThings提供的一样。 看上去这些设备会手机很多数据。我应该为隐私和安全担心吗? 智能家电、联网汽车、可穿戴设备收集的大量数据不免让人们开始担心起安全问题,他们恐惧这些个人数据会落入不法分子手中。接入点数量的增加同样带来了安全隐患。 美国联邦贸易委员会已经表示了关注,并且建议企业采取多种防护措施来保护消费者信息安全。然而委员会并没有权利强制执行物联网设备的相关规定,所以还不清楚会有多少企业听取他的意见。 很多公司都说保障安全和隐私是他们的重中之重。例如,苹果公司要求在其HomeKit平台开发产品时,必须有点对点加密、认证和隐私权政策。他们还承诺说不会从HomeKit平台附件中收集用户的任何数据。 我已经充分的了解了物联网,现在是买入的好时机了吗? 虽然物联网已经发展了很多年,但是现在刚开始进入我们的消费领域,类别也尚未成熟。不过也有好的产品出现。如果你现在想要购买,就像你的任何考虑一样,买你信任的公司的产品,确保你能得到一个真正实际解决你问题的方案。毕竟确保你的孩子从学校安全到家是一回事,但是在一台联网的crockpot上烤面包又是另一回事。 作者:何妍 来源:51CTO


hadoop安装入门

1.jdk安装和配置

1.1 下载最新jdk文件
http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html

1.2 配置环境变量
vi /etc/profile
在文件末尾加入如下内容:

JAVA_HOME=/usr/local/jdk
JAVA_CLASSPATH=$JAVA_HOME/lib
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME JAVA_CLASSPATH PATH

并使上面的文件生效:
source /etc/profile
java -version

2.hadoop安装

首先需要配置运行环境,在etc/hadoop/hadoop-env.sh文件中增加:
export JAVA_HOME=/usr/local/jdk

一、配置core-site.xml
/usr/local/hadoop/etc/hadoop/core-site.xml 包含了hadoop启动时的配置信息。
编辑器中打开此文件:
sudo gedit /usr/local/hadoop/etc/hadoop/core-site.xml
在该文件的<configuration></configuration>之间增加如下内容:

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

保存、关闭编辑窗口。

二、配置yarn-site.xml
/usr/local/hadoop/etc/hadoop/yarn-site.xml 包含了YARN/MapReduce启动时的配置信息。
编辑器中打开此文件:
sudo gedit yarn-site.xml
在该文件的<configuration></configuration>之间增加如下内容:

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

保存、关闭编辑窗口。

三、创建和配置mapred-site.xml
默认情况下,/usr/local/hadoop/etc/hadoop/文件夹下有mapred-site.xml.template文件,我们要复制该文件,并命名为mapred-site.xml,该文件用于指定MapReduce使用的框架。
复制并重命名:
cp mapred-site.xml.template mapred-site.xml
编辑器打开此新建文件:
sudo gedit mapred-site.xml
在该文件的<configuration></configuration>之间增加如下内容:

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

保存、关闭编辑窗口。

四、配置hdfs-site.xml
/usr/local/hadoop/etc/hadoop/hdfs-site.xml 用来配置集群中每台主机都可用,指定主机上作为namenode和datanode的目录。
先创建存放数据的文件夹(例如 /usr/local/hadoop/hdfs/name 和 /usr/local/hadoop/hdfs/data)。你也可以在别的路径下创建文件夹,名称也可以不同,但是需要和hdfs-site.xml中的配置一致。
编辑器打开hdfs-site.xml,在该文件的<configuration></configuration>之间增加如下内容:

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/usr/local/hadoop/hdfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/usr/local/hadoop/hdfs/data</value>
</property>

保存、关闭编辑窗口。

五、格式化hdfs
hdfs namenode -format
只需要执行一次即可,如果在hadoop已经使用后再次执行,会清除掉hdfs上的所有数据。

六、启动Hadoop
经过上文所描述的配置和操作后,下面就可以启动这个单节点的集群。
执行启动命令:
sbin/start-dfs.sh
执行该命令时,如果有yes/no提示,输入yes,回车即可。
接下来,执行:
sbin/start-yarn.sh
执行完这两个命令后,Hadoop会启动并运行。执行jps命令,会看到Hadoop相关的进程。
浏览器打开 http://localhost:50070/,会看到hdfs管理页面;
浏览器打开 http://localhost:8088,会看到hadoop进程管理页面。

七、WordCount验证
dfs上创建input目录:
bin/hadoop fs -mkdir -p input
把hadoop目录下的README.txt拷贝到dfs新建的input里:
hadoop fs -copyFromLocal README.txt input
运行WordCount:
hadoop jar share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.4.0-sources.jar org.apache.hadoop.examples.WordCount input output
运行完毕后,查看单词统计结果:
hadoop fs -cat output/*

分类: 大数据

本文转自快乐就好博客园博客,原文链接:http://www.cnblogs.com/happyday56/p/4369853.html,如需转载请自行联系原作者
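
上面第七步运行的WordCount示例,其核心逻辑大致相当于下面这个简化的Java示意(按Hadoop 2.x MapReduce API手写的草图,类名WordCountSketch为假设,并非示例jar中的源码):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// 简化示意:统计输入目录中每个单词出现的次数
public class WordCountSketch {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);   // 每遇到一个单词输出 (word, 1)
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();             // 把同一个单词的计数累加
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count sketch");
        job.setJarByClass(WordCountSketch.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // 对应上文的 input
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // 对应上文的 output
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}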


Kubernetes集群入门(1)

一、K8s安装准备

1.至少两台主机,一台作为master,一台作为node。两台主机需要关闭防火墙。

#centos6
service iptables stop && chkconfig iptables off
#centos7
systemctl stop firewalld && systemctl disable firewalld

2.两台机器需要各自编辑/etc/hosts文件,互相添加hostname,然后相互ping通,以下为例:

echo "192.168.18.128 centos-master
192.168.18.130 centos-minion
" >> /etc/hosts

二、K8s的安装

1.两台主机都需要安装docker、kubernetes,如有docker版本冲突需要卸载后重新安装docker:
yum -y install docker kubernetes

2.master节点需要安装etcd服务,etcd作为kubernetes的数据库:
yum -y install etcd

3.每个节点(master及minion节点)都需要修改kubernetes配置文件:
vim /etc/kubernetes/config

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://centos-master:8080"
# master节点上指向etcd的地址可能需要改成127.0.0.1:2379,改成主机名的话kube-controller-manager可能会启动失败,不知原因
KUBE_ETCD_SERVERS="--etcd_servers=http://centos-master:2379"

master节点config示例:

# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://kube01:8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://kube01:2379"

4.master节点上,配置api服务给node:
vim /etc/kubernetes/apiserver

# The address on the local server to listen to.
# 这个地址好像只能用0.0.0.0
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"
# Comma separated list of nodes in the etcd cluster
#KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"
# ServiceAccount这个参数要删掉,否则会影响docker拉取镜像
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

示例:master节点apiserver:

# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""

5.master节点上编写启动相关kubernetes服务的脚本:
vim k8s-server.sh

#!/bin/bash
OPT=$1
case $1 in
-s)
  for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
  done
  ;;
-k)
  for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl stop $SERVICES
  done
  ;;
-stat)
  for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl status $SERVICES
  done
  ;;
*)
  echo "usage: ./k8s-server.sh <-s|-k|-stat>  '-s' start servers, '-k' stop servers, '-stat' watch the status"
  ;;
esac

6.node节点修改/etc/kubernetes/kubelet,配置与master的连接:

###
# kubernetes kubelet (minion) config
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=centos-minion"
KUBELET_API_SERVER="--api_servers=http://centos-master:8080"
# Add your own!
KUBELET_ARGS=""

minion节点config示例:

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://kube01:8080"

minion节点kubelet示例:

###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
#KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=kube02"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://kube01:8080"
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""

7.node节点编写启动和查看服务的脚本:

#!/bin/bash
OPT=$1
case $1 in
-s)
  for SERVICES in kube-proxy kubelet docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
  done
  ;;
-k)
  for SERVICES in kube-proxy kubelet docker; do
    systemctl stop $SERVICES
  done
  ;;
-stat)
  for SERVICES in kube-proxy kubelet docker; do
    systemctl status $SERVICES
  done
  ;;
*)
  echo "usage: ./k8s.sh <-s|-k|-stat>  '-s' start servers, '-k' stop servers, '-stat' watch the status"
  ;;
esac

8.node节点查看是否成功注册到master节点,如果没关闭防火墙会报错:
tail -f /var/log/messages | grep kube

9.master节点查看刚才注册的节点,节点STATUS为Ready即为正常:
kubectl get nodes

10.kubectl是master端的交互工具,可以通过子命令查看节点等信息:
kubectl get nodes    #获取节点列表
kubectl cluster-info #查看集群信息

下一节演示一个简单的kubernetes实例,master节点通过yaml文件,让node节点自动pull镜像并运行。

附:如果启动docker报错(当前docker版本1.13.1),执行启动命令 systemctl start docker 时报下面错误:

Error starting daemon: SELinux is not supported with the overlay2 graph driver on this kernel.
Either boot into a newer kernel or disable selinux in docker (--selinux-enabled=false)

重新编辑docker配置文件:
vi /etc/sysconfig/docker

# /etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled=false --log-driver=journald --signature-verification=false'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi

保存退出(:wq)后重启docker:
systemctl restart docker


Storm入门之附录A

本文翻译自《Getting Started With Storm》译者:吴京润 编辑:郭蕾 方腾飞

安装Storm客户端

Storm客户端能让我们使用命令管理集群中的拓扑。按照以下步骤安装Storm客户端:

从Storm站点下载最新的稳定版本(https://github.com/nathanmarz/storm/downloads),当前最新版本是storm-0.8.1。(译者注:原文是storm-0.6.2,不过翻译的时候已经是storm-0.8.1了)
把下载的文件解压缩到/usr/local/bin/storm这样的Storm共享目录。
把Storm目录加入PATH环境变量,这样就不用每次都输入全路径执行Storm了。如果我们使用了/usr/local/bin/storm,执行 export PATH=$PATH:/usr/local/bin/storm。
最后,创建Storm本地配置文件 ~/.storm/storm.yaml,在配置文件中按如下格式加入nimbus主机:

nimbus.host: "我们的nimbus主机"

现在,你可以管理你的Storm集群中的拓扑了。

NOTE: Storm客户端包含运行一个Storm集群所需的所有Storm命令,但是要运行它你需要安装一些其它的工具并做一些配置。详见附录B。

有许多简单且有用的命令可以用来管理拓扑,它们可以提交、杀死、禁用、再平衡拓扑。

jar命令负责把拓扑提交到集群并执行它,通过StormSubmitter执行主类:

storm jar path-to-topology-jar class-with-the-main arg1 arg2 argN

path-to-topology-jar是拓扑jar文件的全路径,它包含拓扑代码和依赖的库。class-with-the-main是包含main方法的类,这个类将由StormSubmitter执行,其余的参数作为main方法的参数。

我们能够挂起或停用运行中的拓扑。当停用拓扑时,所有已分发的元组都会得到处理,但是spouts的nextTuple方法不会被调用。

停用拓扑:

storm deactivate topology-name

启动一个停用的拓扑:

storm activate topology-name

销毁一个拓扑,可以使用kill命令。它会以一种安全的方式销毁一个拓扑:首先停用拓扑,在等待拓扑消息的时间段内允许拓扑完成当前的数据流。

杀死一个拓扑:

storm kill topology-name

NOTE: 执行kill命令时可以通过 -w [等待秒数] 指定拓扑停用以后的等待时间。

再平衡使你重分配集群任务。这是个很强大的命令。比如,你向一个运行中的集群增加了节点,再平衡命令将会停用拓扑,然后在相应超时时间之后重分配工人,并重启拓扑。

再平衡拓扑:

storm rebalance topology-name

NOTE: 执行不带参数的Storm客户端可以列出所有的Storm命令。完整的命令描述请见:https://github.com/nathanmarz/storm/wiki/Command-line-client。

文章转自并发编程网-ifeve.com
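
上面jar命令里的class-with-the-main,大致是下面这种形式的main类(按0.8.x时代backtype.storm API写的最小示意拓扑,类名、组件名均为举例,并非书中的示例代码):

import java.util.Map;
import java.util.Random;

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;
import backtype.storm.utils.Utils;

public class TopologyMain {

    // 示意Spout:每隔一秒随机发射一个单词
    public static class WordSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        private final String[] words = {"storm", "spout", "bolt", "topology"};
        private final Random rand = new Random();

        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
        }

        public void nextTuple() {
            Utils.sleep(1000);
            collector.emit(new Values(words[rand.nextInt(words.length)]));
        }

        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word"));
        }
    }

    // 示意Bolt:把收到的单词打印出来
    public static class PrintBolt extends BaseBasicBolt {
        public void execute(Tuple input, BasicOutputCollector collector) {
            System.out.println("received: " + input.getStringByField("word"));
        }

        public void declareOutputFields(OutputFieldsDeclarer declarer) {
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("word-reader", new WordSpout());
        builder.setBolt("word-printer", new PrintBolt(), 2).shuffleGrouping("word-reader");

        Config conf = new Config();
        conf.setNumWorkers(2);

        // args[0] 作为拓扑名,对应 storm kill/deactivate 等命令里的 topology-name
        StormSubmitter.submitTopology(args[0], conf, builder.createTopology());
    }
}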


ELK之Elasticsearch入门

一、概述:

1、查看elasticsearch集群的健康状况:

[root@node115 kibana]# curl -X GET http://192.168.39.115:9200/_cat/health?v
epoch      timestamp cluster       status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1487146452 16:14:12  elasticsearch green           2         2     42  21    0    0        0             0                  -                100.0%

2、查看elasticsearch集群的节点信息:

[root@node115 kibana]# curl -X GET http://192.168.39.115:9200/_cat/nodes?v
ip             heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.39.115           30          24   1    0.43    0.20     0.22 mdi       *      node-1
192.168.39.175           43         100   1    0.53    0.42     0.39 mdi       -      node-2

3、查看index信息:

[root@node115 kibana]# curl -X GET http://192.168.39.115:9200/_cat/indices?v
health status index               uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana             e1EFtVxJSeS1jLxeX7kLSg   1   1          2            0     21.4kb         10.7kb
green  open   logstash-2017.02.15 i2rSKoflRtiEZoj9CmgEOA   5   1    6916294            0      4.3gb          2.1gb
green  open   logstash-2017.02.14 zMnftdmDQRShH86XsmwWlw   5   1       6127            0      4.8mb          2.4mb
green  open   filebeat-2017.02.15 alVH725IQgmwZiYvuW3Dsg   5   1    6922434            0      3.8gb          1.9gb
green  open   filebeat-2017.02.14 E2v3vgcPQz6jrDhxZMprOw   5   1       6127            0        4mb            2mb

4、创建index:

PUT /customer?pretty    #创建名为customer的index

5、查看index的详细信息:

curl -X GET http://192.168.39.115:9200/logstash-2017.02.15?pretty

6、删除index:

DELETE /customer?pretty    #删除index(customer)

7、查看index上设置信息:

[root@node115 kibana]# curl -X GET http://192.168.39.115:9200/logstash-2017.02.16/_settings
{
  "logstash-2017.02.16": {
    "settings": {
      "index": {
        "refresh_interval": "5s",
        "number_of_shards": "5",
        "provided_name": "logstash-2017.02.16",
        "creation_date": "1487203204821",
        "number_of_replicas": "1",
        "uuid": "-Bwpq-5TT8mzLMJeU7PQ7A",
        "version": {
          "created": "5020099"
        }
      }
    }
  }
}

8、修改index上的配置信息:

二、kafka相关:

查看topic:
bin/kafka-topics.sh --list --zookeeper localhost:2181
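
如果想在程序里做同样的健康检查,也可以直接用Java标准库发一个HTTP GET请求(示意草图:地址与端口沿用上文的192.168.39.115:9200,属于本文环境的假设,未使用任何第三方客户端):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// 示意:查询集群健康状况,等价于上文的 curl .../_cat/health?v
public class EsHealthCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://192.168.39.115:9200/_cat/health?v");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setConnectTimeout(3000);
        conn.setReadTimeout(3000);

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);   // 输出与curl相同的表格文本
            }
        } finally {
            conn.disconnect();
        }
    }
}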


Docker入门(基于CentOS)

一 安装(Install with yum)

查看当前Linux内核:
uname -r

更新yum:
sudo yum update

添加yum仓库:
sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF

安装docker:
yum install docker-engine

查看docker版本:
docker -v

设置docker服务开机自启:
systemctl enable docker.service

启动docker:
systemctl start docker

查看docker进程:
ps -aux | grep docker

查看docker容器:
docker ps -a


SparkStreaming入门及例子

看书大概了解了下Streaming的原理,但是木有动过手啊。。。万事开头难啊,一个wordcount 2小时怎么都运行不出结果。是我太蠢了,好了言归正传。

SparkStreaming是一个批处理的流式计算框架,适合实时数据与历史数据混合处理的场景(比如,你用streaming将实时数据读入处理,再使用sparkSQL提取历史数据,与之关联处理)。Spark Streaming将数据流以时间片为单位分割形成RDD,使用RDD操作处理每一块数据,每块数据都会生成一个spark JOB进行处理,最终以批处理方式处理每个时间片的数据。(多的就不解释了,百度就好了~)

首先确保你安装了hadoop和spark,在IDEA中也已导入了相应的jar包。

写吧- - 新手要注意:spark官网上给的例子是调用socketTextStream方法,这是通过socket连接远程数据源的;倘若只在本机上测试学习,就用textFileStream读取本地文件路径。没错,是路径不是文件,因为sparkStreaming处理的是实时数据,倘若直接指定一个已存在的文件,是无法得到输出结果的。所以新建了个路径,并设置了Seconds(20),每20秒读取一次。随后run一下。

启动后,将准备好的文件cp到这个路径下,20秒过后结果就出来了,模拟了下实时数据。结束。
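
原文没有贴出代码,下面给出一个按上述思路写的最小word count示意(按Spark 2.x的Java API,监控目录/tmp/streaming-input为假设路径,批次间隔20秒,仅供参考):

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

import scala.Tuple2;

// 示意:每20秒读取一次指定目录下新增的文本文件,统计单词个数
public class StreamingWordCountSketch {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("StreamingWordCount");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(20));

        // 监控目录(路径为假设),只会处理程序启动后新拷贝进来的文件
        JavaDStream<String> lines = jssc.textFileStream("/tmp/streaming-input");

        JavaDStream<String> words = lines.flatMap(line -> Arrays.asList(line.split(" ")).iterator());
        JavaPairDStream<String, Integer> counts = words
                .mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKey((a, b) -> a + b);

        counts.print();     // 每个批次打印统计结果

        jssc.start();
        jssc.awaitTermination();
    }
}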


Docker入门者手册

Docker - Beginner's tutorial Docker is a relatively new and rapidly growing project that allows to create very light “virtual machines”. The quotation marks here are important, what Docker allows you to create are not really virtual machines, they’re more akin to chroots on steroids, a lot of steroids. Before we continue, let me clear something up. As of right now (4th of January of 2015)Docker works only on Linux, it cannot function natively on Windows or OSX. I’ll be talking about the architecture of Docker later on and the reason will become obvious. So if you want Docker on a platform that is not Linux, you’ll need to run Linux on a VM. This tutorialhas three objectives:explainingwhat problem it solves, explaininghow it solves itat a high level, andexplainingwhat technologies does it useto solve it. This is not a step-by-step tutorial,there arealreadymany goodstep-by-steptutorialson Docker,including an online interactive one from the authors of Docker. That said, there is a little step-by-step at the end, it's just there to connect all of the theory I present during the post with a clearcut realworld example, but is by no means exhaustive. What can Docker can do for you? Docker solves many of the same problem that a VM solves, plus some other that VMs could solve if they didn’t were so resource intensive. Hereare some of the things that Docker can deal with: Isolating an application dependencies Creating an application image and replicating it Creating ready to start applications that are easily distributable Allowing easy and fast scalation of instances Testing out applications and disposing them afterwards The idea behind Docker is to createportablelightweightcontainers for software applicationsthat can be run on any machine with Docker installed, regardless of the underlying OS, akin to the cargo containers used on ships. Pretty ambitious, and they’re succeeding. What does Docker do exactly? In this section I will not be explaining what technologies Docker uses to do what it does, or what specific commands are available, that’s on the last section, here I’ll explain the resources and abstractions that Docker offers. The two most important entities in Docker areimagesandcontainers. Aside from those,linksandvolumesare also important. Let’s start with images. Images Images on Docker are like thesnapshot of a virtual machine, but way more lightweight, way way way more lightweight (more on the next section). There are several ways to create an image on Docker, most of them rely on creating an new image based on an already existing image, and since there are public images to pretty much everything you need, including for all the major linux distributions, it’s not likely that you will not find one that suit your needs. If you however feel the need to build andimage from scratch,thereareways. To create an image you take one image and modify it to create a child image. This can be done either through a file that specifies a base image and the modifications that are to be done, or live by “running” an image, modifying it and committing it. There are advantages to each method, but generally you’ll want to use a file to specify the changes. Images have an unique ID, and an unique human-readable name andtag pair. Images can be called, for example, ubuntu:latest, ubuntu:precise, django:1.6, django:1.7, etc. Containers Now onto containers. From images you can create containers, this is the equivalent of creating a VM from a snapshot, but way more lightweight. 
Containers are the ones that run stuff. Let use an example, you could download an image of ubuntu (there is a public repository of images called thedocker registry), modify it by installing Gunicornand your Django app with all its dependencies, and then create a container from that image that runs your app when it starts. Containers, like VMs, are isolated (with one little caveat that I’ll discuss later). They also have an unique ID and a unique human-readablename. It’s necessary for containers to expose services, so Docker allows you to expose specific ports of a container. Containers have two main differences that separate them from VMs. The first one is that they are designed torun a single process, they don’t simulate well a complete environment (if that’s what you need check outLXC). You may be temptedto run a runitor supervisord instance and get several processes up,but it’s reallynot necessary(in my humble opinion). The whole single process vs multiple processes issomewhat ofan outstandingdebate. You should know that the Docker designers heavily promote the"one process per container approach", and that the only case where you really have no other option but to run more than one process is to run something like ssh, to access the container while it is running for debugging purposes, however the commanddocker execsolves that problem. The second big difference between containers and VMsis that when you stop a VM, no files are erased besides maybe some temporary files, when you stop a Docker container allchangesdone to the initialstate(the state of the image from which the container was created) arelost. This is one of the biggest changes in mindset that one must make when working with Docker:containers are ephemeral and disposable. Volumes So if your ecommerce website had just received payments for 30000$ that were already charged to the clients and you get a kernel panic, all changes to the database are lost...not very good publicity, for you or Docker, but fear not. Docker allows you to definevolumes, spaces inside the container that can hold persistent data. Docker forces you to define what parts are yourapplicationand what parts are yourdata, and demands that you keep themseparated. Volumes are specific to each container, you can create several containers from a single image and define different volumes for each. Volumes are stored in the filesystem of the host running Docker, you can specify the directory where a volume will be stored, or let Docker store them in a default location. Whatever is not a volume is stored in other type of filesystem, but more on that later. Links Linksare another very important part of Docker. Whenever a container is started, a random private IP is assigned to it, other containers can use this IP address to communicate with it. This is important for 2 reasons: first it provides a way for containers to talk to each other, second containers share a local network. I had a problem once when I started two elasticsearch containers for two clients on the same machine, but left the cluster name to the default setting, the two elasticsearch servers promptly made an unsolicited cluster. To allow intercontainer communicationDocker allows you to reference other existing containers when spinning up a new one, those referenced containers receive an alias (that you specify) inside the container you just created. We say that the two containers arelinked. 
So if my DB container is already running, I can create my webserver container and reference the DB container upon creation, giving it analias,dbappfor example. When inside my newly created webserver container I can use the hostnamedbappto communicate with my DB container at any time. Docker takes it one step further, requiring you to state which ports a container will make available to other containers when it is linked, otherwise no ports will be available. Portability of Docker images There is one caveat when creating images. Docker allows you specify volumes and ports in an image. Containers created from that image inherit those settings. However, Docker doesn’t allow you to specify anything on an image that is not portable. For example, you can define volumes in an image, just as long as they’re stored on the default location that Docker uses. This is because if you were to specifya certain directory within the host filesystem to store the volume, there is no guarantee that that directory will exists on every other host where thatimage might be used. You can define exposed ports, but only those ports that are exposed to other containers when links are created, you can’t specify ports exposed to the host, since you don't know wich ports will be available on the hosts that might use thatimage. You can’t define links on an image either. Making a link requires you to reference another container by name, and you can't know beforehand how will the containers be named on every host that might use the image. Images must be completely portable, Docker doesn’t allow otherwise. So those are the primary moving parts, you create images, use those to create containers that expose ports and have volumes if needed, and connect several containers together with links. How can this all work with little to no overhead? How does Docker do what it needs to be done? Two words:cgroupsandunion filesystems. Docker uses cgroups to provide container isolation, and union filesystem to store the images and make containers ephemeral. Cgroups This is a Linux kernel feature that makes two things possible: Limit resource utilization (RAM, CPU) for Linuxprocess groups Make PID, UTS, IPC, Network, User and mount namespaces for process groups The keyword here is namespace. APID namespace, for example, permits processes in it to use PIDs isolated and independent of the main PID namespace, so you could have your own init process with a PID of 1 within a PID namespace. Analogous for all the other namespaces. You can then use cgroups to create an environment where processes can be executed isolated from the rest of your OS, but the key here is that the processes on this environmentuse your already loaded and running kernel, so the overhead is pretty much the same as running another process. Chroot is to cgroups whatIamtoThe Hulk,BaneandVenomcombined. Union filesystems An union filesystem allows a layered accumulation of changes through an union mount. In an union filesystem several filesystems can be mounted on top of each other, the result is a layered collection of changes. Each filesystem mounted represents a collection of changes to the previous filesystem, like a diff. When you download an image, modify it, and store your new version, you’ve just made a new union filesystem to be mounted on top of the initial layers that conformed your base image. 
This makes Docker images very light, for example: your DB, Nginx and Syslog images can all share the same Ubuntu base, each one storing only the changes from this base that they need to function. As of January 4th2015, Docker allows to use eitheraufs,btrfsordevice mapperfor union filesystems. Images Let me show you an image of postgresql: [{ "AppArmorProfile": "", "Args": [ "postgres" ], "Config": { "AttachStderr": true, "AttachStdin": false, "AttachStdout": true, "Cmd": [ "postgres" ], "CpuShares": 0, "Cpuset": "", "Domainname": "", "Entrypoint": [ "/docker-entrypoint.sh" ], "Env": [ "PATH=/usr/lib/postgresql/9.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "LANG=en_US.utf8", "PG_MAJOR=9.3", "PG_VERSION=9.3.5-1.pgdg70+1", "PGDATA=/var/lib/postgresql/data" ], "ExposedPorts": { "5432/tcp": {} }, "Hostname": "6334a2022f21", "Image": "postgres", "MacAddress": "", "Memory": 0, "MemorySwap": 0, "NetworkDisabled": false, "OnBuild": null, "OpenStdin": false, "PortSpecs": null, "StdinOnce": false, "Tty": false, "User": "", "Volumes": { "/var/lib/postgresql/data": {} }, "WorkingDir": "" }, "Created": "2015-01-03T23:56:12.354896658Z", "Driver": "devicemapper", "ExecDriver": "native-0.2", "HostConfig": { "Binds": null, "CapAdd": null, "CapDrop": null, "ContainerIDFile": "", "Devices": null, "Dns": null, "DnsSearch": null, "ExtraHosts": null, "IpcMode": "", "Links": null, "LxcConf": null, "NetworkMode": "", "PortBindings": null, "Privileged": false, "PublishAllPorts": false, "RestartPolicy": { "MaximumRetryCount": 0, "Name": "" }, "SecurityOpt": null, "VolumesFrom": [ "bestwebappever.dev.db-data" ] }, "HostnamePath": "/mnt/docker/containers/6334a2022f213f9534b45df33c64437081a38d50c7f462692b019185b8cbc6da/hostname", "HostsPath": "/mnt/docker/containers/6334a2022f213f9534b45df33c64437081a38d50c7f462692b019185b8cbc6da/hosts", "Id": "6334a2022f213f9534b45df33c64437081a38d50c7f462692b019185b8cbc6da", "Image": "aaab661c1e3e8da2d9fc6872986cbd7b9ec835dcd3886d37722f1133baa3d2db", "MountLabel": "", "Name": "/bestwebappever.dev.db", "NetworkSettings": { "Bridge": "docker0", "Gateway": "172.17.42.1", "IPAddress": "172.17.0.176", "IPPrefixLen": 16, "MacAddress": "02:42:ac:11:00:b0", "PortMapping": null, "Ports": { "5432/tcp": null } }, "Path": "/docker-entrypoint.sh", "ProcessLabel": "", "ResolvConfPath": "/mnt/docker/containers/6334a2022f213f9534b45df33c64437081a38d50c7f462692b019185b8cbc6da/resolv.conf", "State": { "Error": "", "ExitCode": 0, "FinishedAt": "0001-01-01T00:00:00Z", "OOMKilled": false, "Paused": false, "Pid": 21654, "Restarting": false, "Running": true, "StartedAt": "2015-01-03T23:56:42.003405983Z" }, "Volumes": { "/var/lib/postgresql/data": "/mnt/docker/vfs/dir/5ac73c52ca86600a82e61279346dac0cb3e173b067ba9b219ea044023ca67561", "postgresql_data": "/mnt/docker/vfs/dir/abace588b890e9f4adb604f633c280b9b5bed7d20285aac9cc81a84a2f556034" }, "VolumesRW": { "/var/lib/postgresql/data": true, "postgresql_data": true } } ] Thats it, images are just a json that specifies the characteristic of the containers that will be run from that image, where the union mount is stored, what ports are exposed, etc. Each image is associated with one union filesystem, each union filesystem on Docker has a parent, so images have a hierarchy. 
Several Docker images can be created from a same base, but each image may only haveone parent, just like a computer science tree (unlike some other trees that have a bigger family group).Don't worry if it looks daunting or some things don't quite add up, you'll not be handling these files directly, this is for educational purposes only. Containers The reason containers are ephemeral is that, when you create a container from an image, Docker creates a blank union filesystem to be mounted on top of the union filesystem associated to that image. Since the union filesystem is blank it means no changes are applied to the image's filesystem, when youcreate some change it gets reflected, but when the container is stopped the union filesystem of that container is discarded, leaving you with the original image'sfilesystem you started with. Unless you create a new image, or make a volume, your changes will always disappear on container stop. What volumes do is to specify a directory within the container that will be stored it outside the union filesystem. Here is a container for thebestwebappever: [{ "AppArmorProfile": "", "Args": [], "Config": { "AttachStderr": true, "AttachStdin": false, "AttachStdout": true, "Cmd": [ "/sbin/my_init" ], "CpuShares": 0, "Cpuset": "", "Domainname": "", "Entrypoint": null, "Env": [ "DJANGO_CONFIGURATION=Local", "HOME=/root", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "TALPOR_ENVIRONMENT=local", "TALPOR_DIR=/opt/bestwebappever" ], "ExposedPorts": { "80/tcp": {} }, "Hostname": "44a87fdaf870", "Image": "talpor/bestwebappever:dev", "MacAddress": "", "Memory": 0, "MemorySwap": 0, "NetworkDisabled": false, "OnBuild": null, "OpenStdin": false, "PortSpecs": null, "StdinOnce": false, "Tty": false, "User": "", "Volumes": { "/opt/bestwebappever": {} }, "WorkingDir": "/opt/bestwebappever" }, "Created": "2015-01-03T23:56:15.378511619Z", "Driver": "devicemapper", "ExecDriver": "native-0.2", "HostConfig": { "Binds": [ "/home/german/bestwebappever/:/opt/bestwebappever:rw" ], "CapAdd": null, "CapDrop": null, "ContainerIDFile": "", "Devices": null, "Dns": null, "DnsSearch": null, "ExtraHosts": null, "IpcMode": "", "Links": [ "/bestwebappever.dev.db:/bestwebappever.dev.app/db", "/bestwebappever.dev.redis:/bestwebappever.dev.app/redis" ], "LxcConf": null, "NetworkMode": "", "PortBindings": { "80/tcp": [ { "HostIp": "", "HostPort": "8887" } ] }, "Privileged": false, "PublishAllPorts": false, "RestartPolicy": { "MaximumRetryCount": 0, "Name": "" }, "SecurityOpt": null, "VolumesFrom": [ "bestwebappever.dev.requirements-data" ] }, "HostnamePath": "/mnt/docker/containers/44a87fdaf870281e86160e9e844b8987cfefd771448887675fed99460de491c4/hostname", "HostsPath": "/mnt/docker/containers/44a87fdaf870281e86160e9e844b8987cfefd771448887675fed99460de491c4/hosts", "Id": "44a87fdaf870281e86160e9e844b8987cfefd771448887675fed99460de491c4", "Image": "b84804fac17b61fe8f344359285186f1a63cd8c0017930897a078cd09d61bb60", "MountLabel": "", "Name": "/bestwebappever.dev.app", "NetworkSettings": { "Bridge": "docker0", "Gateway": "172.17.42.1", "IPAddress": "172.17.0.179", "IPPrefixLen": 16, "MacAddress": "02:42:ac:11:00:b3", "PortMapping": null, "Ports": { "80/tcp": [ { "HostIp": "0.0.0.0", "HostPort": "8887" } ] } }, "Path": "/sbin/my_init", "ProcessLabel": "", "ResolvConfPath": "/mnt/docker/containers/44a87fdaf870281e86160e9e844b8987cfefd771448887675fed99460de491c4/resolv.conf", "State": { "Error": "", "ExitCode": 0, "FinishedAt": "0001-01-01T00:00:00Z", "OOMKilled": false, "Paused": 
false, "Pid": 21796, "Restarting": false, "Running": true, "StartedAt": "2015-01-03T23:56:47.537259546Z" }, "Volumes": { "/opt/bestwebappever": "/home/german/bestwebappever", "requirements_data": "/mnt/docker/vfs/dir/bc14bec26ca311d5ed9f2a83eebef872a879c9e2f1d932470e0fd853fe8be336" }, "VolumesRW": { "/opt/bestwebappever": true, "requirements_data": true } } ] Basically the same as an image, but now some exported ports to the host are also specified, where volumes are located on the host is also stated, the container state is present towards the end, etc. As before, don't worry if it looks daunting, you will not be handling these jsondirectly. Tiny and small and puny step-by-step So, step 1.Install Docker. The Docker cmd utilitiesneedroot permissions to work. You may include your user in the docker group to avoid having to sudo everything. Step two, lets download an image from thepublic registryusing the following command: $> docker pull ubuntu:latest ubuntu:latest: The image you are pulling has been verified 3b363fd9d7da: Pull complete .....<bunch of downloading-stuff output>..... 8eaa4ff06b53: Pull complete Status: Downloaded newer image for ubuntu:latest $> There are images for pretty much everything you may need on this public registry: Ubuntu, Fedora, Postgresql, MySQL, Jenkins, Elasticsearch, Redis, etc. The Docker developers maintain several images in the public registry, but the bulk of what you can pull from itcome from users that publishtheir own creations. There may be come a time when you need/want a private registry (for containers for developing apps and such),you should read this first.Now,there arewaysto setupyour ownprivateregistry.You could alsojust pay for one. Step three, list your images: $> docker images REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE ubuntu latest 8eaa4ff06b53 4 days ago 192.7 MB Step four, create a container from that image. $> docker run --rm -ti ubuntu /bin/bash root@4638a40c2fbb:/# ls bin boot dev etc home lib lib64 media mnt opt proc root...... root@4638a40c2fbb:/# exit Quick rundown of what youdid on that last command: --rm: tells Docker to remove the container as soon as the process is running exits. Good for making tests and avoiding clutter -ti: tell Docker to allocate a pseudo tty and put me on interactive mode. This is for entering the container and is good for rapid prototyping and playing around, but for production containers you will not be turning these flags on ubuntu: this is the image we're basing the container on /bin/bash: the command to run, and since we started on interactive mode, it gives us a prompt to the container On the run command you specify your links, volumes, ports, name of the container (Docker assings default name if you do not provide one) etc. Now let's run a container on the background: $> docker run -d ubuntu ping 8.8.8.8 31c68e9c09a0d632caae40debe13da3d6e612364198e2ef21f842762df4f987f $> The output is the assigned ID, yours will vary as it is random. Let's check out what our container is up to: $> docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 31c68e9c09a0 ubuntu:latest "ping 8.8.8.8" 2 minutes ago Up 2 minutes loving_mcclintock There he is, his automated assigned human-readable name is loving_mcclintock. Now lets check inside the container to see what's happening: $> docker exec -ti loving_mcclintock /bin/bash root@31c68e9c09a0:/# ps -aux|grep ping root 1 0.0 0.0 6504 636 ? 
Ss 20:46 0:00 ping 8.8.8.8 root@31c68e9c09a0:/# exit What we just did is to execute a programinside the container, in this case the program was /bin/bash. The flags -ti servesthe same purpose as in docker run, so it just placed us inside a shell in the container. Wrap up This about wraps it up. There is so much more to cover, but that's beyond the scope of this blogpost. I'll however leave you with some links and further reading material that Ibelieveis important/interesting Docker basic structure: https://docs.docker.com/introduction/understanding-docker/ http://blog.docker.com/2014/03/docker-0-9-introducing-execution-drivers-and-libcontainer/ Further reading: Dockerfiles: These allow you to define an image using a text file, they are really important Did I mentioned thatdockerfilesare very important? You really should check outdockerfiles docker build: you need this to build yourdockerfiles docker push/docker pull docker create/docker run docker rm/docker rmi docker start/docker stop docker exec docker inspect docker tag Links Volumes Interesting links: ANNOUNCING DOCKER MACHINE, SWARM, AND COMPOSE FOR ORCHESTRATING DISTRIBUTED APPS Docker at Shopify: How we built containers that power over 100,000 online shops Why CoreOS is a game-changer in the data center and cloud Docker Misconceptions Microservices - Not a Free Lunch! Feature Preview: Docker-Based Development Environments How to compile Docker on Windows(thanks tocomputermedicon reddit for the link) Useful projects and links Phusion Docker baseimage Shipyard DockerUI CoreOS Decking Docker-py Docker-map Docker-fabric


语音朗读入门

利用微软自带的TTS,能够做到一个简单的语音朗读功能。

using System;
using System.Speech.Synthesis;   //用于语音朗读
using System.Speech.Recognition; //用于识别语音

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            while (true)
            {
                Console.WriteLine("请输入英文:");
                string s = Console.ReadLine();
                SpeechSynthesizer synth = new SpeechSynthesizer();
                //选择不同的发音
                synth.SelectVoice("Microsoft Anna"); //美式发音,但只能读英文
                //synth.SelectVoice("Microsoft Lili"); //能读中英文
                synth.Speak(s);
            }
            //语音识别
            //SpeechRecognitionEngine sre = new SpeechRecognitionEngine();
        }
    }
}


《Java从入门到放弃》JavaSE入门篇:程序结构

程序的结构一般分为三种: 顺序结构。 选择结构。 循环结构。 一、顺序结构:这个不用多说吧,跟我们平时写文章的顺序一样,从上往下。 二、选择结构:从名字就能看出,要选择嘛,到底是要漂亮滴妹子,还是要有钱滴妹子呢!当然,如果是个吊丝码农滴话,那你就不要多想了,还是老老实实码代码吧··· 三、循环结构:循环啊,就是一直转啊转啊转啊,转到出意外为止。 接下来还是老规矩,通过小示例来学习语法吧。 顺序结构: 一、输入姓名和年龄,输出自我介绍。 publicstaticvoidmain(String[]args){ //创建输入数据的对象,具体什么叫对象···,先不用管吧 Scannerinput=newScanner(System.in); Stringname;//姓名 intage;//年龄 System.out.print("请输入姓名:"); name=input.next(); System.out.print("请输入年龄:"); age=input.nextInt(); System.out.println("大家好,我叫"+name+",今年"+age+"岁,请多关照。"); } 注意: 这就是一个标准的顺序结构,代码会从上往下执行,如果你把int age;这句话放到最后,那就会在age = input.nextInt();这一行报语法错误了。 结果: 选择结构: 选择结构的语法有四种,我们通过下面的案例来了解。 一、输入你的存款,如果大于5000则加上“壕”的头衔。 publicstaticvoidmain(String[]args){ //创建输入数据的对象,具体什么叫对象···,先不用管吧 Scannerinput=newScanner(System.in); Stringname;//姓名 intdeposit;//存款 System.out.print("请输入姓名:"); name=input.next(); System.out.print("请输入存款:"); deposit=input.nextInt(); System.out.print("大家好,我叫"+name); if(deposit>5000){ System.out.print("(壕)"); } System.out.println("。"); } 语法: if( 条件 ){ 要执行的代码 } 注意: 条件成立时会执行{}中的所有代码。 结果: 二、输入你的存款,如果大于5000则加上“壕”的头衔,否则加上“穷13”的头衔。 if(deposit>5000){ System.out.print("(壕)"); } else{ System.out.print("(穷13)"); } 注意:别的代码不用动,只需要在if(){}结构上加上else{}就OK了,else{}表示条件不成立时的执行代码。 结果: 三、输入你的存款,如果大于50000,则加上“神壕”的头衔,否则如果大于30000,则加上“金壕”的头衔,否则如果大于10000,则加上“壕”的头衔,否则加上“穷13”的头衔。 if(deposit>50000){ System.out.print("(神壕)"); } elseif(deposit>30000){ System.out.print("(金壕)"); } elseif(deposit>10000){ System.out.print("(壕)"); } else{ System.out.print("(穷13)"); } 注意:这种语法叫多分支选择结构(一般用于选择情况大于2的场合,比如演唱会门票的级别、你的女神的胸肌是A还是BCDEF等)。 结果就不展示了。 四、查询余额请按1,套餐更改请按2,宽带业务请按3,企业业务请按4,人工服务请按5,其它业务请瞎按. publicstaticvoidmain(String[]args){ //创建输入数据的对象,具体什么叫对象···,先不用管吧 Scannerinput=newScanner(System.in); intnum; System.out.print("1.查询余额请按1," +"\n2.套餐更改请按2" +"\n3.宽带业务请按3" +"\n4.企业业务请按4" +"\n5.人工服务请按5" +"\n6.其它业务请瞎按" +"\n请选择:"); num=input.nextInt(); switch(num){ case1: System.out.println("您的余额为0。");break; case2: System.out.println("改完了,请回吧。");break; case3: System.out.println("宽带装好了,请交钱1998¥。");break; case4: System.out.println("请上传企业注册资料。");break; case5: System.out.println("我们正在招聘服务人员,请稍等...");break; default: System.out.println("乱按好玩吧...");break; } } 注意: switch语法一般用于数值类型和布尔类型等值判断的场合,最新版的JDK支持String类型了。小伙伴们可以自己试试。 每个case后的语句执行完后都有个小尾巴(break;),表示从这儿退出switch结构,大家可以把这个小尾巴去掉看看结果有什么不一样。 结果: 循环结构: 循环结构常用的有四种:while、do...while、for、foreach(后面讲集合时再介绍)。 对应的语法都很简单,我们通过求100以内的奇数和来了解一下,后面再来分析一个经典案例。 publicstaticvoidmain(String[]args){ //求100以内的奇数和 //1.while循环 inti=1;//循环的初值 ints=0;//保存和 while(i<=100){//循环的条件 s+=i;//循环的内容 i+=2;//循环的步长(也就是循环变量的值如何变化) } System.out.println("while循环结果:"+s); //变量值还原 i=1; s=0; //2.do...while循环 do{ s+=i; i+=2; }while(i<=100); System.out.println("do...while循环结果:"+s); //变量值还原 s=0; //3.for循环 for(i=1;i<=100;i+=2){ s+=i; } System.out.println("for循环结果:"+s); } 执行结果: 从上面的语法应该可以看出,循环主要有四个点:初值、条件、步长、循环体(内容)。那么这三种循环的语法有什么区别呢? while循环:侧重于不确定循环次数的场合,先判断,如果条件成立则进入循环。 do...while循环:侧重于不确定循环次数的场合,先执行一次,之后如果条件成立则时入循环。 for循环:侧重于确定循环次数的场合。 与循环配合使用的还有两个关键字:continue和break; 他们的作用,看代码和结果吧: publicstaticvoidmain(String[]args){ for(inti=0;i<10;i++){ if(i==5){ continue; } System.out.print(i+","); } System.out.println("\n======================="); for(inti=0;i<10;i++){ if(i==5){ break; } System.out.print(i+","); } } 结果: 注意到两个的区别了吧 运行continue后,就不再执行循环里面continue后面的代码,直接运行i++去了。 而运行break后,则直接跳出了循环,后面的都不执行了。 经典案例:登录功能,如果账号密码输入正确则登录成功,否则请再次输入密码。 分析: 登录时要输入几次账号密码?很明显不知道啊!!!所以for循环被排除了。 然后再判断,是要先输入账号密码后判断,还是先判断后再输入账号密码呢? 
这也很明显,要先输入了才需要判断啊!!!所以while循环也被排除了。 最后就剩下do...while循环了。代码如下: publicstaticvoidmain(String[]args){ //模拟登录功能 //分析过程: //1.定义保存账号和密码的变量 Scannerinput=newScanner(System.in); StringinLoginID; StringinLoginPWD; //2.因为还没学习数据库,所以定义两个变量保存正确的账号和密码 StringloginID="liergou"; StringloginPWD="haha250"; //3.输入账号和密码 do{ System.out.print("请输入账号:"); inLoginID=input.next(); System.out.print("请输入密码:"); inLoginPWD=input.next(); //4.判断输入的账号和密码与正确的是否相同(判断字符串是否相等使用equals方法), //如果相同等提示登录成功,循环结束,否则提示重新输入 if(inLoginID.equals(loginID)&&inLoginPWD.equals(loginPWD)){ System.out.println("登录成功!"); break; } else{ System.out.println("账号和密码不匹配,请重新输入!"); } }while(true); } 如果你非要使用while和for,那··············当然也是可以滴,只不过代码复杂度会上升,特别是使用for来写的话会很奇怪,大伙可以看看: 这是使用while的写法 publicstaticvoidmain(String[]args){ //模拟登录功能 //分析过程: //1.定义保存账号和密码的变量 Scannerinput=newScanner(System.in); StringinLoginID; StringinLoginPWD; //2.因为还没学习数据库,所以定义两个变量保存正确的账号和密码 StringloginID="liergou"; StringloginPWD="haha250"; //3.输入账号和密码 System.out.print("请输入账号:"); inLoginID=input.next(); System.out.print("请输入密码:"); inLoginPWD=input.next(); while(true){ //4.判断输入的账号和密码与正确的是否相同(判断字符串是否相等使用equals方法), //如果相同等提示登录成功,循环结束,否则提示重新输入 if(inLoginID.equals(loginID)&&inLoginPWD.equals(loginPWD)){ System.out.println("登录成功!"); break; } else{ System.out.println("账号和密码不匹配,请重新输入!"); //下面的代码重复了 System.out.print("请输入账号:"); inLoginID=input.next(); System.out.print("请输入密码:"); inLoginPWD=input.next(); } } } 下面是使用for的写法 for(;true;){ //4.判断输入的账号和密码与正确的是否相同(判断字符串是否相等使用equals方法), //如果相同等提示登录成功,循环结束,否则提示重新输入 if(inLoginID.equals(loginID)&&inLoginPWD.equals(loginPWD)){ System.out.println("登录成功!"); break; } else{ System.out.println("账号和密码不匹配,请重新输入!"); //下面的代码重复了 System.out.print("请输入账号:"); inLoginID=input.next(); System.out.print("请输入密码:"); inLoginPWD=input.next(); } } 最后,再布置几个练习,各位看官自己分析并练习练习吧,看具体使用哪种循环最好。 1.打印出所有的"水仙花数",所谓"水仙花数"是指一个三位数,其各位数字立方和等于该数本身。例如:153是一个"水仙花数",因为153=1的三次方+5的三次方+3的三次方。 2.将一个正整数分解质因数。例如:输入90,打印出90=2*3*3*5。 3. 球从100M高度自由落下,每次落地后反跳回原高度的一半,再落下,求它在第10次落地时,共经过多少M?第10次反弹多高? 4. 任意输入一个整数(小于6位),求它的位数询问 5. “我爱你,嫁给我吧?”,选择“你喜欢我吗?(y/n):",如果输入为y则打印”我们形影不离“,若输入为n,则继续询问 如果有不确定答案的练习,就在评论里讨论吧··· “软件思维”博客地址:51CTO感兴趣的小伙伴可以去看相关的其它博文。
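
针对上面练习的第1题(水仙花数),给出一个用for循环实现的参考思路(示意代码,写法不唯一):

public class NarcissisticNumber {
    public static void main(String[] args) {
        // 遍历所有三位数,判断各位数字的立方和是否等于它本身
        for (int i = 100; i <= 999; i++) {
            int ge  = i % 10;        // 个位
            int shi = i / 10 % 10;   // 十位
            int bai = i / 100;       // 百位
            if (ge * ge * ge + shi * shi * shi + bai * bai * bai == i) {
                System.out.println(i + "是水仙花数");
            }
        }
    }
}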


快速建站服务 - 零门槛三分钟快速建站

阿里云云市场-精心打造明星建站产品 面对日益增多的创业公司与中小企业,拥有属于自己官网的需求十分迫切。 本次教程使用商品: 【1元/天建站】PC+手机+微信网站建设,百套精美样式,可视化简单操作 【1元/天建站】是中小企业建站首选。后台功能强大,操作简单,完全可以用SAAS平台做出定制站效果 优惠价¥3/月 立即购买 本次教程使用商品:1、修改网站内容:2、域名解析 1.登录阿里云---进入控制台 2.点击左侧“域名”---在需要解析的域名后面点击“解析”(注意:操作解析前,请先完成“实名认证”,否则会有解析不通过的情况) 3.在需要解析的域名后面点击“解析” 4.点击进入高级设置 5.点击“添加解析”,为提升网站稳定性,建议您做两条CNAME记录到后台提供的解析地址。 两条CNAME记录:"主机记录"=WWW,记录值=“解析CNAME” "主机记录"=@, 记录值=“解析CNAME” 提示:CNAME记录在后台“域名管理”---“解析与备案”查看。 如解析时提示有冲突,建议修改为:一条CNAME和一条A记录:主机记录=WWW,记录值=“解析CNAME” 主机记录=@,记录值=“解析IP” 提示:A记录(解析IP)在后台“域名管理”---“解析与备案”查看。 3、域名绑定域名绑定 1.进入后台--域名管理--点击“新增域名” 2.输入域名--点击确定 3.域名绑定成功! 4、网站发布 1.进入设计页面--点击发布(也可在后台管理界面点击“发布”) 2.确认域名后点击“发布” 网站发布成功!
