Selected from excellent personal blogs (低调大师)

A hands-on guide to building a .a static library (iOS development)

A quick primer: what is a library? A library is a collection of program code and a way to share it. By source availability, libraries come in two kinds: open-source libraries, whose source is public so you can read the implementation (e.g. SDWebImage, AFNetworking), and closed-source libraries, which ship only as compiled binaries with no visible implementation. Closed-source libraries are mainly static libraries and dynamic libraries.

Static vs. dynamic libraries. On iOS, static libraries exist as .a and .framework files; dynamic libraries as .dylib and .framework files. They differ in how they are used: at link time a static library is copied wholesale into the executable, so multiple users mean multiple redundant copies (left figure); a dynamic library is not copied at link time but loaded into memory by the system when the program runs, loaded once and shared by every program that calls it, which saves memory (right figure). Note: a project that ships a self-made dynamic library cannot be uploaded to the App Store!

Building the .a:

1. Create a new project and choose "Cocoa Touch Static Library".
2. Add the source the library should contain, i.e. copy the relevant code from your app project into the static-library project.
3. Configure the project.
4. Pick the .h files to expose; the .m files are compiled into the .a automatically. The exposed header classes are set here. Also set up any networking frameworks you pull in (some are added automatically, some must be added by hand; compile errors will tell you which).
5. Before building, check whether the scheme is in Debug or Release mode, and choose Release. The two modes differ: files in the Debug-iphoneos folder are for real devices, files in Debug-iphonesimulator are for the simulator; with a Release scheme the generated folders start with Release instead.
6. Select a physical device and build with Command+B; the libSPCustomerServerse.a file turns from red to black. (Select the device first, because only then does the color change. If you build for the simulator first, the build succeeds but the file stays red, which looks as if nothing was built even though the simulator slice has in fact been compiled.)
7. Select a simulator, preferably iPhone 6 or newer (build against a newer model; an older one also works, but if you pick an old model the newer ones may not be covered), and build again with Command+B. The .a files for simulator and device environments are separate.
8. Merge the simulator (Release-iphonesimulator) and device (Release-iphoneos) .a files so one library supports both; the merged .a is roughly twice the size of either one. Use "Show in Finder" to locate them.

Merge steps, by example:

1. Create a folder, say "dabao".
2. Copy the two .a files from Release-iphonesimulator and Release-iphoneos into the "dabao" folder.
3. Open Terminal and do the following:
   a. Type: lipo -create
   b. Drag the .a from Release-iphonesimulator into Terminal and type a space.
   c. Drag the .a from Release-iphoneos into Terminal and type a space.
   d. Type: -output and a space.
   e. Type the destination path for the merged .a (in this example still inside "dabao"; on my machine that is /Users/ntalker-zhou/Desktop/dabao/libSPCustomerServerseSDK.a) and press Return. The merged .a appears at that path.
4. Verify that the merged .a meets all requirements, again in Terminal:
   a. Type: lipo -info
   b. Drag the merged .a into Terminal and press Return.
   (Remember to separate every argument with a space, or the command will fail!)

That's the .a built. Ship the exposed headers together with the .a, drop them into a project, and others can use your code without the source being exposed or modified at will. One caveat: image resources cannot be packed into a .a and must be added alongside it. (Sometimes, to get the build through, extra project configuration is needed; for example I had to configure XML support…)

By 哇哇卡 (Jianshu author). Original: http://www.jianshu.com/p/a1dc024a8a15# — copyright belongs to the author; contact the author for permission before reposting and credit the "简书作者". Reposted from the 51CTO blog of ljianbing: http://blog.51cto.com/ljianbing/1887229; contact the original author before reposting.
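The Terminal drag-and-drop in the merge steps above just assembles one lipo command line. As a sketch, a small Python helper can build that invocation; the paths below are placeholders, not the author's actual ones:

```python
def lipo_merge_cmd(simulator_a, device_a, output_a):
    """Build the `lipo` invocation that merges a simulator .a and a
    device .a into one fat static library (merge step 3 above)."""
    return ["lipo", "-create", simulator_a, device_a, "-output", output_a]

cmd = lipo_merge_cmd(
    "Release-iphonesimulator/libSPCustomerServerse.a",
    "Release-iphoneos/libSPCustomerServerse.a",
    "dabao/libSPCustomerServerseSDK.a",
)
print(" ".join(cmd))
# On a Mac with Xcode tools installed, this command list could be run
# with subprocess.run(cmd, check=True); verify with `lipo -info`.
```

This only constructs the command; lipo itself is a macOS tool and must be run there.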


iOS development: getting an accurate battery level via the runtime

Method 1: use the public UIDevice API from Apple's official documentation:

[UIDevice currentDevice].batteryMonitoringEnabled = YES;
[[NSNotificationCenter defaultCenter] addObserverForName:UIDeviceBatteryLevelDidChangeNotification
                                                  object:nil
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *notification) {
    // Level has changed
    NSLog(@"Battery Level Change");
    NSLog(@"Battery level: %.2f", [UIDevice currentDevice].batteryLevel);
}];

The property is declared as:

@property (nonatomic, readonly) float batteryLevel NS_AVAILABLE_IOS(3_0); // 0 .. 1.0. -1.0 if UIDeviceBatteryStateUnknown

It returns a float between 0.00 and 1.00. Testing shows, however, that on iOS 7 the granularity is 0.05, while on iOS 9 it is 0.01; and even at 0.01, repeated readings can still be off by about 1%. In other words this approach is flawed; at the very least, it is not precise.

Method 2: find IOKit.framework on the Mac, copy IOPowerSources.h and IOPSKeys.h from it into your iOS project, and import IOKit into the project as well. This method also shows deviations and is not precise. Demo: https://github.com/colin1994/batteryLevelTest.git

/**
 * Calculating the remaining energy
 *
 * @return Current battery level
 */
- (double)getCurrentBatteryLevel
{
    // Returns a blob of Power Source information in an opaque CFTypeRef.
    CFTypeRef blob = IOPSCopyPowerSourcesInfo();
    // Returns a CFArray of Power Source handles, each of type CFTypeRef.
    CFArrayRef sources = IOPSCopyPowerSourcesList(blob);
    CFDictionaryRef pSource = NULL;
    const void *psValue;
    // Returns the number of values currently in an array.
    int numOfSources = CFArrayGetCount(sources);
    // Error in CFArrayGetCount
    if (numOfSources == 0) {
        NSLog(@"Error in CFArrayGetCount");
        return -1.0f;
    }
    // Calculating the remaining energy
    for (int i = 0; i < numOfSources; i++) {
        // Returns a CFDictionary with readable information about the specific power source.
        pSource = IOPSGetPowerSourceDescription(blob, CFArrayGetValueAtIndex(sources, i));
        if (!pSource) {
            NSLog(@"Error in IOPSGetPowerSourceDescription");
            return -1.0f;
        }
        psValue = (CFStringRef)CFDictionaryGetValue(pSource, CFSTR(kIOPSNameKey));
        int curCapacity = 0;
        int maxCapacity = 0;
        double percent;
        psValue = CFDictionaryGetValue(pSource, CFSTR(kIOPSCurrentCapacityKey));
        CFNumberGetValue((CFNumberRef)psValue, kCFNumberSInt32Type, &curCapacity);
        psValue = CFDictionaryGetValue(pSource, CFSTR(kIOPSMaxCapacityKey));
        CFNumberGetValue((CFNumberRef)psValue, kCFNumberSInt32Type, &maxCapacity);
        percent = ((double)curCapacity / (double)maxCapacity * 100.0f);
        return percent;
    }
    return -1.0f;
}

Method 3: use the runtime to read the value of the private variable of the battery item view on the status bar. This reads the battery level precisely on iOS 6 and later.

MRC:

- (int)getCurrentBatteryLevel
{
    if ([UIApplication sharedApplication].applicationState == UIApplicationStateActive ||
        [UIApplication sharedApplication].applicationState == UIApplicationStateInactive) {
        void *result = nil;
        object_getInstanceVariable([UIApplication sharedApplication], "_statusBar", &result);
        id status = result;
        for (id aview in [status subviews]) {
            for (id bview in [aview subviews]) {
                int batteryLevel = 0;
                if ([NSStringFromClass([bview class]) caseInsensitiveCompare:@"UIStatusBarBatteryItemView"] == NSOrderedSame
                    && [[[UIDevice currentDevice] systemVersion] floatValue] >= 6.0) {
                    object_getInstanceVariable(bview, "_capacity", &result);
                    batteryLevel = (int)result;
                    NSLog(@"Battery level: %d", batteryLevel);
                    if (batteryLevel > 0 && batteryLevel <= 100) {
                        return batteryLevel;
                    } else {
                        return 0;
                    }
                }
            }
        }
    }
    return 0;
}

ARC:

- (int)getCurrentBatteryLevel
{
    UIApplication *app = [UIApplication sharedApplication];
    if (app.applicationState == UIApplicationStateActive ||
        app.applicationState == UIApplicationStateInactive) {
        Ivar ivar = class_getInstanceVariable([app class], "_statusBar");
        id status = object_getIvar(app, ivar);
        for (id aview in [status subviews]) {
            int batteryLevel = 0;
            for (id bview in [aview subviews]) {
                if ([NSStringFromClass([bview class]) caseInsensitiveCompare:@"UIStatusBarBatteryItemView"] == NSOrderedSame
                    && [[[UIDevice currentDevice] systemVersion] floatValue] >= 6.0) {
                    Ivar capacityIvar = class_getInstanceVariable([bview class], "_capacity");
                    if (capacityIvar) {
                        batteryLevel = ((int (*)(id, Ivar))object_getIvar)(bview, capacityIvar);
                        // This also works:
                        /* ptrdiff_t offset = ivar_getOffset(capacityIvar);
                           unsigned char *stuffBytes = (unsigned char *)(__bridge void *)bview;
                           batteryLevel = *((int *)(stuffBytes + offset)); */
                        NSLog(@"Battery level: %d", batteryLevel);
                        if (batteryLevel > 0 && batteryLevel <= 100) {
                            return batteryLevel;
                        } else {
                            return 0;
                        }
                    }
                }
            }
        }
    }
    return 0;
}

Reposted from the 51CTO blog of 卓行天下: http://blog.51cto.com/9951038/1831743; contact the original author before reposting.


iOS development notes: how to package an iOS application

Before uploading an app to the App Store, we must package the compiled binary and resource files into an archive, and the archive format is zip. First, find out where the build output lands; this matters and is not easy to find. Look at the build log, locate the "Create universal binary HelloWorld…" entry, and expand it:

Create Universal Binary /Users/tonyguan/Library/Developer/Xcode/DerivedData/HelloWorld-fzvtlfsmygaqjleczypphenzabef/Build/Products/Release-iphoneos/HelloWorld.app/HelloWorld normal "armv7 armv7s"
    cd "/Users/tonyguan/Desktop/19.1.4 HelloWorld"
    setenv PATH "/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Applications/Xcode.app/Contents/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
    lipo -create /Users/tonyguan/Library/Developer/Xcode/DerivedData/HelloWorld-fzvtlfsmygaqjleczypphenzabef/Build/Intermediates/HelloWorld.build/Release-iphoneos/HelloWorld.build/Objects-normal/armv7/HelloWorld /Users/tonyguan/Library/Developer/Xcode/DerivedData/HelloWorld-fzvtlfsmygaqjleczypphenzabef/Build/Intermediates/HelloWorld.build/Release-iphoneos/HelloWorld.build/Objects-normal/armv7s/HelloWorld -output /Users/tonyguan/Library/Developer/Xcode/DerivedData/HelloWorld-fzvtlfsmygaqjleczypphenzabef/Build/Products/Release-iphoneos/HelloWorld.app/HelloWorld

The path after -output at the end of the log is where the compiled app lives: "/Users/tonyguan/Library/… /Products/Release-iphoneos/" is the generated build directory, HelloWorld.app is the bundle, and HelloWorld inside it is the binary. Right-click the HelloWorld.app bundle and choose "Show Package Contents": the HelloWorld file is the app's binary, and everything else is resources, including images, property-list files, nib files (compiled xib files), and storyboardc files (compiled storyboard files). Packaging the app means compressing the HelloWorld.app bundle into HelloWorld.zip: right-click HelloWorld.app, choose Compress "HelloWorld" from the menu, and HelloWorld.zip is generated in the current directory. Keep this file safe; we will use it again in the next section when uploading the app.

Reposted from the 51CTO blog of tony关东升: http://blog.51cto.com/tonyguan/1214979; contact the original author before reposting.
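Finder's Compress command is just a recursive zip of the bundle directory. As a sketch, the same step in Python's zipfile module (the bundle path is a placeholder):

```python
import os
import zipfile

def zip_app_bundle(app_dir, zip_path):
    """Compress an .app bundle directory into a .zip, storing
    bundle-relative paths so the archive unpacks to HelloWorld.app/..."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(app_dir):
            for name in files:
                full = os.path.join(root, name)
                # Keep paths relative to the bundle's parent directory,
                # which is what Finder's Compress does.
                rel = os.path.relpath(full, os.path.dirname(app_dir))
                zf.write(full, rel)

# zip_app_bundle("Release-iphoneos/HelloWorld.app", "HelloWorld.zip")
```

The resulting zip matches what the Compress menu item produces, minus macOS metadata entries.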


Android development: how to remove the title bar (repost)

Removing the title bar is actually very simple. There are two ways: in code, or in AndroidManifest.xml.

1. In code: before the call to setContentView(R.layout.main), add:

requestWindowFeature(Window.FEATURE_NO_TITLE);

and the title bar is gone.

2. In AndroidManifest.xml: add the following configuration when registering the Activity:

<activity android:name=".Activity"
    android:theme="@android:style/Theme.NoTitleBar" >
</activity>

Source: (link). Reposted from the 博客园 blog of SharkBin: http://www.cnblogs.com/SharkBin/p/5056806.html; contact the original author before reposting.


Android development: the difference between onCreateView() and onActivityCreated() in a Fragment

While writing a Fragment, I called a method that initializes a custom view, initView(), from onCreateView(), and the app crashed during debugging; the logs pointed at that call. Moving the call into onActivityCreated() made the crashes stop. Not understanding why, I checked the API docs and other material and summarize as follows.

The textbook explanation: onCreateView() is called back every time the Fragment's view component is created and drawn, and the Fragment displays the View this method returns. onActivityCreated() is called back once the Activity hosting the Fragment has finished starting.

As for the problem above:
1. A static view does not need onActivityCreated().
2. Saving a view's state should be done in onActivityCreated().
3. Accessing the parent Activity's view hierarchy must be done in onActivityCreated().

That is, if the view is static there is no need to wait for onActivityCreated(). But most custom views need a Context at initialization, and Activity is a subclass of Context, so initializing a non-static view in onCreateView() may throw. For non-static views, it is best to initialize in onActivityCreated().

Two blog posts were consulted: http://blog.csdn.net/u014449046/article/details/48572905 and http://blog.csdn.net/lxl403853563/article/details/49800231

Reposted from the 博客园 blog of 一点点征服: http://www.cnblogs.com/ldq2016/p/5591988.html; contact the original author before reposting.


Setting up a Spark development environment on Windows with Docker

Docker Toolbox: https://www.docker.com/products/docker-toolbox
Spark image: https://hub.docker.com/r/singularities/spark/~/dockerfile/

# start-hadoop-namenode
# hadoop fs -mkdir /user
# hadoop fs -mkdir /user/root/
# hadoop fs -put ./README.md /user/root
# start-spark
# start-spark worker [master]
# spark-shell
# spark-shell --master spark://a60b8c8f9653:7077

scala> val lines = sc.textFile("file:///usr/local/spark-2.1.0/README.md")
scala> val lines = sc.textFile("hdfs:///usr/local/spark-2.1.0/README.md")
lines: org.apache.spark.rdd.RDD[String] = file:///usr/local/spark-2.1.0/README.md MapPartitionsRDD[1] at textFile at <console>:24

scala> lines.count()
res0: Long = 104

scala> lines.saveAsTextFile("hdfs:///user/root/README2.md") // save to HDFS

Reposted from the 51CTO blog of 拖鞋崽: http://blog.51cto.com/1992mrwang/1895904
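Outside the cluster, the lines.count() step above is just counting newline-delimited records in the file. A local Python sketch of the same operation (the file path in the shell session is the container's, so here it is a parameter):

```python
def count_lines(path):
    """Local equivalent of sc.textFile(path).count():
    the number of newline-delimited records in a text file."""
    with open(path, "r", encoding="utf-8") as f:
        return sum(1 for _ in f)

# count_lines("README.md") would report the same number the
# spark-shell printed as res0 for that file.
```

Spark distributes the same count over HDFS blocks; the semantics per file are identical.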


Hadoop MapReduce development practice: compressing output data

1. Hadoop output data compression

1.1 Why compress? When the output data is large, Hadoop's built-in compression support can shrink it, with a selectable codec, reducing network bandwidth and storage consumption:
- The map output can be compressed (on the map-to-reduce path, this reduces the data shipped over the network during the shuffle).
- The reduce output can be compressed (the final data saved to HDFS; this mainly cuts HDFS storage usage).
Neither the mapper nor the reducer program needs to change; just pass the parameters when running the streaming job:

-jobconf "mapred.compress.map.output=true" \
-jobconf "mapred.map.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec" \
-jobconf "mapred.output.compress=true" \
-jobconf "mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec" \

1.2 The run_streaming script

#!/bin/bash
HADOOP_CMD="/home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.13.0/bin/hadoop"
STREAM_JAR_PATH="/home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.13.0/share/hadoop/tools/lib/hadoop-streaming-2.6.0-cdh5.13.0.jar"
INPUT_FILE_PATH="/input/The_Man_of_Property"
OUTPUT_FILE_PATH="/output/wordcount/CacheArchiveCompressFile"

$HADOOP_CMD fs -rmr -skipTrash $OUTPUT_FILE_PATH

$HADOOP_CMD jar $STREAM_JAR_PATH \
    -input $INPUT_FILE_PATH \
    -output $OUTPUT_FILE_PATH \
    -jobconf "mapred.job.name=wordcount_wordwhite_cacheArchivefile_demo" \
    -jobconf "mapred.compress.map.output=true" \
    -jobconf "mapred.map.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec" \
    -jobconf "mapred.output.compress=true" \
    -jobconf "mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec" \
    -mapper "python mapper.py WHF.gz" \
    -reducer "python reducer.py" \
    -cacheArchive "hdfs://localhost:9000/input/cachefile/wordwhite.tar.gz#WHF.gz" \
    -file "./mapper.py" \
    -file "./reducer.py"

1.3 Run it

$ chmod +x run_streaming_compress.sh
$ ./run_streaming_compress.sh
... intermediate output omitted ...
18/02/02 10:51:50 INFO streaming.StreamJob: Output directory: /output/wordcount/CacheArchiveCompressFile

1.4 Check the results

$ hadoop fs -ls /output/wordcount/CacheArchiveCompressFile
Found 2 items
-rw-r--r--   1 hadoop supergroup    0 2018-02-02 10:51 /output/wordcount/CacheArchiveCompressFile/_SUCCESS
-rw-r--r--   1 hadoop supergroup   81 2018-02-02 10:51 /output/wordcount/CacheArchiveCompressFile/part-00000.gz
$ hadoop fs -get /output/wordcount/CacheArchiveCompressFile/part-00000.gz ./
$ gunzip part-00000.gz
$ cat part-00000
and 2573
had 1526
have 350
in 1694
or 253
the 5144
this 412
to 2782

2. Hadoop streaming syntax reference: http://blog.51cto.com/balich/2065419

Reposted from the 51CTO blog of 巴利奇: http://blog.51cto.com/balich/2068046
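The final fetch-and-gunzip step can equally be done from a script: GzipCodec output is ordinary gzip data. A minimal Python sketch that reads such a compressed part file back into counts (the file name is illustrative):

```python
import gzip

def read_gzipped_counts(path):
    """Read a gzip-compressed part file (as produced by GzipCodec)
    and return {word: count} from its tab-separated lines."""
    counts = {}
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            word, value = line.rstrip("\n").split("\t")
            counts[word] = int(value)
    return counts

# read_gzipped_counts("part-00000.gz") replaces the gunzip + cat pair above.
```

No special Hadoop library is needed on the reading side, which is one reason gzip is a convenient output codec.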


Hadoop MapReduce development practice: local file distribution with streaming

Scenario: the files, scripts, or configuration a program needs are not on the Hadoop cluster, so they must first be distributed to the cluster before the computation can run. Hadoop can distribute files and archives automatically; you only need to add the corresponding option (-file) when launching the streaming job. When running a streaming program, use the -file option to name the local files to distribute.

1. Local file distribution (-file)

1.1 Requirement: wordcount, but counting only the specified words (the, and, had).
Approach: the earlier wordcount counted every word in the text. Building on it, add a whitelist-like text file, wordwhite, listing only the words to count. In the mapper, a word read from the text is emitted to the map output (and hence passed to reduce) only if it appears in wordwhite; the reducer program needs no change.

1.2 Programs and files

wordwhite (the words to count):

$ vim wordwhite
the
and
had

The mapper:

$ vim mapper.py
#!/usr/bin/env python
import sys

def read_wordwhite(file):
    word_set = set()
    with open(file, 'r') as fd:
        for line in fd:
            word = line.strip()
            word_set.add(word)
    return word_set

def mapper(file_fd):
    word_set = read_wordwhite(file_fd)
    for line in sys.stdin:
        line = line.strip()
        words = line.split()
        for word in words:
            if word != "" and (word in word_set):
                print "%s\t%s" % (word, 1)

if __name__ == "__main__":
    if sys.argv[1]:
        file_fd = sys.argv[1]
        mapper(file_fd)

The reducer:

$ vim reducer.py
#!/usr/bin/env python
import sys

def reducer():
    current_word = None
    word_sum = 0
    for line in sys.stdin:
        word_list = line.strip().split('\t')
        if len(word_list) < 2:
            continue
        word = word_list[0].strip()
        word_value = word_list[1].strip()
        if current_word == None:
            current_word = word
        if current_word != word:
            print "%s\t%s" % (current_word, str(word_sum))
            current_word = word
            word_sum = 0
        word_sum += int(word_value)
    print "%s\t%s" % (current_word, str(word_sum))

if __name__ == "__main__":
    reducer()

The run_streaming script:

$ vim runstreaming.sh
#!/bin/bash
HADOOP_CMD="/home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.13.0/bin/hadoop"
STREAM_JAR_PATH="/home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.13.0/share/hadoop/tools/lib/hadoop-streaming-2.6.0-cdh5.13.0.jar"
INPUT_FILE_PATH="/input/The_Man_of_Property"
OUTPUT_FILE_PATH="/output/wordcount/wordwhitetest"

$HADOOP_CMD jar $STREAM_JAR_PATH \
    -input $INPUT_FILE_PATH \
    -output $OUTPUT_FILE_PATH \
    -mapper "python mapper.py wordwhite" \
    -reducer "python reducer.py" \
    -file ./mapper.py \
    -file ./reducer.py \
    -file ./wordwhite

Running it. First upload the test file The_Man_of_Property to HDFS and create the wordcount output directory:

$ hadoop fs -put ./The_Man_of_Property /input/
$ hadoop fs -mkdir /output/wordcount

Note: this Hadoop environment is pseudo-distributed, Hadoop 2.6.

$ ./runstreaming.sh
18/01/26 13:30:27 WARN streaming.StreamJob: -file option is deprecated, please use generic option -files instead.
packageJobJar: [./mapper.py, ./reducer.py, ./wordwhite, /tmp/hadoop-unjar7204532228900236640/] [] /tmp/streamjob7580948745512643345.jar tmpDir=null
18/01/26 13:30:29 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/01/26 13:30:29 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/01/26 13:30:31 INFO mapred.FileInputFormat: Total input paths to process : 1
18/01/26 13:30:31 INFO mapreduce.JobSubmitter: number of splits:2
18/01/26 13:30:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1516345010544_0008
18/01/26 13:30:32 INFO impl.YarnClientImpl: Submitted application application_1516345010544_0008
18/01/26 13:30:32 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1516345010544_0008/
18/01/26 13:30:32 INFO mapreduce.Job: Running job: job_1516345010544_0008
18/01/26 13:30:40 INFO mapreduce.Job: Job job_1516345010544_0008 running in uber mode : false
18/01/26 13:30:40 INFO mapreduce.Job:  map 0% reduce 0%
18/01/26 13:30:50 INFO mapreduce.Job:  map 50% reduce 0%
18/01/26 13:30:51 INFO mapreduce.Job:  map 100% reduce 0%
18/01/26 13:30:58 INFO mapreduce.Job:  map 100% reduce 100%
18/01/26 13:30:59 INFO mapreduce.Job: Job job_1516345010544_0008 completed successfully
18/01/26 13:30:59 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=73950
        FILE: Number of bytes written=582815
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=636501
        HDFS: Number of bytes written=27
        HDFS: Number of read operations=9
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters
        Launched map tasks=2
        Launched reduce tasks=1
        Data-local map tasks=2
        Total time spent by all maps in occupied slots (ms)=12815
        Total time spent by all reduces in occupied slots (ms)=5251
        Total time spent by all map tasks (ms)=12815
        Total time spent by all reduce tasks (ms)=5251
        Total vcore-milliseconds taken by all map tasks=12815
        Total vcore-milliseconds taken by all reduce tasks=5251
        Total megabyte-milliseconds taken by all map tasks=13122560
        Total megabyte-milliseconds taken by all reduce tasks=5377024
    Map-Reduce Framework
        Map input records=2866
        Map output records=9243
        Map output bytes=55458
        Map output materialized bytes=73956
        Input split bytes=198
        Combine input records=0
        Combine output records=0
        Reduce input groups=3
        Reduce shuffle bytes=73956
        Reduce input records=9243
        Reduce output records=3
        Spilled Records=18486
        Shuffled Maps =2
        Failed Shuffles=0
        Merged Map outputs=2
        GC time elapsed (ms)=332
        CPU time spent (ms)=3700
        Physical memory (bytes) snapshot=707719168
        Virtual memory (bytes) snapshot=8333037568
        Total committed heap usage (bytes)=598736896
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=636303
    File Output Format Counters
        Bytes Written=27
18/01/26 13:30:59 INFO streaming.StreamJob: Output directory: /output/wordcount/wordwhitetest

Checking the results:

$ hadoop fs -ls /output/wordcount/wordwhitetest/
Found 2 items
-rw-r--r--   1 centos supergroup    0 2018-01-26 13:30 /output/wordcount/wordwhitetest/_SUCCESS
-rw-r--r--   1 centos supergroup   27 2018-01-26 13:30 /output/wordcount/wordwhitetest/part-00000
$ hadoop fs -text /output/wordcount/wordwhitetest/part-00000
and 2573
had 1526
the 5144

That completes the wordcount for the specified words.

2. Hadoop streaming syntax reference: http://blog.51cto.com/balich/2065419

Reposted from the 51CTO blog of 巴利奇: http://blog.51cto.com/balich/2065424
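Outside Hadoop, the mapper/reducer pair in this section boils down to a filtered word count. A Python 3 rendering of the same end-to-end computation (the sample lines are illustrative, not from The_Man_of_Property):

```python
from collections import Counter

def whitelist_wordcount(lines, wordwhite):
    """What the streaming mapper + reducer compute end to end:
    counts of whitespace-separated words that appear in the whitelist."""
    counts = Counter()
    for line in lines:
        for word in line.split():
            if word in wordwhite:
                counts[word] += 1
    return dict(counts)

sample = ["the cat and the dog", "he had had enough"]
print(whitelist_wordcount(sample, {"the", "and", "had"}))
# → {'the': 2, 'and': 1, 'had': 2}
```

The streaming job merely distributes this logic: the mapper emits (word, 1) pairs for whitelisted words and the sort-and-reduce phase sums them per word.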


Hadoop MapReduce development practice: HDFS file distribution with streaming

1. Distributing an HDFS file (-cacheFile)

Requirement: wordcount counting only the specified words, but the whitelist file is very large; upload it to HDFS first, then distribute it with -cacheFile.

-cachefile hdfs://host:port/path/to/file#linkname  # caches the file on the compute nodes; the streaming program accesses it as ./linkname.

Approach: neither the mapper nor the reducer needs to change; just use -cacheFile to point at the HDFS file when running the streaming job.

1.1 Streaming command format

$HADOOP_HOME/bin/hadoop jar hadoop-streaming.jar \
    -jobconf mapred.job.name="streaming_wordcount" \
    -jobconf mapred.job.priority=3 \
    -input /input/ \
    -output /output/ \
    -mapper "python mapper.py whc" \
    -reducer "python reducer.py" \
    -cacheFile "hdfs://master:9000/cache_file/wordwhite#whc" \
    -file ./mapper.py \
    -file ./reducer.py

Note: in -cacheFile "hdfs://master:9000/cache_file/wordwhite#whc", whc is the alias of the HDFS file on the compute node; in -mapper "python mapper.py whc" it is used exactly like a local file.

1.2 Upload wordwhite

$ hadoop fs -mkdir /input/cachefile
$ hadoop fs -put wordwhite /input/cachefile
$ hadoop fs -ls /input/cachefile
Found 1 items
-rw-r--r--   1 hadoop supergroup   12 2018-01-26 15:02 /input/cachefile/wordwhite
$ hadoop fs -text hdfs://localhost:9000/input/cachefile/wordwhite
the
and
had

1.3 The run_streaming script

The mapper and reducer programs are the same as in the local-distribution example.

$ vim runstreaming_cachefile.sh
#!/bin/bash
HADOOP_CMD="/home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.13.0/bin/hadoop"
STREAM_JAR_PATH="/home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.13.0/share/hadoop/tools/lib/hadoop-streaming-2.6.0-cdh5.13.0.jar"
INPUT_FILE_PATH="/input/The_Man_of_Property"
OUTPUT_FILE_PATH="/output/wordcount/wordwhitecachefiletest"

$HADOOP_CMD jar $STREAM_JAR_PATH \
    -input $INPUT_FILE_PATH \
    -output $OUTPUT_FILE_PATH \
    -jobconf "mapred.job.name=wordcount_wordwhite_cachefile_demo" \
    -mapper "python mapper.py WHF" \
    -reducer "python reducer.py" \
    -cacheFile "hdfs://localhost:9000/input/cachefile/wordwhite#WHF" \
    -file ./mapper.py \
    -file ./reducer.py

1.4 Run it

$ ./runstreaming_cachefile.sh
18/01/26 15:38:27 WARN streaming.StreamJob: -file option is deprecated, please use generic option -files instead.
18/01/26 15:38:28 WARN streaming.StreamJob: -cacheFile option is deprecated, please use -files instead.
18/01/26 15:38:28 WARN streaming.StreamJob: -jobconf option is deprecated, please use -D instead.
18/01/26 15:38:28 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
packageJobJar: [./mapper.py, ./reducer.py, /tmp/hadoop-unjar1709565523181962236/] [] /tmp/streamjob6164905989972408041.jar tmpDir=null
18/01/26 15:38:29 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/01/26 15:38:29 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/01/26 15:38:31 INFO mapred.FileInputFormat: Total input paths to process : 1
18/01/26 15:38:31 INFO mapreduce.JobSubmitter: number of splits:2
18/01/26 15:38:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1516345010544_0012
18/01/26 15:38:32 INFO impl.YarnClientImpl: Submitted application application_1516345010544_0012
18/01/26 15:38:32 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1516345010544_0012/
18/01/26 15:38:32 INFO mapreduce.Job: Running job: job_1516345010544_0012
18/01/26 15:38:40 INFO mapreduce.Job: Job job_1516345010544_0012 running in uber mode : false
18/01/26 15:38:40 INFO mapreduce.Job:  map 0% reduce 0%
18/01/26 15:38:49 INFO mapreduce.Job:  map 50% reduce 0%
18/01/26 15:38:50 INFO mapreduce.Job:  map 100% reduce 0%
18/01/26 15:38:57 INFO mapreduce.Job:  map 100% reduce 100%
18/01/26 15:38:57 INFO mapreduce.Job: Job job_1516345010544_0012 completed successfully
18/01/26 15:38:57 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=73950
        FILE: Number of bytes written=582590
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=636501
        HDFS: Number of bytes written=27
        HDFS: Number of read operations=9
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters
        Launched map tasks=2
        Launched reduce tasks=1
        Data-local map tasks=2
        Total time spent by all maps in occupied slots (ms)=12921
        Total time spent by all reduces in occupied slots (ms)=5641
        Total time spent by all map tasks (ms)=12921
        Total time spent by all reduce tasks (ms)=5641
        Total vcore-milliseconds taken by all map tasks=12921
        Total vcore-milliseconds taken by all reduce tasks=5641
        Total megabyte-milliseconds taken by all map tasks=13231104
        Total megabyte-milliseconds taken by all reduce tasks=5776384
    Map-Reduce Framework
        Map input records=2866
        Map output records=9243
        Map output bytes=55458
        Map output materialized bytes=73956
        Input split bytes=198
        Combine input records=0
        Combine output records=0
        Reduce input groups=3
        Reduce shuffle bytes=73956
        Reduce input records=9243
        Reduce output records=3
        Spilled Records=18486
        Shuffled Maps =2
        Failed Shuffles=0
        Merged Map outputs=2
        GC time elapsed (ms)=360
        CPU time spent (ms)=3910
        Physical memory (bytes) snapshot=719896576
        Virtual memory (bytes) snapshot=8331550720
        Total committed heap usage (bytes)=602931200
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=636303
    File Output Format Counters
        Bytes Written=27
18/01/26 15:38:57 INFO streaming.StreamJob: Output directory: /output/wordcount/wordwhitecachefiletest

1.5 Check the results

$ hadoop fs -ls /output/wordcount/wordwhitecachefiletest
Found 2 items
-rw-r--r--   1 hadoop supergroup    0 2018-01-26 15:38 /output/wordcount/wordwhitecachefiletest/_SUCCESS
-rw-r--r--   1 hadoop supergroup   27 2018-01-26 15:38 /output/wordcount/wordwhitecachefiletest/part-00000
$ hadoop fs -text /output/wordcount/wordwhitecachefiletest/part-00000
and 2573
had 1526
the 5144

That completes the wordcount for the specified words with the whitelist distributed from HDFS.

2. Hadoop streaming syntax reference: http://blog.51cto.com/balich/2065419

Reposted from the 51CTO blog of 巴利奇: http://blog.51cto.com/balich/2065812
