Hadoop MapReduce Programming, API Primer Series: Inverted Index (Part 24)

2016-12-12 21:54:04,509 INFO [org.apache.hadoop.metrics.jvm.JvmMetrics] - Initializing JVM Metrics with processName=JobTracker, sessionId=
2016-12-12 21:54:05,166 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-12-12 21:54:05,169 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-12-12 21:54:05,477 INFO [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] - Total input paths to process : 3
2016-12-12 21:54:05,539 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - number of splits:3
2016-12-12 21:54:05,810 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - Submitting tokens for job: job_local1000661716_0001
2016-12-12 21:54:06,184 INFO [org.apache.hadoop.mapreduce.Job] - The url to track the job: http://localhost:8080/
2016-12-12 21:54:06,185 INFO [org.apache.hadoop.mapreduce.Job] - Running job: job_local1000661716_0001
2016-12-12 21:54:06,193 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter set in config null
2016-12-12 21:54:06,220 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2016-12-12 21:54:06,297 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for map tasks
2016-12-12 21:54:06,314 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local1000661716_0001_m_000000_0
2016-12-12 21:54:06,374 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-12 21:54:06,433 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@6b4d160c
2016-12-12 21:54:06,441 INFO [org.apache.hadoop.mapred.MapTask] - Processing split: file:/D:/Code/MyEclipseJavaCode/myMapReduce/data/inverseIndex/b.txt:0+35
2016-12-12 21:54:06,515 INFO [org.apache.hadoop.mapred.MapTask] - (EQUATOR) 0 kvi 26214396(104857584)
2016-12-12 21:54:06,516 INFO [org.apache.hadoop.mapred.MapTask] - mapreduce.task.io.sort.mb: 100
2016-12-12 21:54:06,517 INFO [org.apache.hadoop.mapred.MapTask] - soft limit at 83886080
2016-12-12 21:54:06,517 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufvoid = 104857600
2016-12-12 21:54:06,517 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396; length = 6553600
2016-12-12 21:54:06,544 INFO [org.apache.hadoop.mapred.MapTask] - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2016-12-12 21:54:06,567 INFO [org.apache.hadoop.mapred.LocalJobRunner] -
2016-12-12 21:54:06,567 INFO [org.apache.hadoop.mapred.MapTask] - Starting flush of map output
2016-12-12 21:54:06,567 INFO [org.apache.hadoop.mapred.MapTask] - Spilling map output
2016-12-12 21:54:06,568 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufend = 130; bufvoid = 104857600
2016-12-12 21:54:06,568 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396(104857584); kvend = 26214376(104857504); length = 21/6553600
2016-12-12 21:54:06,590 INFO [org.apache.hadoop.mapred.MapTask] - Finished spill 0
2016-12-12 21:54:06,599 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local1000661716_0001_m_000000_0 is done. And is in the process of committing
2016-12-12 21:54:06,631 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map
2016-12-12 21:54:06,631 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local1000661716_0001_m_000000_0' done.
2016-12-12 21:54:06,631 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local1000661716_0001_m_000000_0
2016-12-12 21:54:06,631 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local1000661716_0001_m_000001_0
2016-12-12 21:54:06,637 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-12 21:54:06,687 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@418b04a5
2016-12-12 21:54:06,691 INFO [org.apache.hadoop.mapred.MapTask] - Processing split: file:/D:/Code/MyEclipseJavaCode/myMapReduce/data/inverseIndex/a.txt:0+33
2016-12-12 21:54:06,742 INFO [org.apache.hadoop.mapred.MapTask] - (EQUATOR) 0 kvi 26214396(104857584)
2016-12-12 21:54:06,742 INFO [org.apache.hadoop.mapred.MapTask] - mapreduce.task.io.sort.mb: 100
2016-12-12 21:54:06,742 INFO [org.apache.hadoop.mapred.MapTask] - soft limit at 83886080
2016-12-12 21:54:06,742 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufvoid = 104857600
2016-12-12 21:54:06,743 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396; length = 6553600
2016-12-12 21:54:06,744 INFO [org.apache.hadoop.mapred.MapTask] - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2016-12-12 21:54:06,747 INFO [org.apache.hadoop.mapred.LocalJobRunner] -
2016-12-12 21:54:06,748 INFO [org.apache.hadoop.mapred.MapTask] - Starting flush of map output
2016-12-12 21:54:06,748 INFO [org.apache.hadoop.mapred.MapTask] - Spilling map output
2016-12-12 21:54:06,748 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufend = 128; bufvoid = 104857600
2016-12-12 21:54:06,748 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396(104857584); kvend = 26214376(104857504); length = 21/6553600
2016-12-12 21:54:06,756 INFO [org.apache.hadoop.mapred.MapTask] - Finished spill 0
2016-12-12 21:54:06,761 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local1000661716_0001_m_000001_0 is done. And is in the process of committing
2016-12-12 21:54:06,766 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map
2016-12-12 21:54:06,766 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local1000661716_0001_m_000001_0' done.
2016-12-12 21:54:06,766 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local1000661716_0001_m_000001_0
2016-12-12 21:54:06,766 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local1000661716_0001_m_000002_0
2016-12-12 21:54:06,769 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-12 21:54:06,797 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@30616f6c
2016-12-12 21:54:06,800 INFO [org.apache.hadoop.mapred.MapTask] - Processing split: file:/D:/Code/MyEclipseJavaCode/myMapReduce/data/inverseIndex/c.txt:0+22
2016-12-12 21:54:06,879 INFO [org.apache.hadoop.mapred.MapTask] - (EQUATOR) 0 kvi 26214396(104857584)
2016-12-12 21:54:06,879 INFO [org.apache.hadoop.mapred.MapTask] - mapreduce.task.io.sort.mb: 100
2016-12-12 21:54:06,879 INFO [org.apache.hadoop.mapred.MapTask] - soft limit at 83886080
2016-12-12 21:54:06,880 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufvoid = 104857600
2016-12-12 21:54:06,880 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396; length = 6553600
2016-12-12 21:54:06,881 INFO [org.apache.hadoop.mapred.MapTask] - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2016-12-12 21:54:06,884 INFO [org.apache.hadoop.mapred.LocalJobRunner] -
2016-12-12 21:54:06,884 INFO [org.apache.hadoop.mapred.MapTask] - Starting flush of map output
2016-12-12 21:54:06,884 INFO [org.apache.hadoop.mapred.MapTask] - Spilling map output
2016-12-12 21:54:06,884 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufend = 86; bufvoid = 104857600
2016-12-12 21:54:06,884 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396(104857584); kvend = 26214384(104857536); length = 13/6553600
2016-12-12 21:54:06,891 INFO [org.apache.hadoop.mapred.MapTask] - Finished spill 0
2016-12-12 21:54:06,895 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local1000661716_0001_m_000002_0 is done. And is in the process of committing
2016-12-12 21:54:06,898 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map
2016-12-12 21:54:06,898 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local1000661716_0001_m_000002_0' done.
2016-12-12 21:54:06,899 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local1000661716_0001_m_000002_0
2016-12-12 21:54:06,899 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map task executor complete.
2016-12-12 21:54:06,903 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for reduce tasks
2016-12-12 21:54:06,903 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local1000661716_0001_r_000000_0
2016-12-12 21:54:06,917 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-12 21:54:06,948 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@43234903
2016-12-12 21:54:06,954 INFO [org.apache.hadoop.mapred.ReduceTask] - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@a609d4
2016-12-12 21:54:06,979 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - MergerManager: memoryLimit=1327077760, maxSingleShuffleLimit=331769440, mergeThreshold=875871360, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2016-12-12 21:54:06,996 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - attempt_local1000661716_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2016-12-12 21:54:07,040 INFO [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] - localfetcher#1 about to shuffle output of map attempt_local1000661716_0001_m_000000_0 decomp: 144 len: 148 to MEMORY
2016-12-12 21:54:07,052 INFO [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] - Read 144 bytes from map-output for attempt_local1000661716_0001_m_000000_0
2016-12-12 21:54:07,099 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - closeInMemoryFile -> map-output of size: 144, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->144
2016-12-12 21:54:07,103 INFO [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] - localfetcher#1 about to shuffle output of map attempt_local1000661716_0001_m_000001_0 decomp: 142 len: 146 to MEMORY
2016-12-12 21:54:07,105 INFO [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] - Read 142 bytes from map-output for attempt_local1000661716_0001_m_000001_0
2016-12-12 21:54:07,105 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - closeInMemoryFile -> map-output of size: 142, inMemoryMapOutputs.size() -> 2, commitMemory -> 144, usedMemory ->286
2016-12-12 21:54:07,110 INFO [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] - localfetcher#1 about to shuffle output of map attempt_local1000661716_0001_m_000002_0 decomp: 96 len: 100 to MEMORY
2016-12-12 21:54:07,112 INFO [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] - Read 96 bytes from map-output for attempt_local1000661716_0001_m_000002_0
2016-12-12 21:54:07,112 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - closeInMemoryFile -> map-output of size: 96, inMemoryMapOutputs.size() -> 3, commitMemory -> 286, usedMemory ->382
2016-12-12 21:54:07,113 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - EventFetcher is interrupted.. Returning
2016-12-12 21:54:07,114 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 3 / 3 copied.
2016-12-12 21:54:07,115 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - finalMerge called with 3 in-memory map-outputs and 0 on-disk map-outputs
2016-12-12 21:54:07,130 INFO [org.apache.hadoop.mapred.Merger] - Merging 3 sorted segments
2016-12-12 21:54:07,131 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 3 segments left of total size: 334 bytes
2016-12-12 21:54:07,133 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merged 3 segments, 382 bytes to disk to satisfy reduce memory limit
2016-12-12 21:54:07,133 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 1 files, 382 bytes from disk
2016-12-12 21:54:07,134 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 0 segments, 0 bytes from memory into reduce
2016-12-12 21:54:07,134 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-12 21:54:07,136 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 362 bytes
2016-12-12 21:54:07,136 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 3 / 3 copied.
2016-12-12 21:54:07,144 INFO [org.apache.hadoop.conf.Configuration.deprecation] - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2016-12-12 21:54:07,163 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local1000661716_0001_r_000000_0 is done. And is in the process of committing
2016-12-12 21:54:07,166 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 3 / 3 copied.
2016-12-12 21:54:07,166 INFO [org.apache.hadoop.mapred.Task] - Task attempt_local1000661716_0001_r_000000_0 is allowed to commit now
2016-12-12 21:54:07,172 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - Saved output of task 'attempt_local1000661716_0001_r_000000_0' to file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/InverseIndexStepOne/_temporary/0/task_local1000661716_0001_r_000000
2016-12-12 21:54:07,173 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce > reduce
2016-12-12 21:54:07,173 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local1000661716_0001_r_000000_0' done.
2016-12-12 21:54:07,174 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local1000661716_0001_r_000000_0
2016-12-12 21:54:07,174 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce task executor complete.
2016-12-12 21:54:07,189 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local1000661716_0001 running in uber mode : false
2016-12-12 21:54:07,191 INFO [org.apache.hadoop.mapreduce.Job] - map 100% reduce 100%
2016-12-12 21:54:07,193 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local1000661716_0001 completed successfully
2016-12-12 21:54:07,223 INFO [org.apache.hadoop.mapreduce.Job] - Counters: 33
	File System Counters
		FILE: Number of bytes read=5146
		FILE: Number of bytes written=777798
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
	Map-Reduce Framework
		Map input records=8
		Map output records=16
		Map output bytes=344
		Map output materialized bytes=394
		Input split bytes=396
		Combine input records=0
		Combine output records=0
		Reduce input groups=9
		Reduce shuffle bytes=394
		Reduce input records=16
		Reduce output records=9
		Spilled Records=32
		Shuffled Maps =3
		Failed Shuffles=0
		Merged Map outputs=3
		GC time elapsed (ms)=0
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
		Total committed heap usage (bytes)=1460142080
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=90
	File Output Format Counters
		Bytes Written=150
2016-12-12 21:55:03,523 INFO [org.apache.hadoop.metrics.jvm.JvmMetrics] - Initializing JVM Metrics with processName=JobTracker, sessionId=
2016-12-12 21:55:05,038 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-12-12 21:55:05,044 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-12-12 21:55:05,350 INFO [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] - Total input paths to process : 1
2016-12-12 21:55:05,428 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - number of splits:1
2016-12-12 21:55:05,846 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - Submitting tokens for job: job_local549789154_0001
2016-12-12 21:55:06,425 INFO [org.apache.hadoop.mapreduce.Job] - The url to track the job: http://localhost:8080/
2016-12-12 21:55:06,427 INFO [org.apache.hadoop.mapreduce.Job] - Running job: job_local549789154_0001
2016-12-12 21:55:06,488 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter set in config null
2016-12-12 21:55:06,510 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2016-12-12 21:55:06,605 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for map tasks
2016-12-12 21:55:06,609 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local549789154_0001_m_000000_0
2016-12-12 21:55:06,691 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-12 21:55:06,728 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@550aaabb
2016-12-12 21:55:06,738 INFO [org.apache.hadoop.mapred.MapTask] - Processing split: file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/InverseIndexStepOne/part-r-00000:0+138
2016-12-12 21:55:06,821 INFO [org.apache.hadoop.mapred.MapTask] - (EQUATOR) 0 kvi 26214396(104857584)
2016-12-12 21:55:06,821 INFO [org.apache.hadoop.mapred.MapTask] - mapreduce.task.io.sort.mb: 100
2016-12-12 21:55:06,821 INFO [org.apache.hadoop.mapred.MapTask] - soft limit at 83886080
2016-12-12 21:55:06,821 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufvoid = 104857600
2016-12-12 21:55:06,821 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396; length = 6553600
2016-12-12 21:55:06,828 INFO [org.apache.hadoop.mapred.MapTask] - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2016-12-12 21:55:06,851 INFO [org.apache.hadoop.mapred.LocalJobRunner] -
2016-12-12 21:55:06,852 INFO [org.apache.hadoop.mapred.MapTask] - Starting flush of map output
2016-12-12 21:55:06,852 INFO [org.apache.hadoop.mapred.MapTask] - Spilling map output
2016-12-12 21:55:06,852 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufend = 138; bufvoid = 104857600
2016-12-12 21:55:06,852 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396(104857584); kvend = 26214364(104857456); length = 33/6553600
2016-12-12 21:55:06,882 INFO [org.apache.hadoop.mapred.MapTask] - Finished spill 0
2016-12-12 21:55:06,895 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local549789154_0001_m_000000_0 is done. And is in the process of committing
2016-12-12 21:55:06,919 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map
2016-12-12 21:55:06,920 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local549789154_0001_m_000000_0' done.
2016-12-12 21:55:06,920 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local549789154_0001_m_000000_0
2016-12-12 21:55:06,921 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map task executor complete.
2016-12-12 21:55:06,927 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for reduce tasks
2016-12-12 21:55:06,928 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local549789154_0001_r_000000_0
2016-12-12 21:55:06,948 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-12 21:55:06,996 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@1c50c5b8
2016-12-12 21:55:07,002 INFO [org.apache.hadoop.mapred.ReduceTask] - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@311e2a2d
2016-12-12 21:55:07,024 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - MergerManager: memoryLimit=1327077760, maxSingleShuffleLimit=331769440, mergeThreshold=875871360, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2016-12-12 21:55:07,029 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - attempt_local549789154_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2016-12-12 21:55:07,073 INFO [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] - localfetcher#1 about to shuffle output of map attempt_local549789154_0001_m_000000_0 decomp: 158 len: 162 to MEMORY
2016-12-12 21:55:07,079 INFO [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] - Read 158 bytes from map-output for attempt_local549789154_0001_m_000000_0
2016-12-12 21:55:07,154 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - closeInMemoryFile -> map-output of size: 158, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->158
2016-12-12 21:55:07,156 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - EventFetcher is interrupted.. Returning
2016-12-12 21:55:07,157 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-12 21:55:07,158 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2016-12-12 21:55:07,173 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-12 21:55:07,173 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 150 bytes
2016-12-12 21:55:07,175 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merged 1 segments, 158 bytes to disk to satisfy reduce memory limit
2016-12-12 21:55:07,176 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 1 files, 162 bytes from disk
2016-12-12 21:55:07,177 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 0 segments, 0 bytes from memory into reduce
2016-12-12 21:55:07,177 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-12 21:55:07,179 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 150 bytes
2016-12-12 21:55:07,180 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-12 21:55:07,188 INFO [org.apache.hadoop.conf.Configuration.deprecation] - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2016-12-12 21:55:07,202 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local549789154_0001_r_000000_0 is done. And is in the process of committing
2016-12-12 21:55:07,206 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-12 21:55:07,206 INFO [org.apache.hadoop.mapred.Task] - Task attempt_local549789154_0001_r_000000_0 is allowed to commit now
2016-12-12 21:55:07,217 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - Saved output of task 'attempt_local549789154_0001_r_000000_0' to file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/InverseIndexStepTwo/_temporary/0/task_local549789154_0001_r_000000
2016-12-12 21:55:07,219 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce > reduce
2016-12-12 21:55:07,219 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local549789154_0001_r_000000_0' done.
2016-12-12 21:55:07,219 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local549789154_0001_r_000000_0
2016-12-12 21:55:07,223 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce task executor complete.
2016-12-12 21:55:07,431 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local549789154_0001 running in uber mode : false
2016-12-12 21:55:07,433 INFO [org.apache.hadoop.mapreduce.Job] - map 100% reduce 100%
2016-12-12 21:55:07,435 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local549789154_0001 completed successfully
2016-12-12 21:55:07,453 INFO [org.apache.hadoop.mapreduce.Job] - Counters: 33
	File System Counters
		FILE: Number of bytes read=1072
		FILE: Number of bytes written=386015
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
	Map-Reduce Framework
		Map input records=9
		Map output records=9
		Map output bytes=138
		Map output materialized bytes=162
		Input split bytes=145
		Combine input records=0
		Combine output records=0
		Reduce input groups=3
		Reduce shuffle bytes=162
		Reduce input records=9
		Reduce output records=3
		Spilled Records=18
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=0
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
		Total committed heap usage (bytes)=466616320
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=158
	File Output Format Counters
		Bytes Written=121

Code

package zhouls.bigdata.myMapReduce.InverseIndex;

import java.io.IOException;

import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * Inverted index, step-one job.
 */
public class InverseIndexStepOne {

    public static class StepOneMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Take one line of input
            String line = value.toString();
            // Split it into words
            String[] fields = StringUtils.split(line, " ");
            // Get the file split this line belongs to
            FileSplit inputSplit = (FileSplit) context.getInputSplit();
            // Get the file name from the split
            String fileName = inputSplit.getPath().getName();
            for (String field : fields) {
                // Emit the kv pair, k: "hello-->a.txt"  v: 1
                context.write(new Text(field + "-->" + fileName), new LongWritable(1));
            }
        }
    }

    public static class StepOneReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        // Input: <hello-->a.txt, {1,1,1,...}>
        @Override
        protected void reduce(Text key, Iterable<LongWritable> values, Context context)
                throws IOException, InterruptedException {
            long counter = 0;
            for (LongWritable value : values) {
                counter += value.get();
            }
            context.write(key, new LongWritable(counter));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        job.setJarByClass(InverseIndexStepOne.class);
        job.setMapperClass(StepOneMapper.class);
        job.setReducerClass(StepOneReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        // HDFS variant:
        // FileInputFormat.setInputPaths(job, new Path("hdfs://HadoopMaster:9000/inverseIndex/"));
        // Path output = new Path("hdfs://HadoopMaster:9000/out/InverseIndexStepOne/");

        FileInputFormat.setInputPaths(job, new Path("./data/inverseIndex/"));

        // If the given output path already exists, delete it first
        Path output = new Path("./out/InverseIndexStepOne");
        FileSystem fs = FileSystem.get(conf);
        if (fs.exists(output)) {
            fs.delete(output, true);
        }
        FileOutputFormat.setOutputPath(job, output);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

package zhouls.bigdata.myMapReduce.InverseIndex;

import java.io.IOException;

import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import zhouls.bigdata.myMapReduce.InverseIndex.InverseIndexStepOne.StepOneMapper;
import zhouls.bigdata.myMapReduce.InverseIndex.InverseIndexStepOne.StepOneReducer;

public class InverseIndexStepTwo {

    public static class StepTwoMapper extends Mapper<LongWritable, Text, Text, Text> {
        // k: byte offset of the line  v: "hello-->a.txt	3"
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            String[] fields = StringUtils.split(line, "\t");
            String[] wordAndFileName = StringUtils.split(fields[0], "-->");
            String word = wordAndFileName[0];
            String fileName = wordAndFileName[1];
            long count = Long.parseLong(fields[1]);
            // Map output looks like <hello, a.txt-->3>
            context.write(new Text(word), new Text(fileName + "-->" + count));
        }
    }

    public static class StepTwoReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            // Input: <hello, {a.txt-->3, b.txt-->2, c.txt-->1}>
            String result = "";
            for (Text value : values) {
                result += value + " ";
            }
            // Output: k: hello  v: "a.txt-->3 b.txt-->2 c.txt-->1"
            context.write(key, new Text(result));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // The step-one job could be built here first:
        // Job jobOne = Job.getInstance(conf);
        // jobOne.setJarByClass(InverseIndexStepTwo.class);
        // jobOne.setMapperClass(StepOneMapper.class);
        // jobOne.setReducerClass(StepOneReducer.class);
        // ......

        // Build the step-two job
        Job jobTwo = Job.getInstance(conf);
        jobTwo.setJarByClass(InverseIndexStepTwo.class);
        jobTwo.setMapperClass(StepTwoMapper.class);
        jobTwo.setReducerClass(StepTwoReducer.class);
        jobTwo.setOutputKeyClass(Text.class);
        jobTwo.setOutputValueClass(Text.class);

        // HDFS variant:
        // FileInputFormat.setInputPaths(jobTwo, new Path("hdfs://HadoopMaster:9000/out/InverseIndexStepOne/"));
        // Path output = new Path("hdfs://HadoopMaster:9000/out/InverseIndexStepTwo/");

        FileInputFormat.setInputPaths(jobTwo, new Path("./out/InverseIndexStepOne"));

        // If the given output path already exists, delete it first
        Path output = new Path("./out/InverseIndexStepTwo");
        FileSystem fs = FileSystem.get(conf);
        if (fs.exists(output)) {
            fs.delete(output, true);
        }
        FileOutputFormat.setOutputPath(jobTwo, output);

        // Submit jobOne first, then jobTwo:
        // boolean oneResult = jobOne.waitForCompletion(true);
        // if (oneResult) {
        System.exit(jobTwo.waitForCompletion(true) ? 0 : 1);
        // }
    }
}

This article is reposted from the cnblogs blog 大数据躺过的坑; original link: http://www.cnblogs.com/zlslch/p/6166185.html. Please contact the original author before reposting.
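To see what the two chained jobs compute without running Hadoop at all, the same two passes can be condensed into a pure-JDK sketch over in-memory data. The class name, method names, and sample inputs below are hypothetical (not from the original post); only the key shapes ("word-->file" => count, then word => "file-->count ...") mirror the jobs above.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

public class InverseIndexSketch {

    // Step one: for each (file, text), count occurrences of each word per file,
    // keyed exactly like StepOneMapper's output: "word-->fileName" => count
    static Map<String, Long> stepOne(Map<String, String> files) {
        Map<String, Long> counts = new TreeMap<>();
        for (Map.Entry<String, String> e : files.entrySet()) {
            for (String word : e.getValue().split(" ")) {
                counts.merge(word + "-->" + e.getKey(), 1L, Long::sum);
            }
        }
        return counts;
    }

    // Step two: regroup by word, concatenating "fileName-->count" postings,
    // like StepTwoMapper splitting on "-->" plus StepTwoReducer's concatenation
    static Map<String, String> stepTwo(Map<String, Long> counts) {
        Map<String, String> index = new TreeMap<>();
        for (Map.Entry<String, Long> e : counts.entrySet()) {
            String[] parts = e.getKey().split("-->");
            index.merge(parts[0], parts[1] + "-->" + e.getValue(), (a, b) -> a + " " + b);
        }
        return index;
    }

    public static void main(String[] args) {
        Map<String, String> files = new LinkedHashMap<>();
        files.put("a.txt", "hello tom hello jerry");
        files.put("b.txt", "hello jerry");
        // prints {hello=a.txt-->2 b.txt-->1, jerry=a.txt-->1 b.txt-->1, tom=a.txt-->1}
        System.out.println(stepTwo(stepOne(files)));
    }
}
```

The second pass needs the first pass's composite key, which is the whole reason the MapReduce version is two jobs: a single job cannot regroup its own reduce output by a different key.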


Hadoop MapReduce Programming, API Primer Series: Multiple MapReduce Input Formats (Part 17)

Code

package zhouls.bigdata.myMapReduce.ScoreCount;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

/**
 * Read/write type for student scores. (Same pattern as TVPlayData.)
 * Sample record: 19020090017 小讲 90 99 100 89 95
 * @author Bertron
 * A custom ScoreWritable class implements the WritableComparable interface
 * and wraps a student's subject scores.
 */
public class ScoreWritable implements WritableComparable<Object> {
    // Note: Hadoop serializes through the Writable interface, which provides
    // no comparison support, so it is combined with Java's Comparable into a
    // single interface, WritableComparable (for custom comparisons).
    // Writable provides two methods: write and readFields.

    private float Chinese;
    private float Math;
    private float English;
    private float Physics;
    private float Chemistry;

    // Q: We define a parameterized constructor below, so why must the no-arg
    //    constructor still be written out explicitly?
    // A: A default no-arg constructor exists only while no other constructor
    //    is declared; once a parameterized one is added, the no-arg one must
    //    be declared explicitly or compilation of code that needs it fails.
    //    Every custom Hadoop type must have a no-arg constructor, because the
    //    framework uses it when instantiating objects during deserialization.
    public ScoreWritable() {}

    // The parameterized constructor initializes the object at creation time;
    // it exists to make initialization more flexible for callers (those who
    // do not need custom values can use the no-arg one).
    public ScoreWritable(float Chinese, float Math, float English, float Physics, float Chemistry) {
        this.Chinese = Chinese;
        this.Math = Math;
        this.English = English;
        this.Physics = Physics;
        this.Chemistry = Chemistry;
    }

    // A single set(...) can assign all fields at once, but a single get()
    // cannot return them all (a method has only one return value), so the
    // getters must stay per-field:
    public void set(float Chinese, float Math, float English, float Physics, float Chemistry) {
        this.Chinese = Chinese; // assigns the parameter to the field
        this.Math = Math;
        this.English = English;
        this.Physics = Physics;
        this.Chemistry = Chemistry;
    }

    public float getChinese() { // getter: returns a value, hence the float return type
        return Chinese;
    }
    public void setChinese(float Chinese) { // setter: returns nothing, hence void
        this.Chinese = Chinese;
    }
    public float getMath() {
        return Math;
    }
    public void setMath(float Math) {
        this.Math = Math;
    }
    public float getEnglish() {
        return English;
    }
    public void setEnglish(float English) {
        this.English = English;
    }
    public float getPhysics() {
        return Physics;
    }
    public void setPhysics(float Physics) {
        this.Physics = Physics;
    }
    public float getChemistry() {
        return Chemistry;
    }
    public void setChemistry(float Chemistry) {
        this.Chemistry = Chemistry;
    }

    // readFields: deserialization -- reads the byte stream from `in` back
    // into the fields. (write serializes the object to an output stream;
    // readFields is the reverse. Worth memorizing.)
    public void readFields(DataInput in) throws IOException {
        Chinese = in.readFloat(); // the fields are float, hence readFloat()
        Math = in.readFloat();
        English = in.readFloat();
        Physics = in.readFloat();
        Chemistry = in.readFloat();
        // other DataInput methods: readByte(), readChar(), readDouble(),
        // readLine(), readLong(), readShort()
    }

    // write: serialization -- converts the object to a byte stream and writes
    // it to `out`, so the data can cross the network or be written to a file.
    public void write(DataOutput out) throws IOException {
        out.writeFloat(Chinese); // the fields are float, hence writeFloat()
        out.writeFloat(Math);
        out.writeFloat(English);
        out.writeFloat(Physics);
        out.writeFloat(Chemistry);
        // other DataOutput methods: writeByte(), writeChar(), writeDouble(),
        // writeLong(), writeShort(), writeUTF()
    }

    public int compareTo(Object o) { // Java comparison, as in String.compareTo()
        return 0;
    }

    // Hadoop defines two serialization-related interfaces, Writable and
    // Comparable, which are merged into one interface, WritableComparable.
    // Writable declares two methods, write(DataOutput out) and
    // readFields(DataInput in); any object implementing Comparable can be
    // compared with objects of its own type.

    // The Comparable source is roughly:
    // package java.lang;
    // import java.util.*;
    // public interface Comparable {
    //     /**
    //      * Compare this object with o; by convention a negative result
    //      * means less than, zero means equal, positive means greater.
    //      */
    //     public int compareTo(T o);
    // }

}

package zhouls.bigdata.myMapReduce.ScoreCount;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.util.LineReader;

/**
 * Custom InputFormat for reading student scores
 *
数据格式参考:19020090017 小讲 90 99 100 89 95 19 * @author Bertron 20 */ 21 22 //其实这个程序,就是在实现InputFormat接口,TVPlayInputFormat是InputFormat接口的实现类 23 //比如 ScoreInputFormat extends FileInputFormat implements InputFormat。 24 25 //问:自定义输入格式 ScoreInputFormat 类,首先继承 FileInputFormat,然后分别重写 isSplitable() 方法和 createRecordReader() 方法。 26 27 public class ScoreInputFormat extends FileInputFormat<Text,ScoreWritable > {//自定义数据输入格式,其实这都是模仿源码的!可以去看 28 29 // 线路是: boolean isSplitable() -> RecordReader<Text,ScoreWritable> createRecordReader() -> ScoreRecordReader extends RecordReader<Text, ScoreWritable > 30 31 @Override 32 protected boolean isSplitable(JobContext context, Path filename) {//这是InputFormat的isSplitable方法 33 //isSplitable方法就是是否要切分文件,这个方法显示如果是压缩文件就不切分,非压缩文件就切分。 34 // 如果不允许分割,则isSplitable==false,则将第一个block、文件目录、开始位置为0,长度为整个文件的长度封装到一个InputSplit,加入splits中 35 // 如果文件长度不为0且支持分割,则isSplitable==true,获取block大小,默认是64MB 36 return false; //整个文件封装到一个InputSplit 37 //要么就是return true; //切分64MB大小的一块一块,再封装到InputSplit 38 } 39 40 @Override 41 public RecordReader<Text,ScoreWritable> createRecordReader(InputSplit inputsplit,TaskAttemptContext context) throws IOException, InterruptedException { 42 // RecordReader<k1, v1>是返回类型,返回的RecordReader对象的封装 43 // createRecordReader是方法,在这里是,ScoreInputFormat.createRecordReader。ScoreInputFormat是InputFormat类的实例 44 // InputSplit input和TaskAttemptContext context是传入参数 45 46 // isSplitable(),如果是压缩文件就不切分,整个文件封装到一个InputSplit 47 // isSplitable(),如果是非压缩文件就切,切分64MB大小的一块一块,再封装到InputSplit 48 49 //这里默认是系统实现的的RecordReader,按行读取,下面我们自定义这个类ScoreRecordReader。 50 //类似与Excel、WeiBo、TVPlayData代码写法 51 return new ScoreRecordReader();//新建一个ScoreRecordReader实例,所有才有了上面RecordReader<Text,ScoreWritable>,所以才如下ScoreRecordReader,写我们自己的 52 } 53 54 55 //RecordReader中的两个参数分别填写我们期望返回的key/value类型,我们期望key为Text类型,value为ScoreWritable类型封装学生所有成绩 56 public static class ScoreRecordReader extends RecordReader<Text, ScoreWritable > {//RecordReader<k1, v1>是一个整体 57 public LineReader in;//行读取器 58 public Text 
line;//每行数据类型 59 public Text lineKey;//自定义key类型,即k1 60 public ScoreWritable lineValue;//自定义value类型,即v1 61 62 @Override 63 public void close() throws IOException {//关闭输入流 64 if(in !=null){ 65 in.close(); 66 } 67 } 68 @Override 69 public Text getCurrentKey() throws IOException, InterruptedException {//获取当前的key,即CurrentKey 70 return lineKey;//返回类型是Text,即Text lineKey 71 } 72 @Override 73 public ScoreWritable getCurrentValue() throws IOException,InterruptedException {//获取当前的Value,即CurrentValue 74 return lineValue;//返回类型是ScoreWritable,即ScoreWritable lineValue 75 } 76 @Override 77 public float getProgress() throws IOException, InterruptedException {//获取进程,即Progress 78 return 0;//返回类型是float,即float 0 79 } 80 @Override 81 public void initialize(InputSplit input, TaskAttemptContext context) throws IOException, InterruptedException {//初始化,都是模板 82 FileSplit split=(FileSplit)input; 83 Configuration job=context.getConfiguration(); 84 Path file=split.getPath(); 85 FileSystem fs=file.getFileSystem(job); 86 87 FSDataInputStream filein=fs.open(file); 88 in=new LineReader(filein,job);//输入流in 89 line=new Text();//每行数据类型 90 lineKey=new Text();//自定义key类型,即k1。//新建一个Text实例作为自定义格式输入的key 91 lineValue = new ScoreWritable();//自定义value类型,即v1。//新建一个TVPlayData实例作为自定义格式输入的value 92 } 93 94 //此方法读取每行数据,完成自定义的key和value 95 @Override 96 public boolean nextKeyValue() throws IOException, InterruptedException {//这里面,才是篡改的重点 97 int linesize=in.readLine(line);//line是每行数据,我们这里用到的是in.readLine(str)这个构造函数,默认读完读到文件末尾。其实这里有三种。 98 99 // 是SplitLineReader.readLine -> SplitLineReader extends LineReader -> org.apache.hadoop.util.LineReader 100 101 // in.readLine(str)//这个构造方法执行时,会首先将value原来的值清空。默认读完读到文件末尾 102 // in.readLine(str, maxLineLength)//只读到maxLineLength行 103 // in.readLine(str, maxLineLength, maxBytesToConsume)//这个构造方法来实现不清空,前面读取的行的值 104 105 if(linesize==0) return false; 106 107 108 String[] pieces = line.toString().split("\\s+");//解析每行数据 109 //因为,我们这里是。默认读完读到文件末尾。line是Text类型。pieces是String[],即String数组。 110 111 
if(pieces.length != 7){ 112 throw new IOException("Invalid record received"); 113 } 114 //将学生的每门成绩转换为 float 类型 115 float a,b,c,d,e; 116 try{ 117 a = Float.parseFloat(pieces[2].trim());//将String类型,如pieces[2]转换成,float类型,给a 118 b = Float.parseFloat(pieces[3].trim()); 119 c = Float.parseFloat(pieces[4].trim()); 120 d = Float.parseFloat(pieces[5].trim()); 121 e = Float.parseFloat(pieces[6].trim()); 122 }catch(NumberFormatException nfe){ 123 throw new IOException("Error parsing floating poing value in record"); 124 } 125 lineKey.set(pieces[0]+"\t"+pieces[1]);//完成自定义key数据 126 lineValue.set(a, b, c, d, e);//封装自定义value数据 127 // 或者写 128 // lineValue.set(Float.parseFloat(pieces[2].trim()),Float.parseFloat(pieces[3].trim()),Float.parseFloat(pieces[4].trim()), 129 // Float.parseFloat(pieces[5].trim()),Float.parseFloat(pieces[6].trim())); 130 131 // pieces[0] pieces[1] pieces[2] ... pieces[6] 132 // 19020090040 秦心芯 123 131 100 95 100 133 // 19020090006 李磊 99 92 100 90 100 134 // 19020090017 唐一建 90 99 100 89 95 135 // 19020090031 曾丽丽 100 99 97 79 96 136 // 19020090013 罗开俊 105 115 94 45 100 137 // 19020090039 周世海 114 116 93 31 97 138 // 19020090020 王正伟 109 98 88 47 99 139 // 19020090025 谢瑞彬 94 120 100 50 73 140 // 19020090007 于微 89 78 100 66 99 141 // 19020090012 刘小利 87 82 89 71 99 142 143 144 145 return true; 146 } 147 } 148 } 1 package zhouls.bigdata.myMapReduce.ScoreCount; 2 3 4 import java.io.IOException; 5 import org.apache.hadoop.conf.Configuration; 6 import org.apache.hadoop.conf.Configured; 7 import org.apache.hadoop.fs.FileSystem; 8 import org.apache.hadoop.fs.Path; 9 import org.apache.hadoop.io.Text; 10 import org.apache.hadoop.mapreduce.Job; 11 import org.apache.hadoop.mapreduce.Mapper; 12 import org.apache.hadoop.mapreduce.Reducer; 13 import org.apache.hadoop.mapreduce.lib.input.FileInputFormat; 14 import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat; 15 import org.apache.hadoop.util.Tool; 16 import org.apache.hadoop.util.ToolRunner; 17 /** 18 * 
学生成绩统计Hadoop程序 19 * 数据格式参考:19020090017 小讲 90 99 100 89 95 20 * @author HuangBQ 21 */ 22 public class ScoreCount extends Configured implements Tool{ 23 public static class ScoreMapper extends Mapper<Text,ScoreWritable,Text,ScoreWritable>{ 24 @Override 25 protected void map(Text key, ScoreWritable value, Context context)throws IOException, InterruptedException{ 26 context.write(key, value);//写入key是k2,value是v2 27 // context.write(new Text(key), new ScoreWritable(value));等价 28 } 29 } 30 31 public static class ScoreReducer extends Reducer<Text,ScoreWritable,Text,Text>{ 32 private Text text = new Text(); 33 protected void reduce(Text Key, Iterable< ScoreWritable > Values, Context context)throws IOException, InterruptedException{ 34 float totalScore=0.0f; 35 float averageScore = 0.0f; 36 for(ScoreWritable ss:Values){ 37 totalScore +=ss.getChinese()+ss.getMath()+ss.getEnglish()+ss.getPhysics()+ss.getChemistry(); 38 averageScore +=totalScore/5; 39 } 40 text.set(totalScore+"\t"+averageScore); 41 context.write(Key, text);//写入Key是k3,text是v3 42 // context.write(new Text(Key),new Text(text));等价 43 } 44 } 45 46 47 public int run(String[] args) throws Exception{ 48 Configuration conf = new Configuration();//读取配置文件 49 50 Path mypath = new Path(args[1]); 51 FileSystem hdfs = mypath.getFileSystem(conf);//创建输出路径 52 if (hdfs.isDirectory(mypath)) 53 { 54 hdfs.delete(mypath, true); 55 } 56 57 Job job = new Job(conf, "ScoreCount");//新建任务 58 job.setJarByClass(ScoreCount.class);//设置主类 59 60 FileInputFormat.addInputPath(job, new Path(args[0]));// 输入路径 61 FileOutputFormat.setOutputPath(job, new Path(args[1]));// 输出路径 62 63 job.setMapperClass(ScoreMapper.class);// Mapper 64 job.setReducerClass(ScoreReducer.class);// Reducer 65 66 job.setMapOutputKeyClass(Text.class);// Mapper key输出类型 67 job.setMapOutputValueClass(ScoreWritable.class);// Mapper value输出类型 68 69 job.setInputFormatClass(ScoreInputFormat.class);//设置自定义输入格式 70 71 job.waitForCompletion(true); 72 return 0; 73 } 74 75 76 77 public 
static void main(String[] args) throws Exception{ 78 // String[] args0 = 79 // { 80 // "hdfs://HadoopMaster:9000/score/score.txt", 81 // "hdfs://HadoopMaster:9000/out/score/" 82 // }; 83 84 String[] args0 = 85 { 86 "./data/score/score.txt", 87 "./out/score/" 88 }; 89 90 int ec = ToolRunner.run(new Configuration(), new ScoreCount(), args0); 91 System.exit(ec); 92 } 93 } 本文转自大数据躺过的坑博客园博客,原文链接:http://www.cnblogs.com/zlslch/p/6165667.html,如需转载请自行联系原作者
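For readers who want to poke at the parsing and averaging logic without a Hadoop runtime, here is a minimal plain-Java sketch of what ScoreRecordReader.nextKeyValue and ScoreReducer compute for one record. The class name ScoreLineDemo and its parse helper are ours, not part of the job.

```java
// Standalone model (no Hadoop dependencies) of the record parsing in
// ScoreRecordReader.nextKeyValue and the total/average in ScoreReducer.
public class ScoreLineDemo {

    // Parse "id name chinese math english physics chemistry" into
    // { "id\tname", float[5] }, mirroring the reader's split-and-convert step.
    public static Object[] parse(String line) {
        String[] pieces = line.trim().split("\\s+");
        if (pieces.length != 7) {
            throw new IllegalArgumentException("Invalid record received");
        }
        float[] scores = new float[5];
        for (int i = 0; i < 5; i++) {
            scores[i] = Float.parseFloat(pieces[i + 2].trim());
        }
        return new Object[] { pieces[0] + "\t" + pieces[1], scores };
    }

    public static void main(String[] args) {
        Object[] kv = parse("19020090017 小讲 90 99 100 89 95");
        float[] s = (float[]) kv[1];
        float total = 0f;
        for (float v : s) total += v;   // what the reducer sums per key
        float average = total / 5;      // average over the five subjects
        // total = 473.0, average = 94.6
        System.out.println(kv[0] + "\t" + total + "\t" + average);
    }
}
```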


Hadoop MapReduce Programming, API Primer Series: Secondary Sort (16)

2016-12-12 17:04:32,012 INFO [org.apache.hadoop.metrics.jvm.JvmMetrics] - Initializing JVM Metrics with processName=JobTracker, sessionId= 2016-12-12 17:04:33,056 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this. 2016-12-12 17:04:33,059 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - No job jar file set. User classes may not be found. See Job or Job#setJar(String). 2016-12-12 17:04:33,083 INFO [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] - Total input paths to process : 1 2016-12-12 17:04:33,161 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - number of splits:1 2016-12-12 17:04:33,562 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - Submitting tokens for job: job_local1173601391_0001 2016-12-12 17:04:34,242 INFO [org.apache.hadoop.mapreduce.Job] - The url to track the job: http://localhost:8080/ 2016-12-12 17:04:34,244 INFO [org.apache.hadoop.mapreduce.Job] - Running job: job_local1173601391_0001 2016-12-12 17:04:34,247 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter set in config null 2016-12-12 17:04:34,264 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter 2016-12-12 17:04:34,371 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for map tasks 2016-12-12 17:04:34,373 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local1173601391_0001_m_000000_0 2016-12-12 17:04:34,439 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux. 
2016-12-12 17:04:34,667 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@65bb90dc 2016-12-12 17:04:34,676 INFO [org.apache.hadoop.mapred.MapTask] - Processing split: file:/D:/Code/MyEclipseJavaCode/myMapReduce/data/secondarySort/secondarySort.txt:0+120 2016-12-12 17:04:34,762 INFO [org.apache.hadoop.mapred.MapTask] - (EQUATOR) 0 kvi 26214396(104857584) 2016-12-12 17:04:34,763 INFO [org.apache.hadoop.mapred.MapTask] - mapreduce.task.io.sort.mb: 100 2016-12-12 17:04:34,763 INFO [org.apache.hadoop.mapred.MapTask] - soft limit at 83886080 2016-12-12 17:04:34,763 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufvoid = 104857600 2016-12-12 17:04:34,763 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396; length = 6553600 2016-12-12 17:04:34,771 INFO [org.apache.hadoop.mapred.MapTask] - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer 2016-12-12 17:04:34,789 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 2016-12-12 17:04:34,789 INFO [org.apache.hadoop.mapred.MapTask] - Starting flush of map output 2016-12-12 17:04:34,789 INFO [org.apache.hadoop.mapred.MapTask] - Spilling map output 2016-12-12 17:04:34,789 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufend = 216; bufvoid = 104857600 2016-12-12 17:04:34,790 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396(104857584); kvend = 26214328(104857312); length = 69/6553600 2016-12-12 17:04:34,809 INFO [org.apache.hadoop.mapred.MapTask] - Finished spill 0 2016-12-12 17:04:34,818 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local1173601391_0001_m_000000_0 is done. And is in the process of committing 2016-12-12 17:04:34,838 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map 2016-12-12 17:04:34,838 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local1173601391_0001_m_000000_0' done. 
2016-12-12 17:04:34,838 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local1173601391_0001_m_000000_0 2016-12-12 17:04:34,839 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map task executor complete. 2016-12-12 17:04:34,846 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for reduce tasks 2016-12-12 17:04:34,846 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local1173601391_0001_r_000000_0 2016-12-12 17:04:34,864 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux. 2016-12-12 17:04:34,950 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@59b59452 2016-12-12 17:04:34,954 INFO [org.apache.hadoop.mapred.ReduceTask] - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@73d5cf65 2016-12-12 17:04:34,974 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - MergerManager: memoryLimit=1327077760, maxSingleShuffleLimit=331769440, mergeThreshold=875871360, ioSortFactor=10, memToMemMergeOutputsThreshold=10 2016-12-12 17:04:35,011 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - attempt_local1173601391_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events 2016-12-12 17:04:35,048 INFO [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] - localfetcher#1 about to shuffle output of map attempt_local1173601391_0001_m_000000_0 decomp: 254 len: 258 to MEMORY 2016-12-12 17:04:35,060 INFO [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] - Read 254 bytes from map-output for attempt_local1173601391_0001_m_000000_0 2016-12-12 17:04:35,123 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - closeInMemoryFile -> map-output of size: 254, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->254 2016-12-12 17:04:35,125 INFO 
[org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - EventFetcher is interrupted.. Returning 2016-12-12 17:04:35,126 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied. 2016-12-12 17:04:35,126 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs 2016-12-12 17:04:35,136 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments 2016-12-12 17:04:35,137 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 244 bytes 2016-12-12 17:04:35,139 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merged 1 segments, 254 bytes to disk to satisfy reduce memory limit 2016-12-12 17:04:35,139 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 1 files, 258 bytes from disk 2016-12-12 17:04:35,140 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 0 segments, 0 bytes from memory into reduce 2016-12-12 17:04:35,141 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments 2016-12-12 17:04:35,142 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 244 bytes 2016-12-12 17:04:35,143 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied. 2016-12-12 17:04:35,150 INFO [org.apache.hadoop.conf.Configuration.deprecation] - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords 2016-12-12 17:04:35,158 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local1173601391_0001_r_000000_0 is done. And is in the process of committing 2016-12-12 17:04:35,160 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied. 
2016-12-12 17:04:35,160 INFO [org.apache.hadoop.mapred.Task] - Task attempt_local1173601391_0001_r_000000_0 is allowed to commit now 2016-12-12 17:04:35,166 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - Saved output of task 'attempt_local1173601391_0001_r_000000_0' to file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/secondarySort/_temporary/0/task_local1173601391_0001_r_000000 2016-12-12 17:04:35,167 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce > reduce 2016-12-12 17:04:35,167 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local1173601391_0001_r_000000_0' done. 2016-12-12 17:04:35,167 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local1173601391_0001_r_000000_0 2016-12-12 17:04:35,168 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce task executor complete. 2016-12-12 17:04:35,248 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local1173601391_0001 running in uber mode : false 2016-12-12 17:04:35,249 INFO [org.apache.hadoop.mapreduce.Job] - map 100% reduce 100% 2016-12-12 17:04:35,251 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local1173601391_0001 completed successfully 2016-12-12 17:04:35,271 INFO [org.apache.hadoop.mapreduce.Job] - Counters: 33 File System Counters FILE: Number of bytes read=1186 FILE: Number of bytes written=394623 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 Map-Reduce Framework Map input records=18 Map output records=18 Map output bytes=216 Map output materialized bytes=258 Input split bytes=145 Combine input records=0 Combine output records=0 Reduce input groups=4 Reduce shuffle bytes=258 Reduce input records=18 Reduce output records=18 Spilled Records=36 Shuffled Maps =1 Failed Shuffles=0 Merged Map outputs=1 GC time elapsed (ms)=0 CPU time spent (ms)=0 Physical memory (bytes) snapshot=0 Virtual memory (bytes) snapshot=0 Total committed heap usage (bytes)=534773760 Shuffle Errors BAD_ID=0 CONNECTION=0 
IO_ERROR=0 WRONG_LENGTH=0 WRONG_MAP=0 WRONG_REDUCE=0 File Input Format Counters Bytes Read=120 File Output Format Counters Bytes Written=115

Code

IntPair.java

package zhouls.bigdata.myMapReduce.SecondarySort;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

// Step 1: define an IntPair class that wraps the key and value of each input
// record into one composite key, implementing WritableComparable and
// overriding its methods.
/**
 * A custom key class must implement WritableComparable.
 */
public class IntPair implements WritableComparable<IntPair> {
    int first;   // first field of the composite key
    int second;  // second field of the composite key

    public void set(int left, int right) {
        first = left;
        second = right;
    }

    public int getFirst() {
        return first;
    }

    public int getSecond() {
        return second;
    }

    // Deserialization: rebuild an IntPair from the binary stream.
    public void readFields(DataInput in) throws IOException {
        first = in.readInt();
        second = in.readInt();
    }

    // Serialization: turn the IntPair into the binary form sent over the stream.
    public void write(DataOutput out) throws IOException {
        out.writeInt(first);
        out.writeInt(second);
    }

    // Key comparison: order by first, then by second.
    public int compareTo(IntPair o) {
        if (first != o.first) {
            return first < o.first ? -1 : 1;
        } else if (second != o.second) {
            return second < o.second ? -1 : 1;
        } else {
            return 0;
        }
    }

    @Override
    public int hashCode() {
        return first * 157 + second;
    }

    @Override
    public boolean equals(Object right) {
        if (right == null)
            return false;
        if (this == right)
            return true;
        if (right instanceof IntPair) {
            IntPair r = (IntPair) right;
            return r.first == first && r.second == second;
        } else {
            return false;
        }
    }
}

SecondarySort.java

package zhouls.bigdata.myMapReduce.SecondarySort;

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Partitioner;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

/* The secondarySort input file is:
40 20
40 10
40 30
40 5
40 1
30 30
30 20
30 10
30 1
20 20
20 10
20 1
50 50
50 40
50 30
50 20
50 10
50 1
*/
public class SecondarySort extends Configured implements Tool {

    // Custom mapper: parse the two integers on each line, emit the pair as the
    // composite key (k2) and the second integer as the value (v2).
    public static class Map extends Mapper<LongWritable, Text, IntPair, IntWritable> {
        private final IntPair intkey = new IntPair();
        private final IntWritable intvalue = new IntWritable();

        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            int left = 0;
            int right = 0;
            if (tokenizer.hasMoreTokens()) {
                left = Integer.parseInt(tokenizer.nextToken());
                if (tokenizer.hasMoreTokens())
                    right = Integer.parseInt(tokenizer.nextToken());
                intkey.set(left, right);     // k2
                intvalue.set(right);         // v2
                context.write(intkey, intvalue);
            }
        }
    }

    // Step 2: a custom partitioner, FirstPartitioner, that partitions on
    // IntPair.first so all pairs sharing a first value reach the same reducer.
    public static class FirstPartitioner extends Partitioner<IntPair, IntWritable> {
        @Override
        public int getPartition(IntPair key, IntWritable value, int numPartitions) {
            return Math.abs(key.getFirst() * 127) % numPartitions;
        }
    }

    // Step 3 would be a custom SortComparator ordering by first and second;
    // this walkthrough instead relies on IntPair.compareTo() for the sort.

    // Step 4: a custom GroupingComparator that groups the sorted keys within a
    // partition by their first field, so one reduce() call sees all values
    // sharing a first value.
    public static class GroupingComparator extends WritableComparator {
        protected GroupingComparator() {
            super(IntPair.class, true);
        }

        @Override  // compare two WritableComparables by their first field only
        public int compare(WritableComparable w1, WritableComparable w2) {
            IntPair ip1 = (IntPair) w1;
            IntPair ip2 = (IntPair) w2;
            int l = ip1.getFirst();
            int r = ip2.getFirst();
            return l == r ? 0 : (l < r ? -1 : 1);
        }
    }

    // Custom reducer: emit first (k3) once with each of its sorted seconds (v3).
    public static class Reduce extends Reducer<IntPair, IntWritable, Text, IntWritable> {
        private final Text left = new Text();

        public void reduce(IntPair key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            left.set(Integer.toString(key.getFirst()));  // k3
            for (IntWritable val : values) {
                context.write(left, val);  // left is k3, val is v3
            }
        }
    }

    public int run(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path mypath = new Path(args[1]);
        FileSystem hdfs = mypath.getFileSystem(conf);
        if (hdfs.isDirectory(mypath)) {
            hdfs.delete(mypath, true);  // remove a stale output directory
        }

        Job job = new Job(conf, "secondarysort");
        job.setJarByClass(SecondarySort.class);

        FileInputFormat.setInputPaths(job, new Path(args[0]));   // input path
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output path

        job.setMapperClass(Map.class);     // Mapper
        job.setReducerClass(Reduce.class); // Reducer
        // job.setNumReduceTasks(3);

        job.setPartitionerClass(FirstPartitioner.class);          // partitioner
        // job.setSortComparatorClass(KeyComparator.class);       // not used here; IntPair's own ordering sorts the keys
        job.setGroupingComparatorClass(GroupingComparator.class); // grouping comparator

        job.setMapOutputKeyClass(IntPair.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // String[] args0 = { "hdfs://HadoopMaster:9000/secondarySort/secondarySort.txt",
        //         "hdfs://HadoopMaster:9000/out/secondarySort" };
        String[] args0 = { "./data/secondarySort/secondarySort.txt",
                "./out/secondarySort" };
        int ec = ToolRunner.run(new Configuration(), new SecondarySort(), args0);
        System.exit(ec);
    }
}

This post is reproduced from the 大数据躺过的坑 blog on cnblogs; original link: http://www.cnblogs.com/zlslch/p/6165256.html. Please contact the original author before reposting.
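The composite-key ordering that IntPair.compareTo defines can be tried out without Hadoop. The sketch below (class name ours) sorts a few of the sample pairs by first and then second, which is the order the reducer receives after the shuffle.

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java model of IntPair.compareTo: sort by first, and within equal
// firsts by second. No Hadoop dependencies.
public class IntPairOrderDemo {

    // Mirrors IntPair.compareTo for a pair stored as int[2].
    public static int compare(int[] a, int[] b) {
        if (a[0] != b[0]) return a[0] < b[0] ? -1 : 1;
        if (a[1] != b[1]) return a[1] < b[1] ? -1 : 1;
        return 0;
    }

    public static void main(String[] args) {
        List<int[]> pairs = new ArrayList<>();
        for (int[] p : new int[][] { {40, 20}, {40, 10}, {20, 1}, {40, 5}, {20, 20} }) {
            pairs.add(p);
        }
        pairs.sort(IntPairOrderDemo::compare);
        for (int[] p : pairs) {
            System.out.println(p[0] + "\t" + p[1]);
        }
        // prints: 20 1, 20 20, 40 5, 40 10, 40 20 (one pair per line)
    }
}
```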


Android 2.2 r1 API Chinese Documentation Series (12): Button

Content

I. Hierarchy
public class Button extends TextView
java.lang.Object
  android.view.View
    android.widget.TextView
      android.widget.Button
Known direct subclass: CompoundButton
Known indirect subclasses: CheckBox, RadioButton, ToggleButton

II. Overview
Represents a push-button widget. The user presses, or clicks, the button to perform an action. Instead of applying an OnClickListener to the button in your activity, you can assign a method to the button through the android:onClick attribute in your XML layout. When the user then clicks the button, the Android system calls the activity's selfDestruct(View) method. For this to work, the method must be public and accept a single View parameter.

III. Button style
Every button is styled by default with the system button background, which differs across devices and platform versions. If the default style does not suit your application's design, you can replace the button's background image with a state list drawable: a drawable resource defined in XML that changes its image according to the button's current state. Once defined in XML, apply it through the android:background attribute. For more information and examples, see State List Drawable. For a worked button example, see the Form Stuff tutorial.

IV. XML attributes
See the XML attributes of Button, TextView, and View.

This post is reproduced from over140's 51CTO blog; original link: http://blog.51cto.com/over140/582692. Please contact the original author before reposting.
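As a rough model of how android:onClick="selfDestruct" works, the plain-Java sketch below looks a handler method up by name and relies on the public, single-View-argument signature the documentation describes. View and MyActivity here are stand-in classes, not the Android ones, so this runs without the Android SDK.

```java
import java.lang.reflect.Method;

// Stand-in for android.view.View; only here so the handler has the
// documented one-View-parameter signature.
class View {}

// Stand-in for an Activity declaring the handler named by android:onClick.
class MyActivity {
    String lastAction = null;

    // Must be public, return void, and take exactly one View argument so the
    // framework can find and call it by name.
    public void selfDestruct(View view) {
        lastAction = "selfDestruct";
    }
}

public class OnClickDemo {
    // Models the framework's dispatch: resolve the method named in the layout
    // attribute on the activity, then invoke it with the clicked view.
    public static void dispatchClick(Object activity, String methodName, View v) throws Exception {
        Method m = activity.getClass().getMethod(methodName, View.class);  // by-name lookup, public only
        m.invoke(activity, v);
    }

    public static void main(String[] args) throws Exception {
        MyActivity a = new MyActivity();
        dispatchClick(a, "selfDestruct", new View());  // what a button click triggers
        System.out.println(a.lastAction);              // prints selfDestruct
    }
}
```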


Android 2.2 r1 API Chinese Documentation Series (11): RadioButton

Content

I. Hierarchy
public class RadioButton extends CompoundButton
java.lang.Object
  android.view.View
    android.widget.TextView
      android.widget.Button
        android.widget.CompoundButton
          android.widget.RadioButton

II. Overview
A radio button is a two-state button that can be either checked or unchecked. When it is unchecked, the user can press or click it to check it. Unlike a checkbox, however, once checked it cannot be unchecked by the user (translator's note: code can still clear it; only the on-screen click cannot, since clicking a checked radio button has no effect).

Radio buttons are normally used together with a RadioGroup. When a RadioGroup contains several radio buttons, checking one unchecks all the others. (Translator's note: see the earlier example.)

III. Public methods
public void toggle()
Changes the radio button to the inverse of its current checked state. If the radio button is already checked, this method does not toggle it. (Translator's note: see the source.)

This post is reproduced from over140's 51CTO blog; original link: http://blog.51cto.com/over140/582695. Please contact the original author before reposting.
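The contrast with CheckBox described above can be modeled in a few lines of plain Java. These tiny classes only mimic the documented toggle() behavior; they are not the Android widgets.

```java
// A checkbox flips on every toggle.
class CheckBoxModel {
    boolean checked;
    void toggle() { checked = !checked; }
}

// A radio button, once checked, stays checked: per the docs, toggle() is a
// no-op when the button is already in the checked state.
class RadioButtonModel {
    boolean checked;
    void toggle() {
        if (!checked) {
            checked = true;
        }
    }
}

public class ToggleDemo {
    public static void main(String[] args) {
        CheckBoxModel cb = new CheckBoxModel();
        RadioButtonModel rb = new RadioButtonModel();
        cb.toggle(); cb.toggle();   // on, then off again
        rb.toggle(); rb.toggle();   // on, and stays on
        System.out.println(cb.checked + " " + rb.checked);  // prints false true
    }
}
```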


Hadoop MapReduce Programming, API Primer Series: Web Page Ranking (28)

Map output bytes=247 Map output materialized bytes=275 Input split bytes=139 Combine input records=0 Combine output records=0 Reduce input groups=4 Reduce shuffle bytes=275 Reduce input records=11 Reduce output records=4 Spilled Records=22 Shuffled Maps =1 Failed Shuffles=0 Merged Map outputs=1 GC time elapsed (ms)=0 CPU time spent (ms)=0 Physical memory (bytes) snapshot=0 Virtual memory (bytes) snapshot=0 Total committed heap usage (bytes)=1439694848 Shuffle Errors BAD_ID=0 CONNECTION=0 IO_ERROR=0 WRONG_LENGTH=0 WRONG_MAP=0 WRONG_REDUCE=0 File Input Format Counters Bytes Read=119 File Output Format Counters Bytes Written=113 zhouls.bigdata.myMapReduce.pagerank.RunJob$Mycounter my=17 success. 17 2016-12-13 19:07:44,783 INFO [org.apache.hadoop.metrics.jvm.JvmMetrics] - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized 2016-12-13 19:07:44,796 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this. 2016-12-13 19:07:44,799 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - No job jar file set. User classes may not be found. See Job or Job#setJar(String). 
2016-12-13 19:07:45,231 INFO [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] - Total input paths to process : 1
2016-12-13 19:07:45,245 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - number of splits:1
2016-12-13 19:07:45,266 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - Submitting tokens for job: job_local455505407_0021
2016-12-13 19:07:45,483 INFO [org.apache.hadoop.mapreduce.Job] - The url to track the job: http://localhost:8080/
2016-12-13 19:07:45,484 INFO [org.apache.hadoop.mapreduce.Job] - Running job: job_local455505407_0021
2016-12-13 19:07:45,484 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter set in config null
2016-12-13 19:07:45,485 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2016-12-13 19:07:45,495 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for map tasks
2016-12-13 19:07:45,495 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local455505407_0021_m_000000_0
2016-12-13 19:07:45,500 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-13 19:07:45,559 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@223b788b
2016-12-13 19:07:45,565 INFO [org.apache.hadoop.mapred.MapTask] - Processing split: file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr20/part-r-00000:0+101
2016-12-13 19:07:45,597 INFO [org.apache.hadoop.mapred.MapTask] - (EQUATOR) 0 kvi 26214396(104857584)
2016-12-13 19:07:45,597 INFO [org.apache.hadoop.mapred.MapTask] - mapreduce.task.io.sort.mb: 100
2016-12-13 19:07:45,597 INFO [org.apache.hadoop.mapred.MapTask] - soft limit at 83886080
2016-12-13 19:07:45,597 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufvoid = 104857600
2016-12-13 19:07:45,598 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396; length = 6553600
2016-12-13 19:07:45,600 INFO [org.apache.hadoop.mapred.MapTask] - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2016-12-13 19:07:45,608 INFO [org.apache.hadoop.mapred.LocalJobRunner] -
2016-12-13 19:07:45,609 INFO [org.apache.hadoop.mapred.MapTask] - Starting flush of map output
2016-12-13 19:07:45,609 INFO [org.apache.hadoop.mapred.MapTask] - Spilling map output
2016-12-13 19:07:45,609 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufend = 255; bufvoid = 104857600
2016-12-13 19:07:45,610 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396(104857584); kvend = 26214356(104857424); length = 41/6553600
2016-12-13 19:07:45,625 INFO [org.apache.hadoop.mapred.MapTask] - Finished spill 0
2016-12-13 19:07:45,631 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local455505407_0021_m_000000_0 is done. And is in the process of committing
2016-12-13 19:07:45,638 INFO [org.apache.hadoop.mapred.LocalJobRunner] - file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr20/part-r-00000:0+101
2016-12-13 19:07:45,639 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local455505407_0021_m_000000_0' done.
2016-12-13 19:07:45,639 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local455505407_0021_m_000000_0
2016-12-13 19:07:45,639 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map task executor complete.
2016-12-13 19:07:45,640 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for reduce tasks
2016-12-13 19:07:45,641 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local455505407_0021_r_000000_0
2016-12-13 19:07:45,645 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-13 19:07:45,690 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@7d5e1574
2016-12-13 19:07:45,691 INFO [org.apache.hadoop.mapred.ReduceTask] - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@2bd2b2f9
2016-12-13 19:07:45,703 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - MergerManager: memoryLimit=1327077760, maxSingleShuffleLimit=331769440, mergeThreshold=875871360, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2016-12-13 19:07:45,704 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - attempt_local455505407_0021_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2016-12-13 19:07:45,709 INFO [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] - localfetcher#21 about to shuffle output of map attempt_local455505407_0021_m_000000_0 decomp: 279 len: 283 to MEMORY
2016-12-13 19:07:45,710 INFO [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] - Read 279 bytes from map-output for attempt_local455505407_0021_m_000000_0
2016-12-13 19:07:45,711 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - closeInMemoryFile -> map-output of size: 279, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->279
2016-12-13 19:07:45,712 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - EventFetcher is interrupted.. Returning
2016-12-13 19:07:45,714 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-13 19:07:45,715 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2016-12-13 19:07:45,729 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-13 19:07:45,730 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 275 bytes
2016-12-13 19:07:45,732 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merged 1 segments, 279 bytes to disk to satisfy reduce memory limit
2016-12-13 19:07:45,734 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 1 files, 283 bytes from disk
2016-12-13 19:07:45,734 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 0 segments, 0 bytes from memory into reduce
2016-12-13 19:07:45,734 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-13 19:07:45,736 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 275 bytes
2016-12-13 19:07:45,737 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
*********** new pageRank value is 0.21104025855884012 3___________
*********** new pageRank value is 0.3575070100293671 5___________
*********** new pageRank value is 0.4016854782039228 6___________
*********** new pageRank value is 0.1286040574733812 1___________
2016-12-13 19:07:45,749 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local455505407_0021_r_000000_0 is done. And is in the process of committing
2016-12-13 19:07:45,753 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-13 19:07:45,753 INFO [org.apache.hadoop.mapred.Task] - Task attempt_local455505407_0021_r_000000_0 is allowed to commit now
2016-12-13 19:07:45,762 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - Saved output of task 'attempt_local455505407_0021_r_000000_0' to file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr21/_temporary/0/task_local455505407_0021_r_000000
2016-12-13 19:07:45,764 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce > reduce
2016-12-13 19:07:45,764 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local455505407_0021_r_000000_0' done.
2016-12-13 19:07:45,764 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local455505407_0021_r_000000_0
2016-12-13 19:07:45,765 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce task executor complete.
2016-12-13 19:07:46,485 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local455505407_0021 running in uber mode : false
2016-12-13 19:07:46,486 INFO [org.apache.hadoop.mapreduce.Job] - map 100% reduce 100%
2016-12-13 19:07:46,487 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local455505407_0021 completed successfully
2016-12-13 19:07:46,498 INFO [org.apache.hadoop.mapreduce.Job] - Counters: 34
    File System Counters
        FILE: Number of bytes read=35976
        FILE: Number of bytes written=8181702
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Map-Reduce Framework
        Map input records=4
        Map output records=11
        Map output bytes=255
        Map output materialized bytes=283
        Input split bytes=139
        Combine input records=0
        Combine output records=0
        Reduce input groups=4
        Reduce shuffle bytes=283
        Reduce input records=11
        Reduce output records=4
        Spilled Records=22
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=0
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
        Total committed heap usage (bytes)=1588592640
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=121
    File Output Format Counters
        Bytes Written=111
    zhouls.bigdata.myMapReduce.pagerank.RunJob$Mycounter
        my=15
success. 15
2016-12-13 19:07:46,508 INFO [org.apache.hadoop.metrics.jvm.JvmMetrics] - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2016-12-13 19:07:46,516 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-12-13 19:07:46,519 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-12-13 19:07:46,868 INFO [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] - Total input paths to process : 1
2016-12-13 19:07:46,879 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - number of splits:1
2016-12-13 19:07:46,896 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - Submitting tokens for job: job_local1152686492_0022
2016-12-13 19:07:47,037 INFO [org.apache.hadoop.mapreduce.Job] - The url to track the job: http://localhost:8080/
2016-12-13 19:07:47,037 INFO [org.apache.hadoop.mapreduce.Job] - Running job: job_local1152686492_0022
2016-12-13 19:07:47,037 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter set in config null
2016-12-13 19:07:47,039 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2016-12-13 19:07:47,045 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for map tasks
2016-12-13 19:07:47,045 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local1152686492_0022_m_000000_0
2016-12-13 19:07:47,048 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
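The log above shows the driver chaining one MapReduce job per PageRank iteration (job_local..._0021, _0022, ...), feeding each pass's output directory (pr20 → pr21 → pr22 ...) into the next run, and reading the custom counter zhouls.bigdata.myMapReduce.pagerank.RunJob$Mycounter (the `my=15` / `success. 15` lines) after each job to decide when to stop. A minimal standalone sketch of that driver pattern, where `Pass` stands in for one full MapReduce pass and the delta sequence and threshold are assumptions for illustration:

```java
// Sketch of an iterate-until-converged driver loop (assumed names, not the
// actual RunJob code): each pass reports its total rank movement as a scaled
// long, the way the real job does through a Hadoop counter, and the driver
// stops once the movement drops below a threshold.
public class ConvergenceLoop {
    interface Pass {
        long run(int iteration); // returns scaled total |newPR - oldPR|, like Mycounter.my
    }

    static int iterateUntilConverged(Pass pass, long threshold, int maxIters) {
        for (int i = 1; i <= maxIters; i++) {
            long delta = pass.run(i);      // run iteration i: read pr{i-1}, write pr{i}
            if (delta < threshold) {
                return i;                  // converged after i passes
            }
        }
        return maxIters;                   // give up after maxIters passes
    }
}
```

In the real job the per-pass delta would come from `job.getCounters()` after `job.waitForCompletion(true)`, but the loop shape is the same.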
2016-12-13 19:07:47,105 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@35f8fa7b
2016-12-13 19:07:47,109 INFO [org.apache.hadoop.mapred.MapTask] - Processing split: file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr21/part-r-00000:0+99
2016-12-13 19:07:47,131 INFO [org.apache.hadoop.mapred.MapTask] - (EQUATOR) 0 kvi 26214396(104857584)
2016-12-13 19:07:47,132 INFO [org.apache.hadoop.mapred.MapTask] - mapreduce.task.io.sort.mb: 100
2016-12-13 19:07:47,132 INFO [org.apache.hadoop.mapred.MapTask] - soft limit at 83886080
2016-12-13 19:07:47,132 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufvoid = 104857600
2016-12-13 19:07:47,132 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396; length = 6553600
2016-12-13 19:07:47,133 INFO [org.apache.hadoop.mapred.MapTask] - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2016-12-13 19:07:47,139 INFO [org.apache.hadoop.mapred.LocalJobRunner] -
2016-12-13 19:07:47,140 INFO [org.apache.hadoop.mapred.MapTask] - Starting flush of map output
2016-12-13 19:07:47,140 INFO [org.apache.hadoop.mapred.MapTask] - Spilling map output
2016-12-13 19:07:47,140 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufend = 248; bufvoid = 104857600
2016-12-13 19:07:47,140 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396(104857584); kvend = 26214356(104857424); length = 41/6553600
2016-12-13 19:07:47,155 INFO [org.apache.hadoop.mapred.MapTask] - Finished spill 0
2016-12-13 19:07:47,160 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local1152686492_0022_m_000000_0 is done. And is in the process of committing
2016-12-13 19:07:47,165 INFO [org.apache.hadoop.mapred.LocalJobRunner] - file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr21/part-r-00000:0+99
2016-12-13 19:07:47,165 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local1152686492_0022_m_000000_0' done.
2016-12-13 19:07:47,165 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local1152686492_0022_m_000000_0
2016-12-13 19:07:47,165 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map task executor complete.
2016-12-13 19:07:47,166 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for reduce tasks
2016-12-13 19:07:47,167 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local1152686492_0022_r_000000_0
2016-12-13 19:07:47,169 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-13 19:07:47,215 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@6df4957a
2016-12-13 19:07:47,215 INFO [org.apache.hadoop.mapred.ReduceTask] - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@5481b4fb
2016-12-13 19:07:47,217 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - MergerManager: memoryLimit=1327077760, maxSingleShuffleLimit=331769440, mergeThreshold=875871360, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2016-12-13 19:07:47,219 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - attempt_local1152686492_0022_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2016-12-13 19:07:47,224 INFO [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] - localfetcher#22 about to shuffle output of map attempt_local1152686492_0022_m_000000_0 decomp: 272 len: 276 to MEMORY
2016-12-13 19:07:47,225 INFO [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] - Read 272 bytes from map-output for attempt_local1152686492_0022_m_000000_0
2016-12-13 19:07:47,225 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - closeInMemoryFile -> map-output of size: 272, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->272
2016-12-13 19:07:47,227 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - EventFetcher is interrupted.. Returning
2016-12-13 19:07:47,228 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-13 19:07:47,228 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2016-12-13 19:07:47,237 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-13 19:07:47,238 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 268 bytes
2016-12-13 19:07:47,240 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merged 1 segments, 272 bytes to disk to satisfy reduce memory limit
2016-12-13 19:07:47,241 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 1 files, 276 bytes from disk
2016-12-13 19:07:47,241 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 0 segments, 0 bytes from memory into reduce
2016-12-13 19:07:47,242 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-13 19:07:47,243 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 268 bytes
2016-12-13 19:07:47,244 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
*********** new pageRank value is 0.2082163282366672 2___________
*********** new pageRank value is 0.3525651625503612 4___________
*********** new pageRank value is 0.396037682951149 5___________
*********** new pageRank value is 0.12719210988750704 1___________
2016-12-13 19:07:47,256 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local1152686492_0022_r_000000_0 is done. And is in the process of committing
2016-12-13 19:07:47,260 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-13 19:07:47,261 INFO [org.apache.hadoop.mapred.Task] - Task attempt_local1152686492_0022_r_000000_0 is allowed to commit now
2016-12-13 19:07:47,270 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - Saved output of task 'attempt_local1152686492_0022_r_000000_0' to file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr22/_temporary/0/task_local1152686492_0022_r_000000
2016-12-13 19:07:47,273 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce > reduce
2016-12-13 19:07:47,273 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local1152686492_0022_r_000000_0' done.
2016-12-13 19:07:47,273 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local1152686492_0022_r_000000_0
2016-12-13 19:07:47,273 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce task executor complete.
2016-12-13 19:07:48,038 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local1152686492_0022 running in uber mode : false
2016-12-13 19:07:48,039 INFO [org.apache.hadoop.mapreduce.Job] - map 100% reduce 100%
2016-12-13 19:07:48,040 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local1152686492_0022 completed successfully
2016-12-13 19:07:48,048 INFO [org.apache.hadoop.mapreduce.Job] - Counters: 34
    File System Counters
        FILE: Number of bytes read=37782
        FILE: Number of bytes written=8572508
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Map-Reduce Framework
        Map input records=4
        Map output records=11
        Map output bytes=248
        Map output materialized bytes=276
        Input split bytes=139
        Combine input records=0
        Combine output records=0
        Reduce input groups=4
        Reduce shuffle bytes=276
        Reduce input records=11
        Reduce output records=4
        Spilled Records=22
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=13
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
        Total committed heap usage (bytes)=1588592640
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=119
    File Output Format Counters
        Bytes Written=110
    zhouls.bigdata.myMapReduce.pagerank.RunJob$Mycounter
        my=12
success. 12
2016-12-13 19:07:48,059 INFO [org.apache.hadoop.metrics.jvm.JvmMetrics] - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2016-12-13 19:07:48,070 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-12-13 19:07:48,073 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-12-13 19:07:48,425 INFO [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] - Total input paths to process : 1
2016-12-13 19:07:48,446 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - number of splits:1
2016-12-13 19:07:48,465 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - Submitting tokens for job: job_local1854065889_0023
2016-12-13 19:07:48,612 INFO [org.apache.hadoop.mapreduce.Job] - The url to track the job: http://localhost:8080/
2016-12-13 19:07:48,613 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter set in config null
2016-12-13 19:07:48,613 INFO [org.apache.hadoop.mapreduce.Job] - Running job: job_local1854065889_0023
2016-12-13 19:07:48,615 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2016-12-13 19:07:48,621 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for map tasks
2016-12-13 19:07:48,622 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local1854065889_0023_m_000000_0
2016-12-13 19:07:48,624 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-13 19:07:48,668 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@135c5527
2016-12-13 19:07:48,672 INFO [org.apache.hadoop.mapred.MapTask] - Processing split: file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr22/part-r-00000:0+98
2016-12-13 19:07:48,709 INFO [org.apache.hadoop.mapred.MapTask] - (EQUATOR) 0 kvi 26214396(104857584)
2016-12-13 19:07:48,710 INFO [org.apache.hadoop.mapred.MapTask] - mapreduce.task.io.sort.mb: 100
2016-12-13 19:07:48,710 INFO [org.apache.hadoop.mapred.MapTask] - soft limit at 83886080
2016-12-13 19:07:48,710 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufvoid = 104857600
2016-12-13 19:07:48,710 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396; length = 6553600
2016-12-13 19:07:48,711 INFO [org.apache.hadoop.mapred.MapTask] - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2016-12-13 19:07:48,720 INFO [org.apache.hadoop.mapred.LocalJobRunner] -
2016-12-13 19:07:48,722 INFO [org.apache.hadoop.mapred.MapTask] - Starting flush of map output
2016-12-13 19:07:48,722 INFO [org.apache.hadoop.mapred.MapTask] - Spilling map output
2016-12-13 19:07:48,722 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufend = 247; bufvoid = 104857600
2016-12-13 19:07:48,722 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396(104857584); kvend = 26214356(104857424); length = 41/6553600
2016-12-13 19:07:48,738 INFO [org.apache.hadoop.mapred.MapTask] - Finished spill 0
2016-12-13 19:07:48,742 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local1854065889_0023_m_000000_0 is done. And is in the process of committing
2016-12-13 19:07:48,746 INFO [org.apache.hadoop.mapred.LocalJobRunner] - file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr22/part-r-00000:0+98
2016-12-13 19:07:48,746 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local1854065889_0023_m_000000_0' done.
2016-12-13 19:07:48,746 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local1854065889_0023_m_000000_0
2016-12-13 19:07:48,746 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map task executor complete.
2016-12-13 19:07:48,747 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for reduce tasks
2016-12-13 19:07:48,748 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local1854065889_0023_r_000000_0
2016-12-13 19:07:48,751 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-13 19:07:48,798 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@21287ab4
2016-12-13 19:07:48,798 INFO [org.apache.hadoop.mapred.ReduceTask] - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@688fd06d
2016-12-13 19:07:48,801 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - MergerManager: memoryLimit=1327077760, maxSingleShuffleLimit=331769440, mergeThreshold=875871360, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2016-12-13 19:07:48,802 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - attempt_local1854065889_0023_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2016-12-13 19:07:48,807 INFO [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] - localfetcher#23 about to shuffle output of map attempt_local1854065889_0023_m_000000_0 decomp: 271 len: 275 to MEMORY
2016-12-13 19:07:48,809 INFO [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] - Read 271 bytes from map-output for attempt_local1854065889_0023_m_000000_0
2016-12-13 19:07:48,811 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - closeInMemoryFile -> map-output of size: 271, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->271
2016-12-13 19:07:48,815 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - EventFetcher is interrupted.. Returning
2016-12-13 19:07:48,817 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-13 19:07:48,818 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2016-12-13 19:07:48,831 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-13 19:07:48,832 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 267 bytes
2016-12-13 19:07:48,834 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merged 1 segments, 271 bytes to disk to satisfy reduce memory limit
2016-12-13 19:07:48,836 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 1 files, 275 bytes from disk
2016-12-13 19:07:48,836 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 0 segments, 0 bytes from memory into reduce
2016-12-13 19:07:48,836 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-13 19:07:48,837 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 267 bytes
2016-12-13 19:07:48,839 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
*********** new pageRank value is 0.20581601525423832 2___________
*********** new pageRank value is 0.3483646014570123 4___________
*********** new pageRank value is 0.3912370348699975 4___________
*********** new pageRank value is 0.12599193950058354 1___________
2016-12-13 19:07:48,854 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local1854065889_0023_r_000000_0 is done. And is in the process of committing
2016-12-13 19:07:48,858 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-13 19:07:48,859 INFO [org.apache.hadoop.mapred.Task] - Task attempt_local1854065889_0023_r_000000_0 is allowed to commit now
2016-12-13 19:07:48,870 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - Saved output of task 'attempt_local1854065889_0023_r_000000_0' to file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr23/_temporary/0/task_local1854065889_0023_r_000000
2016-12-13 19:07:48,879 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce > reduce
2016-12-13 19:07:48,880 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local1854065889_0023_r_000000_0' done.
2016-12-13 19:07:48,882 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local1854065889_0023_r_000000_0
2016-12-13 19:07:48,883 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce task executor complete.
2016-12-13 19:07:49,614 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local1854065889_0023 running in uber mode : false
2016-12-13 19:07:49,615 INFO [org.apache.hadoop.mapreduce.Job] - map 100% reduce 100%
2016-12-13 19:07:49,617 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local1854065889_0023 completed successfully
2016-12-13 19:07:49,637 INFO [org.apache.hadoop.mapreduce.Job] - Counters: 34
    File System Counters
        FILE: Number of bytes read=39570
        FILE: Number of bytes written=8963305
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Map-Reduce Framework
        Map input records=4
        Map output records=11
        Map output bytes=247
        Map output materialized bytes=275
        Input split bytes=139
        Combine input records=0
        Combine output records=0
        Reduce input groups=4
        Reduce shuffle bytes=275
        Reduce input records=11
        Reduce output records=4
        Spilled Records=22
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=13
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
        Total committed heap usage (bytes)=1755316224
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=118
    File Output Format Counters
        Bytes Written=112
    zhouls.bigdata.myMapReduce.pagerank.RunJob$Mycounter
        my=11
success. 11
2016-12-13 19:07:49,648 INFO [org.apache.hadoop.metrics.jvm.JvmMetrics] - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2016-12-13 19:07:49,656 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-12-13 19:07:49,659 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-12-13 19:07:50,057 INFO [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] - Total input paths to process : 1
2016-12-13 19:07:50,073 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - number of splits:1
2016-12-13 19:07:50,093 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - Submitting tokens for job: job_local2101720639_0024
2016-12-13 19:07:50,248 INFO [org.apache.hadoop.mapreduce.Job] - The url to track the job: http://localhost:8080/
2016-12-13 19:07:50,249 INFO [org.apache.hadoop.mapreduce.Job] - Running job: job_local2101720639_0024
2016-12-13 19:07:50,251 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter set in config null
2016-12-13 19:07:50,252 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2016-12-13 19:07:50,262 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for map tasks
2016-12-13 19:07:50,263 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local2101720639_0024_m_000000_0
2016-12-13 19:07:50,267 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-13 19:07:50,324 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@63a803dc
2016-12-13 19:07:50,329 INFO [org.apache.hadoop.mapred.MapTask] - Processing split: file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr23/part-r-00000:0+100
2016-12-13 19:07:50,405 INFO [org.apache.hadoop.mapred.MapTask] - (EQUATOR) 0 kvi 26214396(104857584)
2016-12-13 19:07:50,405 INFO [org.apache.hadoop.mapred.MapTask] - mapreduce.task.io.sort.mb: 100
2016-12-13 19:07:50,405 INFO [org.apache.hadoop.mapred.MapTask] - soft limit at 83886080
2016-12-13 19:07:50,405 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufvoid = 104857600
2016-12-13 19:07:50,406 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396; length = 6553600
2016-12-13 19:07:50,407 INFO [org.apache.hadoop.mapred.MapTask] - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2016-12-13 19:07:50,417 INFO [org.apache.hadoop.mapred.LocalJobRunner] -
2016-12-13 19:07:50,418 INFO [org.apache.hadoop.mapred.MapTask] - Starting flush of map output
2016-12-13 19:07:50,418 INFO [org.apache.hadoop.mapred.MapTask] - Spilling map output
2016-12-13 19:07:50,418 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufend = 253; bufvoid = 104857600
2016-12-13 19:07:50,418 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396(104857584); kvend = 26214356(104857424); length = 41/6553600
2016-12-13 19:07:50,437 INFO [org.apache.hadoop.mapred.MapTask] - Finished spill 0
2016-12-13 19:07:50,442 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local2101720639_0024_m_000000_0 is done. And is in the process of committing
2016-12-13 19:07:50,447 INFO [org.apache.hadoop.mapred.LocalJobRunner] - file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr23/part-r-00000:0+100
2016-12-13 19:07:50,448 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local2101720639_0024_m_000000_0' done.
2016-12-13 19:07:50,448 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local2101720639_0024_m_000000_0
2016-12-13 19:07:50,448 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map task executor complete.
2016-12-13 19:07:50,450 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for reduce tasks
2016-12-13 19:07:50,450 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local2101720639_0024_r_000000_0
2016-12-13 19:07:50,454 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-13 19:07:50,523 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@7e37d95
2016-12-13 19:07:50,524 INFO [org.apache.hadoop.mapred.ReduceTask] - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@6849c53e
2016-12-13 19:07:50,526 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - MergerManager: memoryLimit=1327077760, maxSingleShuffleLimit=331769440, mergeThreshold=875871360, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2016-12-13 19:07:50,535 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - attempt_local2101720639_0024_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2016-12-13 19:07:50,539 INFO [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] - localfetcher#24 about to shuffle output of map attempt_local2101720639_0024_m_000000_0 decomp: 277 len: 281 to MEMORY
2016-12-13 19:07:50,541 INFO [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] - Read 277 bytes from map-output for attempt_local2101720639_0024_m_000000_0
2016-12-13 19:07:50,541 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - closeInMemoryFile -> map-output of size: 277, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->277
2016-12-13 19:07:50,542 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - EventFetcher is interrupted.. Returning
2016-12-13 19:07:50,544 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-13 19:07:50,545 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2016-12-13 19:07:50,559 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-13 19:07:50,559 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 273 bytes
2016-12-13 19:07:50,562 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merged 1 segments, 277 bytes to disk to satisfy reduce memory limit
2016-12-13 19:07:50,563 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 1 files, 281 bytes from disk
2016-12-13 19:07:50,563 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 0 segments, 0 bytes from memory into reduce
2016-12-13 19:07:50,563 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-13 19:07:50,564 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 273 bytes
2016-12-13 19:07:50,566 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
*********** new pageRank value is 0.20377573981974895 2___________
*********** new pageRank value is 0.3447941205905482 3___________
*********** new pageRank value is 0.3871564855262084 4___________
*********** new pageRank value is 0.12497180648305128 1___________
2016-12-13 19:07:50,580 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local2101720639_0024_r_000000_0 is done. And is in the process of committing
2016-12-13 19:07:50,584 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-13 19:07:50,585 INFO [org.apache.hadoop.mapred.Task] - Task attempt_local2101720639_0024_r_000000_0 is allowed to commit now
2016-12-13 19:07:50,595 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - Saved output of task 'attempt_local2101720639_0024_r_000000_0' to file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr24/_temporary/0/task_local2101720639_0024_r_000000
2016-12-13 19:07:50,597 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce > reduce
2016-12-13 19:07:50,597 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local2101720639_0024_r_000000_0' done.
2016-12-13 19:07:50,598 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local2101720639_0024_r_000000_0
2016-12-13 19:07:50,598 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce task executor complete.
2016-12-13 19:07:51,250 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local2101720639_0024 running in uber mode : false
2016-12-13 19:07:51,252 INFO [org.apache.hadoop.mapreduce.Job] - map 100% reduce 100%
2016-12-13 19:07:51,253 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local2101720639_0024 completed successfully
2016-12-13 19:07:51,263 INFO [org.apache.hadoop.mapreduce.Job] - Counters: 34
    File System Counters
        FILE: Number of bytes read=41372
        FILE: Number of bytes written=9354121
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Map-Reduce Framework
        Map input records=4
        Map output records=11
        Map output bytes=253
        Map output materialized bytes=281
        Input split bytes=139
        Combine input records=0
        Combine output records=0
        Reduce input groups=4
        Reduce shuffle bytes=281
        Reduce input records=11
        Reduce output records=4
        Spilled Records=22
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=0
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
        Total committed heap usage (bytes)=1755316224
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
IO_ERROR=0 WRONG_LENGTH=0 WRONG_MAP=0 WRONG_REDUCE=0 File Input Format Counters Bytes Read=120 File Output Format Counters Bytes Written=112 zhouls.bigdata.myMapReduce.pagerank.RunJob$Mycounter my=10 success. 10 2016-12-13 19:07:51,279 INFO [org.apache.hadoop.metrics.jvm.JvmMetrics] - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized 2016-12-13 19:07:51,292 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this. 2016-12-13 19:07:51,295 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - No job jar file set. User classes may not be found. See Job or Job#setJar(String). 2016-12-13 19:07:51,881 INFO [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] - Total input paths to process : 1 2016-12-13 19:07:51,893 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - number of splits:1 2016-12-13 19:07:51,913 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - Submitting tokens for job: job_local756232932_0025 2016-12-13 19:07:52,079 INFO [org.apache.hadoop.mapreduce.Job] - The url to track the job: http://localhost:8080/ 2016-12-13 19:07:52,080 INFO [org.apache.hadoop.mapreduce.Job] - Running job: job_local756232932_0025 2016-12-13 19:07:52,080 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter set in config null 2016-12-13 19:07:52,081 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter 2016-12-13 19:07:52,087 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for map tasks 2016-12-13 19:07:52,087 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local756232932_0025_m_000000_0 2016-12-13 19:07:52,090 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux. 
2016-12-13 19:07:52,137 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@2bbddda
2016-12-13 19:07:52,141 INFO [org.apache.hadoop.mapred.MapTask] - Processing split: file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr24/part-r-00000:0+100
2016-12-13 19:07:52,165 INFO [org.apache.hadoop.mapred.MapTask] - (EQUATOR) 0 kvi 26214396(104857584)
2016-12-13 19:07:52,165 INFO [org.apache.hadoop.mapred.MapTask] - mapreduce.task.io.sort.mb: 100
2016-12-13 19:07:52,165 INFO [org.apache.hadoop.mapred.MapTask] - soft limit at 83886080
2016-12-13 19:07:52,165 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufvoid = 104857600
2016-12-13 19:07:52,166 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396; length = 6553600
2016-12-13 19:07:52,167 INFO [org.apache.hadoop.mapred.MapTask] - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2016-12-13 19:07:52,175 INFO [org.apache.hadoop.mapred.LocalJobRunner] -
2016-12-13 19:07:52,175 INFO [org.apache.hadoop.mapred.MapTask] - Starting flush of map output
2016-12-13 19:07:52,175 INFO [org.apache.hadoop.mapred.MapTask] - Spilling map output
2016-12-13 19:07:52,175 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufend = 251; bufvoid = 104857600
2016-12-13 19:07:52,176 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396(104857584); kvend = 26214356(104857424); length = 41/6553600
2016-12-13 19:07:52,193 INFO [org.apache.hadoop.mapred.MapTask] - Finished spill 0
2016-12-13 19:07:52,198 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local756232932_0025_m_000000_0 is done. And is in the process of committing
2016-12-13 19:07:52,203 INFO [org.apache.hadoop.mapred.LocalJobRunner] - file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr24/part-r-00000:0+100
2016-12-13 19:07:52,203 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local756232932_0025_m_000000_0' done.
2016-12-13 19:07:52,204 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local756232932_0025_m_000000_0
2016-12-13 19:07:52,204 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map task executor complete.
2016-12-13 19:07:52,205 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for reduce tasks
2016-12-13 19:07:52,205 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local756232932_0025_r_000000_0
2016-12-13 19:07:52,209 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-13 19:07:52,264 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@74f508b9
2016-12-13 19:07:52,264 INFO [org.apache.hadoop.mapred.ReduceTask] - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@30fe5ac
2016-12-13 19:07:52,266 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - MergerManager: memoryLimit=1327077760, maxSingleShuffleLimit=331769440, mergeThreshold=875871360, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2016-12-13 19:07:52,268 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - attempt_local756232932_0025_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2016-12-13 19:07:52,274 INFO [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] - localfetcher#25 about to shuffle output of map attempt_local756232932_0025_m_000000_0 decomp: 275 len: 279 to MEMORY
2016-12-13 19:07:52,276 INFO [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] - Read 275 bytes from map-output for attempt_local756232932_0025_m_000000_0
2016-12-13 19:07:52,277 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - closeInMemoryFile -> map-output of size: 275, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->275
2016-12-13 19:07:52,278 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - EventFetcher is interrupted.. Returning
2016-12-13 19:07:52,280 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-13 19:07:52,280 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2016-12-13 19:07:52,294 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-13 19:07:52,295 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 271 bytes
2016-12-13 19:07:52,298 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merged 1 segments, 275 bytes to disk to satisfy reduce memory limit
2016-12-13 19:07:52,299 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 1 files, 279 bytes from disk
2016-12-13 19:07:52,299 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 0 segments, 0 bytes from memory into reduce
2016-12-13 19:07:52,300 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-13 19:07:52,301 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 271 bytes
2016-12-13 19:07:52,303 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
*********** new pageRank value is 0.20204150634863857 1___________
*********** new pageRank value is 0.34175921352732863 3___________
*********** new pageRank value is 0.38368802025726273 3___________
*********** new pageRank value is 0.12410468942339331 0___________
2016-12-13 19:07:52,321 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local756232932_0025_r_000000_0 is done. And is in the process of committing
2016-12-13 19:07:52,326 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-13 19:07:52,327 INFO [org.apache.hadoop.mapred.Task] - Task attempt_local756232932_0025_r_000000_0 is allowed to commit now
2016-12-13 19:07:52,339 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - Saved output of task 'attempt_local756232932_0025_r_000000_0' to file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr25/_temporary/0/task_local756232932_0025_r_000000
2016-12-13 19:07:52,343 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce > reduce
2016-12-13 19:07:52,344 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local756232932_0025_r_000000_0' done.
2016-12-13 19:07:52,344 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local756232932_0025_r_000000_0
2016-12-13 19:07:52,346 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce task executor complete.
2016-12-13 19:07:53,080 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local756232932_0025 running in uber mode : false
2016-12-13 19:07:53,081 INFO [org.apache.hadoop.mapreduce.Job] - map 100% reduce 100%
2016-12-13 19:07:53,082 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local756232932_0025 completed successfully
2016-12-13 19:07:53,092 INFO [org.apache.hadoop.mapreduce.Job] - Counters: 34
    File System Counters
        FILE: Number of bytes read=43182
        FILE: Number of bytes written=9742963
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Map-Reduce Framework
        Map input records=4
        Map output records=11
        Map output bytes=251
        Map output materialized bytes=279
        Input split bytes=139
        Combine input records=0
        Combine output records=0
        Reduce input groups=4
        Reduce shuffle bytes=279
        Reduce input records=11
        Reduce output records=4
        Spilled Records=22
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=0
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
        Total committed heap usage (bytes)=1774190592
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=120
    File Output Format Counters
        Bytes Written=114
    zhouls.bigdata.myMapReduce.pagerank.RunJob$Mycounter
        my=7
success. 7
2016-12-13 19:07:53,109 INFO [org.apache.hadoop.metrics.jvm.JvmMetrics] - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2016-12-13 19:07:53,116 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-12-13 19:07:53,119 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-12-13 19:07:53,539 INFO [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] - Total input paths to process : 1
2016-12-13 19:07:53,553 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - number of splits:1
2016-12-13 19:07:53,574 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - Submitting tokens for job: job_local525031809_0026
2016-12-13 19:07:53,728 INFO [org.apache.hadoop.mapreduce.Job] - The url to track the job: http://localhost:8080/
2016-12-13 19:07:53,729 INFO [org.apache.hadoop.mapreduce.Job] - Running job: job_local525031809_0026
2016-12-13 19:07:53,729 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter set in config null
2016-12-13 19:07:53,730 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2016-12-13 19:07:53,736 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for map tasks
2016-12-13 19:07:53,737 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local525031809_0026_m_000000_0
2016-12-13 19:07:53,739 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-13 19:07:53,787 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@4190fa1c
2016-12-13 19:07:53,793 INFO [org.apache.hadoop.mapred.MapTask] - Processing split: file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr25/part-r-00000:0+102
2016-12-13 19:07:53,825 INFO [org.apache.hadoop.mapred.MapTask] - (EQUATOR) 0 kvi 26214396(104857584)
2016-12-13 19:07:53,825 INFO [org.apache.hadoop.mapred.MapTask] - mapreduce.task.io.sort.mb: 100
2016-12-13 19:07:53,826 INFO [org.apache.hadoop.mapred.MapTask] - soft limit at 83886080
2016-12-13 19:07:53,826 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufvoid = 104857600
2016-12-13 19:07:53,826 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396; length = 6553600
2016-12-13 19:07:53,827 INFO [org.apache.hadoop.mapred.MapTask] - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2016-12-13 19:07:53,835 INFO [org.apache.hadoop.mapred.LocalJobRunner] -
2016-12-13 19:07:53,836 INFO [org.apache.hadoop.mapred.MapTask] - Starting flush of map output
2016-12-13 19:07:53,836 INFO [org.apache.hadoop.mapred.MapTask] - Spilling map output
2016-12-13 19:07:53,837 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufend = 258; bufvoid = 104857600
2016-12-13 19:07:53,837 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396(104857584); kvend = 26214356(104857424); length = 41/6553600
2016-12-13 19:07:53,859 INFO [org.apache.hadoop.mapred.MapTask] - Finished spill 0
2016-12-13 19:07:53,864 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local525031809_0026_m_000000_0 is done. And is in the process of committing
2016-12-13 19:07:53,868 INFO [org.apache.hadoop.mapred.LocalJobRunner] - file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr25/part-r-00000:0+102
2016-12-13 19:07:53,869 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local525031809_0026_m_000000_0' done.
2016-12-13 19:07:53,869 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local525031809_0026_m_000000_0
2016-12-13 19:07:53,869 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map task executor complete.
2016-12-13 19:07:53,870 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for reduce tasks
2016-12-13 19:07:53,871 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local525031809_0026_r_000000_0
2016-12-13 19:07:53,874 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-13 19:07:53,929 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@4ab0342e
2016-12-13 19:07:53,930 INFO [org.apache.hadoop.mapred.ReduceTask] - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@7631de50
2016-12-13 19:07:53,932 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - MergerManager: memoryLimit=1327077760, maxSingleShuffleLimit=331769440, mergeThreshold=875871360, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2016-12-13 19:07:53,934 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - attempt_local525031809_0026_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2016-12-13 19:07:53,940 INFO [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] - localfetcher#26 about to shuffle output of map attempt_local525031809_0026_m_000000_0 decomp: 282 len: 286 to MEMORY
2016-12-13 19:07:53,942 INFO [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] - Read 282 bytes from map-output for attempt_local525031809_0026_m_000000_0
2016-12-13 19:07:53,942 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - closeInMemoryFile -> map-output of size: 282, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->282
2016-12-13 19:07:53,944 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - EventFetcher is interrupted.. Returning
2016-12-13 19:07:53,946 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-13 19:07:53,947 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2016-12-13 19:07:53,959 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-13 19:07:53,960 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 278 bytes
2016-12-13 19:07:53,962 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merged 1 segments, 282 bytes to disk to satisfy reduce memory limit
2016-12-13 19:07:53,963 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 1 files, 286 bytes from disk
2016-12-13 19:07:53,963 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 0 segments, 0 bytes from memory into reduce
2016-12-13 19:07:53,963 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-13 19:07:53,964 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 278 bytes
2016-12-13 19:07:53,965 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
*********** new pageRank value is 0.20056740860933667 1___________
*********** new pageRank value is 0.3391795418124502 2___________
*********** new pageRank value is 0.38073982450317145 2___________
*********** new pageRank value is 0.1233676401981714 0___________
2016-12-13 19:07:53,977 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local525031809_0026_r_000000_0 is done. And is in the process of committing
2016-12-13 19:07:53,981 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-13 19:07:53,981 INFO [org.apache.hadoop.mapred.Task] - Task attempt_local525031809_0026_r_000000_0 is allowed to commit now
2016-12-13 19:07:53,988 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - Saved output of task 'attempt_local525031809_0026_r_000000_0' to file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr26/_temporary/0/task_local525031809_0026_r_000000
2016-12-13 19:07:53,990 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce > reduce
2016-12-13 19:07:53,990 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local525031809_0026_r_000000_0' done.
2016-12-13 19:07:53,991 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local525031809_0026_r_000000_0
2016-12-13 19:07:53,991 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce task executor complete.
2016-12-13 19:07:54,729 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local525031809_0026 running in uber mode : false
2016-12-13 19:07:54,730 INFO [org.apache.hadoop.mapreduce.Job] - map 100% reduce 100%
2016-12-13 19:07:54,730 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local525031809_0026 completed successfully
2016-12-13 19:07:54,739 INFO [org.apache.hadoop.mapreduce.Job] - Counters: 34
    File System Counters
        FILE: Number of bytes read=45006
        FILE: Number of bytes written=10131824
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Map-Reduce Framework
        Map input records=4
        Map output records=11
        Map output bytes=258
        Map output materialized bytes=286
        Input split bytes=139
        Combine input records=0
        Combine output records=0
        Reduce input groups=4
        Reduce shuffle bytes=286
        Reduce input records=11
        Reduce output records=4
        Spilled Records=22
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=0
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
        Total committed heap usage (bytes)=1774190592
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=122
    File Output Format Counters
        Bytes Written=112
    zhouls.bigdata.myMapReduce.pagerank.RunJob$Mycounter
        my=5
success. 5
2016-12-13 19:07:54,749 INFO [org.apache.hadoop.metrics.jvm.JvmMetrics] - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2016-12-13 19:07:54,756 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-12-13 19:07:54,759 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-12-13 19:07:55,178 INFO [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] - Total input paths to process : 1
2016-12-13 19:07:55,190 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - number of splits:1
2016-12-13 19:07:55,211 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - Submitting tokens for job: job_local582978068_0027
2016-12-13 19:07:55,356 INFO [org.apache.hadoop.mapreduce.Job] - The url to track the job: http://localhost:8080/
2016-12-13 19:07:55,357 INFO [org.apache.hadoop.mapreduce.Job] - Running job: job_local582978068_0027
2016-12-13 19:07:55,357 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter set in config null
2016-12-13 19:07:55,358 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2016-12-13 19:07:55,366 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for map tasks
2016-12-13 19:07:55,367 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local582978068_0027_m_000000_0
2016-12-13 19:07:55,371 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-13 19:07:55,428 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@4b81e280
2016-12-13 19:07:55,432 INFO [org.apache.hadoop.mapred.MapTask] - Processing split: file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr26/part-r-00000:0+100
2016-12-13 19:07:55,459 INFO [org.apache.hadoop.mapred.MapTask] - (EQUATOR) 0 kvi 26214396(104857584)
2016-12-13 19:07:55,459 INFO [org.apache.hadoop.mapred.MapTask] - mapreduce.task.io.sort.mb: 100
2016-12-13 19:07:55,459 INFO [org.apache.hadoop.mapred.MapTask] - soft limit at 83886080
2016-12-13 19:07:55,459 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufvoid = 104857600
2016-12-13 19:07:55,459 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396; length = 6553600
2016-12-13 19:07:55,461 INFO [org.apache.hadoop.mapred.MapTask] - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2016-12-13 19:07:55,469 INFO [org.apache.hadoop.mapred.LocalJobRunner] -
2016-12-13 19:07:55,470 INFO [org.apache.hadoop.mapred.MapTask] - Starting flush of map output
2016-12-13 19:07:55,470 INFO [org.apache.hadoop.mapred.MapTask] - Spilling map output
2016-12-13 19:07:55,470 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufend = 251; bufvoid = 104857600
2016-12-13 19:07:55,471 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396(104857584); kvend = 26214356(104857424); length = 41/6553600
2016-12-13 19:07:55,486 INFO [org.apache.hadoop.mapred.MapTask] - Finished spill 0
2016-12-13 19:07:55,493 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local582978068_0027_m_000000_0 is done. And is in the process of committing
2016-12-13 19:07:55,499 INFO [org.apache.hadoop.mapred.LocalJobRunner] - file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr26/part-r-00000:0+100
2016-12-13 19:07:55,500 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local582978068_0027_m_000000_0' done.
2016-12-13 19:07:55,500 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local582978068_0027_m_000000_0
2016-12-13 19:07:55,501 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map task executor complete.
2016-12-13 19:07:55,502 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for reduce tasks
2016-12-13 19:07:55,502 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local582978068_0027_r_000000_0
2016-12-13 19:07:55,507 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-13 19:07:55,570 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@314779f4
2016-12-13 19:07:55,571 INFO [org.apache.hadoop.mapred.ReduceTask] - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@4b979b72
2016-12-13 19:07:55,573 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - MergerManager: memoryLimit=1327077760, maxSingleShuffleLimit=331769440, mergeThreshold=875871360, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2016-12-13 19:07:55,577 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - attempt_local582978068_0027_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2016-12-13 19:07:55,587 INFO [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] - localfetcher#27 about to shuffle output of map attempt_local582978068_0027_m_000000_0 decomp: 275 len: 279 to MEMORY
2016-12-13 19:07:55,589 INFO [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] - Read 275 bytes from map-output for attempt_local582978068_0027_m_000000_0
2016-12-13 19:07:55,590 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - closeInMemoryFile -> map-output of size: 275, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->275
2016-12-13 19:07:55,591 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - EventFetcher is interrupted.. Returning
2016-12-13 19:07:55,595 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-13 19:07:55,595 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2016-12-13 19:07:55,609 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-13 19:07:55,609 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 271 bytes
2016-12-13 19:07:55,611 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merged 1 segments, 275 bytes to disk to satisfy reduce memory limit
2016-12-13 19:07:55,613 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 1 files, 279 bytes from disk
2016-12-13 19:07:55,613 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 0 segments, 0 bytes from memory into reduce
2016-12-13 19:07:55,613 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-13 19:07:55,614 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 271 bytes
2016-12-13 19:07:55,616 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
*********** new pageRank value is 0.19931442541384786 1___________
*********** new pageRank value is 0.33698682115703876 2___________
*********** new pageRank value is 0.37823385762480544 2___________
*********** new pageRank value is 0.12274114865896807 0___________
2016-12-13 19:07:55,636 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local582978068_0027_r_000000_0 is done. And is in the process of committing
2016-12-13 19:07:55,640 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-13 19:07:55,640 INFO [org.apache.hadoop.mapred.Task] - Task attempt_local582978068_0027_r_000000_0 is allowed to commit now
2016-12-13 19:07:55,649 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - Saved output of task 'attempt_local582978068_0027_r_000000_0' to file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr27/_temporary/0/task_local582978068_0027_r_000000
2016-12-13 19:07:55,657 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce > reduce
2016-12-13 19:07:55,657 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local582978068_0027_r_000000_0' done.
2016-12-13 19:07:55,658 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local582978068_0027_r_000000_0
2016-12-13 19:07:55,658 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce task executor complete.
2016-12-13 19:07:56,357 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local582978068_0027 running in uber mode : false
2016-12-13 19:07:56,358 INFO [org.apache.hadoop.mapreduce.Job] - map 100% reduce 100%
2016-12-13 19:07:56,359 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local582978068_0027 completed successfully
2016-12-13 19:07:56,366 INFO [org.apache.hadoop.mapreduce.Job] - Counters: 34
    File System Counters
        FILE: Number of bytes read=46826
        FILE: Number of bytes written=10520671
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Map-Reduce Framework
        Map input records=4
        Map output records=11
        Map output bytes=251
        Map output materialized bytes=279
        Input split bytes=139
        Combine input records=0
        Combine output records=0
        Reduce input groups=4
        Reduce shuffle bytes=279
        Reduce input records=11
        Reduce output records=4
        Spilled Records=22
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=0
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
        Total committed heap usage (bytes)=1889533952
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=120
    File Output Format Counters
        Bytes Written=114
    zhouls.bigdata.myMapReduce.pagerank.RunJob$Mycounter
        my=5
success. 5
2016-12-13 19:07:56,374 INFO [org.apache.hadoop.metrics.jvm.JvmMetrics] - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2016-12-13 19:07:56,384 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-12-13 19:07:56,386 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-12-13 19:07:56,854 INFO [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] - Total input paths to process : 1
2016-12-13 19:07:56,866 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - number of splits:1
2016-12-13 19:07:56,880 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - Submitting tokens for job: job_local1006109257_0028
2016-12-13 19:07:57,050 INFO [org.apache.hadoop.mapreduce.Job] - The url to track the job: http://localhost:8080/
2016-12-13 19:07:57,051 INFO [org.apache.hadoop.mapreduce.Job] - Running job: job_local1006109257_0028
2016-12-13 19:07:57,051 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter set in config null
2016-12-13 19:07:57,052 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2016-12-13 19:07:57,059 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for map tasks
2016-12-13 19:07:57,059 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local1006109257_0028_m_000000_0
2016-12-13 19:07:57,062 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-13 19:07:57,114 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@753f2425 2016-12-13 19:07:57,118 INFO [org.apache.hadoop.mapred.MapTask] - Processing split: file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr27/part-r-00000:0+102 2016-12-13 19:07:57,152 INFO [org.apache.hadoop.mapred.MapTask] - (EQUATOR) 0 kvi 26214396(104857584) 2016-12-13 19:07:57,152 INFO [org.apache.hadoop.mapred.MapTask] - mapreduce.task.io.sort.mb: 100 2016-12-13 19:07:57,152 INFO [org.apache.hadoop.mapred.MapTask] - soft limit at 83886080 2016-12-13 19:07:57,153 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufvoid = 104857600 2016-12-13 19:07:57,153 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396; length = 6553600 2016-12-13 19:07:57,154 INFO [org.apache.hadoop.mapred.MapTask] - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer 2016-12-13 19:07:57,162 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 2016-12-13 19:07:57,163 INFO [org.apache.hadoop.mapred.MapTask] - Starting flush of map output 2016-12-13 19:07:57,163 INFO [org.apache.hadoop.mapred.MapTask] - Spilling map output 2016-12-13 19:07:57,163 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufend = 258; bufvoid = 104857600 2016-12-13 19:07:57,163 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396(104857584); kvend = 26214356(104857424); length = 41/6553600 2016-12-13 19:07:57,175 INFO [org.apache.hadoop.mapred.MapTask] - Finished spill 0 2016-12-13 19:07:57,179 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local1006109257_0028_m_000000_0 is done. And is in the process of committing 2016-12-13 19:07:57,183 INFO [org.apache.hadoop.mapred.LocalJobRunner] - file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr27/part-r-00000:0+102 2016-12-13 19:07:57,183 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local1006109257_0028_m_000000_0' done. 
2016-12-13 19:07:57,184 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local1006109257_0028_m_000000_0
2016-12-13 19:07:57,184 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map task executor complete.
2016-12-13 19:07:57,185 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for reduce tasks
2016-12-13 19:07:57,185 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local1006109257_0028_r_000000_0
2016-12-13 19:07:57,188 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-13 19:07:57,237 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@b901239
2016-12-13 19:07:57,237 INFO [org.apache.hadoop.mapred.ReduceTask] - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@1f9c621d
2016-12-13 19:07:57,239 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - MergerManager: memoryLimit=1327077760, maxSingleShuffleLimit=331769440, mergeThreshold=875871360, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2016-12-13 19:07:57,242 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - attempt_local1006109257_0028_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2016-12-13 19:07:57,246 INFO [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] - localfetcher#28 about to shuffle output of map attempt_local1006109257_0028_m_000000_0 decomp: 282 len: 286 to MEMORY
2016-12-13 19:07:57,247 INFO [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] - Read 282 bytes from map-output for attempt_local1006109257_0028_m_000000_0
2016-12-13 19:07:57,247 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - closeInMemoryFile -> map-output of size: 282, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->282
2016-12-13 19:07:57,248 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - EventFetcher is interrupted.. Returning
2016-12-13 19:07:57,250 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-13 19:07:57,250 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2016-12-13 19:07:57,259 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-13 19:07:57,260 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 278 bytes
2016-12-13 19:07:57,263 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merged 1 segments, 282 bytes to disk to satisfy reduce memory limit
2016-12-13 19:07:57,263 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 1 files, 286 bytes from disk
2016-12-13 19:07:57,264 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 0 segments, 0 bytes from memory into reduce
2016-12-13 19:07:57,264 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-13 19:07:57,265 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 278 bytes
2016-12-13 19:07:57,266 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
*********** new pageRank value is 0.1982493894905423
1___________
*********** new pageRank value is 0.33512300847148907
1___________
*********** new pageRank value is 0.3761037861635444
2___________
*********** new pageRank value is 0.12220863080088534
0___________
2016-12-13 19:07:57,280 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local1006109257_0028_r_000000_0 is done. And is in the process of committing
2016-12-13 19:07:57,284 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-13 19:07:57,284 INFO [org.apache.hadoop.mapred.Task] - Task attempt_local1006109257_0028_r_000000_0 is allowed to commit now
2016-12-13 19:07:57,295 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - Saved output of task 'attempt_local1006109257_0028_r_000000_0' to file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr28/_temporary/0/task_local1006109257_0028_r_000000
2016-12-13 19:07:57,299 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce > reduce
2016-12-13 19:07:57,299 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local1006109257_0028_r_000000_0' done.
2016-12-13 19:07:57,299 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local1006109257_0028_r_000000_0
2016-12-13 19:07:57,300 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce task executor complete.
2016-12-13 19:07:58,051 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local1006109257_0028 running in uber mode : false
2016-12-13 19:07:58,052 INFO [org.apache.hadoop.mapreduce.Job] - map 100% reduce 100%
2016-12-13 19:07:58,053 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local1006109257_0028 completed successfully
2016-12-13 19:07:58,057 INFO [org.apache.hadoop.mapreduce.Job] - Counters: 34
    File System Counters
        FILE: Number of bytes read=48650
        FILE: Number of bytes written=10911508
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Map-Reduce Framework
        Map input records=4
        Map output records=11
        Map output bytes=258
        Map output materialized bytes=286
        Input split bytes=139
        Combine input records=0
        Combine output records=0
        Reduce input groups=4
        Reduce shuffle bytes=286
        Reduce input records=11
        Reduce output records=4
        Spilled Records=22
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=0
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
        Total committed heap usage (bytes)=1889533952
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=122
    File Output Format Counters
        Bytes Written=112
    zhouls.bigdata.myMapReduce.pagerank.RunJob$Mycounter
        my=4
success.
4
2016-12-13 19:07:58,066 INFO [org.apache.hadoop.metrics.jvm.JvmMetrics] - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2016-12-13 19:07:58,073 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-12-13 19:07:58,076 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-12-13 19:07:58,528 INFO [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] - Total input paths to process : 1
2016-12-13 19:07:58,540 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - number of splits:1
2016-12-13 19:07:58,559 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - Submitting tokens for job: job_local626952993_0029
2016-12-13 19:07:58,716 INFO [org.apache.hadoop.mapreduce.Job] - The url to track the job: http://localhost:8080/
2016-12-13 19:07:58,717 INFO [org.apache.hadoop.mapreduce.Job] - Running job: job_local626952993_0029
2016-12-13 19:07:58,717 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter set in config null
2016-12-13 19:07:58,718 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2016-12-13 19:07:58,725 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for map tasks
2016-12-13 19:07:58,726 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local626952993_0029_m_000000_0
2016-12-13 19:07:58,728 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-13 19:07:58,780 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@3e0a19cb
2016-12-13 19:07:58,784 INFO [org.apache.hadoop.mapred.MapTask] - Processing split: file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr28/part-r-00000:0+100
2016-12-13 19:07:58,831 INFO [org.apache.hadoop.mapred.MapTask] - (EQUATOR) 0 kvi 26214396(104857584)
2016-12-13 19:07:58,831 INFO [org.apache.hadoop.mapred.MapTask] - mapreduce.task.io.sort.mb: 100
2016-12-13 19:07:58,831 INFO [org.apache.hadoop.mapred.MapTask] - soft limit at 83886080
2016-12-13 19:07:58,832 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufvoid = 104857600
2016-12-13 19:07:58,832 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396; length = 6553600
2016-12-13 19:07:58,833 INFO [org.apache.hadoop.mapred.MapTask] - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2016-12-13 19:07:58,839 INFO [org.apache.hadoop.mapred.LocalJobRunner] -
2016-12-13 19:07:58,839 INFO [org.apache.hadoop.mapred.MapTask] - Starting flush of map output
2016-12-13 19:07:58,840 INFO [org.apache.hadoop.mapred.MapTask] - Spilling map output
2016-12-13 19:07:58,840 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufend = 252; bufvoid = 104857600
2016-12-13 19:07:58,840 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396(104857584); kvend = 26214356(104857424); length = 41/6553600
2016-12-13 19:07:58,853 INFO [org.apache.hadoop.mapred.MapTask] - Finished spill 0
2016-12-13 19:07:58,858 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local626952993_0029_m_000000_0 is done. And is in the process of committing
2016-12-13 19:07:58,863 INFO [org.apache.hadoop.mapred.LocalJobRunner] - file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr28/part-r-00000:0+100
2016-12-13 19:07:58,863 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local626952993_0029_m_000000_0' done.
2016-12-13 19:07:58,864 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local626952993_0029_m_000000_0
2016-12-13 19:07:58,864 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map task executor complete.
2016-12-13 19:07:58,865 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for reduce tasks
2016-12-13 19:07:58,866 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local626952993_0029_r_000000_0
2016-12-13 19:07:58,870 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2016-12-13 19:07:58,926 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@6f36ac8a
2016-12-13 19:07:58,926 INFO [org.apache.hadoop.mapred.ReduceTask] - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@6e79d110
2016-12-13 19:07:58,928 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - MergerManager: memoryLimit=1327077760, maxSingleShuffleLimit=331769440, mergeThreshold=875871360, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2016-12-13 19:07:58,930 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - attempt_local626952993_0029_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2016-12-13 19:07:58,935 INFO [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] - localfetcher#29 about to shuffle output of map attempt_local626952993_0029_m_000000_0 decomp: 276 len: 280 to MEMORY
2016-12-13 19:07:58,936 INFO [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] - Read 276 bytes from map-output for attempt_local626952993_0029_m_000000_0
2016-12-13 19:07:58,937 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - closeInMemoryFile -> map-output of size: 276, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->276
2016-12-13 19:07:58,938 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - EventFetcher is interrupted.. Returning
2016-12-13 19:07:58,940 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-13 19:07:58,940 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2016-12-13 19:07:58,949 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-13 19:07:58,949 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 272 bytes
2016-12-13 19:07:58,952 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merged 1 segments, 276 bytes to disk to satisfy reduce memory limit
2016-12-13 19:07:58,953 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 1 files, 280 bytes from disk
2016-12-13 19:07:58,953 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 0 segments, 0 bytes from memory into reduce
2016-12-13 19:07:58,954 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2016-12-13 19:07:58,954 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 272 bytes
2016-12-13 19:07:58,956 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
*********** new pageRank value is 0.19734410911950637
0___________
*********** new pageRank value is 0.33353876774336305
1___________
*********** new pageRank value is 0.37429322529114195
1___________
*********** new pageRank value is 0.12175599053348046
0___________
2016-12-13 19:07:58,969 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local626952993_0029_r_000000_0 is done. And is in the process of committing
2016-12-13 19:07:58,978 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 1 / 1 copied.
2016-12-13 19:07:58,979 INFO [org.apache.hadoop.mapred.Task] - Task attempt_local626952993_0029_r_000000_0 is allowed to commit now
2016-12-13 19:07:58,996 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - Saved output of task 'attempt_local626952993_0029_r_000000_0' to file:/D:/Code/MyEclipseJavaCode/myMapReduce/out/pagerank/pr29/_temporary/0/task_local626952993_0029_r_000000
2016-12-13 19:07:58,999 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce > reduce
2016-12-13 19:07:58,999 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local626952993_0029_r_000000_0' done.
2016-12-13 19:07:58,999 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local626952993_0029_r_000000_0
2016-12-13 19:07:58,999 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce task executor complete.
2016-12-13 19:07:59,717 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local626952993_0029 running in uber mode : false
2016-12-13 19:07:59,718 INFO [org.apache.hadoop.mapreduce.Job] - map 100% reduce 100%
2016-12-13 19:07:59,719 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local626952993_0029 completed successfully
2016-12-13 19:07:59,728 INFO [org.apache.hadoop.mapreduce.Job] - Counters: 34
    File System Counters
        FILE: Number of bytes read=50472
        FILE: Number of bytes written=11300358
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Map-Reduce Framework
        Map input records=4
        Map output records=11
        Map output bytes=252
        Map output materialized bytes=280
        Input split bytes=139
        Combine input records=0
        Combine output records=0
        Reduce input groups=4
        Reduce shuffle bytes=280
        Reduce input records=11
        Reduce output records=4
        Spilled Records=22
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=21
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
        Total committed heap usage (bytes)=1901068288
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=120
    File Output Format Counters
        Bytes Written=114
    zhouls.bigdata.myMapReduce.pagerank.RunJob$Mycounter
        my=2
success.
2

Code

Node.java:

package zhouls.bigdata.myMapReduce.pagerank;

// Holds one page's PageRank value and its adjacency list
import java.io.IOException;
import java.util.Arrays;
import org.apache.commons.lang.StringUtils;

public class Node {
    private double pageRank = 1.0;
    private String[] adjacentNodeNames;

    public static final char fieldSeparator = '\t'; // field separator

    public double getPageRank() {
        return pageRank;
    }

    public Node setPageRank(double pageRank) {
        this.pageRank = pageRank;
        return this;
    }

    public String[] getAdjacentNodeNames() {
        return adjacentNodeNames;
    }

    public Node setAdjacentNodeNames(String[] adjacentNodeNames) {
        this.adjacentNodeNames = adjacentNodeNames;
        return this;
    } // These getters and setters can be generated in Eclipse via right-click > Source > Generate Getters and Setters.

    public boolean containsAdjacentNodes() {
        return adjacentNodeNames != null && adjacentNodeNames.length > 0;
    }

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder();
        sb.append(pageRank);
        if (getAdjacentNodeNames() != null) {
            sb.append(fieldSeparator)
              .append(StringUtils.join(getAdjacentNodeNames(), fieldSeparator));
        }
        return sb.toString();
    }

    // value = "1.0<TAB>B<TAB>D": a rank followed by the adjacent node names
    public static Node fromMR(String value) throws IOException {
        String[] parts = StringUtils.splitPreserveAllTokens(value, fieldSeparator);
        if (parts.length < 1) {
            throw new IOException("Expected 1 or more parts but received " + parts.length);
        }
        Node node = new Node().setPageRank(Double.valueOf(parts[0]));
        if (parts.length > 1) {
            node.setAdjacentNodeNames(Arrays.copyOfRange(parts, 1, parts.length));
        }
        return node;
    }
}

RunJob.java:

package zhouls.bigdata.myMapReduce.pagerank; // PageRank driver

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RunJob {

    public static enum Mycounter {
        my
    }

    public static void main(String[] args) {
        Configuration config = new Configuration();
        // config.set("fs.defaultFS", "hdfs://HadoopMaster:9000");
        // config.set("yarn.resourcemanager.hostname", "HadoopMaster");
        double d = 0.001;
        int i = 0;
        while (true) {
            i++;
            try {
                config.setInt("runCount", i);
                FileSystem fs = FileSystem.get(config);
                Job job = Job.getInstance(config);
                job.setJarByClass(RunJob.class);
                job.setJobName("pr" + i);
                job.setMapperClass(PageRankMapper.class);
                job.setReducerClass(PageRankReducer.class);
                job.setMapOutputKeyClass(Text.class);
                job.setMapOutputValueClass(Text.class);
                job.setInputFormatClass(KeyValueTextInputFormat.class);
                // Path inputPath = new Path("hdfs://HadoopMaster:9000/pagerank/pagerank.txt");
                Path inputPath = new Path("./data/pagerank/pagerank.txt");
                if (i > 1) {
                    // inputPath = new Path("hdfs://HadoopMaster:9000/out/pagerank/pr" + (i - 1));
                    inputPath = new Path("./out/pagerank/pr" + (i - 1));
                }
                FileInputFormat.addInputPath(job, inputPath);
                // Path outpath = new Path("hdfs://HadoopMaster:9000/out/pagerank/pr" + i);
                Path outpath = new Path("./out/pagerank/pr" + i);
                if (fs.exists(outpath)) {
                    fs.delete(outpath, true);
                }
                FileOutputFormat.setOutputPath(job, outpath);
                boolean f = job.waitForCompletion(true);
                if (f) {
                    System.out.println("success.");
                    long sum = job.getCounters().findCounter(Mycounter.my).getValue();
                    System.out.println(sum);
                    double avgd = sum / 4000.0;
                    if (avgd < d) {
                        break;
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    static class PageRankMapper extends Mapper<Text, Text, Text, Text> {
        protected void map(Text key, Text value, Context context)
                throws IOException, InterruptedException {
            int runCount = context.getConfiguration().getInt("runCount", 1);
            String page = key.toString();
            Node node = null;
            if (runCount == 1) {
                node = Node.fromMR("1.0" + "\t" + value.toString());
            } else {
                node = Node.fromMR(value.toString());
            }
            context.write(new Text(page), new Text(node.toString())); // e.g. A: 1.0 B D
            if (node.containsAdjacentNodes()) {
                double outValue = node.getPageRank() / node.getAdjacentNodeNames().length;
                for (int i = 0; i < node.getAdjacentNodeNames().length; i++) {
                    String outPage = node.getAdjacentNodeNames()[i];
                    context.write(new Text(outPage), new Text(outValue + "")); // e.g. B: 0.5, D: 0.5
                }
            }
        }
    }

    static class PageRankReducer extends Reducer<Text, Text, Text, Text> {
        protected void reduce(Text arg0, Iterable<Text> arg1, Context arg2)
                throws IOException, InterruptedException {
            double sum = 0.0;
            Node sourceNode = null;
            for (Text i : arg1) {
                Node node = Node.fromMR(i.toString());
                if (node.containsAdjacentNodes()) {
                    sourceNode = node;
                } else {
                    sum = sum + node.getPageRank();
                }
            }
            double newPR = (0.15 / 4) + (0.85 * sum);
            System.out.println("*********** new pageRank value is " + newPR);
            // Compare the new PR value with the PR value before this iteration
            double d = newPR - sourceNode.getPageRank();
            int j = (int) (d * 1000.0);
            j = Math.abs(j);
            System.out.println(j + "___________");
            arg2.getCounter(Mycounter.my).increment(j);
            sourceNode.setPageRank(newPR);
            arg2.write(arg0, new Text(sourceNode.toString()));
        }
    }
}

This article was reposted from the 大数据躺过的坑 blog on 博客园. Original link: http://www.cnblogs.com/zlslch/p/6171162.html. Please contact the original author before reprinting.


Android 3.0 r1 API Documentation in Chinese (104): ViewTreeObserver

Body

I. Structure

public final class ViewTreeObserver extends Object

java.lang.Object
    android.view.ViewTreeObserver

II. Overview

A view tree observer is used to register listeners that are notified of global changes in the view tree. Such global events include, among others, the layout of the whole tree, the beginning of the drawing pass, and touch mode changes. A ViewTreeObserver cannot be instantiated by applications, because it is provided by the view hierarchy; see getViewTreeObserver() for more information.

III. Inner Classes

interface ViewTreeObserver.OnGlobalFocusChangeListener
Interface definition for a callback to be invoked when the focus state within the view tree changes.

interface ViewTreeObserver.OnGlobalLayoutListener
Interface definition for a callback to be invoked when the global layout of the view tree changes, or when the visibility of a view within the tree changes.

interface ViewTreeObserver.OnPreDrawListener
Interface definition for a callback to be invoked when the view tree is about to be drawn.

interface ViewTreeObserver.OnScrollChangedListener
Interface definition for a callback to be invoked when something in the view tree has been scrolled.

interface ViewTreeObserver.OnTouchModeChangeListener
Interface definition for a callback to be invoked when the touch mode of the view tree changes.

IV. Public Methods

public void addOnGlobalFocusChangeListener(ViewTreeObserver.OnGlobalFocusChangeListener listener)
Registers a callback to be invoked when the focus state within the view tree changes.
Parameters
    listener: the callback to add
Throws
    IllegalStateException: if isAlive() returns false

public void addOnGlobalLayoutListener(ViewTreeObserver.OnGlobalLayoutListener listener)
Registers a callback to be invoked when the global layout of the view tree changes, or when the visibility of a view within the tree changes.
Parameters
    listener: the callback to add
Throws
    IllegalStateException: if isAlive() returns false

public void addOnPreDrawListener(ViewTreeObserver.OnPreDrawListener listener)
Registers a callback to be invoked when the view tree is about to be drawn.
Parameters
    listener: the callback to add
Throws
    IllegalStateException: if isAlive() returns false

public void addOnScrollChangedListener(ViewTreeObserver.OnScrollChangedListener listener)
Registers a callback to be invoked when a view is scrolled.
Parameters
    listener: the callback to add
Throws
    IllegalStateException: if isAlive() returns false

public void addOnTouchModeChangeListener(ViewTreeObserver.OnTouchModeChangeListener listener)
Registers a callback to be invoked when the touch mode changes.
Parameters
    listener: the callback to add
Throws
    IllegalStateException: if isAlive() returns false

public final void dispatchOnGlobalLayout()
Notifies the registered listeners that a global layout happened. It can be called manually if you are forcing a layout on a view hierarchy that is not attached to a window, or that is in the GONE state.

public final boolean dispatchOnPreDraw()
Notifies the registered listeners that the view tree is about to be drawn. If a listener returns true, the draw is cancelled and rescheduled. It can be called manually if you are forcing a draw on a view hierarchy that is not attached to a window, or that is in the GONE state.
Returns
    true if the current draw should be cancelled and rescheduled, false otherwise

public boolean isAlive()
Indicates whether this ViewTreeObserver is alive. When an observer is not alive, any call to one of its methods (except this one) will throw an exception. If an application keeps a long-lived reference to a ViewTreeObserver, it should always check the return value of this method before calling any other method.
Returns
    true if this object is alive, false otherwise

public void removeGlobalOnLayoutListener(ViewTreeObserver.OnGlobalLayoutListener victim)
Removes a previously installed global layout callback.
Parameters
    victim: the callback to remove
Throws
    IllegalStateException: if isAlive() returns false

public void removeOnGlobalFocusChangeListener(ViewTreeObserver.OnGlobalFocusChangeListener victim)
Removes a previously installed focus change callback.
Parameters
    victim: the callback to remove
Throws
    IllegalStateException: if isAlive() returns false

public void removeOnPreDrawListener(ViewTreeObserver.OnPreDrawListener victim)
Removes a previously installed pre-draw callback.
Parameters
    victim: the callback to remove
Throws
    IllegalStateException: if isAlive() returns false

public void removeOnScrollChangedListener(ViewTreeObserver.OnScrollChangedListener victim)
Removes a previously installed scroll-changed callback.
Parameters
    victim: the callback to remove
Throws
    IllegalStateException: if isAlive() returns false

public void removeOnTouchModeChangeListener(ViewTreeObserver.OnTouchModeChangeListener victim)
Removes a previously installed touch mode change callback.
Parameters
    victim: the callback to remove
Throws
    IllegalStateException: if isAlive() returns false

This article was reposted from over140's 51CTO blog. Original link: http://blog.51cto.com/over140/582383. Please contact the original author before reprinting.
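The listener lifecycle described above (register, dispatch, remove, and fail with IllegalStateException once the observer is no longer alive) can be modelled in a few lines of plain Java. The class below is a simplified stand-in for illustration only, not the android.view.ViewTreeObserver class, which applications cannot instantiate:

```java
import java.util.ArrayList;
import java.util.List;

// A minimal model of ViewTreeObserver's listener lifecycle, including
// the pre-draw veto: a listener that returns false cancels the draw.
public class MiniTreeObserver {
    public interface OnPreDrawListener {
        // Return true to let the draw proceed, false to cancel and reschedule it.
        boolean onPreDraw();
    }

    private boolean alive = true;
    private final List<OnPreDrawListener> preDrawListeners = new ArrayList<>();

    public boolean isAlive() {
        return alive;
    }

    private void checkIsAlive() {
        // Mirrors the documented behavior: once dead, every other method throws.
        if (!alive) throw new IllegalStateException("This observer is no longer alive");
    }

    public void addOnPreDrawListener(OnPreDrawListener listener) {
        checkIsAlive();
        preDrawListeners.add(listener);
    }

    public void removeOnPreDrawListener(OnPreDrawListener victim) {
        checkIsAlive();
        preDrawListeners.remove(victim);
    }

    // Mirrors dispatchOnPreDraw(): returns true if the draw should be cancelled.
    public boolean dispatchOnPreDraw() {
        boolean cancelDraw = false;
        for (OnPreDrawListener l : preDrawListeners) {
            cancelDraw |= !l.onPreDraw();
        }
        return cancelDraw;
    }

    // Stand-in for the framework tearing the observer down when its view
    // hierarchy goes away; the real class has no such public method.
    public void kill() {
        alive = false;
    }
}
```

In real Android code, the observer is obtained with `view.getViewTreeObserver()` and the same check applies: always call `isAlive()` before using a stored reference.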
