
Spark: group by key, then compute the max, min, and average for each key — using groupByKey or reduceByKey

Date: 2017-10-31
What you're getting back is an object which allows you to iterate over the results. You can turn the results of groupByKey into a list by calling list() on the values, e.g.

    example = sc.parallelize([(0, u'D'), (0, u'D'), (1, u'E'), (2, u'F')])

    example.groupByKey().collect()
    # Gives [(0, <pyspark.resultiterable.ResultIterable object ......]

    example.groupByKey().map(lambda x: (x[0], list(x[1]))).collect()
    # Gives [(0, [u'D', u'D']), (1, [u'E']), (2, [u'F'])]

    # OR:
    example.groupByKey().mapValues(list)
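Building on the same groupByKey().mapValues(...) pattern, the per-key max, min, and average from the title can be computed in one pass over each group. A minimal sketch, assuming a hypothetical RDD named pairs of (key, numeric value) records (the data and names below are made up for illustration):

    # Hypothetical (key, numeric value) pairs, for illustration only
    pairs = sc.parallelize([("a", 3.0), ("a", 5.0), ("b", 1.0), ("b", 7.0), ("b", 4.0)])

    def key_stats(values):
        xs = list(values)                              # materialize the ResultIterable once
        return (max(xs), min(xs), sum(xs) / len(xs))   # (max, min, average)

    pairs.groupByKey().mapValues(key_stats).collect()
    # e.g. [('a', (5.0, 3.0, 4.0)), ('b', (7.0, 1.0, 4.0))]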
 
Hey Ron, It was pretty much exactly as Sean had depicted. I just needed to provide count with an anonymous function to tell it which elements to count. Since I wanted to count them all, the function is simply "true".

    val grouped = rdd.groupByKey().mapValues { mcs =>
      val values = mcs.map(_.foo.toDouble)
      val n = values.count(x => true)
      val sum = values.sum
      val sumSquares = values.map(x => x * x).sum
      val stddev = math.sqrt(n * sumSquares - sum * sum) / n
      print("stddev: " + stddev)
      stddev
    }

I hope that helps.
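For readers following along in PySpark rather than Scala, a rough equivalent of the per-key standard deviation above might look like the sketch below. The pairs RDD and its values are assumptions for illustration; the formula sqrt(n * sumSquares - sum^2) / n is the population standard deviation, matching the Scala snippet:

    import math

    # Hypothetical (key, numeric value) pairs, for illustration only
    pairs = sc.parallelize([("a", 3.0), ("a", 5.0), ("b", 1.0), ("b", 7.0)])

    def stddev(values):
        xs = list(values)
        n = len(xs)
        s = sum(xs)
        sum_squares = sum(x * x for x in xs)
        # population standard deviation, same formula as the Scala code above
        return math.sqrt(n * sum_squares - s * s) / n

    pairs.groupByKey().mapValues(stddev).collect()
    # e.g. [('a', 1.0), ('b', 3.0)]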

Just don't. Use reduceByKey instead:

    # emit (value, value) per key so the reduce can track the min-record and max-record together,
    # comparing records by their float field
    lines.map(lambda x: (x[1][0:4], (x[0], float(x[3])))) \
         .mapValues(lambda v: (v, v)) \
         .reduceByKey(lambda a, b: (min(a[0], b[0], key=lambda t: t[1]),
                                    max(a[1], b[1], key=lambda t: t[1])))
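The average asked about in the title fits the same reduceByKey pattern: reduce each key to a running (sum, count) pair, then divide at the end. A minimal sketch, again using a hypothetical pairs RDD of (key, numeric value) records:

    # Hypothetical (key, numeric value) pairs, for illustration only
    pairs = sc.parallelize([("a", 3.0), ("a", 5.0), ("b", 1.0), ("b", 7.0), ("b", 4.0)])

    averages = pairs.mapValues(lambda v: (v, 1)) \
                    .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1])) \
                    .mapValues(lambda sum_count: sum_count[0] / sum_count[1])
    averages.collect()
    # e.g. [('a', 4.0), ('b', 4.0)]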

This post is reproduced from the 张昺华-sky blog on 博客园 (cnblogs). Original link: http://www.cnblogs.com/bonelee/p/7156188.html. To republish, please contact the original author.


Original link: https://yq.aliyun.com/articles/396685