When you call Spark's sample operator, the method signature is:
sample(withReplacement: Boolean, fraction: Double, seed: Long = Utils.random.nextLong): RDD[T]
The sample operator is used for sampling, and it takes three parameters:
withReplacement: whether an element is put back after it is drawn. true means it is put back (sampling with replacement), which also means the sample may contain duplicates.
fraction: how much to draw, as a Double. Without replacement it is the probability of selecting each element, between 0 and 1 (e.g. 0.3 draws roughly 30%); with replacement it is the expected number of times each element is chosen.
seed: the seed for the random draw. Normally only the first two parameters are needed, so what is this one for? It is mainly used for debugging: when you cannot tell whether the program or the data is at fault, fixing the seed to a constant makes the sample identical across runs.
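As a quick illustration, here is a minimal sketch (the nums RDD is hypothetical, not part of the example below) showing that a fixed seed makes sampling reproducible:

val nums = sc.parallelize(1 to 100)
// Same withReplacement, fraction and seed => identical samples on every run.
val s1 = nums.sample(false, 0.3, seed = 42L).collect()
val s2 = nums.sample(false, 0.3, seed = 42L).collect()
assert(s1.sameElements(s2)) // holds because the seed is fixed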
Here is the code. The rough idea: draw a sample of the data, do a wordCount on the sample, sort the counts, and take the keys that occur most often; those keys are the ones causing the data skew.
package cn.test.spark

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount").setMaster("local[2]")
    val sc = new SparkContext(conf)
    val keys = getKeyBySample(sc)
    println("Keys causing the data skew: " + keys)
    sc.stop()
  }

  // Sample the RDD, run a wordCount on the sample, and return the most frequent keys.
  def getKeyBySample(sc: SparkContext): String = {
    // 74 "A"s and 8 "B"s simulate hot keys among a few normal ones.
    val data = Array.fill(74)("A") ++ Array.fill(8)("B") ++ Array("C", "D", "E", "F", "G")
    val rdd = sc.parallelize(data)
    val topKeys: Array[(Int, String)] = rdd
      .map(word => (word, 1))                             // turn each word into (word, 1)
      .sample(withReplacement = true, fraction = 0.4)     // sample roughly 40% of the data
      .reduceByKey(_ + _)                                 // word count, e.g. (A,8), (B,18)
      .map(_.swap)                                        // swap to (count, word) so we can sort by count
      .sortBy(_._1, ascending = false, numPartitions = 2) // sort by frequency, descending
      .take(3)                                            // top 3 words: likely the dirty or hot keys
    for ((count, word) <- topKeys) {
      println(word + " === count ====== " + count)
    }
    topKeys.map(_._2).mkString(",")                       // return the candidate skew keys
  }
}
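For comparison, a more compact variant of the same idea, assuming the same rdd as above (a sketch, not from the original post): sample first, then let countByValue bring the per-word counts back to the driver as a Map and sort there. This is fine for a demo, but with many distinct keys the RDD version above scales better:

val sampleCounts = rdd.sample(withReplacement = false, fraction = 0.4, seed = 42L)
  .countByValue()            // Map[String, Long]: word -> count within the sample
val suspects = sampleCounts.toSeq
  .sortBy(-_._2)             // sort descending by count, on the driver
  .take(3)                   // the top candidates for skewed keys
suspects.foreach { case (word, count) =>
  println(word + " appeared " + count + " times in the sample")
}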