Hive UDF Development
Hive lets users process data with UDFs (user-defined functions).
You can run 'show functions' to list the available functions, and 'describe function <function-name>' to see a function's documentation.
hive> show functions;
OK
!
!=
......
Time taken: 0.275 seconds
hive> desc function substr;
OK
substr(str, pos[, len]) - returns the substring of str that starts at pos and is of length len or substr(bin, pos[, len]) - returns the slice of byte array that starts at pos and is of length len
Time taken: 0.095 seconds
The built-in functions Hive provides fall into the following categories (a quick CLI spot-check of a few of them follows the list):
1. Relational operators: =, <>, <=, >=, etc.
2. Arithmetic operators: +, -, *, /, etc.
3. Logical operators: AND, &&, OR, ||, etc.
4. Complex type constructors: map, struct, create_union, etc.
5. Operators on complex types: A[n], M[key], S.x
6. Mathematical functions: ln(double a), sqrt(double a), etc.
7. Collection functions: size(Array<T>), sort_array(Array<T>), etc.
8. Type conversion functions: binary(string|binary), cast(expr as <type>)
9. Date functions: from_unixtime(bigint unixtime[, string format]), unix_timestamp(), etc.
10. Conditional functions: if(boolean testCondition, T valueTrue, T valueFalseOrNull), etc.
11. String functions: concat(string|binary A, string|binary B...), etc.
12. Miscellaneous: xpath, get_json_object, ascii(string str), etc.
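Such a spot-check only needs some table with at least one row to select against; a minimal session, assuming a table named src exists:

hive> select concat('foo', 'bar'), if(1 = 1, 'yes', 'no'), size(array(1, 2, 3)) from src limit 1;
OK
foobar  yes     3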
There are two ways to write a Hive UDF:
1. Extend UDF and override the evaluate method.
2. Extend GenericUDF and override the initialize, getDisplayString, and evaluate methods.
Sample UDF code is given below (for more examples, see https://svn.apache.org/repos/asf/hive/tags/release-0.8.1/ql/src/java/org/apache/hadoop/hive/ql/udf/).
Purpose: convert uppercase to lowercase
ToLowerCase.java:
package test.udf;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

public class ToLowerCase extends UDF {
  // Hive finds this method by reflection; a null input yields a null output
  public Text evaluate(final Text s) {
    if (s == null) {
      return null;
    }
    return new Text(s.toString().toLowerCase());
  }
}
Purpose: count the number of distinct elements in an array
UDFArrayUniqElementNumber.java:

package test.udf;

import org.apache.hadoop.hive.ql.exec.Description;
import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.exec.UDFArgumentTypeException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
import org.apache.hadoop.hive.serde2.objectinspector.ListObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector.Category;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
import org.apache.hadoop.io.IntWritable;

/**
 * UDF:
 * Get the number of elements with duplicates eliminated
 * @author xiaomin.zhou
 */
@Description(name = "array_uniq_element_number", value = "_FUNC_(array) - Returns the number of elements with duplicates eliminated.", extended = "Example:\n"
    + "  > SELECT _FUNC_(array(1, 2, 2, 3, 3)) FROM src LIMIT 1;\n" + "  3")
public class UDFArrayUniqElementNumber extends GenericUDF {

  private static final int ARRAY_IDX = 0;
  private static final int ARG_COUNT = 1; // Number of arguments to this UDF
  private static final String FUNC_NAME = "ARRAY_UNIQ_ELEMENT_NUMBER"; // External name

  private ListObjectInspector arrayOI;
  private ObjectInspector arrayElementOI;
  private final IntWritable result = new IntWritable(-1);

  @Override
  public ObjectInspector initialize(ObjectInspector[] arguments)
      throws UDFArgumentException {

    // Check that exactly one argument was passed
    if (arguments.length != ARG_COUNT) {
      throw new UDFArgumentException("The function " + FUNC_NAME
          + " accepts " + ARG_COUNT + " arguments.");
    }

    // Check that the argument is of category LIST
    if (!arguments[ARRAY_IDX].getCategory().equals(Category.LIST)) {
      throw new UDFArgumentTypeException(ARRAY_IDX, "\""
          + org.apache.hadoop.hive.serde.Constants.LIST_TYPE_NAME
          + "\" expected at function " + FUNC_NAME + ", but \""
          + arguments[ARRAY_IDX].getTypeName() + "\" is found");
    }

    arrayOI = (ListObjectInspector) arguments[ARRAY_IDX];
    arrayElementOI = arrayOI.getListElementObjectInspector();

    return PrimitiveObjectInspectorFactory.writableIntObjectInspector;
  }

  @Override
  public IntWritable evaluate(DeferredObject[] arguments)
      throws HiveException {

    result.set(0);

    Object array = arguments[ARRAY_IDX].get();
    int arrayLength = arrayOI.getListLength(array);
    if (arrayLength <= 1) {
      result.set(arrayLength);
      return result;
    }

    // Pairwise element comparison; algorithm complexity: O(N^2).
    // Element i counts as new only if it differs from every earlier element.
    int num = 1;
    int i, j;
    for (i = 1; i < arrayLength; i++) {
      Object listElement = arrayOI.getListElement(array, i);
      for (j = i - 1; j >= 0; j--) {
        // ObjectInspectorUtils.compare treats two nulls as equal,
        // so duplicate nulls are not over-counted
        Object previous = arrayOI.getListElement(array, j);
        if (ObjectInspectorUtils.compare(previous, arrayElementOI,
            listElement, arrayElementOI) == 0) {
          break;
        }
      }
      if (-1 == j) {
        num++;
      }
    }

    result.set(num);
    return result;
  }

  @Override
  public String getDisplayString(String[] children) {
    assert (children.length == ARG_COUNT);
    return "array_uniq_element_number(" + children[ARRAY_IDX] + ")";
  }
}
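The GenericUDF can be verified in the same way once the jar is built (next step); a sketch of such a session, again assuming a table src with at least one row:

hive> add jar /home/work/udf.jar;
hive> create temporary function array_uniq_element_number as 'test.udf.UDFArrayUniqElementNumber';
hive> select array_uniq_element_number(array(1, 2, 2, 3, 3)) from src limit 1;
OK
3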
Build udf.jar.
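Plain javac and jar are enough to package the two classes; a minimal sketch, assuming the sources sit under test/udf/ and the Hive 0.8.1 and Hadoop jars referenced below are available locally (jar names and paths are illustrative, adjust them to your installation):

$ mkdir -p classes
$ javac -cp $HIVE_HOME/lib/hive-exec-0.8.1.jar:$HADOOP_HOME/hadoop-core-1.0.0.jar \
    -d classes test/udf/ToLowerCase.java test/udf/UDFArrayUniqElementNumber.java
$ jar cf udf.jar -C classes .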
There are three ways to make a custom UDF available in Hive.
1. Add the UDF temporarily
For example:
hive> select * from test;
OK
Hello
wORLD
ZXM
ljz
Time taken: 13.76 seconds
hive> add jar /home/work/udf.jar;
Added /home/work/udf.jar to class path
Added resource: /home/work/udf.jar
hive> create temporary function mytest as 'test.udf.ToLowerCase';
OK
Time taken: 0.103 seconds
hive> show functions;
......
mytest
......
hive> select mytest(test.name) from test;
......
OK
hello
world
zxm
ljz
Time taken: 38.218 seconds
A function created this way is destroyed when the session ends, so every new session has to repeat the add jar and create temporary function steps.
2. Create the function automatically when the session starts
Use hive's -i option to run an initialization script on entering the CLI:
$ cat hive_init
add jar /home/work/udf.jar;
create temporary function mytest as 'test.udf.ToLowerCase';
$ hive -i hive_init
Logging initialized using configuration in file:/home/work/hive/hive-0.8.1/conf/hive-log4j.properties
Hive history file=/tmp/work/hive_job_log_work_201209200147_1951517527.txt
hive> show functions;
......
mytest
......
hive> select mytest(test.name) from test;
......
OK
hello
world
zxm
ljz
Method 2 is essentially the same as method 1; the difference is that in method 2 the setup runs automatically when the session is initialized.
3. Register the custom UDF as a Hive built-in function
See: hive利器 自定义UDF+重编译hive (a custom UDF plus recompiling Hive)
Compared with the first two approaches, this one registers the user-defined function directly as a built-in. That makes it very convenient to use afterwards, but it is also the most dangerous option: a mistake is baked into the Hive build itself and can be disastrous. Unless a function is truly general-purpose and its definition is fully settled, the first two approaches are the safer choice.
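The linked article walks through the details; the heart of the approach is a one-line registration per function added to ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java in its static registration block, followed by a full rebuild of Hive. A rough sketch against the 0.8.1 source tree, assuming the registerUDF/registerGenericUDF helpers there keep the signatures used by the existing built-in entries (treat this as illustrative, not a drop-in patch):

// In FunctionRegistry's static block, next to the built-in registrations:
registerUDF("to_lower_case", test.udf.ToLowerCase.class, false);
registerGenericUDF("array_uniq_element_number", test.udf.UDFArrayUniqElementNumber.class);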