A short summary of common Impala errors
We use Impala in production for part of our real-time nginx log processing. Here is a quick record of some of the small problems we ran into along the way:
Operation category READ is not supported in state standby
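
This one comes from HDFS rather than Impala itself: the read request landed on the standby NameNode of an HA pair, which rejects client operations. The usual cure is to address the cluster through its HA nameservice so clients fail over to the active NameNode automatically. A minimal hdfs-site.xml sketch, assuming the nameservice is called bipcluster (as in the paths further down) and using placeholder NameNode hostnames:

<property>
  <name>dfs.nameservices</name>
  <value>bipcluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.bipcluster</name>
  <value>nn1,nn2</value>
</property>
<!-- the two hostnames below are placeholders, not from the original setup -->
<property>
  <name>dfs.namenode.rpc-address.bipcluster.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.bipcluster.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.bipcluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

With that in place, fs.defaultFS in core-site.xml should point at hdfs://bipcluster rather than at a single NameNode host.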
ERROR: short-circuit local reads is disabled because
  - dfs.client.read.shortcircuit is not enabled.
ERROR: block location tracking is not properly enabled because
  - dfs.client.file-block-storage-locations.timeout is too low. It should be at least 3000.
E0127 19:28:25.290117 13469 impala-server.cc:341] Aborting Impala Server startup due to improper configuration
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/run/hadoop-hdfs/dn._PORT</value>
</property>
<property>
  <name>dfs.client.file-block-storage-locations.timeout</name>
  <value>3000</value>
</property>
<property>
  <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
  <value>true</value>
</property>
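
For what it's worth, the Impala short-circuit-read setup docs have these properties going into hdfs-site.xml on every DataNode and on every host running impalad, with a restart of the DataNodes and impalad afterwards so the new settings are picked up.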
ERROR: MetaException: Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=impala, access=WRITE, inode="/bip/hive_warehouse/cdnlog.db":hdfs:hdfs:drwxr-xr-x
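
The impala user simply has no write permission on the database directory, which is owned by hdfs:hdfs with mode drwxr-xr-x. A sketch of the usual permission fix (the group name and mode below are only examples; any scheme that grants the impala user write access will do):

# run as the HDFS superuser; "impala" is assumed to be a group the impala user belongs to
sudo -u hdfs hdfs dfs -chown -R hdfs:impala /bip/hive_warehouse/cdnlog.db
sudo -u hdfs hdfs dfs -chmod -R 775 /bip/hive_warehouse/cdnlog.db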
ERROR: Failed to open HDFS file hdfs://bipcluster/bip/hive_warehouse/cdnlog.db/dd_log/dt=20140117/data.file
Error(255): Unknown error 255
hdfsOpenFile(hdfs://bipcluster/bip/hive_warehouse/cdnlog.db/dd_log/dt=20140117/data.file): FileSystem#open((Lorg/apache/hadoop/fs/Path;I)Lorg/apache/hadoop/fs/FSDataInputStream;) error:
java.io.IOException: Filesystem closed
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:565)
    at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1115)
    at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:249)
    at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:82)
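
"Failed to open HDFS file" usually means the file Impala wanted is no longer where its cached metadata says it is (for example because a Hive job rewrote the partition); the nested "Filesystem closed" IOException only tells us the client handle used for the read was already torn down. As a first guess at a remedy rather than a confirmed fix for this case, refreshing the table makes impalad reload the file and block metadata; if that does not help, restarting the affected impalad is the blunter option.

REFRESH cdnlog.dd_log;
-- heavier-handed alternative: rebuild the cached metadata for the table
INVALIDATE METADATA cdnlog.dd_log;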
Query: select avg(status) from dd_log where dt='20140117'
ERROR: AnalysisException: AVG requires a numeric or timestamp parameter: AVG(status)
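
AVG() accepts only numeric or TIMESTAMP arguments, so if status is stored as a STRING (the usual situation with raw nginx logs, and an assumption here), it has to be cast before averaging:

select avg(cast(status as int)) from dd_log where dt='20140117';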