# Phoenix Q&A
1. Executing this SQL statement:

select fk_contact_source from fk_riskcontrol_customer_contacts where fk_user_idcard_no='330108198301111717' order by fk_contact_source asc, fk_call_total desc, fk_updated_time desc, fk_user_phone asc;

fails with:

org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException

A: In Phoenix, columns referenced after the WHERE clause, including the ORDER BY columns, must also appear in the SELECT list.

Corrected query:

select fk_user_idcard_no, fk_contact_source, fk_call_total, fk_user_phone, fk_updated_time from fk_riskcontrol_customer_contacts where fk_user_idcard_no='330108198301111717' order by fk_contact_source asc, fk_call_total desc, fk_updated_time desc, fk_user_phone asc;
2. java.sql.SQLException: ERROR 730 (LIM02): MutationState size is bigger than maximum allowed number of bytes

A: Too much data is being committed in one batch, exceeding the default limits. Edit hbase-site.xml in Phoenix's bin directory and add the following configuration:
<property>
  <name>phoenix.mutate.batchSize</name>
  <value>5000</value>
</property>
<property>
  <name>phoenix.mutate.maxSize</name>
  <value>2500000</value>
</property>
<property>
  <name>phoenix.mutate.maxSizeBytes</name>
  <value>1048576000</value>
</property>
phoenix.mutate.maxSizeBytes defaults to 104857600 bytes (100 MB).
Details: https://phoenix.apache.org/tuning.html
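Besides raising the limits, the MutationState can simply be kept small by committing in batches. A minimal JDBC sketch, assuming the table and columns from Q1 (column types guessed as VARCHAR/INTEGER), a placeholder ZooKeeper quorum, and that phoenix.mutate.maxSizeBytes may also be passed as a connection property rather than only via hbase-site.xml:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Properties;

public class BatchedUpsert {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumption: client-side Phoenix properties can also be supplied here
        // instead of editing bin/hbase-site.xml.
        props.setProperty("phoenix.mutate.maxSizeBytes", "1048576000");

        // "zkhost:2181" is a placeholder for your ZooKeeper quorum.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zkhost:2181", props)) {
            conn.setAutoCommit(false);
            String upsert = "UPSERT INTO fk_riskcontrol_customer_contacts "
                          + "(fk_user_idcard_no, fk_user_phone, fk_call_total) VALUES (?, ?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(upsert)) {
                for (int i = 0; i < 1_000_000; i++) {
                    ps.setString(1, "idcard-" + i); // dummy data for the sketch
                    ps.setString(2, "phone-" + i);
                    ps.setInt(3, i % 100);
                    ps.executeUpdate();
                    // Flush to HBase every 5000 rows so the buffered
                    // MutationState never approaches the configured limits.
                    if (i % 5000 == 4999) {
                        conn.commit();
                    }
                }
                conn.commit(); // flush the final partial batch
            }
        }
    }
}
```

With auto-commit off, Phoenix buffers every UPSERT in client memory until commit(), so the commit interval directly bounds how large the MutationState can grow.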
3. A query with ORDER BY against table DOP_VISIT_INFO fails with the stack trace below (the OrderedResultIterator and getTopNScanner frames show the sort happening inside the region server):
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: DOP_VISIT_INFO,,1543980375599.34bc6c7dd2e1aa2ff70a9c2617aa3cbd.: null
at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:114)
at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:80)
at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:213)
at org.apache.phoenix.iterate.RegionScannerResultIterator.next(RegionScannerResultIterator.java:61)
at org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:255)
at org.apache.phoenix.iterate.OrderedResultIterator.next(OrderedResultIterator.java:199)
at org.apache.phoenix.iterate.NonAggregateRegionScannerFactory.getTopNScanner(NonAggregateRegionScannerFactory.java:322)
at org.apache.phoenix.iterate.NonAggregateRegionScannerFactory.getRegionScanner(NonAggregateRegionScannerFactory.java:168)
at org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:81)
at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:225)
at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:273)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3136)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3373)
at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42002)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Caused by: java.lang.NullPointerException
at org.apache.phoenix.execute.TupleProjector.projectResults(TupleProjector.java:282)
at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:203)
... 15 more
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:359)
at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:347)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:344)
at org.apache.hadoop.hbase.client.ScannerCallable.rpcCall(ScannerCallable.java:242)
at org.apache.hadoop.hbase.client.ScannerCallable.rpcCall(ScannerCallable.java:58)
at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:127)
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:387)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:361)
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:107)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80)
A: There is no index. Create a secondary index whose sort order matches the query's ORDER BY; the region server can then return rows already sorted from the index instead of buffering and sorting them itself, which also speeds up retrieval. See the sketch below.
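A minimal JDBC sketch of such an index for the Q1 table, whose ORDER BY columns are known; the index name idx_contacts_sort is hypothetical, and the same pattern applies to whatever columns the failing DOP_VISIT_INFO query sorts on:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateSortedIndex {
    public static void main(String[] args) throws Exception {
        // "zkhost:2181" is a placeholder for your ZooKeeper quorum.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zkhost:2181");
             Statement st = conn.createStatement()) {
            // Column list and sort directions mirror the ORDER BY of the Q1
            // query, so a TopN scan can read pre-sorted rows from the index.
            st.execute("CREATE INDEX IF NOT EXISTS idx_contacts_sort "
                     + "ON fk_riskcontrol_customer_contacts ("
                     + "fk_user_idcard_no, fk_contact_source ASC, "
                     + "fk_call_total DESC, fk_updated_time DESC, fk_user_phone ASC)");
        }
    }
}
```

Afterwards, run EXPLAIN on the query to confirm the plan scans the index; for a large existing table the index can be created with the ASYNC keyword and built with Phoenix's IndexTool MapReduce job.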
4. A query fails on the server while allocating cache memory:
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: Error when adding cache: : java.sql.SQLException: ERROR 999 (50M01): Unable to allocate enough memory. Requested memory of 122633169 bytes is larger than global pool of 69343641 bytes.
at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:114)
A: Add the following to hbase-site.xml in HBase's conf directory:
<property>
  <name>phoenix.query.maxGlobalMemoryPercentage</name>
  <value>30</value>
</property>
The default is 15, i.e. Phoenix may use at most 15% of the heap for its global memory pool. In the error above, the requested 122633169 bytes (~117 MB) exceed the 69343641-byte (~66 MB) pool; raising the percentage to 30 roughly doubles the pool.
5. Enabling debug logging

Append the following to /usr/hdp/current/phoenix-client/bin/log4j.properties (changing log4j.threshold to TRACE):

log4j.threshold=TRACE
log4j.logger.org.apache.phoenix=TRACE
log4j.logger.org.apache.hadoop.hbase.ipc=TRACE
log4j.logger.org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel=DEBUG
6. Updating Apache Phoenix statistics

update statistics tableName ALL;

(ALL collects statistics for the table and its indexes; INDEX or COLUMNS can be used instead to restrict the scope.)
https://community.hortonworks.com/questions/194712/apache-phoenix-stats-are-not-being-updated.html
https://sematext.com/opensee/big-data?project=Phoenix&q=UPDATE+STATISTICS
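If the statistics are not being refreshed automatically (the situation in the first link), the statement can be issued over JDBC after a bulk load. A minimal sketch, with the ZooKeeper quorum and table name as placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RefreshStats {
    public static void main(String[] args) throws Exception {
        // "zkhost:2181" is a placeholder for your ZooKeeper quorum.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zkhost:2181");
             Statement st = conn.createStatement()) {
            // ALL re-collects guideposts for the table and its indexes.
            st.execute("UPDATE STATISTICS tableName ALL");
        }
    }
}
```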