Spark component: saving a FeatureRDD to an Oracle database makes the session go dead
16 EXP · May 22, 2023
The main code is as follows:

val params1 = new util.HashMap[String, java.io.Serializable]()
params1.put(SDXFeatureRDDProviderParams.DBType.key, "oracleplus")
params1.put(SDXFeatureRDDProviderParams.Server.key, "192.168.***.**:1521/ORCL")
params1.put(SDXFeatureRDDProviderParams.User.key, "****")
params1.put(SDXFeatureRDDProviderParams.PassWord.key, "****")
params1.put(FeatureRDDProviderParams.ProviderType.key, "SDX")
FeatureRDDProviderFactory(params1).save(loadFRDD, params1, "SAVEORACLEFRDD")

Testing the request with Postman shows that the line that causes the Session Dead state is: FeatureRDDProviderFactory(params1).save(loadFRDD, params1, "SAVEORACLEFRDD")

The connection parameters have been double-checked and are all present in params1.

But when execution reaches that last line, the Spark session's state changes to dead.

1 Answer

In theory, a failed connection to the oracleplus data source should only cause the job to fail; it should not destroy the Spark context object. I suggest debugging locally in your IDEA development environment to see the concrete error, and checking whether the Oracle client environment variables are missing.
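As a starting point for that check, here is a minimal sketch (in Scala, using only the standard library) that prints the environment variables an Oracle client typically needs. The variable names are common Oracle client conventions and are an assumption, not values confirmed by the SuperMap documentation; run it on the driver with spark-submit, or map it over a small RDD to inspect the executor environment as well.

```scala
// Hedged sketch: report whether the usual Oracle client environment
// variables are visible to the JVM. Names are conventional, not
// SuperMap-specific; adjust to your cluster setup.
object OracleEnvCheck {
  def main(args: Array[String]): Unit = {
    val wanted = Seq("ORACLE_HOME", "LD_LIBRARY_PATH", "TNS_ADMIN")
    wanted.foreach { name =>
      println(s"$name = ${sys.env.getOrElse(name, "<not set>")}")
    }
  }
}
```

If any of these are unset on the nodes where the job actually runs, the native Oracle client may fail to load and could plausibly take the session down with it.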
1,555 EXP · May 23, 2023
The save succeeds when debugging locally.

The Spark error message is as follows:
java.lang.NullPointerException: 循环请求服务出错!错误原因 :{}
    at cn.gtmap.gtc.bpmnio.define.service.workers.livy.SparkJobHandler.postStatement(SparkJobHandler.java:425)
    at cn.gtmap.gtc.bpmnio.define.service.workers.livy.SparkJobHandler.createInstances(SparkJobHandler.java:99)
    at cn.gtmap.gtc.bpmnio.define.service.workers.AbstractJobHandler.handle(AbstractJobHandler.java:56)
    at cn.gtmap.gtc.bpmnio.define.service.workers.livy.SparkJobHandler.handle(SparkJobHandler.java:60)
    at sun.reflect.GeneratedMethodAccessor1129.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at io.zeebe.spring.client.bean.MethodInfo.invoke(MethodInfo.java:31)
    at io.zeebe.spring.client.config.processor.ZeebeWorkerPostProcessor.lambda$null$1(ZeebeWorkerPostProcessor.java:49)
    at io.zeebe.client.impl.worker.JobRunnableFactory.executeJob(JobRunnableFactory.java:44)
    at io.zeebe.client.impl.worker.JobRunnableFactory.lambda$create$0(JobRunnableFactory.java:39)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Judging from the stack trace, the error is not raised at the Spark level but in the Spring wrapper layer (the Livy SparkJobHandler). This is presumably an issue with the Spark–Spring integration. I suggest running the save-RDD-to-Oracle code in a unit test to see whether it works on its own; if it does, the SDK interface and parameter settings are fine, and the problem can be attributed to the Spring integration mechanism.
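A minimal sketch of such an isolation test, reusing exactly the calls shown in the question. The SuperMap classes (SDXFeatureRDDProviderParams, FeatureRDDProviderFactory) are assumed to be on the classpath with the signatures used above, and loadFRDD is the FeatureRDD built by the caller; it is passed in here rather than constructed, since its origin is not shown in the thread.

```scala
import java.util

// Isolation test sketch: run the save path outside the Spring/Livy stack
// so the real exception surfaces instead of the wrapped NullPointerException.
// Types and keys follow the code in the question; loadFRDD's type is assumed.
def trySaveToOracle(loadFRDD: FeatureRDD): Unit = {
  val params1 = new util.HashMap[String, java.io.Serializable]()
  params1.put(SDXFeatureRDDProviderParams.DBType.key, "oracleplus")
  params1.put(SDXFeatureRDDProviderParams.Server.key, "192.168.***.**:1521/ORCL")
  params1.put(SDXFeatureRDDProviderParams.User.key, "****")
  params1.put(SDXFeatureRDDProviderParams.PassWord.key, "****")
  params1.put(FeatureRDDProviderParams.ProviderType.key, "SDX")
  try {
    FeatureRDDProviderFactory(params1).save(loadFRDD, params1, "SAVEORACLEFRDD")
    println("save succeeded")
  } catch {
    case e: Throwable =>
      e.printStackTrace() // the underlying cause, unobscured by SparkJobHandler
  }
}
```

If this passes in a unit test but the same call dies under Livy, the next place to look is how the Spring worker submits the statement to the Livy session.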
...