iObjects Spark fails to write out UDB data
Mar 4

Writing out a UDB data type using the SDX datasource parameters given in the programming guide fails with the error below. The SuperMap iObjects Spark version is 10.1.0; the OS is linux-x86_64.

Exception in thread "main" java.lang.IllegalArgumentException: open datasource failed
        at com.supermap.bdt.rddprovider.sdx.SDXUtils$.getDataSource(SDXUtils.scala:94)
        at com.supermap.bdt.rddprovider.sdx.SDXFeatureRDDProvider.createSchema(SDXFeatureRDDProvider.scala:118)
        at com.supermap.bdt.FeatureRDDProvider$class.save(FeatureRDDProvider.scala:74)
        at com.supermap.bdt.rddprovider.sdx.SDXFeatureRDDProvider.save(SDXFeatureRDDProvider.scala:34)
        at com.dist.xdata.bdp.test.test0304$.main(test0304.scala:27)
        at com.dist.xdata.bdp.test.test0304.main(test0304.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)


1 Answer

Are you running in cluster mode or single-node mode?

Writing out in cluster mode: UDB and UDBX datasources are not recommended (they do not support multi-process or concurrent writes). Use a database-type datasource instead, such as a PostGIS datasource.
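For a cluster-mode write to PostGIS, the parameter map would look roughly like the sketch below. Only `SDXFeatureRDDProviderParams.DBType` and `Server` appear in the snippet later in this answer; the `"POSTGIS"` engine string and the database/user/password keys are assumptions based on typical SDX datasource parameters, so verify them against the SDXFeatureRDDProviderParams reference for your iObjects Spark release.

```scala
// Hypothetical sketch of a cluster-mode write to a PostGIS datasource.
// DBType and Server mirror the UDBX example; the remaining keys are
// assumed parameter names, not confirmed API -- check your release's docs.
val pgParams = new java.util.HashMap[String, java.io.Serializable]()
pgParams.put(SDXFeatureRDDProviderParams.DBType.key, "POSTGIS")      // engine type (assumed value)
pgParams.put(SDXFeatureRDDProviderParams.Server.key, "host:5432")    // PostGIS server address
// Illustrative assumptions -- key names may differ in your version:
// pgParams.put(SDXFeatureRDDProviderParams.Database.key, "supermap")
// pgParams.put(SDXFeatureRDDProviderParams.User.key, "postgres")
// pgParams.put(SDXFeatureRDDProviderParams.Password.key, "postgres")
new SDXFeatureRDDProvider().save(resultRdd, pgParams, "test1all")
```

Because PostGIS handles concurrent connections, each Spark executor can write its partition without the single-writer limitation of file-based UDB/UDBX.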

Writing out in single-node mode: UDBX is recommended; sample code follows:

// Build the SDX datasource parameters for a file-based UDBX datasource
val params1 = new java.util.HashMap[String, java.io.Serializable]()
params1.put(SDXFeatureRDDProviderParams.DBType.key, "UDBX")   // engine type: UDBX
params1.put(SDXFeatureRDDProviderParams.Server.key, filePath) // path to the .udbx file
// Write the feature RDD into a dataset named "test1all"
new SDXFeatureRDDProvider().save(resultRdd, params1, "test1all")
杨兵 (1,140 points) · Renowned badge
Mar 4
...