apache-spark - Unable to write a non-partitioned table with Apache Hudi

I am using Apache Hudi to write a non-partitioned table to AWS S3 and sync it to Hive. These are the DataSourceWriteOptions in use:

val hudiOptions: Map[String, String] = Map[String, String](
      DataSourceWriteOptions.TABLE_TYPE_OPT_KEY -> "MERGE_ON_READ",
      DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY -> "PERSON_ID",
      DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY -> "",
      DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY -> "UPDATED_DATE",
      DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY -> "",
      DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY -> classOf[NonPartitionedExtractor].getName,
      DataSourceWriteOptions.HIVE_STYLE_PARTITIONING_OPT_KEY -> "true",
      DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY -> "org.apache.hudi.keygen.NonpartitionedKeyGenerator"
    )
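
For reference, an options map like this is applied through the Hudi datasource roughly as in the sketch below; the DataFrame, table name, and S3 path are placeholders, not values from the original job.

import org.apache.spark.sql.SaveMode

// Sketch of the write these options feed into; df, the table name and the
// S3 base path are placeholders.
df.write
  .format("org.apache.hudi")
  .options(hudiOptions)
  .option("hoodie.table.name", "person")
  .option("hoodie.datasource.hive_sync.enable", "true")
  .mode(SaveMode.Append)
  .save("s3://my-bucket/hudi/person/")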

Writing the table succeeds when it is partitioned, but I get an error when I try to write it as a non-partitioned table. Here is a snippet of the error output:

Caused by: java.lang.NullPointerException
        at org.apache.hudi.hadoop.utils.HoodieInputFormatUtils.getTableMetaClientForBasePath(HoodieInputFormatUtils.java:283)
        at org.apache.hudi.hadoop.InputPathHandler.parseInputPaths(InputPathHandler.java:100)
        at org.apache.hudi.hadoop.InputPathHandler.<init>(InputPathHandler.java:60)
        at org.apache.hudi.hadoop.HoodieParquetInputFormat.listStatus(HoodieParquetInputFormat.java:81)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:288)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
        at org.apache.spark.rdd.RDD.getNumPartitions(RDD.scala:289)
        at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.mapOutputStatisticsFuture$lzycompute(ShuffleExchangeExec.scala:83)
        at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.mapOutputStatisticsFuture(ShuffleExchangeExec.scala:82)
        at org.apache.spark.sql.execution.adaptive.ShuffleQueryStageExec.cancel(QueryStageExec.scala:152)
        at org.apache.spark.sql.execution.adaptive.MaterializeExecutable.cancel(AdaptiveExecutable.scala:357)
        at org.apache.spark.sql.execution.adaptive.AdaptiveExecutorRuntime.fail(AdaptiveExecutor.scala:280)
        ... 41 more

Here is the code of HoodieInputFormatUtils.getTableMetaClientForBasePath():

/**
   * Extract HoodieTableMetaClient from a partition path(not base path).
   * @param fs
   * @param dataPath
   * @return
   * @throws IOException
   */
  public static HoodieTableMetaClient getTableMetaClientForBasePath(FileSystem fs, Path dataPath) throws IOException {
    int levels = HoodieHiveUtils.DEFAULT_LEVELS_TO_BASEPATH;
    if (HoodiePartitionMetadata.hasPartitionMetadata(fs, dataPath)) {
      HoodiePartitionMetadata metadata = new HoodiePartitionMetadata(fs, dataPath);
      metadata.readFromFS();
      levels = metadata.getPartitionDepth();
    }
    Path baseDir = HoodieHiveUtils.getNthParent(dataPath, levels);
    LOG.info("Reading hoodie metadata from path " + baseDir.toString());
    return new HoodieTableMetaClient(fs.getConf(), baseDir.toString());
  }

Line 283 is the LOG.info() call that triggers the NullPointerException, so it looks like the configuration values supplied for partitioning are off somewhere. This code runs on AWS EMR.

Release label: emr-5.30.1
Hadoop distribution: Amazon 2.8.5
Applications: Hive 2.3.6, Spark 2.4.5

Best answer

I suspect PARTITIONPATH_FIELD_OPT_KEY and HIVE_PARTITION_FIELDS_OPT_KEY should be left unset. To verify your configuration, I recommend checking https://doc.hcs.huawei.com/usermanual/mrs/mrs_01_24035.html

hoodie.datasource.write.partitionpath.field and hoodie.datasource.hive_sync.partition_fields should be left blank

hoodie.datasource.write.keygenerator.class -> org.apache.hudi.keygen.NonpartitionedKeyGenerator

hoodie.datasource.hive_sync.partition_extractor_class -> org.apache.hudi.hive.NonPartitionedExtractor

I ran into Hive sync issues with Hudi 0.9.0 on pySpark, and the documentation above helped.
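
Putting those settings together, a sketch of an adjusted options map for a non-partitioned table might look like the following; it reuses the record key and precombine fields from the question and uses the raw configuration keys, since the matching DataSourceWriteOptions constants vary between Hudi versions.

// Sketch of a non-partitioned configuration; record key and precombine
// field are taken from the question, everything else follows the answer.
val nonPartitionedHudiOptions: Map[String, String] = Map(
  "hoodie.datasource.write.table.type" -> "MERGE_ON_READ",
  "hoodie.datasource.write.recordkey.field" -> "PERSON_ID",
  "hoodie.datasource.write.precombine.field" -> "UPDATED_DATE",
  // Non-partitioned layout: use the NonpartitionedKeyGenerator and leave
  // hoodie.datasource.write.partitionpath.field unset (or blank).
  "hoodie.datasource.write.keygenerator.class" -> "org.apache.hudi.keygen.NonpartitionedKeyGenerator",
  // Hive sync side: NonPartitionedExtractor, with
  // hoodie.datasource.hive_sync.partition_fields left unset (or blank).
  "hoodie.datasource.hive_sync.partition_extractor_class" -> "org.apache.hudi.hive.NonPartitionedExtractor"
)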

https://stackoverflow.com/questions/64457298/
