spark.conf.set("spark.sql.parquet.compression.codec", "brotli")
df.write.format("delta").mode("overwrite").saveAsTable(table_name, path=delta_table_path)
Error message:
An error occurred while calling o432.saveAsTable.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 5.0 failed 4 times, most recent failure: Lost task 3.3 in stage 5.0 (TID 53) (10.139.64.5 executor 0): org.apache.parquet.hadoop.BadConfigurationException: Class org.apache.hadoop.io.compress.BrotliCodec was not found
	at org.apache.parquet.hadoop.CodecFactory.getCodec(CodecFactory.java:254)
	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.<init>(CodecFactory.java)
	at org.apache.parquet.hadoop.CodecFactory.createCompressor(CodecFactory.java:219)
	at org.apache.parquet.hadoop.CodecFactory.getCompressor(CodecFactory.java:202)
	at org.apache.parquet.hadoop.ParquetRecordWriter.<init>(ParquetRecordWriter.java:152)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:565)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:473)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:462)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:36)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetUtils$$anon$1.newInstance
	at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.newOutputWriter(FileFormatDataWriter.scala:205)
	at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.<init>(FileFormatDataWriter.scala:187)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$
	at org.apache.spark.sql.execution.datasources.WriteFilesExec.$anonfun$doExecuteWrite$1(WriteFiles.scala:125)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:938)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:938)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)

      8 spark.conf.set("spark.sql.parquet.compression.codec", "brotli")
---> 10 df.write.format("delta").mode("overwrite").saveAsTable(table_name, path=delta_table)
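The trace shows the actual cause: the codec class org.apache.hadoop.io.compress.BrotliCodec is not on the runtime classpath, so the Parquet writer cannot instantiate a Brotli compressor. A minimal workaround sketch that falls back to a codec Spark ships with (the `pick_codec` helper and the codec set are my assumptions, not from the thread):

```python
# Parquet codecs Spark can typically use without attaching extra jars.
# Assumption: brotli is excluded because its codec class is not bundled
# in the Databricks 15.4 LTS runtime, matching the error above.
BUILTIN_CODECS = {"uncompressed", "snappy", "gzip", "lz4", "zstd"}

def pick_codec(requested: str, fallback: str = "zstd") -> str:
    """Return the requested codec if it needs no extra jars, else the fallback."""
    codec = requested.lower()
    return codec if codec in BUILTIN_CODECS else fallback

# Usage inside a notebook (requires an active SparkSession and DataFrame):
# spark.conf.set("spark.sql.parquet.compression.codec", pick_codec("brotli"))
# df.write.format("delta").mode("overwrite").saveAsTable(table_name, path=delta_table_path)
```

Alternatively, keeping Brotli would require attaching a library that actually provides org.apache.hadoop.io.compress.BrotliCodec to the cluster; which artifact (if any) works on 15.4 LTS is not confirmed in this thread.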
More details here: https://stackoverflow.com/questions/795 ... e-15-4-lts
Cannot use Brotli compression on Databricks 15.4 LTS ⇐ JAVA