
Dynamic Folder Creation in an S3 Bucket from a PySpark Job

I am writing data into an S3 bucket and creating Parquet files using PySpark. My bucket structure looks like below: s3a://rootfolder/subfolder/table/ (subfolder and table these two f…)
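For context, a minimal sketch of the kind of write the question describes (the DataFrame contents and the bucket path are hypothetical stand-ins):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3a-parquet-write").getOrCreate()

# Hypothetical DataFrame standing in for the question's data.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

# Writing to an s3a:// path creates the subfolder/table key prefixes
# automatically; there is no need to pre-create "folders" in S3.
df.write.mode("overwrite").parquet("s3a://rootfolder/subfolder/table/")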

Solution 1:

The s3a connector (org.apache.hadoop.fs.s3a.S3AFileSystem) doesn't create $folder$ files. It generates directory markers as path + "/". For example, mkdir s3a://bucket/a/b creates a zero-byte marker object /a/b/. This differentiates it from a file, which would have the path /a/b.

  1. If, locally, you are using s3n: URLs: stop it. Use the s3a connector.
  2. If you have been setting the fs.s3a.impl option: stop it. Hadoop knows what to use, and it uses the S3AFileSystem class (see the sketch after this list).
  3. If you are seeing them and you are running EMR, that's EMR's connector. Closed source, out of scope.
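A minimal sketch of points 1 and 2, assuming credentials are supplied via the fs.s3a access-key options (other credential providers work too); the key values are placeholders, and note that fs.s3a.impl is deliberately not set:

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("s3a-config")
    # Use s3a:// URLs; do NOT set fs.s3a.impl. Hadoop already maps the
    # s3a scheme to org.apache.hadoop.fs.s3a.S3AFileSystem.
    .config("spark.hadoop.fs.s3a.access.key", "YOUR_ACCESS_KEY")  # placeholder
    .config("spark.hadoop.fs.s3a.secret.key", "YOUR_SECRET_KEY")  # placeholder
    .getOrCreate()
)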

Solution 2:

Generally, as mentioned in the comments, on S3 everything is either a bucket or an object. The folder structure is a visual representation, not an actual hierarchy as in a traditional filesystem:
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/using-folders.html
For this reason, you only have to create the bucket; you don't need to create the folders. A write will only fail if the bucket + key combination already exists.
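A sketch of this point with boto3 (the bucket and key names are hypothetical): the bucket is the only container that must exist, and writing an object whose key contains slashes makes the "folders" appear in the console automatically.

import boto3

s3 = boto3.client("s3")

# The bucket is the only thing that has to be created.
# (Outside us-east-1, create_bucket also needs a CreateBucketConfiguration.)
s3.create_bucket(Bucket="rootfolder")

# No mkdir needed: keys with slashes render as folders in the S3 console.
s3.put_object(
    Bucket="rootfolder",
    Key="subfolder/table/part-00000.parquet",
    Body=b"...",  # placeholder bytes
)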

About the _$folder$ files: I'm not sure, I haven't seen them; they seem to be created by Hadoop: https://aws.amazon.com/premiumsupport/knowledge-center/emr-s3-empty-files/
See also the related questions "Junk Spark output file on S3 with dollar signs" and "How can I configure Spark so that it creates _$folder$ entries in S3?".

About the _SUCCESS file: this basically indicates that your job completed successfully. You can disable it with:

sc.hadoopConfiguration.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")
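The line above is Scala; since the question is about PySpark, the usual equivalents are either setting the option at session creation (Spark copies any spark.hadoop.* config into the Hadoop configuration) or setting it on an existing session through the underlying JVM Hadoop configuration:

from pyspark.sql import SparkSession

# Option 1: at session creation, via the spark.hadoop.* passthrough.
spark = (
    SparkSession.builder
    .config("spark.hadoop.mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")
    .getOrCreate()
)

# Option 2: on an existing session, through the underlying Hadoop conf.
spark.sparkContext._jsc.hadoopConfiguration().set(
    "mapreduce.fileoutputcommitter.marksuccessfuljobs", "false"
)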
