
Expose a try_close() (or equivalent) method for File #141

@andreascasapu

Description


NOTE: The text below is copied from this issue that was accidentally opened in hdfs-sys. It was intended for hdrs.

The docs specify:

File will hold the underlying pointer to hdfsFile.

The internal file will be closed on Drop, so there is no need to close it manually.

However, I have run into issues with the library: sometimes closing a file fails, but no error is reported back to the Rust code. Instead, the logs show something like

[[REDACTED]] WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File [[REDACTED]] could only be written to 0 of the 1 minReplication nodes. There are [[REDACTED]] datanode(s) running and no node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2121)
	at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:286)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2706)
[...]

Hence, I would suggest exposing a try_close() method (or equivalent) that lets users close the file manually and handle any errors. I was thinking of something like

impl File {
    fn try_close(mut self) -> Result<()> {
        unsafe {
            let error_code = hdfsCloseFile(self.fs, self.f);
            // hdfsCloseFile frees self.f whether it succeeds or fails,
            // so null the pointer to keep Drop from closing it again.
            self.f = ptr::null_mut();
            if error_code != 0 {
                return Err(Error::last_os_error());
            }
        }
        Ok(())
    }
}
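One subtlety with this signature: because try_close takes self by value, Drop still runs when self goes out of scope at the end of the method, so Drop must skip already-released handles or the file would be closed twice. Here is a minimal, self-contained sketch of that pattern; mock_close, the i32 error type, and the File fields are hypothetical stand-ins for the real hdfs-sys FFI, not the actual hdrs API.

```rust
use std::ptr;

// Hypothetical stand-in for hdfs-sys's hdfsCloseFile: returns 0 on
// success and nonzero on failure. The real call is unsafe FFI and
// also frees the underlying handle.
fn mock_close(f: *mut u8) -> i32 {
    if f.is_null() { -1 } else { 0 }
}

// Minimal wrapper mirroring the shape proposed above.
struct File {
    f: *mut u8,
}

impl File {
    // Consumes the handle, but Drop still runs when `self` goes out of
    // scope at the end of this method. Nulling the pointer first and
    // checking for null in Drop prevents a double close.
    fn try_close(mut self) -> Result<(), i32> {
        let code = mock_close(self.f);
        self.f = ptr::null_mut();
        if code != 0 {
            return Err(code);
        }
        Ok(())
    }
}

impl Drop for File {
    fn drop(&mut self) {
        // Skip handles already released by try_close.
        if !self.f.is_null() {
            let _ = mock_close(self.f);
        }
    }
}

fn main() {
    let handle = Box::into_raw(Box::new(0u8));
    let file = File { f: handle };
    assert!(file.try_close().is_ok());
    // Reclaim the allocation; the mock close does not free it.
    unsafe { drop(Box::from_raw(handle)); }
    println!("closed without double-close");
}
```

An alternative would be wrapping self in std::mem::ManuallyDrop inside try_close, but the null check keeps Drop correct even if close is never called explicitly.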

I can take a shot at implementing this myself, if you accept contributions.
