Add a small delay to Cloud FTP ingestion.
Some systems (including some copiers' scan-to-FTP function) verify an upload by checking that the file exists and that its byte count is correct, which creates a race against the file disappearing from the upload directory.
Currently, documents uploaded to the Cloud FTP service are sometimes ingested so quickly that by the time the directory listing refreshes, the file is gone. This prevents the client from A) confirming the transfer succeeded; B) resuming a failed upload; C) overwriting a failed upload. Instead, the file is missing, verification fails, and the file is uploaded again. This results in duplicates at the destination.
Adding a short delay (20s, for example) between the end of the FTP upload and the start of the internal file move would allow enough time for this check to happen.
It would also give the client enough time to detect a stalled transfer, check what happened, and either resume from where it stopped or overwrite the file entirely, rather than the service ingesting both the partially transferred file and the subsequent retry.
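To illustrate what a client-side check could look like with such a delay in place, here is a minimal sketch in Python. The function name, parameters, and the `get_remote_size` callable are all hypothetical; in practice that callable would wrap your FTP client's SIZE or LIST command against the upload directory.

```python
import time

def verify_upload(get_remote_size, expected_size, window=20.0, poll_interval=2.0):
    """Poll the remote listing until the uploaded file appears with the
    expected byte count, or the assumed server-side delay window expires.

    get_remote_size: callable returning the remote file's size in bytes,
    or None if the file is not (or is no longer) listed.
    """
    deadline = time.monotonic() + window
    while time.monotonic() < deadline:
        size = get_remote_size()
        if size == expected_size:
            return True   # file is present and complete
        if size is None:
            # File already moved out of the upload directory before we
            # could confirm it: this is exactly the race described above.
            return False
        time.sleep(poll_interval)
    return False
```

With a 20s delay before the internal move, a client polling every couple of seconds would reliably see the completed file at least once before it disappears.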
Nate French commented:
Furthermore, the value of this delay should be published so that client systems can be configured to know how soon to check on a transfer that has stalled. Assuming a 20s delay: if no data has been transferred in the last 12 seconds, the client knows to refresh the directory listing promptly so that it can resume, overwrite, or delete and retry the transfer.
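The stall heuristic in the comment above can be sketched as a small decision function. The 12s threshold is expressed here as a fraction of the published delay (12 = 0.6 × 20); the function name and the fraction are illustrative assumptions, not part of any published API.

```python
def should_recheck(seconds_since_last_data, server_delay=20.0, stall_fraction=0.6):
    """Return True when a transfer has stalled long enough that the client
    should refresh the directory listing and resume/overwrite/retry.

    With a published 20s server-side delay and a 12s stall threshold,
    triggering at 12s still leaves the client roughly 8s to act before
    the server moves the file out of the upload directory.
    """
    return seconds_since_last_data >= server_delay * stall_fraction
```

A client would feed this its transfer-progress timer: once it fires, abort the data connection, list the directory, and resume (FTP `REST`) or overwrite while the file is still visible.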