Derokorian;1241057 wrote:The reason it's faster with Filezilla is that it can create multiple streams. You could try forking the process and giving each fork a batch of files to process; that lets the job run with multiple parallel upload streams and greatly decreases the total transfer time.
Multiple streams within the FTP protocol? Is that even possible? Filezilla can transfer multiple files at once, but as far as I know the protocol doesn't allow multiple streams for one and the same file (if I understood you correctly).
I'm already transferring many files at the same time in the system (40-100 simultaneous uploads, depending on the receiving server), so all in all the speed isn't bad. But it could be much, much faster.
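For reference, a forked-batch approach along the lines Derokorian suggests might look roughly like this (untested sketch, requires the pcntl extension on CLI; the host, login and directory are placeholders):

[php]
// Untested sketch: split the file list into batches and fork one worker per batch.
$files = glob('/path/to/outgoing/*');              // placeholder source directory
if (!$files) {
    exit("nothing to upload\n");
}
$workers = 4;                                      // number of parallel processes
$batches = array_chunk($files, (int)ceil(count($files) / $workers));

foreach ($batches as $batch) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die("could not fork\n");
    }
    if ($pid === 0) {                              // child: upload one batch
        $conn = ftp_connect('ftp.example.com');    // placeholder host
        ftp_login($conn, 'user', 'pass');          // placeholder credentials
        ftp_pasv($conn, true);
        foreach ($batch as $file) {
            ftp_put($conn, basename($file), $file, FTP_BINARY);
        }
        ftp_close($conn);
        exit(0);                                   // child must exit, or it keeps looping
    }
}

while (pcntl_wait($status) > 0);                   // parent waits for all children
[/php]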
I've also tried uploading with cURL, but that is equally slow. I will also try socket operations and see if I can manage uploads at that level.
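Roughly what I did with cURL was along these lines (simplified sketch; host, credentials and paths are placeholders):

[php]
// Simplified sketch of an FTP upload via cURL.
$local  = '/path/to/file.bin';                                  // placeholder local file
$remote = 'ftp://user:pass@ftp.example.com/upload/file.bin';    // placeholder target

$fp = fopen($local, 'rb');
$ch = curl_init($remote);
curl_setopt($ch, CURLOPT_UPLOAD, true);             // switch cURL into upload mode
curl_setopt($ch, CURLOPT_INFILE, $fp);              // stream the local file
curl_setopt($ch, CURLOPT_INFILESIZE, filesize($local));
curl_setopt($ch, CURLOPT_FTP_USE_EPSV, true);       // passive mode

if (curl_exec($ch) === false) {
    echo 'cURL error: ' . curl_error($ch) . "\n";
}
curl_close($ch);
fclose($fp);
[/php]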
Note that FTP downloads (ftp_get) are as fast as Filezilla, so there's a huge difference between ftp_get and ftp_put.
Could it still be that the servers apply some kind of limit when the connection isn't from a known client, or something like that? I'm trying to find servers where I don't see this, but so far they've all behaved the same.
To "eval", any script examples would be nice. By the way, why do you think I copy crap from the server to my computer and back to the server?
Thanks
Lubox