HTTP - can a client stop uploading a file early based on the server's response?

Currently, in a normal file upload with Java's HttpURLConnection, the file is written to the connection's OutputStream and then getResponseCode() is called to learn the server's verdict on the upload.
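Simplified, the current flow looks roughly like this (the endpoint URL and file are placeholders):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;

public class SimpleUpload {
    static int upload(Path file, URL url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("PUT");
        conn.setFixedLengthStreamingMode(Files.size(file));
        try (OutputStream out = conn.getOutputStream();
             InputStream in = Files.newInputStream(file)) {
            in.transferTo(out);            // the entire file goes out first
        }
        return conn.getResponseCode();     // only now do we see the status
    }
}
```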

But there is a big issue in error cases (the request needs to be throttled, the file is too large, the server is unavailable, etc.): the client still uploads the whole file and a lot of bytes are wasted. In these cases I need the client to stop the upload early; ideally the client would listen for the server's response while the file is still uploading.

But it seems that HttpsURLConnection, and libraries like OkHttp, cannot handle this out of the box, maybe due to limitations of the HTTP protocol itself.

I have a couple of options:

  1. Get a 1xx response (Expect: 100-continue) before uploading the file, but errors can still happen during the upload and bytes are still wasted.
  2. Chunked uploads: this seems to be how Google handles uploads to cloud storage, where each chunk is sent as a separate request and the server's response code is checked per chunk.
  3. Switch to another protocol, such as WebSocket.

Are there any other options I'm missing, or any feedback on the ones above? Thanks!

You are facing a common issue with large file uploads over HTTP, particularly around handling errors during the upload process. The current flow with HttpURLConnection or OkHttp doesn’t allow the client to easily stop the upload or receive status updates from the server during the upload. There are a few approaches you could consider to improve this situation. Let’s go through the options you’ve already mentioned and explore some additional ones:

1. 1xx Responses Before Uploading (Pre-Upload Response Check)

The mechanism here is the Expect: 100-continue handshake: the client sends only the request headers plus Expect: 100-continue and waits for the server's interim 100 Continue status before transmitting the body. A server that wants to reject the request (413 Payload Too Large, 429 Too Many Requests, 401 Unauthorized, etc.) can answer with a final status instead, before a single body byte is sent. The catch is that this only covers failures the server can detect from the headers; once the body transfer has started, HttpURLConnection still won't surface the response until the body write completes (or fails). OkHttp does honor an explicit Expect: 100-continue header (at least over HTTP/1.1) and will skip sending the body if the server replies with a final status instead of 100 (see the sketch after this list).

  • Pros: Catches header-detectable failures (authentication, size limits, throttling) with zero body bytes on the wire.
  • Cons: Errors that occur mid-upload are still only discovered after the entire file has been sent, so this doesn't solve the core problem of stopping an in-flight upload.
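A minimal sketch of that handshake with OkHttp (URL and file name are placeholders; OkHttp waits for the server's reply to the Expect header before streaming the body):

```java
import java.io.File;
import java.io.IOException;
import okhttp3.MediaType;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;

public class ExpectContinueUpload {
    public static void main(String[] args) throws IOException {
        OkHttpClient client = new OkHttpClient();
        RequestBody body = RequestBody.create(
                MediaType.parse("application/octet-stream"), new File("large.bin"));

        Request request = new Request.Builder()
                .url("https://example.com/upload")      // placeholder endpoint
                // OkHttp pauses after sending the headers and only streams the
                // body once the server answers 100 Continue; a final status
                // (413, 429, ...) arrives with no body bytes sent.
                .header("Expect", "100-continue")
                .put(body)
                .build();

        try (Response response = client.newCall(request).execute()) {
            System.out.println("Server said: " + response.code());
        }
    }
}
```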

2. Chunked Uploads (Resumable Uploads)

This is the method employed by many cloud services like Google Cloud, Amazon S3, and others. Instead of uploading the entire file in one go, you upload the file in smaller chunks. Each chunk is sent independently, and the server responds after each chunk is uploaded, allowing for error handling and stopping the upload early if needed. Additionally, if an error occurs during the upload, only the chunk that failed needs to be retried, not the whole file.

  • Pros:
    • You can stop the upload process early if needed without wasting the entire file.
    • Resumable uploads mean that if the connection is lost, you can resume from the last successfully uploaded chunk.
    • More flexible error handling for each chunk.
  • Cons:
    • More complex to implement as you need to manage chunking, retries, and state tracking.
    • Some overhead for managing the multiple HTTP requests.

There are libraries and SDKs that simplify this: the Google Cloud Storage and AWS S3 Java SDKs have built-in support for resumable/multipart uploads, and with OkHttp or Apache HttpClient you can implement the chunk-per-request pattern yourself.
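A rough sketch of the hand-rolled version with OkHttp, assuming a resumable-session URL and a server that understands Content-Range (as Google's resumable upload protocol does); the endpoint, chunk size, and status-code handling here are illustrative, not any specific service's API:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import okhttp3.MediaType;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;

public class ChunkedUpload {
    private static final int CHUNK_SIZE = 8 * 1024 * 1024;   // 8 MiB per request

    // One PUT per chunk; the loop stops as soon as the server rejects a chunk,
    // so at most one chunk's worth of bytes is wasted.
    static void upload(OkHttpClient client, String sessionUrl, String path) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
            long total = file.length();
            byte[] buffer = new byte[CHUNK_SIZE];
            long offset = 0;
            while (offset < total) {
                int read = file.read(buffer);
                RequestBody body = RequestBody.create(
                        MediaType.parse("application/octet-stream"), buffer, 0, read);
                Request request = new Request.Builder()
                        .url(sessionUrl)
                        .header("Content-Range",
                                "bytes " + offset + "-" + (offset + read - 1) + "/" + total)
                        .put(body)
                        .build();
                try (Response response = client.newCall(request).execute()) {
                    if (response.code() >= 400) {
                        // Throttled, too large, server down, ...: stop here
                        // instead of pushing the rest of the file.
                        throw new IOException("Upload aborted, server said " + response.code());
                    }
                    // Many resumable APIs answer 308 for "chunk accepted, keep
                    // going" and 200/201 once the final chunk lands.
                }
                offset += read;
            }
        }
    }
}
```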

Recommendation: This approach is generally a good one and solves the core issue you’re facing. If the file is too large, it’s better to break it up into smaller parts, which also improves resilience.

3. Switch to WebSockets

WebSocket is a full-duplex communication protocol over a single, long-lived connection, and it might be a good choice for real-time file uploads where you need continuous communication between the client and server. Using WebSockets, you could send chunks of data (or even streaming data), and the server could provide real-time feedback. This would allow the client to stop the upload process based on server feedback at any time.

  • Pros:
    • Real-time feedback from the server.
    • More control over upload behavior, and can stop early.
    • Can potentially reduce the latency of communication between client and server.
  • Cons:
    • A WebSocket connection starts as an HTTP Upgrade handshake, but your existing upload endpoint won't understand it: you need a WebSocket-capable server (or a separate WebSocket service) alongside your current architecture.
    • Requires more infrastructure and potentially more complex setup than HTTP-based solutions.
    • Not as widely used for file uploads as HTTP-based methods, so there could be compatibility or scaling issues in some environments.

Recommendation: WebSocket is a viable option but may require substantial infrastructure changes and might not be as commonly used for file uploads as HTTP-based protocols.
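A minimal sketch of the chunk-and-acknowledge pattern over a WebSocket, using OkHttp's WebSocket client; the endpoint and the "STOP" message are an invented application-level protocol, not anything standard:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.atomic.AtomicBoolean;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;
import okhttp3.WebSocket;
import okhttp3.WebSocketListener;
import okio.ByteString;

public class WebSocketUpload {
    public static void main(String[] args) throws IOException {
        OkHttpClient client = new OkHttpClient();
        AtomicBoolean abort = new AtomicBoolean(false);

        Request request = new Request.Builder()
                .url("wss://example.com/upload")          // placeholder endpoint
                .build();

        WebSocket socket = client.newWebSocket(request, new WebSocketListener() {
            @Override
            public void onMessage(WebSocket webSocket, String text) {
                // Hypothetical application-level protocol: the server pushes
                // "STOP <reason>" when it wants the client to quit uploading.
                if (text.startsWith("STOP")) {
                    abort.set(true);
                }
            }

            @Override
            public void onFailure(WebSocket webSocket, Throwable t, Response response) {
                abort.set(true);
            }
        });

        byte[] buffer = new byte[64 * 1024];
        try (InputStream in = Files.newInputStream(Path.of("large.bin"))) {
            int read;
            while (!abort.get() && (read = in.read(buffer)) != -1) {
                // One binary frame per chunk; the server can answer at any
                // time because the connection is full-duplex.
                socket.send(ByteString.of(buffer, 0, read));
            }
        }
        socket.close(1000, abort.get() ? "aborted by server" : "done");
        client.dispatcher().executorService().shutdown();
    }
}
```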

4. HTTP/2 or HTTP/3 (Multiplexed Streams)

Both HTTP/2 and HTTP/3 multiplex many streams over a single connection, and a server may send its final response (and reset the stream) before the client has finished sending the request body. In principle, that lets the client learn about an error early and stop transmitting, and multiplexing means cancelling one upload stream doesn't tear down the connection for other requests. In practice this depends on the client library actually surfacing the early response and the stream reset while the body is still being written.

  • Pros:
    • More efficient than HTTP/1.x, allowing for multiplexing of multiple requests/responses.
    • You can potentially stop or throttle uploads if the server sends feedback early in the upload process.
    • Provides better performance for multiple simultaneous file uploads.
  • Cons:
    • Not all servers support HTTP/2 or HTTP/3 for file uploads.
    • Still needs chunked uploads to make full use of multiplexing.

Recommendation: If your server supports HTTP/2 or HTTP/3, this could be a viable option combined with chunked uploads to achieve better early feedback and upload control.
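With OkHttp this is mostly a server-side question: the client negotiates HTTP/2 automatically over TLS (ALPN) when the server offers it. A small sketch that just makes the protocol preference explicit; the chunking logic itself stays the same as in option 2:

```java
import java.util.Arrays;
import okhttp3.OkHttpClient;
import okhttp3.Protocol;

public class Http2ClientConfig {
    public static void main(String[] args) {
        // HTTP/2 is preferred when the server supports it via ALPN;
        // HTTP/1.1 stays in the list as the required fallback.
        OkHttpClient client = new OkHttpClient.Builder()
                .protocols(Arrays.asList(Protocol.HTTP_2, Protocol.HTTP_1_1))
                .build();
        System.out.println("Configured protocols: " + client.protocols());
    }
}
```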

5. Alternative Protocols (e.g., FTP, SFTP, or RTSP)

While HTTP-based protocols are the most commonly used for file uploads, some legacy or specialized systems use FTP, SFTP, or even RTSP for uploading large files. These protocols often allow more granular control over the upload process, including pausing, resuming, and error handling.

  • FTP/SFTP: Every command gets its own reply code, and partial transfers can be resumed (REST/APPE in FTP, random-access writes in SFTP), so errors surface per step rather than after a monolithic body upload. You can also layer throttling on top. However, FTP/SFTP is rarely exposed by modern web backends and usually needs extra setup (server, firewall rules, credentials).
  • RTSP (Real-Time Streaming Protocol): RTSP is a session-control protocol for streaming media, usually paired with RTP for the actual data. It does keep a continuous control channel open between client and server, but it is not designed for general file uploads and is rarely a practical choice here.
  • Pros:
    • FTP/SFTP can be more reliable for large file transfers with error handling and resume capabilities.
    • Can implement control features like throttling, resuming uploads, etc.
  • Cons:
    • FTP/SFTP is not as widely supported in modern web applications.
    • May not be ideal for general-purpose file uploads in web environments.

Recommendation: If you need fine-grained control and are not restricted to HTTP-based systems, FTP/SFTP might be a better fit. However, for web applications, this might not always be a practical choice.
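For illustration, a bare-bones FTP upload with Apache Commons Net (host, credentials, and file name are placeholders); each command returns a reply code you can check before and during the transfer:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public class FtpUpload {
    public static void main(String[] args) throws IOException {
        FTPClient ftp = new FTPClient();
        ftp.connect("ftp.example.com");                  // placeholder host
        ftp.login("user", "password");                   // placeholder credentials
        ftp.enterLocalPassiveMode();
        ftp.setFileType(FTP.BINARY_FILE_TYPE);

        try (InputStream in = Files.newInputStream(Paths.get("large.bin"))) {
            // storeFile streams the file; login, type, and transfer commands
            // each report their own reply code, so failures surface per step.
            boolean ok = ftp.storeFile("large.bin", in);
            System.out.println(ftp.getReplyString() + " ok=" + ok);
        }
        // For resuming a partial upload: call ftp.setRestartOffset(bytesOnServer)
        // before storeFile, or use ftp.appendFile(...) to continue where it stopped.
        ftp.logout();
        ftp.disconnect();
    }
}
```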

6. Client-Side Uploading Libraries (e.g., Resumable.js, Fine Uploader)

If you’re working with JavaScript in the browser, you could look into libraries like Resumable.js, Fine Uploader, or Dropzone.js, which offer built-in support for chunked uploads, retrying failed chunks, and handling errors gracefully.

These libraries allow for seamless file upload management, with automatic chunking, real-time progress tracking, and the ability to stop the upload process if necessary. They work over plain HTTP (XHR/fetch) but provide a higher level of abstraction for chunking, retries, and error handling.

  • Pros:
    • Easy integration into web applications.
    • Handles retries, chunking, and error reporting automatically.
    • Can integrate with various backend services.
  • Cons:
    • Mostly suited for client-side (browser) applications, not for direct server-to-server transfers.

Recommendation: If you are dealing with web applications, this might be one of the easiest ways to improve the upload process.


Final Recommendations:

  1. Chunked Uploads are probably the best solution for large file uploads over HTTP. Combine this with HTTP/2 for even more control and efficiency. This is how major cloud storage services handle large uploads.
  2. WebSockets can be a great choice for real-time communication, but it requires more infrastructure.
  3. Client-Side Libraries like Resumable.js offer an easy-to-use solution for web applications.

In short, moving to a chunked upload strategy (perhaps using HTTP/2) would likely provide the best mix of control, flexibility, and error handling for large file uploads.