A couple of days ago somebody on the Message Board asked an interesting question about how to provide resumable HTTP downloads. My first response was that this isn't possible, since HTTP is a stateless protocol that has no concept of file pointers and so can't resume a download.

However, it turns out HTTP 1.1 does have the ability to specify ranges in downloads by using the Range: header in the HTTP headers sent from the client. You can do things like:

Range: bytes=0-100000

Range: bytes=100000-

Range: bytes=-100000

which download the first 100000 bytes, everything from byte 100000 to the end of the file, or the last 100000 bytes respectively. There are more combinations, but the first two are the ones of interest for a resumable download.
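For example, resuming the download used in the code below from byte 400000 on would send a request and get back a partial response roughly like this (the file size shown is made up for illustration):

GET /files/wwipstuff.zip HTTP/1.1
Host: www.west-wind.com
Range: bytes=400000-

HTTP/1.1 206 Partial Content
Content-Range: bytes 400000-1149923/1149924
Content-Length: 749924

A server that doesn't support range requests simply ignores the Range header and responds with a 200 and the full content, so it pays to check the response status before appending to an existing file.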

To demonstrate this feature I used wwHTTP (in West Wind Web Connection/VFP) to download the first 400k chunk of a file to disk with HTTPGetEx, which is meant to simulate an aborted download. Next I do a second request that picks up the existing file and downloads the remainder:

#INCLUDE wconnect.h
CLEAR
CLOSE DATA
DO WCONNECT

LOCAL o as wwHTTP
lcDownloadedFile = "d:\temp\wwipstuff.zip"

*** Simulate partial output
lcOutput = ""
Text=""
tnSize = 0
o = CREATEOBJECT("wwHTTP")
o.HttpConnect("www.west-wind.com")
? o.httpgetex("/files/wwipstuff.zip",@Text,@tnSize,"Range: bytes=0-400000"+CRLF,lcDownloadedFile)
o.Httpclose()

lcOutput = Text
? LEN(lcOutput)

*** Figure out how much we downloaded
lnOpenAt = FILESIZE(lcDownloadedFile)

*** Do a partial download starting at this byte count
Text=""
tnSize =0
o = CREATEOBJECT("wwHTTP")
o.HttpConnect("www.west-wind.com")
? o.httpgetex("/files/wwipstuff.zip",@Text,@tnSize,"Range: bytes=" + TRANSFORM(lnOpenAt) + "-" + CRLF)
o.Httpclose()

? LEN(Text)
*** Read the existing partial download and append current download
lcOutput = FILETOSTR(lcDownloadedFile) + Text
? LEN(lcOutput)

STRTOFILE(lcOutput,lcDownloadedFile)

RETURN
 
Note that this approach uses a file on disk, so you have to use HTTPGetEx (with West Wind Web Connection). The second download can also be done to disk if you choose, but things will get tricky if you have multiple aborts and need to piece them together. In that case you might want to keep track of each partial file by appending a number to its name, then combine the pieces at the very end.
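A rough sketch of that combining step might look like this (the numbered file names are hypothetical; it assumes each aborted attempt was saved as wwipstuff.zip.1, wwipstuff.zip.2 and so on):

*** Combine numbered partial downloads back into a single file
lcTarget = "d:\temp\wwipstuff.zip"
lcCombined = ""
lnPart = 1
DO WHILE FILE(lcTarget + "." + TRANSFORM(lnPart))
   *** Append each chunk in the order it was downloaded
   lcCombined = lcCombined + FILETOSTR(lcTarget + "." + TRANSFORM(lnPart))
   lnPart = lnPart + 1
ENDDO
STRTOFILE(lcCombined,lcTarget)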
 
If you download to memory using WinInet (which is what wwHTTP uses behind the scenes), you can also try to peel the file out of the Temporary Internet Files cache. Although this works, I suspect this process will get very convoluted quickly, so if you plan on providing the ability to resume I'd highly recommend that you write your output to file yourself using the approach above.
 
Some additional information on WinInet and some of the requirements for this approach to work with it are described here: http://www.clevercomponents.com/articles/article015/resuming.asp.
 
The same can be done with wwHTTP for .NET by adding the Range header to the wwHTTP WebRequest.Headers object.
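If you're working with the plain System.Net classes instead of the wwHTTP wrapper, the idea translates roughly like this (a sketch only; the URL and file path reuse the example above, and HttpWebRequest.AddRange() is used because the raw Headers collection won't accept the Range header directly):

using System.IO;
using System.Net;

class ResumeDownload
{
    static void Main()
    {
        // Partial file left behind by an aborted download (same path as the VFP example)
        string file = @"d:\temp\wwipstuff.zip";
        long existing = new FileInfo(file).Length;

        HttpWebRequest request = (HttpWebRequest)
            WebRequest.Create("http://www.west-wind.com/files/wwipstuff.zip");
        request.AddRange((int)existing);   // sends "Range: bytes=<existing>-"

        // In production you'd verify the response status is 206 Partial Content before appending
        using (WebResponse response = request.GetResponse())
        using (Stream http = response.GetResponseStream())
        using (FileStream output = new FileStream(file, FileMode.Append))
        {
            // Append the remainder to the existing partial download
            byte[] buffer = new byte[8192];
            int read;
            while ((read = http.Read(buffer, 0, buffer.Length)) > 0)
                output.Write(buffer, 0, read);
        }
    }
}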