ResponseLazy: Work in buffers, not bytes #104
Conversation
I skimmed through the changes, and this does seem like a good change. In any case, it is indeed breaking, so I'm marking this for 3.0. One thing though: I'd prefer the examples to have the URLs in them instead of taking them as an argument, because the examples will be shorter that way, and they give an example of a URL (it sets a good precedent of including the protocol part, which will ideally cause less friction for someone learning the library). The faked content-length header was motivated by something, I don't remember what, but I do agree that removing it is good.
Good point! I put the URLs back in the examples.
When reading a response lazily, we want to work with it efficiently, as buffers of bytes coming out of the OS TCP stack. While we could `Iterator::collect::<Vec<u8>>()` the bytes back together from the current interface, this incurs an extra copy. (And unless optimizations can miraculously burn through all our iteration logic, we'll get an inefficient byte-by-byte copy at that!) Instead, have ResponseLazy just implement Read. Users who want individual bytes can get them with std::io::Bytes; that's what it's for. While we're at it, don't lie about content-length headers that weren't actually there for a chunked transfer.
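The buffer-vs-byte distinction above can be sketched with plain `std::io`. This is not minreq's actual API; a `Cursor` stands in for a response body that implements `Read`, and the buffer size is arbitrary:

```rust
use std::io::{Cursor, Read};

fn main() {
    // A Cursor stands in for any Read implementor, e.g. a lazily-read response body.
    let mut body = Cursor::new(b"hello, buffered world".to_vec());

    // Buffer-at-a-time: pull chunks as the reader delivers them, no per-byte overhead.
    let mut out = Vec::new();
    let mut buf = [0u8; 8 * 1024];
    loop {
        let n = body.read(&mut buf).unwrap();
        if n == 0 {
            break; // EOF
        }
        out.extend_from_slice(&buf[..n]);
    }
    assert_eq!(out, b"hello, buffered world");

    // Byte-at-a-time iteration is still available via std::io::Bytes when wanted.
    let bytes: Vec<u8> = Cursor::new(out.clone())
        .bytes()
        .map(Result::unwrap)
        .collect();
    assert_eq!(bytes, out);
}
```

Since `Bytes` is built on top of `Read`, exposing only `Read` loses nothing for byte-oriented callers while letting buffer-oriented ones avoid the extra copy.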
Been a while, sorry about that! I'm now working on 3.0 however, so I've started merging these PRs. GitHub didn't really give me an option to fix conflicts and merge to another branch, so I've merged this manually as e7aae8c. Thank you for the PR!
I appreciate this is an API break, but handling large downloads efficiently (and uploads for that matter; I'm thinking about a good way to do that) seems like a common use case.