Commit
2571b6285d5da8ef62ab38c3a938258ddd7bac4e fixed an issue with
the use of tmpfs as a disk cache and ftruncate() on files in AFS.
However, it introduced a problem when reading larger files, as
reported in RT ticket 129880.
What should be compared against the current cache file size is the
offset into the current chunk, not the overall offset for the whole
file.
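For illustration only (not part of the original patch): a minimal
sketch of the corrected comparison, assuming the chunk offset is
obtained by masking the file offset with a power-of-two chunk size.
The helper names and the chunk size below are hypothetical stand-ins,
not the actual OpenAFS definitions.

    #include <sys/types.h>

    /* Assumed 256 KiB chunk size; the real value is configurable. */
    #define CHUNK_SIZE   (1 << 18)
    /* Hypothetical stand-in for AFS_CHUNKOFFSET(): the offset within
     * the chunk that backs the cache file, not within the whole file. */
    #define CHUNK_OFFSET(off)  ((off) & (CHUNK_SIZE - 1))

    static int page_past_cache_eof(off_t file_offset, off_t cache_size)
    {
        /* Each cache file holds a single chunk, so only the offset
         * within that chunk can be compared against its size. */
        return CHUNK_OFFSET(file_offset) >= cache_size;
    }

With the old comparison, an offset in whole-file coordinates that
exceeded the cache file's size would wrongly return a zeroed page even
when the chunk's data was present in the cache file.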
FIXES: 129880
Reviewed-on: http://gerrit.openafs.org/4656
Reviewed-by: Russ Allbery <rra@stanford.edu>
Tested-by: Russ Allbery <rra@stanford.edu>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Derrick Brashear <shadow@dementia.org>
(cherry picked from commit 8ee33373c1ef24572476d8189a3f6f7505bfc83a)
Change-Id: I0349d744a9e16b6448a621fe6f4078b1eb1fa9d2
Reviewed-on: http://gerrit.openafs.org/4664
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Derrick Brashear <shadow@dementia.org>
     /* If we're trying to read a page that's past the end of the disk
      * cache file, then just return a zeroed page */
-    if (offset >= i_size_read(cacheinode)) {
+    if (AFS_CHUNKOFFSET(offset) >= i_size_read(cacheinode)) {
         zero_user_segment(page, 0, PAGE_CACHE_SIZE);
         SetPageUptodate(page);
         if (task)