From: Marc Dionne
Date: Sun, 15 May 2011 00:57:12 +0000 (-0400)
Subject: Linux: fix reading files larger than the chunk size
X-Git-Tag: debian/1.6.0.pre5-2~4
X-Git-Url: https://git.michaelhowe.org/gitweb/?a=commitdiff_plain;h=386abad39443c73774fe1ec9144622e0dfa52955;p=packages%2Fo%2Fopenafs.git

Linux: fix reading files larger than the chunk size

Commit 2571b6285d5da8ef62ab38c3a938258ddd7bac4e fixed an issue with the
use of tmpfs as a disk cache and ftruncate() on files in AFS, but it
introduced a problem reading larger files, as reported in RT ticket
129880.  What should be compared against the current cache file size is
the offset into the current chunk, not the overall offset for the whole
file.

FIXES: 129880

Change-Id: I93008c8d0b1d70785b0b8a2e4289a04ac06cbbef
---

diff --git a/src/afs/LINUX/osi_vnodeops.c b/src/afs/LINUX/osi_vnodeops.c
index 45e0a356a..7c7705ec0 100644
--- a/src/afs/LINUX/osi_vnodeops.c
+++ b/src/afs/LINUX/osi_vnodeops.c
@@ -1427,7 +1427,7 @@ afs_linux_read_cache(struct file *cachefp, struct page *page,
 
     /* If we're trying to read a page that's past the end of the disk
      * cache file, then just return a zeroed page */
-    if (offset >= i_size_read(cacheinode)) {
+    if (AFS_CHUNKOFFSET(offset) >= i_size_read(cacheinode)) {
	zero_user_segment(page, 0, PAGE_CACHE_SIZE);
	SetPageUptodate(page);
	if (task)