From 386abad39443c73774fe1ec9144622e0dfa52955 Mon Sep 17 00:00:00 2001
From: Marc Dionne
Date: Sat, 14 May 2011 20:57:12 -0400
Subject: [PATCH] Linux: fix reading files larger than the chunk size

Commit 2571b6285d5da8ef62ab38c3a938258ddd7bac4e fixed an issue with
the use of tmpfs as a disk cache and ftruncate() on files in AFS.
But it introduced a problem reading larger files, as reported in RT
ticket 129880.  What should be compared against the current cache
file size is the offset into the current chunk, not the overall
offset for the whole file.

FIXES: 129880

Change-Id: I93008c8d0b1d70785b0b8a2e4289a04ac06cbbef
---
 src/afs/LINUX/osi_vnodeops.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/afs/LINUX/osi_vnodeops.c b/src/afs/LINUX/osi_vnodeops.c
index 45e0a356a..7c7705ec0 100644
--- a/src/afs/LINUX/osi_vnodeops.c
+++ b/src/afs/LINUX/osi_vnodeops.c
@@ -1427,7 +1427,7 @@ afs_linux_read_cache(struct file *cachefp, struct page *page,
 
     /* If we're trying to read a page that's past the end of the disk
      * cache file, then just return a zeroed page */
-    if (offset >= i_size_read(cacheinode)) {
+    if (AFS_CHUNKOFFSET(offset) >= i_size_read(cacheinode)) {
 	zero_user_segment(page, 0, PAGE_CACHE_SIZE);
 	SetPageUptodate(page);
 	if (task)
-- 
2.39.5