--- /dev/null
+Copyright 2000, International Business Machines Corporation and others.
+All Rights Reserved.
+
+This software has been released under the terms of the IBM Public
+License. For details, see the LICENSE file in the top-level source
+directory or online at http://www.openafs.org/dl/license10.html
+
+Locking order (in order of acquisition) --
+
+0.1 afs_discon_lock. Locks the current disconnected state, so it
+ can't be changed under active operations.
+
+1. PVN lock in cache entry. Locks out pvn operations on the vnode from
+our own layer.
+
+2. VCache entries. Multiple entries can be locked, in which case they
+are locked in order of vnode number within the same volume (a short
+sketch of this appears after item 19 below). afs_AccessOK is called
+before locking other entries.
+
+3. VCache entry vlock (Solaris only).
+
+4. DCache entries. Tentatively, multiple entries can be locked now.
+When multiple dcache entries are locked, they are locked in increasing
+offset order. However, if it turns out we never need to lock multiple
+dcache entries, we should just say it's not allowed, and simplify things.
+
+5. afs_xdcache. Protects the dcache hash tables and afs_index* in
+afs_dcache.c. As with afs_xvcache below, a newly created dcache
+entry can be locked while holding afs_xdcache.
+
+Bugs: afs_xvcache is locked before afs_xdcache in afs_remove, afs_symlink,
+etc., in afs_vnodeops.c.
+
+5.1. unixusers. unixuser structs are locked before afs_xvcache in PSetTokens
+via afs_NotifyUser and via afs_ResetUserConns. They are also locked before
+afs_xvcache in afs_Analyze via afs_BlackListOnce.
+
+6. afs_xvcache. Must be able to load new cache entries while holding
+locks on others. Note this means you can't lock a cache entry while
+holding either of these locks (afs_xdcache or afs_xvcache), unless, as
+in afs_create, the cache entry is actually created while afs_xvcache
+is held.
+
+6a. afs_disconDirtyLock. Protects the disconnected dirty and shadow
+vcache queues. Must be after afs_xvcache, because we lock this while
+holding xvcache in afs_create.
+
+6b. afs_xvreclaim. Protects the lookaside reclaim list. Locked inside
+xvcache in FlushReclaimedVcaches via NewVCache or the 1 min loop.
+
+7. afs_xvcb. Volume callback lock. Locked before afs_xserver in
+afs_RemoveVCB.
+
+8. afs_xvolume -- allows low-level server work, etc., to happen while
+creating a volume?
+
+9. afs_xuser -- afs_xuser is locked before afs_xserver and afs_xconn
+in PUnlog.
+
+10. afs_xcell -- afs_xcell locked before afs_xserver in afs_GetCell.
+
+11. afs_xserver -- locked before afs_xconn in afs_ResetUserConns.
+
+12. afs_xsrvAddr -- afs_xserver locked before afs_xsrvAddr in
+afs_CheckServers.
+
+13. afs_xconn -- see above
+
+14. Individual volume locks. Must be after afs_xvolume so we can
+iterate over all volumes without others being inserted/deleted. The
+same hack doesn't work for cache entry locks since we need to be able
+to lock multiple cache entries (but not multiple volumes) simultaneously.
+
+In practice this appears to be used only to protect the status, name,
+root vnode, and uniq fields. Other users are not excluded, although
+exclusion of multiple installs of a volume entry has been done poorly.
+
+15. afs_xdnlc -- locked after afs_xvcache in afs_osidnlc.c. Shouldn't
+interact with any of the other locks.
+
+16. afs_xcbhash -- No code that holds afs_xcbhash (most of it is in
+afs_cbqueue.c, though it is used elsewhere too) attempts to get any
+other locks, so it should always be obtained last. It is locked in
+afs_DequeueCallbacks, which is called from afs_FlushVCache with
+afs_xvcache write-locked.
+
+17. afs_dynrootDirLock -- afs_GetDynroot returns the lock held,
+afs_PutDynroot releases it.
+
+18. DCache entry mflock -- used to make accesses to and updates of the
+dcache mflags field atomic.
+
+19. DCache entry tlock -- used to make atomic reads or writes to
+the dcache refcount.
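+
+To make the ordering concrete, here is a minimal sketch (referenced from
+item 2 above) of locking two vcache entries from the same volume in vnode
+order.  ObtainWriteLock/ReleaseWriteLock are the usual AFS lock-package
+macros; the helper name, the lock-site IDs, and the exact spelling of the
+fid field are illustrative only and vary by release:
+
+    /* Sketch: lock two vcache entries from the same volume in vnode order. */
+    static void
+    lock_vcache_pair(struct vcache *avc1, struct vcache *avc2)
+    {
+        /* Always take the entry with the lower vnode number first. */
+        if (avc1->f.fid.Fid.Vnode > avc2->f.fid.Fid.Vnode) {
+            struct vcache *tvc = avc1;
+            avc1 = avc2;
+            avc2 = tvc;
+        }
+        ObtainWriteLock(&avc1->lock, 900);  /* 900/901 are made-up lock IDs */
+        ObtainWriteLock(&avc2->lock, 901);
+
+        /* ... operate on both entries ... */
+
+        ReleaseWriteLock(&avc2->lock);
+        ReleaseWriteLock(&avc1->lock);
+    }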
+
+***** RX_ENABLE_LOCKS
+
+Many fine-grained locks are used by Rx on the AIX 4.1 platform. These
+need to be explained.
+
+***** GLOBAL LOCKS
+
+98. afs_global_lock -- This lock provides a non-preemptive environment
+for the bulk of the AFS kernel code on platforms that require it.
+Presently this includes SunOS5 and SGI53 systems. This lock is dropped
+and reacquired frequently, especially around calls back to the OS, such
+as vn_rele, that may reenter AFS.
+
+ Generally, this lock should not be used to explicitly avoid locking
+data structures that need synchronization. However, much existing code
+is deficient in this regard (e.g. afs_getevent).
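+
+For example, releasing a vnode reference is typically bracketed roughly as
+in the sketch below.  AFS_GUNLOCK and AFS_GLOCK are the usual macros for
+dropping and retaking afs_global_lock; the wrapper function is hypothetical
+and assumes the normal kernel definitions of struct vnode and VN_RELE:
+
+    /* Sketch: drop the global lock around an OS call that can reenter AFS. */
+    static void
+    safe_vnode_release(struct vnode *vp)
+    {
+        AFS_GUNLOCK();      /* leave the non-preemptive AFS environment */
+        VN_RELE(vp);        /* may call back into AFS vnode operations */
+        AFS_GLOCK();        /* retake afs_global_lock before returning */
+    }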
+
+***** OS LOCKS
+
+100. The vnode lock on SunOS and SGI53 protects its reference count.
+
+101. NETPRI/USERPRI -- These are not really locks but provide mutual
+exclusion against packet and timer interrupts.
--- /dev/null
+Copyright 2000, International Business Machines Corporation and others.
+All Rights Reserved.
+
+This software has been released under the terms of the IBM Public
+License. For details, see the LICENSE file in the top-level source
+directory or online at http://www.openafs.org/dl/license10.html
+
+AFS file reorganization
+
+Many files in the afs and rx directories were either moved or split up to
+facilitate readability and hence maintenance. As there is no DOC directory
+as yet in RX, it is included here. Also, MakefileProto was split into
+operating-system-specific MakefileProto.<os> files. The common elements are
+in Makefile.common, which is included by all of the MakefileProto.<os> files.
+In addition, the subdirectories where the objects and libraries are compiled
+are named either "STATIC" or "MODLOAD", depending on the type of the client.
+There are no longer separate NFS and no-NFS directories. The NFS translator
+specific object files all have _nfs suffixes, for example, afs_call_nfs.o.
+
+RX
+The rx directory now has operating-system-specific directories. The Unix
+operating systems use these for kernel code only. Each presently has two
+files, rx_kmutex.h and rx_knet.c. rx_kmutex.h contains that operating
+system's locking macros for kernel RX that were in the now-removed
+rx_machdep.h. rx_knet.c contains the system-specific parts from rx_kernel.c.
+This includes a separate rxk_input for each system. In the afs directory,
+afs_osinet.c was also split up; osi_NetSend was moved to these rx_knet.c files.
+
+RX Summary:
+rx_machdep.h -> rx_lwp.h (user space parts)
+ -> <os>/rx_kmutex.h (kernel parts)
+rx_kernel.c -> <os>/rx_knet.c
+osi_NetSend -> <os>/rx_knet.c
+
+AFS
+Files in the afs directory were broken up either because of messy #ifdefs
+or because of the size of the file, in particular the size of the RCS version
+of the file. For example, RCS/afs_vnodeops,v is nearly 10 MB. Files in the
+operating-system-specific directories are all prefixed with osi_ (operating
+system interface). Each must have at least an osi_groups.c and an osi_machdep.h
+file. The former implements setgroups/getgroups and the latter implements the
+kernel locking macros for AFS.
+
+
+AFS Summary:
+afs_vnodeops.c -> VNOPS/*.c (one file per class of vnode op)
+ afs_osi_pag.c
+ afs_osi_uio.c
+ <os>/osi_groups.c
+afs_cache.c -> afs_dcache.c
+               afs_vcache.c
+               afs_segments.c
+afs_resource.c -> afs_analyze.c
+ afs_cell.c
+ afs_conn.c
+ afs_user.c
+ afs_server.c
+ afs_volume.c
+ afs_util.c
+ afs_init.c
+
+afs_osinet.c -> rx_knet.c (osi_NetSend)
+ afs_osi_alloc.c
+ afs_osi_sleep.c
+osi.h -> afs_osi.h
+ <os>/osi_machdep.h
+
+Several operating system interface files were moved to their appropriate
+osi directories:
+AIX: afs_aixops.c -> osi_vnodeops.c
+ afs_aix_subr.c -> osi_misc.c
+ afs_config.c -> osi_config.c osi_timeout.c
+ aix_vfs.h -> osi_vfs.h
+ misc.s -> osi_assem.s
+
+DUX: afs_vnodeops.c -> osi_vnodeops.c (DUX specific code)
+
+HPUX: afs_vnodeops.c -> osi_vnodeops.c (HPUX specific code)
+ afs_hp_debug.c -> osi_debug.c
+ hpux_proc_private.h -> osi_proc_private.h
+ hpux_vfs.h -> osi_vfs.h
+
+IRIX: afs_sgiops.c -> osi_idbg.c osi_groups.c osi_misc.c osi_vnodeops.c
+ sgi_vfs.h -> osi_vfs.h
+
+SOLARIS: afs_sun_subr.c -> osi_vnodeops.c
+ osi_prototypes.h (new header file)
+
+afs_mariner.c centralizes the mariner code, which was plucked from both
+afs_cache.c and afs_vnodeops.c.
--- /dev/null
+Copyright 2000, International Business Machines Corporation and others.
+All Rights Reserved.
+
+This software has been released under the terms of the IBM Public
+License. For details, see the LICENSE file in the top-level source
+directory or online at http://www.openafs.org/dl/license10.html
+
+Here's a quick guide to understanding the AFS 3 VM integration. This
+will help you do AFS 3 ports, since one of the trickiest parts of an
+AFS 3 port is the integration of the virtual memory system with the
+file system.
+
+The issues arise because in AFS, as in any network file system,
+changes may be made from any machine while references are being made
+to a file on your own machine. Data may be cached in your local
+machine's VM system, and when the data changes remotely, the cache
+manager must invalidate the old information in the VM system.
+
+Furthermore, in some systems, there are pages of virtual memory
+containing changes to the files that need to be written back to the
+server at some time. In these systems, it is important not to
+invalidate those pages before the data has made it to the file system.
+In addition, such systems often provide mapped-file support, where read
+and write system calls affect the same shared virtual memory that backs
+the file when it is mapped.
+
+As you may have guessed from the above, there are two general styles
+of VM integration done in AFS 3: one for systems with limited VM
+system caching, and one for more modern systems where mapped files
+coexist with read and write system calls.
+
+For the older systems, the function osi_FlushText exists. Its goal is
+to invalidate, or try to invalidate, caches where VM pages might cache
+file information that's now obsolete. Even if the invalidation is
+impossible at the time the call is made, things should be set up so
+that the invalidation happens afterwards.
+
+I'm not going to say more about this type of system, since fewer and
+fewer exist, and since I'm low on time. If I get back to this paper
+later, I'll remove this paragraph. The rest of this note talks about
+the more modern mapped file systems.
+
+For mapped file systems, the function osi_FlushPages is called from
+various parts of the AFS cache manager. We assume that this function
+must be called without holding any vnode locks, since it may call back
+to the file system to do part of its work.
+
+The function osi_FlushPages has a relatively complex specification.
+If the file is open for writing, or if the data version of the pages
+that could be in memory (vp->mapDV) is at least the current data version
+number of the file, then this function has no work to do. The rationale
+is that if the file is open for writing, calling this function could
+destroy data written to the file but not yet flushed from the VM system
+to the cache file. If mapDV >= DataVersion, then flushing the VM
+system's pages won't change the fact that we can still only have pages
+from data version == mapDV in memory. That's because flushing all
+pages from the VM system results in a postcondition that the only
+pages that might be in memory are from the current data version.
+
+If neither of the two conditions above occurs, then we actually
+invalidate the pages, on a Sun by calling pvn_vptrunc. This discards
+the pages without writing any dirty pages to the cache file. We then
+set the mapDV field to the highest data version seen before we started
+the call to flush the pages. On systems that release the vnode lock
+while doing the page flush, the file's data version at the end of this
+procedure may be larger than the value we set mapDV to, but that's
+only conservative, since a newer version could have been created from
+the earlier version of the file.
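+
+Put as code, the behavior just described looks roughly like the sketch
+below.  It is deliberately self-contained rather than the real
+implementation: the structure stands in for the vcache, data versions are
+shown as plain integers (they are hyper values in the real cache manager),
+and invalidate_vm_pages() is a placeholder for the platform-specific call
+(pvn_vptrunc on a Sun):
+
+    /* Self-contained sketch of the osi_FlushPages logic described above. */
+    struct vcache_sketch {
+        long mapDV;           /* data version of pages that may be in memory */
+        long DataVersion;     /* current data version of the file */
+        int  openForWriting;  /* nonzero if the file is open for writing */
+    };
+
+    extern void invalidate_vm_pages(struct vcache_sketch *vp);  /* placeholder */
+
+    void
+    osi_FlushPages_sketch(struct vcache_sketch *vp)
+    {
+        long origDV = vp->DataVersion;   /* highest DV seen before the flush */
+
+        if (vp->openForWriting || vp->mapDV >= origDV)
+            return;   /* would lose unwritten data, or pages already current */
+
+        invalidate_vm_pages(vp);  /* discard pages; dirty pages are NOT written */
+
+        /* Postcondition: only pages from origDV or later can be in memory. */
+        vp->mapDV = origDV;
+    }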
+
+There are a few times that we must call osi_FlushPages. We should
+call it at the start of a read or open call, so that we raise mapDV to
+the current value, and get rid of any old data that might interfere
+with later reads. Raising mapDV to the current value is also
+important, since if we wrote data with mapDV < DataVersion, then a
+call to osi_FlushPages would discard this data if the pages were
+modified without having the file open for writing (e.g., using a mapped
+file). This is why we also call it in afs_map. We call it in
+afs_getattr, since afs_getattr is the only function guaranteed to be
+called between the time another client updates an executable, and the
+time that our own local client tries to exec this executable; if we
+fail to call osi_FlushPages here, we might use some pages from the
+previous version of the executable file.
+
+Also, note that we update mapDV after a store back to the server
+completes, if we're sure that no other versions were created during
+the file's storeback. The mapDV invariant (that no pages from earlier
+data versions exist in memory) remains true, since the only versions
+that existed between the old and new mapDV values all contained the
+same data.
+
+Finally, note a serious incompleteness in this system: we aren't
+really prepared to deal with mapped files correctly. In particular,
+there is no code to ensure that data stored in dirty VM pages ends up
+in a cache file, except as a side effect of the segmap_release call
+(on Sun 4s) that unmaps the data from the kernel map, and which,
+because of the SM_WRITE flag, also calls putpage synchronously to get
+rid of the data.
+
+This problem needs to be fixed for any system that uses mapped files
+seriously. Note that the NeXT port's generic write call uses mapped
+files, but that we've set a flag (close_flush) that ensures that all
+dirty pages get flushed after every write call. It is also something
+of a performance hit, since it would be better to write those pages to
+the cache asynchronously rather than after every write, as happens
+now.
+++ /dev/null
-Copyright 2000, International Business Machines Corporation and others.
-All Rights Reserved.
-
-This software has been released under the terms of the IBM Public
-License. For details, see the LICENSE file in the top-level source
-directory or online at http://www.openafs.org/dl/license10.html
-
-Locking order (in order of locking) --
-
-0.1 afs_discon_lock. Locks the current disconnected state, so it
- can't be changed under active operations
-
-1. PVN lock in cache entry. Locks out pvn operations on vnode from
-our own layer.
-
-2. VCache entries. Multiple ones can be locked, in which case
-they're locked in order of vnode within the same volume. afs_AccessOK
-is called before locking other entries.
-
-3. VCache entry vlock (Solaris only).
-
-4. DCache entries. Tentatively, multiple ones can be locked now.
-Locking order between dcache entries is in increasing offset order.
-However, if it turns out we never need to lock multiple dcache's,
-we should just say it's not allowed, and simplify things.
-
-5. afs_xdcache. Protects the dcache hash tables and afs_index* in
-afs_dcache.c. As with afs_xvcache below, a newly created dcache
-entries can be locked while holding afs_xdcache.
-
-Bugs: afs_xvcache locked before afs_xdcache in afs_remove, afs_symlink,
-etc in the file afs_vnodeops.c
-
-5.1. unixusers. unixuser structs are locked before afs_xvcache in PSetTokens
-via afs_NotifyUser and via afs_ResetUserConns. They are also locked before
-afs_xvcache in afs_Analyze via afs_BlackListOnce.
-
-6. afs_xvcache. Must be able to load new cache entries while holding
-locks on others. Note this means you can't lock a cache entry while
-holding either of this lock, unless, as in afs_create, the cache entry
-is actually created while the afs_xvcache is held.
-
-6a. afs_disconDirtyLock. Protects the disconnected dirty and shadow
-vcache queues. Must be after afs_xvcache, because we lock this whilst
-hold xvcache in afs_create.
-
-6b. afs_xvreclaim. Protects the lookaside reclaim list. Locked inside
-xvcache in FlushReclaimedVcaches via NewVCache or the 1 min loop.
-
-7. afs_xvcb. Volume callback lock. Locked before afs_xserver in
-afs_RemoveVCB.
-
-8. afs_xvolume -- allows low-level server etc stuff to happen while
-creating a volume?
-
-9. afs_xuser -- afs_xuser is locked before afs_xserver and afs_xconn
-in PUnlog.
-
-10. afs_xcell -- afs_xcell locked before afs_xserver in afs_GetCell.
-
-11. afs_xserver -- locked before afs_xconn in afs_ResetUserConns.
-
-12. afs_xsrvAddr -- afs_xserver locked before afs_xsrvAddr in
-afs_CheckServers.
-
-13. afs_xconn -- see above
-
-14. Individual volume locks. Must be after afs_xvolume so we can
-iterate over all volumes without others being inserted/deleted. Same
-hack doesn't work for cache entry locks since we need to be able to
-lock multiple cache entries (but not multiple volumes) simultaneously.
-
-In practice this appears to only be used to protect the status, name,
-and root vnode and uniq. other users are not excluded, although
-exclusion of multiple installs of a volume entry have been poorly done.
-
-15. afs_xdnlc -- locked after afs_xvcache in afs_osidnlc.c. Shouldn't
-interact with any of the other locks.
-
-16. afs_xcbhash -- No code which holds xcbhash (all of it is in
-afs_cbqueue.c) (note: this doesn't seem to be true -- it's used
-elsewhere too) attempts to get any other locks, so it should always
-be obtained last. It is locked in afs_DequeueCallbacks which is
-called from afs_FlushVCache with afs_xvcache write-locked.
-
-17. afs_dynrootDirLock -- afs_GetDynroot returns the lock held,
-afs_PutDynroot releases it.
-
-18. Dcache entry mflock -- used to atomize accesses and updates to
-dcache mflags.
-
-19. DCache entry tlock -- used to make atomic reads or writes to
-the dcache refcount.
-
-***** RX_ENABLE_LOCKS
-
-Many fine grained locks are used by Rx on the AIX4.1 platform. These
-need to be explained.
-
-***** GLOBAL LOCKS
-
-98. afs_global_lock -- This lock provides a non-preemptive environment
-for the bulk of the AFS kernel code on platforms that require it.
-Presently this includes SunOS5 and SGI53 systems. This lock is dropped
-and reaquired frequently, especially around calls back to the OS that
-may reenter AFS such as vn_rele.
-
- Generally, this lock should not be used to explicitly avoid locking
-data structures that need synchronization. However, much existing code
-is deficient in this regard (e.g. afs_getevent).
-
-***** OS LOCKS
-
-100. The vnode lock on SunOS and SGI53 protects the its reference count.
-
-101. NETPRI/USERPRI -- These are not really locks but provide mutual
-exclusion against packet and timer interrupts.
+++ /dev/null
-Copyright 2000, International Business Machines Corporation and others.
-All Rights Reserved.
-
-This software has been released under the terms of the IBM Public
-License. For details, see the LICENSE file in the top-level source
-directory or online at http://www.openafs.org/dl/license10.html
-
-AFS file reorganization
-
-Many files in the afs and rx directories were either moved or split up to
-facilitate readability and hence maintenance. As there is no DOC directory
-as yet in RX, it is included here. Also, MakefileProto was split into
-operating system specific MakefileProto.<os> files. The common elements are
-in Makefile.common, which is included by all the MakefileProto.<os>'s.
-In addition, the subdirectory where the objects are compiled and the libraries
-are compiled have been named either "STATIC" or "MODLOAD" depending on the
-type of the client. There are no more separate NFS and no-NFS directories. The
-NFS translator specific object files all have _nfs suffixes, for example,
-afs_call_nfs.o.
-
-RX
-The rx directory now has operating system specific directories. The Unix
-operating systems use these for kernel code only. Each presently has 2 files,
-rx_kmutex.h and rx_knet.c. rx_kmutex.h contains that operating system's
-locking macros for kernel RX that were in the now removed rx_machdep.h.
-rx_knet.c contains the system specific parts from rx_kernel.c. This includes
-a separate rxk_input for each system. In the afs directory, afs_osinet.c was
-also split up. osi_NetSend was moved to these rx_knet.c directories.
-
-RX Summary:
-rx_machdep.h -> rx_lwp.h (user space parts)
- -> <os>/rx_kmutex.h (kernel parts)
-rx_kernel.c -> <os>/rx_knet.c
-osi_NetSend -> <os>/rx_knet.c
-
-AFS
-Files in the afs directory were broken up either because of the messy #ifdef's
-or because of the size of the file, and in particular, the RCS version of
-the file. For example, RCS/afs_vnodeops,v is nearly 10 Meg. Files in the
-operating system specific directories are all prefixed with osi_ (operating
-system interface). Each must have at least an osi_groups.c and an osi_machdep.h
-file. The first implements setgroups/getgroups and the latter implements the
-kernel locking macros for AFS.
-
-
-AFS Summary:
-afs_vnodeops.c -> VNOPS/*.c (one file per class of vnode op)
- afs_osi_pag.c
- afs_osi_uio.c
- <os>/osi_groups.c
-afs_cache.c -> afs_dcache.c and afs_vcache.c afs_segments.c
-afs_resource.c -> afs_analyze.c
- afs_cell.c
- afs_conn.c
- afs_user.c
- afs_server.c
- afs_volume.c
- afs_util.c
- afs_init.c
-
-afs_osinet.c -> rx_knet.c (osi_NetSend)
- afs_osi_alloc.c
- afs_osi_sleep.c
-osi.h -> afs_osi.h
- <os>/osi_machdep.h
-
-Several operating system interface files were moved to their appropritate
-osi directories:
-AIX: afs_aixops.c -> osi_vnodeops.c
- afs_aix_subr.c -> osi_misc.c
- afs_config.c -> osi_config.c osi_timeout.c
- aix_vfs.h -> osi_vfs.h
- misc.s -> osi_assem.s
-
-DUX: afs_vnodeops.c -> osi_vnodeops.c (DUX specific code)
-
-HPUX: afs_vnodeops.c -> osi_vnodeops.c (HPUX specific code)
- afs_hp_debug.c -> osi_debug.c
- hpux_proc_private.h -> osi_proc_private.h
- hpux_vfs.h -> osi_vfs.h
-
-IRIX: afs_sgiops.c -> osi_idbg.c osi_groups.c osi_misc.c osi_vnodeops.c
- sgi_vfs.h -> osi_vfs.h
-
-SOLARIS: afs_sun_subr.c -> osi_vnodeops.c
- osi_prototypes.h (new header file)
-
-afs_mariner.c is centralizes the mariner code, which was plucked from both
-afs_cache.c and afs_vnodeops.c
+++ /dev/null
-Copyright 2000, International Business Machines Corporation and others.
-All Rights Reserved.
-
-This software has been released under the terms of the IBM Public
-License. For details, see the LICENSE file in the top-level source
-directory or online at http://www.openafs.org/dl/license10.html
-
-Here's a quick guide to understanding the AFS 3 VM integration. This
-will help you do AFS 3 ports, since one of the trickiest parts of an
-AFS 3 port is the integration of the virtual memory system with the
-file system.
-
-The issues arise because in AFS, as in any network file system,
-changes may be made from any machine while references are being made
-to a file on your own machine. Data may be cached in your local
-machine's VM system, and when the data changes remotely, the cache
-manager must invalidate the old information in the VM system.
-
-Furthermore, in some systems, there are pages of virtual memory
-containing changes to the files that need to be written back to the
-server at some time. In these systems, it is important not to
-invalidate those pages before the data has made it to the file system.
-In addition, such systems often provide mapped file support, with read
-and write system calls affecting the same shared virtual memory as is
-used by the file should it be mapped.
-
-As you may have guessed from the above, there are two general styles
-of VM integration done in AFS 3: one for systems with limited VM
-system caching, and one for more modern systems where mapped files
-coexist with read and write system calls.
-
-For the older systems, the function osi_FlushText exists. Its goal is
-to invalidate, or try to invalidate, caches where VM pages might cache
-file information that's now obsolete. Even if the invalidation is
-impossible at the time the call is made, things should be setup so
-that the invalidation happens afterwards.
-
-I'm not going to say more about this type of system, since fewer and
-fewer exist, and since I'm low on time. If I get back to this paper
-later, I'll remove this paragraph. The rest of this note talks about
-the more modern mapped file systems.
-
-For mapped file systems, the function osi_FlushPages is called from
-various parts of the AFS cache manager. We assume that this function
-must be called without holding any vnode locks, since it may call back
-to the file system to do part of its work.
-
-The function osi_FlushPages has a relatively complex specification.
-If the file is open for writing, or if the data version of the pages
-that could be in memory (vp->mapDV) is the current data version number
-of the file, then this function has no work to do. The rationale is
-that if the file is open for writing, calling this function could
-destroy data written to the file but not flushed from the VM system to
-the cache file. If mapDV >= DataVersion, then flushing the VM
-system's pages won't change the fact that we can still only have pages
-from data version == mapDV in memory. That's because flushing all
-pages from the VM system results in a post condition that the only
-pages that might be in memory are from the current data version.
-
-If neither of the two conditions above occur, then we actually
-invalidate the pages, on a Sun by calling pvn_vptrunc. This discards
-the pages without writing any dirty pages to the cache file. We then
-set the mapDV field to the highest data version seen before we started
-the call to flush the pages. On systems that release the vnode lock
-while doing the page flush, the file's data version at the end of this
-procedure may be larger than the value we set mapDV to, but that's
-only conservative, since a new could have been created from the
-earlier version of the file.
-
-There are a few times that we must call osi_FlushPages. We should
-call it at the start of a read or open call, so that we raise mapDV to
-the current value, and get rid of any old data that might interfere
-with later reads. Raising mapDV to the current value is also
-important, since if we wrote data with mapDV < DataVersion, then a
-call to osi_FlushPages would discard this data if the pages were
-modified w/o having the file open for writing (e.g. using a mapped
-file). This is why we also call it in afs_map. We call it in
-afs_getattr, since afs_getattr is the only function guaranteed to be
-called between the time another client updates an executable, and the
-time that our own local client tries to exec this executable; if we
-fail to call osi_FlushPages here, we might use some pages from the
-previous version of the executable file.
-
-Also, note that we update mapDV after a store back to the server
-completes, if we're sure that no other versions were created during
-the file's storeback. The mapDV invariant (that no pages from earlier
-data versions exist in memory) remains true, since the only versions
-that existed between the old and new mapDV values all contained the
-same data.
-
-Finally, note a serious incompleteness in this system: we aren't
-really prepared to deal with mapped files correctly. In particular,
-there is no code to ensure that data stored in dirty VM pages ends up
-in a cache file, except as a side effect of the segmap_release call
-(on Sun 4s) that unmaps the data from the kernel map, and which,
-because of the SM_WRITE flag, also calls putpage synchronously to get
-rid of the data.
-
-This problem needs to be fixed for any system that uses mapped files
-seriously. Note that the NeXT port's generic write call uses mapped
-files, but that we've set a flag (close_flush) that ensures that all
-dirty pages get flushed after every write call. It is also something
-of a performance hit, since it would be better to write those pages to
-the cache asynchronously rather than after every write, as happens
-now.