In times of memory pressure, Mach decides which pages to keep and which to discard (move to back-end storage) using an LRU (Least Recently Used) algorithm; the pagers themselves, however, live in user space. When a page has to be evicted from memory, Mach sends an IPC message to the user-space pager associated with the page; it is up to the pager to save the page somewhere. When a page fault occurs, Mach asks the pager to bring the page back into memory, and then resumes the faulting application.
User-space pagers can be used to implement different kinds of backing store: hard disks, compressed memory, a remote computer over the network, ... Several pagers can handle different parts of the address space of the same task.
The library used by ``regular'' filesystem translators (like ext2fs, iso9660fs, fatfs, ...) works by mapping all the metadata into memory, and then using plain pointer indirection to access it. The GNU Mach VM handles all the caching, and a user-space pager is used to read and write pages from/to the backing store.
The problem is that, for filesystems whose metadata is spread all over the partition (like ext2fs), the whole partition has to be mapped into memory. That is why many diskfs translators are limited to 2GB partitions on IA-32. It is also why fatfs is not limited to 2GB: the FAT is located at the beginning of the partition, so the only limitation is that the FAT itself be smaller than 2GB (and I'm not sure it's even possible to have a FAT filesystem with a 2GB FAT).
Two possible solutions to this problem are either to set up a tree of intelligent specialized pagers, or to use a cache of mappings, creating and destroying mappings of metadata as needed. The current implementation of ext2fs in Debian GNU/Hurd contains a patch from Ognyan that takes the latter approach: it implements a cache of mappings, with static mappings for fixed metadata (since in ext2, some metadata sits at fixed places, while the rest can be anywhere in the filesystem).