This layer buffers I/O data to minimize disk and network accesses. Depending on the application, the size of this buffer can vary from a few hundred kilobytes to several megabytes. The advantage of the cache is that data requested by read operations can frequently be found in the cache, while write operations can be delayed until the file is closed, depending on the page size, the total cache size, and the page-replacement algorithm. The caching layer is designed so that the page-replacement algorithm can be substituted with a different one, such as LRU (Least Recently Used) or random replacement of memory pages [Henn90]. Currently, an algorithm combining these two methods is used. The memory pages are usually as large as, or, for better performance, even larger than, operating-system cache pages.

This layer also allocates file space for all types of objects. To improve cache hit rates, all small objects, a few allocation units in size, are stored contiguously in one big chunk, whereas large data items are always appended at the current end of the file. Since the layers above need to update data items, a free() operation is also implemented, so that no space in the physical file is permanently wasted. Seen from the layers above, the caching layer thus acts as a big malloc()/free() library whose access functions perform cached file access.
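The substitutable page-replacement algorithm can be sketched as a function pointer that the cache calls to pick an eviction victim. The following C sketch is illustrative only: the struct fields and function names are assumptions, and the paper does not specify how the LRU and random methods are combined, so the combined policy shown here (evict the least recently used page among a small random sample) is one plausible hybrid.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical per-page metadata; field names are illustrative. */
struct page {
    int           in_use;
    unsigned long last_access;   /* logical timestamp for LRU */
};

/* The replacement policy is a function pointer, so LRU, random
 * replacement, or a combination can be swapped in without touching
 * the rest of the caching layer. */
typedef size_t (*replace_fn)(const struct page *pages, size_t n);

/* Classic LRU: evict the page with the oldest access time. */
static size_t replace_lru(const struct page *pages, size_t n)
{
    size_t victim = 0;
    for (size_t i = 1; i < n; i++)
        if (pages[i].last_access < pages[victim].last_access)
            victim = i;
    return victim;
}

/* Random replacement: pick any resident page. */
static size_t replace_random(const struct page *pages, size_t n)
{
    (void)pages;
    return (size_t)rand() % n;
}

/* One possible LRU/random hybrid (an assumption, not the paper's
 * algorithm): sample a few pages at random, evict the least recently
 * used among them. */
static size_t replace_combined(const struct page *pages, size_t n)
{
    size_t victim = (size_t)rand() % n;
    for (int k = 0; k < 3; k++) {
        size_t cand = (size_t)rand() % n;
        if (pages[cand].last_access < pages[victim].last_access)
            victim = cand;
    }
    return victim;
}
```

Switching policies then amounts to assigning a different function to a `replace_fn` variable held by the cache, e.g. `replace_fn policy = replace_lru;`.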