== Sequential workloads ==
Set recordsize=1M on datasets that are subject to sequential workloads. Note that the large_blocks feature must be enabled on the pool to use record sizes larger than 128K (the default); it is enabled by default on new pools. `zfs send` operations must specify -L to ensure that blocks larger than 128K are sent, and the receiving pool must also support the large_blocks feature.
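A minimal sketch of the steps above, assuming a hypothetical pool named `tank` with a dataset `tank/media` and a snapshot `tank/media@snap` (the pool, dataset, and snapshot names are illustrative, not from this page):

```shell
# Check that the large_blocks feature is enabled (or active) on the pool
zpool get feature@large_blocks tank

# Use 1M records for sequentially accessed data
zfs set recordsize=1M tank/media

# When replicating, pass -L so blocks larger than 128K are sent intact;
# the receiving pool must also support the large_blocks feature
zfs send -L tank/media@snap | zfs receive otherpool/media
```

Note that recordsize only affects blocks written after the property is set; existing files keep their original block size until rewritten.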
Also, set compression=lz4. As noted above under LZ4 compression, larger record sizes increase compression ratios on compressible data by allowing the compression algorithm to process more data at a time. Furthermore, throughput is generally increased by the use of LZ4 compression. Incompressible data will be stored without compression, and LZ4 writes are so fast that incompressible data is unlikely to see a performance penalty.
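The compression setting can be applied and verified like so (again assuming the hypothetical `tank/media` dataset):

```shell
# Enable LZ4 compression on the dataset
zfs set compression=lz4 tank/media

# Inspect the achieved compression ratio after data has been written
zfs get compressratio tank/media
```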