OpenZFS Developer Summit 2023 Talks


Details of talks at the OpenZFS Developer Summit 2023

Introducing Fast Dedup (Allan Jude)

The performance of dedup has been a pain point in ZFS since shortly after it was introduced. For most users, the general guidance has been to avoid the feature entirely rather than risk crippling the system. We will start by explaining where these pain points originate, with a focus on demand reads during writes and write amplification. We will briefly discuss the log dedup concept presented at the OpenZFS Developer Summit 2017, and how it did not meet the requirements for our use case. Then we will describe the latest version of our replacement design and present initial performance benchmarks of the improvements it provides under some example workloads.
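
To make those pain points concrete: the classic dedup table (DDT) keeps one entry per unique block, every write must look its entry up (a demand read if the entry is not cached in the ARC), and the updated entry must be written back out each transaction group. Below is a rough, illustrative sizing sketch in C; the ~320-byte in-core entry size is the commonly cited rule of thumb, used here as an assumption rather than a figure from this talk.

 /*
  * Rough, illustrative estimate of classic dedup table (DDT) overhead.
  * The ~320-byte per-entry figure is an assumption for this sketch.
  */
 #include <stdio.h>
 
 int main(void)
 {
     double unique_data_tib = 10.0;   /* unique data stored in the pool */
     double avg_block_kib   = 64.0;   /* average logical block size */
     double entry_bytes     = 320.0;  /* rough in-core size of one DDT entry */
 
     double blocks  = unique_data_tib * 1024 * 1024 * 1024 / avg_block_kib;
     double ddt_gib = blocks * entry_bytes / (1024.0 * 1024 * 1024);
 
     printf("~%.0f million DDT entries, ~%.0f GiB of dedup table\n",
            blocks / 1e6, ddt_gib);
     /*
      * Once the table outgrows memory, each new write risks a random
      * demand read to find its entry, plus a write to update it.
      */
     return 0;
 }

With these example numbers the table is on the order of 50 GiB, which is why the lookup traffic quickly becomes the bottleneck once it no longer fits in cache.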

OpenZFS at scale: learning, challenges, and awesome customers (Sam Atkinson)

Amazon FSx for OpenZFS provides fully managed file storage built on OpenZFS, powered by the latest generations of AWS hardware and accessible via the NFS protocol. Since launch in late 2021, FSx for OpenZFS has grown to serve petabytes of data across hundreds of customers and thousands of file systems. Customers use FSx for OpenZFS to lower operational costs and increase the performance of workloads such as generative AI/ML, EDA, video rendering/encoding, and genomics research. These customers provision file systems that offer up to 21.5 GB/s of read throughput, up to 1 million read IOPS, and consistent sub-millisecond synchronous writes to durable storage.

In this talk, we are going to pick up where we left off last year. We will start with a quick refresher on high-level architecture, customer experience, and operations. Then, we will dig into some of the more interesting things we've done with OpenZFS this year: introducing two new and exciting hardware configurations, building a better front-end for containerized workloads (CSI Driver), and adding depth to the telemetry that customers can use to better monitor and scale their file systems. We will talk about challenges we encountered, what we learned from them, and what we contributed back. We will go in depth on a few unique OpenZFS challenges. To wrap up, we will share some of the customer use-cases on FSx for OpenZFS. We'll talk about their successes and their challenges, and then share some of the capabilities customers are still looking for. Attendees will leave this talk with a better understanding of how AWS gives customers the best of OpenZFS and how we work through customers' and our own challenges with the file system.

Idmapped Mount Support in ZFS and its Application (Youzhong Yang)

Mapping the uids and gids of one namespace into another is an important and useful feature available in Linux kernel 5.12 and above. Use cases such as portable home directories, container environments, and changing file ownership without running chown justify supporting it in OpenZFS.

Support for idmapped mounts in ZFS has gone through a few iterations, corresponding to the upstream changes. This presentation will discuss the implementation details and caveats. We will also propose adding the missing functionality: mounting a ZFS dataset with the provided idmapping information.
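
For context on what "mounting with the provided idmapping information" involves, below is a minimal sketch of how an idmapped mount is created on Linux 5.12+ with the new mount API (open_tree, mount_setattr with MOUNT_ATTR_IDMAP, move_mount), which is the kernel machinery the ZFS support plugs into. This is not code from the talk; the paths, the helper pid, and the error handling are placeholders.

 /*
  * Minimal sketch: creating an idmapped mount of an already-mounted
  * ZFS dataset using the Linux >= 5.12 mount API.
  */
 #define _GNU_SOURCE
 #include <fcntl.h>
 #include <linux/mount.h>
 #include <stdio.h>
 #include <sys/syscall.h>
 #include <unistd.h>
 
 int main(void)
 {
     /* User namespace whose uid/gid maps define the translation,
      * e.g. /proc/<pid>/ns/user of a helper process (placeholder pid). */
     int userns_fd = open("/proc/1234/ns/user", O_RDONLY | O_CLOEXEC);
 
     /* Detached copy of the dataset's existing mount. */
     int tree_fd = syscall(SYS_open_tree, AT_FDCWD, "/tank/home",
         OPEN_TREE_CLONE | OPEN_TREE_CLOEXEC);
 
     struct mount_attr attr = {
         .attr_set  = MOUNT_ATTR_IDMAP,   /* apply the idmapping */
         .userns_fd = userns_fd,
     };
     if (syscall(SYS_mount_setattr, tree_fd, "", AT_EMPTY_PATH,
         &attr, sizeof(attr)) < 0)
         perror("mount_setattr");
 
     /* Attach the idmapped copy at a new location. */
     if (syscall(SYS_move_mount, tree_fd, "", AT_FDCWD,
         "/mnt/home-idmapped", MOVE_MOUNT_F_EMPTY_PATH) < 0)
         perror("move_mount");
 
     close(tree_fd);
     close(userns_fd);
     return 0;
 }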

We would like to share the success story of implementing these changes and how they have improved productivity at our company.

Shared Log Pool (Paul Dagnelie)

The ZIL is ZFS's mechanism for handling synchronous operations efficiently and quickly. SLOG devices provide the persistent storage for the ZIL to use. But each SLOG device is dedicated to a single pool, which can cause administrative friction and tie up valuable resources when there are multiple storage pools on a single system. The Shared Log Pool is intended to help solve that problem. By sharing log devices among multiple pools, space can be used more efficiently and pools become easier to manage dynamically.
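
As a back-of-envelope illustration of the resource cost: a SLOG only needs to hold roughly the last few seconds of synchronous writes before they are committed to the main pool, yet a dedicated SLOG must be sized for each pool's own peak. All of the numbers below are invented for this sketch.

 /*
  * Illustrative arithmetic (all numbers invented): dedicated per-pool
  * SLOG capacity versus a single shared log pool sized for the host's
  * aggregate peak.
  */
 #include <stdio.h>
 
 int main(void)
 {
     int    pools          = 4;
     double per_pool_peak  = 2.0;  /* GB/s peak sync ingest per pool */
     double aggregate_peak = 3.0;  /* GB/s the whole host can ingest */
     double window_seconds = 10.0; /* rough outstanding-data window  */
 
     double dedicated = pools * per_pool_peak * window_seconds;
     double shared    = aggregate_peak * window_seconds;
 
     printf("dedicated SLOGs: %.0f GB total, shared log pool: %.0f GB\n",
            dedicated, shared);
     return 0;
 }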

The talk will go into the context for the problem, provide a brief overview of the ZIL and SLOG devices, and discuss the design and technical constraints for the new architecture. It will also discuss testing and performance results, and possible future enhancements for the feature.

RAIDZ Expansion (Matt Ahrens & Don Brady)

RAIDZ provides storage redundancy through parity, allowing any 1, 2, or 3 disks (depending on the parity level) to be lost without losing data. To increase the storage in a RAIDZ pool, a whole new group of disks must be added, e.g. five or more disks at a time. This works well in enterprise settings, where storage is typically added by installing a new “shelf” holding dozens of disks. However, in smaller installations used in homes and small businesses, this is economically infeasible.

The RAIDZ Expansion project enables adding disks to RAIDZ storage pools one at a time. After many years of collaborative development, this project is finally nearing integration. This talk will cover the internal details of how RAIDZ Expansion works, and the implications of the design for data redundancy, space usage, and performance. We will conclude with a practical report containing a demo, performance results, and project status.
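
One of the space-usage implications is that blocks written before the expansion keep their original data-to-parity ratio; only data written (or rewritten) afterwards uses the wider stripe. Below is a small illustrative calculation, with made-up disk counts and sizes, of what that means in practice.

 /*
  * Illustrative arithmetic (disk counts and sizes are made up): how much
  * of the extra space from a RAIDZ expansion is usable for old vs. new data.
  */
 #include <stdio.h>
 
 int main(void)
 {
     int    parity    = 1;    /* RAIDZ1 */
     int    old_disks = 4;
     int    new_disks = 5;
     double disk_tb   = 1.0;
 
     double old_ratio = (double)(old_disks - parity) / old_disks;
     double new_ratio = (double)(new_disks - parity) / new_disks;
 
     printf("pre-expansion data stores at %.0f%% efficiency\n",
            old_ratio * 100);
     printf("data written after expansion stores at %.0f%% efficiency\n",
            new_ratio * 100);
     printf("raw capacity grows from %.0f TB to %.0f TB\n",
            old_disks * disk_tb, new_disks * disk_tb);
     return 0;
 }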

Z.I.A. Accelerates ZFS Compression, Checksumming, and RAIDZ (Jason Lee)

ZFS provides many powerful built-in features such as compression, checksumming, and erasure coding of the data stored in it. However, currently these features are implemented in software for running on general purpose processors, which may not be as performant as possible in some cases. The ZFS Interface for Accelerators (Z.I.A.) provides a generic interface that allows for data to be moved out of ZFS for processing elsewhere, such as dedicated hardware accelerators for more performant implementations of the same algorithms. With Z.I.A., we have seen speedups of 16x, allowing us to get north of 90% of the available bandwidth available in ZFS running on NVMe SSDs.
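
As a rough picture of the pattern the abstract describes, the sketch below shows a hypothetical provider interface for handing a block to an accelerator, with a fallback to the existing software path when the offload fails. The names and signatures are invented for illustration and do not match the actual Z.I.A. code.

 /*
  * Hypothetical, heavily simplified illustration of an offload path.
  * These names and signatures are invented and are NOT the Z.I.A. API.
  */
 #include <stddef.h>
 
 /* What an accelerator driver ("provider") would implement. */
 struct offload_provider {
     void *(*alloc)(size_t size);              /* accelerator-side buffer */
     void  (*free)(void *dev);
     int   (*copy_in)(void *dev, const void *host, size_t size);
     int   (*compress)(void *dev, size_t s_len, size_t *d_len); /* in place */
     int   (*copy_out)(void *host, const void *dev, size_t size);
 };
 
 /*
  * A write-path stage hands the data to the provider and only falls back
  * to the existing in-kernel software implementation if the offload fails.
  */
 int compress_block(struct offload_provider *p,
     const void *src, size_t s_len, void *dst, size_t *d_len)
 {
     int rc = -1;
 
     if (p == NULL)
         return rc;                /* no accelerator registered */
 
     void *dev = p->alloc(s_len);
     if (dev == NULL)
         return rc;
 
     if (p->copy_in(dev, src, s_len) == 0 &&
         p->compress(dev, s_len, d_len) == 0 &&
         p->copy_out(dst, dev, *d_len) == 0)
         rc = 0;                   /* offloaded successfully */
 
     p->free(dev);
     return rc;                    /* nonzero: use the software path */
 }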

The Z.I.A. pull request has been open on GitHub for quite some time now. This presentation is intended to draw attention to it, as well as to provide details about the implementation of Z.I.A. There are also some additional changes to Z.I.A. that the community may be interested in.