Yu Hua

Big Memory Systems

Approx. 200 pp. Language: English.
Hardcover
ISBN 9819528844
EAN 9789819528844
Published December 11, 2025
Publisher/Manufacturer: Springer-Verlag GmbH
192.59 incl. VAT
Available for pre-order (shipping via Deutsche Post/DHL)
Description

This book compiles eight groundbreaking research schemes that are shaping the future of big memory systems: massive-scale architectures in which terabytes of persistent, byte-addressable memory eliminate traditional storage bottlenecks by unifying the memory and storage hierarchies. For engineers and architects building next-generation databases and distributed systems, these innovations deliver transformative performance: faster persistent writes through optimized flush mechanisms, reduced contention via lock-free designs, and unprecedented throughput using GPU acceleration. Each chapter presents a fully realized architecture tested in real-world environments, from learned indexes to RDMA-powered transactions, complete with deployable code patterns and performance benchmarks. The book shows how new devices leverage big memory to achieve microsecond-latency persistence, hardware-accelerated data processing, and linear scalability in distributed environments.
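
To give a flavor of the "optimized flush mechanisms" mentioned above, here is a minimal sketch (not taken from the book) of how a write to persistent memory is made durable on x86 using CLWB and SFENCE. It assumes a CLWB-capable CPU and gcc/clang compiled with -mclwb; the function and variable names are illustrative only.

/* Persist a write to persistent memory: copy the data, write back every
 * touched cache line with CLWB (which keeps the line cached, unlike CLFLUSH),
 * then fence so the flushes complete before any later store. */
#include <immintrin.h>
#include <stdint.h>
#include <string.h>

#define CACHELINE 64

static void pmem_persist_write(void *dst, const void *src, size_t len)
{
    memcpy(dst, src, len);

    uintptr_t start = (uintptr_t)dst & ~(uintptr_t)(CACHELINE - 1);
    uintptr_t end   = (uintptr_t)dst + len;
    for (uintptr_t line = start; line < end; line += CACHELINE)
        _mm_clwb((void *)line);   /* write back the cache line to PM */

    _mm_sfence();                 /* order the flushes before later stores */
}
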
Designed for professionals with a grounding in operating systems fundamentals, this book bridges cutting-edge research and practical implementation, where big memory's unique characteristics (persistence at DRAM speeds, massive capacity, and fine-grained access) demand fundamentally new architectural approaches. Learn how to achieve faster queries with memory-disaggregated learned indexes, how to optimize cuckoo hashing for persistent memory's asymmetric costs, and why the latest GPUs incorporate these persistence techniques. The book also provides an efficient and practical toolkit: RDMA protocols that have been adopted in storage tiers, and lock-free designs that power real-time recommendation systems. Whether building cloud-native databases, low-latency recommendation systems, or memory-driven AI services, these solutions will help you harness the full potential of big memory.
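
As a small illustration of the lock-free designs the description refers to, the following sketch (again not from the book, with illustrative names) pushes onto a shared stack using C11 atomics and a compare-and-swap retry loop instead of a lock.

/* Lock-free push onto a singly linked stack: read the current head, link the
 * new node in front of it, and retry the compare-exchange until no other
 * thread has changed the head in the meantime. */
#include <stdatomic.h>
#include <stdlib.h>

struct node {
    int value;
    struct node *next;
};

struct stack {
    _Atomic(struct node *) head;
};

static int stack_push(struct stack *s, int value)
{
    struct node *n = malloc(sizeof *n);
    if (!n)
        return -1;
    n->value = value;
    n->next = atomic_load(&s->head);
    while (!atomic_compare_exchange_weak(&s->head, &n->next, n))
        ;   /* on failure, n->next is refreshed with the current head */
    return 0;
}
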

About the Author

Dr. Yu Hua is a Professor at Huazhong University of Science and Technology. His research interests include cloud storage systems, file systems, non-volatile memory architectures, and big memory. His papers have been published in major conferences and journals, including OSDI, FAST, MICRO, ASPLOS, VLDB, USENIX ATC, and HPCA. He has been an Associate Editor of ACM Transactions on Storage (TOS) since 2023. He has served as PC (Vice) Chair for ICDCS 2021, ACM APSys 2019, and ICPADS 2016, as well as a PC member for OSDI, SIGCOMM, FAST, NSDI, ASPLOS, and MICRO. He received Best Paper Awards at FAST 2023, IEEE/ACM IWQoS 2023, and IEEE HPCC 2021. He is a Distinguished Member of CCF and a Senior Member of ACM and IEEE. He has been selected as a Distinguished Speaker of ACM and CCF.