Mastering Graphics Programming with Vulkan

Product type: Book
Published: Feb 2023
Publisher: Packt
ISBN-13: 9781803244792
Pages: 382
Edition: 1st
Authors (2): Marco Castorina, Gabriel Sassone

Table of Contents (21 chapters)

Preface
Part 1: Foundations of a Modern Rendering Engine
Chapter 1: Introducing the Raptor Engine and Hydra
Chapter 2: Improving Resources Management
Chapter 3: Unlocking Multi-Threading
Chapter 4: Implementing a Frame Graph
Chapter 5: Unlocking Async Compute
Part 2: GPU-Driven Rendering
Chapter 6: GPU-Driven Rendering
Chapter 7: Rendering Many Lights with Clustered Deferred Rendering
Chapter 8: Adding Shadows Using Mesh Shaders
Chapter 9: Implementing Variable Rate Shading
Chapter 10: Adding Volumetric Fog
Part 3: Advanced Rendering Techniques
Chapter 11: Temporal Anti-Aliasing
Chapter 12: Getting Started with Ray Tracing
Chapter 13: Revisiting Shadows with Ray Tracing
Chapter 14: Adding Dynamic Diffuse Global Illumination with Ray Tracing
Chapter 15: Adding Reflections with Ray Tracing
Index
Other Books You May Enjoy

Implementing cloth simulation using async compute

In this section, we are going to implement a simple cloth simulation on the GPU as an example use case of a compute workload. We start by explaining why running some tasks on the GPU might be beneficial. Next, we provide an overview of compute shaders. Finally, we show how to port code from the CPU to the GPU and highlight some of the differences between the two platforms.
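
As a rough preview of where we are heading, the following is a minimal sketch of what a single simulation step might look like as a GLSL compute shader: each invocation advances one cloth particle with simple Verlet integration under gravity, leaving out constraint solving and collisions. The buffer bindings, push-constant layout, and workgroup size shown here are illustrative assumptions rather than the chapter's actual code.

#version 450

// Minimal cloth simulation step as a compute shader (illustrative sketch).
// One invocation integrates one cloth particle; constraints and collisions
// are omitted for brevity.
layout( local_size_x = 64 ) in;

layout( std430, binding = 0 ) buffer Positions     { vec4 positions[]; };
layout( std430, binding = 1 ) buffer PrevPositions { vec4 prev_positions[]; };

// Push constants are an assumption here; a uniform buffer would work as well.
layout( push_constant ) uniform SimulationData {
    float delta_time;
    uint  particle_count;
} sim;

void main() {
    uint index = gl_GlobalInvocationID.x;
    if ( index >= sim.particle_count ) {
        return;
    }

    const vec3 gravity = vec3( 0.0, -9.8, 0.0 );

    vec3 position      = positions[ index ].xyz;
    vec3 prev_position = prev_positions[ index ].xyz;

    // Verlet integration: extrapolate from the current and previous positions,
    // applying gravity as the only external force.
    vec3 velocity     = position - prev_position;
    vec3 new_position = position + velocity + gravity * sim.delta_time * sim.delta_time;

    prev_positions[ index ].xyz = position;
    positions[ index ].xyz      = new_position;
}

On the host side, a shader like this would be recorded with vkCmdDispatch, using one invocation per particle, ideally on a compute-capable queue so the work can overlap with graphics.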

Benefits of using compute shaders

In the past, physics simulations mainly ran on the CPU. GPUs only had enough compute capacity for graphics work, and most stages in the pipeline were implemented by dedicated hardware blocks that could only perform one task. As GPUs evolved, pipeline stages moved to generic compute blocks that could perform different tasks.

This increase in both flexibility and compute capacity has allowed engine developers to move some workloads to the GPU. Aside from raw performance, running some computations on the GPU avoids expensive...
