Packt is pleased to announce the release of Kinect in Motion – Audio and Visual Tracking by Example, a fast-paced guide to building a multimodal user interface. The book is out now and available in print and in popular eBook formats such as Amazon Kindle, ePub, and PDF.
About the authors:
Clemente Giorio is an independent consultant; he collaborated with Microsoft Srl on the development of a prototype that uses the Kinect sensor. He is interested in human-computer interaction (HCI) and multimodal interaction.
Massimo Fascinari is a solution architect at Avanade, where he designs and delivers software development solutions for companies throughout the UK and Ireland.
Kinect is a motion-sensing input device developed by Microsoft for the Xbox 360 video game console and Windows PCs, providing capabilities that enhance human-machine interaction. Kinect in Motion – Audio and Visual Tracking by Example takes readers on a zero-to-hero journey toward engaging users in a multimodal interface dialog with their software solution, walking them through more than five models for capturing gestures, movements, and spoken voice commands.
Kinect in Motion – Audio and Visual Tracking by Example starts with an introduction to Kinect and its characteristics, showing readers how to master the color data stream with no more than a single page of code. The book then leads them through a series of detailed, real-world examples and shows them how to test their own applications. Finally, readers complete their journey through the multimodal interface by combining gestures with audio.
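To give a sense of how little code the color stream requires, here is a minimal, illustrative sketch of reading a single color frame with the Kinect for Windows SDK 1.x native C++ API. This snippet is not taken from the book (the book's own examples are richer and more complete), and error handling is trimmed for brevity:

```cpp
// Minimal sketch: grab one Kinect color frame (Kinect for Windows SDK 1.x, C++).
#include <windows.h>
#include <NuiApi.h>
#include <cstdio>

int main()
{
    INuiSensor *pSensor = NULL;
    if (FAILED(NuiCreateSensorByIndex(0, &pSensor))) return 1;

    // Initialize the sensor for color data only.
    if (FAILED(pSensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_COLOR))) return 1;

    // Open the 640x480 color stream with a two-frame buffer.
    HANDLE hEvent  = CreateEvent(NULL, TRUE, FALSE, NULL);
    HANDLE hStream = NULL;
    pSensor->NuiImageStreamOpen(NUI_IMAGE_TYPE_COLOR, NUI_IMAGE_RESOLUTION_640x480,
                                0, 2, hEvent, &hStream);

    // Wait for a frame, lock its texture, and read the pixel buffer.
    WaitForSingleObject(hEvent, INFINITE);
    NUI_IMAGE_FRAME frame;
    if (SUCCEEDED(pSensor->NuiImageStreamGetNextFrame(hStream, 0, &frame)))
    {
        NUI_LOCKED_RECT rect;
        frame.pFrameTexture->LockRect(0, &rect, NULL, 0);
        printf("Got a color frame: %d bytes per row\n", rect.Pitch);
        frame.pFrameTexture->UnlockRect(0);
        pSensor->NuiImageStreamReleaseFrame(hStream, &frame);
    }

    pSensor->NuiShutdown();
    pSensor->Release();
    return 0;
}
```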
Packt has previously published another Kinect-related book:
Kinect for Windows SDK Programming Guide (December 2012)
Developers who are new to Kinect and are looking to get a good grounding in video and audio tracking will benefit from this book. For more information about the book, please visit the Packt book webpage.
|Kinect in Motion – Audio and Visual Tracking by Example|
|A fast-paced, practical guide including examples, clear instructions, and details for building your own multimodal interface|
For more information, please visit the book webpage