SAE International will host a two-part webinar series in May on how to build and implement gesture control in robotics, an integral step in the development of smart applications.
Each 60-minute webinar will feature a fun robotics application to demonstrate the development steps required to create the foundation designs, as well as how to augment these designs with hardware accelerators. Presentations will include opportunities for audience Q&A.
Speakers will include Mario Bergeron, machine learning specialist, Avnet, and Bryan Fletcher, product line manager, AMD. Amanda Hosey, editor, SAE Media Group, will moderate the sessions.
Part I: Training and Deploying a Classification Model, May 9 at 2:00 p.m. ET
This session will describe how to train a machine learning model for gesture recognition, using American Sign Language classification as an example. Through a series of easy-to-follow Jupyter Notebooks, attendees will learn to analyze and understand the dataset, train a classification model, and deploy it to the ZUB1CG programmable logic board using the Vitis AI framework.
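The training step described above can be illustrated with a minimal sketch. The example below is not the webinar's material: it substitutes synthetic feature vectors for the ASL image dataset and a simple softmax classifier for the webinar's model, and all shapes, class counts, and learning rates are illustrative assumptions.

```python
# Minimal sketch of training a gesture classifier. Synthetic feature
# vectors stand in for the webinar's ASL image dataset; class count,
# feature size, and hyperparameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 3 gesture classes, 16-dimensional features.
n_classes, n_features, n_samples = 3, 16, 300
centers = rng.normal(size=(n_classes, n_features)) * 3.0
labels = rng.integers(0, n_classes, size=n_samples)
X = centers[labels] + rng.normal(size=(n_samples, n_features))

# One-layer softmax classifier trained by gradient descent.
W = np.zeros((n_features, n_classes))
b = np.zeros(n_classes)
onehot = np.eye(n_classes)[labels]
for _ in range(200):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    grad = (probs - onehot) / n_samples          # cross-entropy gradient
    W -= 0.5 * (X.T @ grad)
    b -= 0.5 * grad.sum(axis=0)

accuracy = ((X @ W + b).argmax(axis=1) == labels).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In practice, the trained model would then be quantized and compiled for the board's deep-learning processing unit before deployment, which is the part of the flow the Vitis AI framework handles.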
Part II: Integrating a Model into ROS2, May 23 at 2:00 p.m. ET
This session will introduce ROS2, describe how to integrate the model trained in Part I into a ROS2 graph, and explore how to control a simple robot simulation with hand gestures.
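The core of the gesture-control idea can be sketched as a mapping from recognized gesture labels to robot velocity commands. The sketch below is hypothetical, not the webinar's code: in a ROS2 graph this logic would typically sit in a node that publishes a geometry_msgs/Twist message on a topic such as /cmd_vel, but the ROS2 plumbing is omitted here so the example stays self-contained, and the gesture names and speeds are assumptions.

```python
# Hypothetical mapping from a recognized gesture label to a velocity
# command. In a ROS2 node, the returned pair would populate the
# linear.x and angular.z fields of a geometry_msgs/Twist message.
# Gesture names and speed values are illustrative assumptions.

GESTURE_COMMANDS = {
    "forward": (0.2, 0.0),   # (linear m/s, angular rad/s)
    "stop":    (0.0, 0.0),
    "left":    (0.0, 0.5),
    "right":   (0.0, -0.5),
}

def gesture_to_twist(label: str) -> tuple[float, float]:
    """Return (linear, angular) velocities; unknown gestures stop the robot."""
    return GESTURE_COMMANDS.get(label, (0.0, 0.0))

print(gesture_to_twist("forward"))  # -> (0.2, 0.0)
```

Defaulting unknown labels to a stop command is a common safety choice when a classifier's output drives a moving robot.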
Register for the sessions on SAE International’s event webcast site.