Compact Vitis AI
Freiburg / Stuttgart / Berlin
This course presents the Vitis AI development kit for AI inference on Xilinx hardware platforms, covering DNN algorithms, model training, and the associated frameworks for developing models and deploying them on Alveo™ cards, Zynq® SoCs, and Zynq® UltraScale+™ MPSoCs.
The basics of the machine learning (ML) approach and the challenges of neural network training are revisited as a common introduction. Building on this, the Vitis toolchain's support for mainstream frameworks such as Caffe and TensorFlow is explained. Models for diverse deep learning tasks are discussed alongside these frameworks, and for further insight a comprehensive pre-optimized model library offers models that can readily be deployed on Xilinx devices. The concepts of the Vitis AI development kit are shown together with the framework-specific tools that prune and optimize trained models, yielding a properly scaled basis for mapping.
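To make the pruning idea concrete, here is a minimal magnitude-pruning sketch in plain NumPy. It only illustrates the principle of zeroing out low-salience weights; it is not the Vitis AI Optimizer's actual algorithm or API, which operates at a coarser granularity:

```python
import numpy as np

def prune_by_magnitude(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (conceptual sketch)."""
    k = int(w.size * sparsity)          # number of weights to remove
    if k == 0:
        return w.copy()
    # threshold = magnitude of the k-th smallest weight
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

w = np.array([0.9, -0.05, 0.4, 0.01, -0.7, 0.2], dtype=np.float32)
w_pruned = prune_by_magnitude(w, 0.5)   # keep only the 3 largest-magnitude weights
```

After pruning, the surviving weights are typically fine-tuned to recover accuracy, which is why the course treats pruning as part of the training-aware optimization flow rather than a one-shot transformation.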
From this output, the AI Compiler generates deployable code that runs on a specific FPGA-fabric microarchitecture, the Deep Learning Processor Unit (DPU), which is introduced as a parameterizable FPGA IP. It will become clear how the Vitis AI development kit tools can be used for analyzing model performance and for debugging. To complete the deployment, the final chapters present the Vitis AI libraries and APIs and show how to integrate them with the DPU for optimized inference.
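Before the compiler step, the AI Quantizer maps float32 tensors to int8 using scales derived from calibration data. The following is a minimal symmetric-quantization sketch in plain NumPy, illustrating the principle only, not the actual tool's API:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor int8 quantization (conceptual sketch only)."""
    max_abs = float(np.abs(x).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Map int8 values back to approximate float32."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.03, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)
# the round-trip error is bounded by half a quantization step
assert np.all(np.abs(w - w_hat) <= scale / 2 + 1e-6)
```

In practice the quantizer chooses scales per layer from calibration batches rather than from the raw tensor maximum, but the int8 mapping above is the core operation that lets the DPU run fixed-point arithmetic.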
The course is focused on:
- Illustrating the Vitis AI tool flow
- Utilizing the architectural features of the Deep Learning Processor Unit (DPU)
- Optimizing a model using the AI quantizer and AI compiler
- Utilizing the Vitis AI Library to optimize pre-processing and post-processing functions
- Creating a custom platform and application
- Deploying a design
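The pre-processing that such libraries optimize usually boils down to mean subtraction, scaling, and a layout change to whatever the accelerator expects. A plain-NumPy sketch of that step, with illustrative (not official) mean and scale values:

```python
import numpy as np

def preprocess(image: np.ndarray,
               mean=(104.0, 117.0, 123.0),  # illustrative per-channel means
               scale=1.0) -> np.ndarray:
    """Normalize an HWC uint8 image and convert it to CHW float32."""
    x = image.astype(np.float32)
    x -= np.asarray(mean, dtype=np.float32)  # subtract per-channel mean
    x *= scale                               # apply input scaling
    return np.transpose(x, (2, 0, 1))        # HWC -> CHW layout

# a dummy 4x4 three-channel "image"
img = np.full((4, 4, 3), 128, dtype=np.uint8)
blob = preprocess(img)
```

Moving exactly this kind of per-pixel work into optimized library routines (or onto the programmable logic) is what keeps the DPU fed at full throughput.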
Target hardware:
- Xilinx Alveo™ accelerator cards
- Xilinx SoCs, MPSoCs, and ACAPs
Prerequisites:
- Basic knowledge of machine learning concepts
- Comfort with C/C++ and Python programming
- Familiarity with the software development flow