In the previous post, we looked at the types and design objectives of high-level IRs used by ML compilers. Today, we are going to look at the optimizations that are performed using high-level IRs. These optimizations can be performed either online or offline. In online mode, the optimizations are applied at runtime, just before performing the inference, while in offline mode, they are applied ahead of time and the runtime saves the optimized graph to disk. In this post, I have attempted to further extend Li et al.'s [1] classification of high-level IR optimizations into five categories.

Figure 1: Overview of Optimizations done on High-level…
ML Compilers Part 1: High-Level Intermediate Representation
High-level intermediate representations (IRs) play a crucial role in machine learning (ML) compilers, enabling optimization and code generation for various hardware platforms. These IRs provide an abstract representation of machine learning models that can be transformed and optimized before being compiled into executable code. In this blog post, I will discuss the design objectives and the types of high-level IRs used by popular, real-world ML compilers.

Workflow of ML compilers

Content
- Design Objectives
- Types of High-level IR
- Examples of High-level IR
  - Relay (TVM)
  - Torch.Fx

Design Objectives for High-Level IR

Optimizing the input ML model to enhance its runtime…
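Torch.Fx, one of the examples listed above, makes this "abstract representation of the model" tangible: `torch.fx.symbolic_trace` records a module's forward pass as a graph of nodes that passes can inspect and rewrite before code generation. A small sketch (the module `M` is my own toy example):

```python
import torch
import torch.fx

class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x + 1.0)

# symbolic_trace runs forward() with proxy values and records every
# operation into a Graph, wrapped in an executable GraphModule.
gm = torch.fx.symbolic_trace(M())
print(gm.graph)                           # human-readable graph IR
node_kinds = [n.op for n in gm.graph.nodes]
print(node_kinds)                         # placeholder → ops → output
```

The resulting graph has one `placeholder` node for the input `x`, `call_function` nodes for the add and the relu, and an `output` node; graph-level optimizations are implemented as passes that walk and mutate this node list.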