In mathematics, sparsity refers to the property of having relatively few non-zero elements or structures within a larger space or set. The concept applies to many mathematical objects, such as matrices, graphs, and functions, and is often exploited to improve efficiency. For instance, in linear algebra, sparse matrices (matrices in which most elements are zero) are handled differently from dense matrices, leveraging the abundance of zeros to save memory and accelerate computation. In graph theory, sparse graphs (graphs with relatively few edges) can be processed more efficiently with specialized algorithms.…
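To make the efficiency claim concrete, here is a minimal C++ sketch (my own illustration, not from the excerpted post) of the compressed sparse row (CSR) format: only the non-zero entries are stored, so a matrix-vector product costs time and memory proportional to the number of non-zeros rather than to rows × columns.

```cpp
#include <cstdio>
#include <vector>

// Compressed Sparse Row (CSR): store only the non-zeros of a matrix,
// plus enough indexing information to recover their positions.
struct CsrMatrix {
    int rows = 0, cols = 0;
    std::vector<double> vals;   // non-zero values, row by row
    std::vector<int> colIdx;    // column index of each stored value
    std::vector<int> rowPtr;    // vals[rowPtr[i] .. rowPtr[i+1]) belongs to row i
};

// y = A * x, touching only the stored non-zeros: O(nnz) instead of O(rows * cols).
std::vector<double> spmv(const CsrMatrix& A, const std::vector<double>& x) {
    std::vector<double> y(A.rows, 0.0);
    for (int i = 0; i < A.rows; ++i)
        for (int k = A.rowPtr[i]; k < A.rowPtr[i + 1]; ++k)
            y[i] += A.vals[k] * x[A.colIdx[k]];
    return y;
}

int main() {
    // 3x3 matrix with only 4 non-zeros:
    // [ 5 0 0 ]
    // [ 0 8 0 ]
    // [ 3 0 6 ]
    CsrMatrix A{3, 3, {5, 8, 3, 6}, {0, 1, 0, 2}, {0, 1, 2, 4}};
    std::vector<double> x{1, 2, 3};
    for (double v : spmv(A, x)) std::printf("%g ", v);  // prints: 5 16 21
    std::printf("\n");
}
```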
ML Compilers Part 2: An Overview of Graph Optimizations
In the previous post, we looked at the different types and design objectives of high-level IRs used by ML compilers. Today, we are going to look at the optimizations that are performed using high-level IRs. These optimizations can be performed either online or offline: in online mode, the optimizations are applied just before inference, while in offline mode, the runtime saves the optimized graph to disk for later reuse. In this post, I attempt to extend Li et al.'s [1] classification of high-level IR optimizations into five categories. Figure 1: Overview of Optimizations done on High-level…
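As one concrete illustration of the kind of rewrite such a classification covers, the sketch below applies constant folding, a standard graph-level optimization, to a toy operator graph. The Node structure and op names are invented for this example and do not come from any particular compiler.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// A toy graph IR: each node is an operator with input edges.
struct Node {
    std::string op;            // "const", "add", "relu", "input", ...
    double value = 0.0;        // payload for "const" nodes
    std::vector<Node*> inputs;
};

// Constant folding: if both inputs of an "add" are constants, replace the
// node with a single precomputed constant. (A real pass would visit each
// shared node once; this recursion is enough for a tree-shaped example.)
void foldConstants(Node& n) {
    for (Node* in : n.inputs) foldConstants(*in);  // fold bottom-up
    if (n.op == "add" && n.inputs.size() == 2 &&
        n.inputs[0]->op == "const" && n.inputs[1]->op == "const") {
        n.value = n.inputs[0]->value + n.inputs[1]->value;
        n.op = "const";
        n.inputs.clear();
    }
}

int main() {
    // relu(x + (2 + 3)): the inner add can be folded before inference.
    Node c2{"const", 2.0}, c3{"const", 3.0};
    Node innerAdd{"add", 0.0, {&c2, &c3}};
    Node x{"input"};
    Node outerAdd{"add", 0.0, {&x, &innerAdd}};
    Node relu{"relu", 0.0, {&outerAdd}};

    foldConstants(relu);
    // The graph now computes relu(x + 5.0) with a single add at runtime.
    std::printf("inner node is now: %s(%g)\n", innerAdd.op.c_str(), innerAdd.value);
}
```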
ML Compilers Part 1: High-Level Intermediate Representation
High-level intermediate representations (IRs) play a crucial role in machine learning (ML) compilers, enabling optimization and code generation for various hardware platforms. These IRs provide an abstract representation of machine learning models that can be transformed and optimized before being compiled into executable code. In this blog post, I will discuss the design objectives and the types of high-level IRs used by popular, real-world ML compilers. Figure: Workflow of ML compilers. Content: Design Objectives; Types of High-level IR; Examples of High-level IR (Relay (TVM), Torch.Fx). Design Objectives for High-Level IR. Optimizations: Optimizing the input ML model to enhance its runtime…
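To show what an "abstract representation of the model" can look like, here is a toy C++ sketch of a high-level IR node; it is my own simplification, not the actual data structures of Relay or Torch.Fx. Unlike a low-level IR, each node models a whole tensor operator and carries shape metadata, which a simple shape-inference pass can propagate.

```cpp
#include <cstdio>
#include <stdexcept>
#include <string>
#include <vector>

// A toy high-level IR node: it represents a whole tensor operator and
// carries the shape metadata that graph-level optimizations rely on.
struct TensorOp {
    std::string op;                 // "input", "matmul", "relu"
    std::vector<TensorOp*> inputs;
    std::vector<int> shape;         // declared (for inputs) or inferred shape
};

// A simple shape-inference pass over the operator graph.
void inferShape(TensorOp& n) {
    for (TensorOp* in : n.inputs) inferShape(*in);
    if (n.op == "matmul") {
        const auto& a = n.inputs[0]->shape;
        const auto& b = n.inputs[1]->shape;
        if (a[1] != b[0]) throw std::runtime_error("shape mismatch");
        n.shape = {a[0], b[1]};
    } else if (n.op == "relu") {
        n.shape = n.inputs[0]->shape;  // elementwise: shape is preserved
    }  // "input" nodes keep their declared shape
}

int main() {
    TensorOp x{"input", {}, {32, 128}};   // batch of 32, 128 features
    TensorOp w{"input", {}, {128, 10}};   // weight matrix
    TensorOp mm{"matmul", {&x, &w}, {}};
    TensorOp out{"relu", {&mm}, {}};
    inferShape(out);
    std::printf("output shape: [%d, %d]\n", out.shape[0], out.shape[1]);  // [32, 10]
}
```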
Writing Your First LLVM Pass and Registering it in the Clang Toolchain
There are several detailed tutorials [1] on writing an LLVM pass, so I won't cover that in much detail. However, as of today (May 2020), there is no detailed guide on registering an LLVM pass within the opt and Clang toolchains, so this post will be mainly about that. Content: Introduction; Types of LLVM passes; Writing a basic function pass in LLVM; Registering a pass within the opt toolchain; Clang toolchain. Introduction: LLVM is an extremely modular compiler infrastructure that provides back-end tools for code optimization and transformation. It works on an intermediate representation called LLVM IR. Clang, on the other…
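For orientation, a basic function pass under the legacy pass manager (the setup current as of the post's May 2020 date) looks roughly like the "Hello" example from the LLVM documentation; the class name and the -hello flag below are placeholders.

```cpp
#include "llvm/IR/Function.h"
#include "llvm/Pass.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

namespace {
// A minimal function pass: runs once per function and just prints its name.
struct HelloPass : public FunctionPass {
  static char ID;  // LLVM identifies the pass by this symbol's address
  HelloPass() : FunctionPass(ID) {}

  bool runOnFunction(Function &F) override {
    errs() << "Visiting function: " << F.getName() << "\n";
    return false;  // we did not modify the IR
  }
};
}  // end anonymous namespace

char HelloPass::ID = 0;

// Registers the pass with opt under the -hello flag (legacy pass manager).
static RegisterPass<HelloPass> X("hello", "Minimal hello-world function pass",
                                 false /* only looks at CFG */,
                                 false /* analysis pass */);
```

Once built as a shared library, it can typically be run with something like opt -load ./HelloPass.so -hello -disable-output input.ll, though the exact flags vary across LLVM versions.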