I am a graduate student at the University of British Columbia. I completed my undergraduate degree at the Netaji Subhas University of Technology (NSUT), India, in Electronics and Communication Engineering (ECE). I have a keen interest in operating systems, program analysis, and compilers.
GitHub Profile: https://github.com/uditagarwal97

LinkedIn Profile: https://www.linkedin.com/in/udit-agarwal-517359125/
YouTube channel: https://www.youtube.com/channel/UCskdhlQx_m47xyxA0S00s0Q
Personality type: INFJ
~~ Career Timeline ~~
I will be working on code generation in Intel's SYCL compiler.
Fault Injection for Machine Learning Applications
University of British Columbia, Vancouver
Advisor: Karthik Pattabiraman
Specialization: Error-Resilient Computing

Systematic Concurrency Testing
Advisor: Akash Lal
Worked on the development and benchmarking of a language-agnostic, systematic concurrency testing framework.
Advisor: Rahul Purandare
Worked on aiding program comprehension by extracting object-oriented features, such as class relationships, from x86 executables and DLLs and representing them as UML class diagrams.
Graduated from the University of Delhi - First class with distinction 🙂
Advisor: Kunwar Singh
Successfully defended my thesis, yipieeee!!
Specialization: Neuromorphic CMOS-Memristor hybrid ICs.
My thesis: LINK
Advisor: Gedare Bloom
* One of the 1268 students selected by Google worldwide to work with leading open source organizations;
* Ported the Flexible I/O Tester (fio) to the Real-Time Executive for Multiprocessor Systems (RTEMS) and used it to benchmark the various in-memory, flash, and block filesystems supported by RTEMS.
Specialization: Electronics and Communication Engineering.
Publications
[ISSRE’22] LLTFI: Framework-Agnostic Fault Injection for Machine Learning Applications
Udit Agarwal, Abraham Chan, and Karthik Pattabiraman, IEEE International Symposium on Software Reliability Engineering (ISSRE), 2022. (Acceptance Rate: 29%) [ PDF | Talk (video) ] (Code)
Abstract: As machine learning (ML) has become more prevalent across many critical domains, so has the need to understand ML applications’ resilience. While prior work like TensorFI [1], MindFI [2], and PyTorchFI [3] has focused on building ML fault injectors for specific ML frameworks, there has been little work on performing fault injection (FI) for ML applications written in multiple frameworks. We present LLTFI, a Framework-Agnostic Fault Injection tool for ML applications, allowing users to run FI experiments on ML applications at the LLVM IR level. LLTFI provides users with finer FI granularity at the level of instructions and a better understanding of how faults manifest and propagate between different ML components. We evaluate LLTFI on six ML programs and compare it with TensorFI. We found significant differences in the Silent Data Corruption (SDC) rates for similar faults between the two tools. Finally, we use LLTFI to evaluate the efficacy of selective instruction duplication – an error mitigation technique – for ML programs.
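To make the idea of instruction-granularity fault injection at the LLVM IR level concrete, here is a minimal, hypothetical C++ sketch (this is not LLTFI's actual pass or API): every integer-typed instruction is treated as a fault site, and an XOR inserted right after it flips one bit of its result so that all downstream uses observe the corrupted value. A real injector would typically pick a single fault site, bit position, and dynamic occurrence per run.

```cpp
// Hypothetical sketch of IR-level fault injection; not LLTFI's actual code.
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/InstIterator.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Treat every integer-typed, non-terminator, non-PHI instruction as a fault
// site and XOR a bit into its result. Returns true if any fault was injected.
bool injectBitFlips(Function &F) {
  // Collect the fault sites first so the XORs we insert are not revisited.
  SmallVector<Instruction *, 16> Sites;
  for (Instruction &I : instructions(F))
    if (I.getType()->isIntegerTy() && !I.isTerminator() && !isa<PHINode>(I))
      Sites.push_back(&I);

  for (Instruction *I : Sites) {
    IRBuilder<> B(I->getNextNode());                 // insert right after the site
    Value *Bit = ConstantInt::get(I->getType(), 1);  // fixed bit position, for brevity
    Instruction *Faulty = cast<Instruction>(B.CreateXor(I, Bit));
    I->replaceAllUsesWith(Faulty);  // downstream uses now see the corrupted value
    Faulty->setOperand(0, I);       // ...but the XOR itself still reads the original
  }
  return !Sites.empty();
}
```

Comparing the outputs of the faulty and fault-free runs is what lets a tool classify an injected fault as benign, a crash, or a Silent Data Corruption (SDC).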
[ASE’21] Nekara: Generalized Concurrency Testing
Udit Agarwal, Pantazis Deligiannis, Cheng Huang, Kumseok Jung, Akash Lal, Immad Naseer, Matthew Parkinson, Arun Thangamani, Jyothi Vedurada, and Yunpeng Xiao, Proceedings of the IEEE/ACM International Conference on Automated Software Engineering (ASE), 2021. [ PDF | Talk Slides ]
Abstract: Testing concurrent systems remains an uncomfortable problem for developers. The common industrial practice is to stress-test a system against large workloads, with the hope of triggering enough corner-case interleavings that reveal bugs. However, stress testing is often inefficient and its ability to get coverage of interleavings is unclear. In reaction, the research community has proposed the idea of systematic testing, where a tool takes over the scheduling of concurrent actions so that it can explore the space of interleavings.
We present an experience paper on the application of systematic testing to several case studies. We separate the algorithmic advancements in prior work (on searching the large space of interleavings) from the engineering of their tools. The latter was unsatisfactory; often the tools were limited to a small domain, hard to maintain, and hard to extend to other domains. We designed Nekara, an open-source cross-platform library for easily building custom systematic testing solutions.
We show that (1) Nekara can effectively encapsulate state-of-the-art exploration algorithms by evaluating on prior benchmarks, and (2) Nekara can be applied to a wide variety of scenarios, including existing open-source systems as well as cloud services of a major IT company. Nekara was easy to use, improved testing, and found multiple new bugs.
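To illustrate what taking over the scheduling of concurrent actions buys you, here is a small, self-contained C++ toy (not Nekara's actual API): a controlled search enumerates every interleaving of two threads that each perform a non-atomic increment, and flags terminal states where one update was lost. Stress testing may or may not hit the buggy interleaving; systematic exploration is guaranteed to.

```cpp
#include <functional>
#include <iostream>
#include <vector>

// Shared state plus each "thread's" local temporary.
struct State { int shared = 0; int tmpA = 0; int tmpB = 0; };
using Step = std::function<void(State&)>;

// Enumerate every interleaving of the remaining steps of threads A and B.
// State is passed by value, so each branch of the search backtracks for free.
bool explore(State s, const std::vector<Step>& A, size_t i,
             const std::vector<Step>& B, size_t j) {
  if (i == A.size() && j == B.size())
    return s.shared != 2;                      // bug: one increment got lost
  bool buggy = false;
  if (i < A.size()) { State t = s; A[i](t); buggy |= explore(t, A, i + 1, B, j); }
  if (j < B.size()) { State t = s; B[j](t); buggy |= explore(t, A, i, B, j + 1); }
  return buggy;
}

int main() {
  // Each thread does a non-atomic increment: read shared, then write it back + 1.
  std::vector<Step> A = { [](State& s) { s.tmpA = s.shared; },
                          [](State& s) { s.shared = s.tmpA + 1; } };
  std::vector<Step> B = { [](State& s) { s.tmpB = s.shared; },
                          [](State& s) { s.shared = s.tmpB + 1; } };
  std::cout << (explore(State{}, A, 0, B, 0)
                    ? "found an interleaving that loses an update\n"
                    : "all interleavings are correct\n");
}
```

Nekara applies the same principle to real programs by taking control of the scheduling of their actual concurrent operations, rather than modelling each step as a closure as this toy does.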