A Hands-on Guide on Testing Deep Neural Networks


Date: May 18, 2022, 2:30–3:00 PM
Location: Oakland Center, 312 Meadow Brook Road, Rochester, Michigan

Data-driven AI, and deep learning in particular, has become a driving force across diverse application domains. Its human-competitive performance has made deep neural networks (DNNs) core components of complex software systems for tasks such as computer vision (CV) and natural language processing (NLP). As ever more powerful and complex DL models are deployed, there is a pressing need to ensure the quality and reliability of the AI systems built on them. However, the data-driven paradigm and black-box nature of these models make AI software fundamentally different from classical software, so new quality assurance techniques tailored to AI-driven systems are both challenging and necessary. In this tutorial, we introduce recent progress in AI quality assurance, with a focus on testing techniques for DNNs, and provide hands-on experience. We will first discuss how testing AI software differs from testing traditional software. We will then give hands-on tutorials on testing techniques for feed-forward neural networks (FNNs) with a CV use case and for recurrent neural networks (RNNs) with an NLP use case. Finally, we will discuss with the audience the successes and failures in realizing the full potential of AI software testing, as well as possible improvements and research directions.
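To give a flavor of the hands-on part, below is a minimal sketch in Python (PyTorch) of neuron coverage, one commonly used DNN test-adequacy criterion; it is not taken from the tutorial materials, and the toy model, activation threshold, and random inputs are illustrative assumptions only.

# Minimal sketch of neuron coverage for a feed-forward network.
# The toy model, threshold, and random inputs are illustrative assumptions,
# not the tutorial's actual code.
import torch
import torch.nn as nn

model = nn.Sequential(              # stand-in for a small CV classifier
    nn.Linear(28 * 28, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

activations = []

def record(module, inputs, output):
    # Collect post-ReLU activations during the forward pass.
    activations.append(output.detach())

for layer in model:
    if isinstance(layer, nn.ReLU):
        layer.register_forward_hook(record)

x = torch.rand(64, 28 * 28)         # stand-in for a batch of test inputs
model(x)

# A neuron counts as "covered" if its activation exceeds the threshold
# on at least one test input.
threshold = 0.5
acts = torch.cat(activations, dim=1)
covered = (acts > threshold).any(dim=0)
print(f"Neuron coverage: {covered.float().mean().item():.2%}")

In practice, test-generation techniques try to raise such coverage measures (or expose behavioral inconsistencies) by perturbing or transforming test inputs; the tutorial's use cases walk through this process on CV and NLP models.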

Houssem Ben Braiek
Ph.D., M.Sc., Eng.

I am an ML Tech Lead with a background in software engineering, holding M.Sc. and Ph.D. degrees with distinction from Polytechnique Montreal. My role involves supervising and guiding the development of machine learning solutions for intelligent automation systems. As an active SEMLA member, I contribute to research projects in trustworthy AI, teach advanced technical courses in SE4ML and MLOps, and organize workshops.