Towards Trustworthy Deep Learning Software Systems


Date
Nov 17, 2023 4:00 PM — 5:00 PM
Location
Meta NYC Office
770 Broadway, New York, NY

This presentation addresses the trustworthiness of Deep Learning (DL) software, highlighting the challenges that distinguish DL from traditional software. It examines the complexities of debugging DL training programs, including their "untestable" nature and the practical implications of the Oracle Problem. The discussion covers property-based debugging of DL training programs, identifying common pitfalls and detailing verification routines across the pre-training, proper-fitting, and post-fitting stages. It then turns to the limitations of DL model testing, such as under-specification and practitioners' varied perceptions of testing value, and emphasizes the need for domain-aware DL model testing methods, including invariance tests and directional expectation tests, demonstrating their application to DL-powered aircraft performance models. Finally, it explores the potential of semantically-preserving data transformations and proposes that DL tests can be generated through statistical modeling.
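To illustrate the two domain-aware test types mentioned above, here is a minimal sketch in Python. The `predict_fuel_burn` function is a hypothetical stand-in for a trained DL aircraft performance model (the talk's actual models and properties are not specified here); the test helpers show the general pattern: an invariance test checks that a label-preserving perturbation leaves the prediction essentially unchanged, while a directional expectation test checks that the prediction moves in the physically expected direction.

```python
# Hypothetical stand-in for a trained DL aircraft performance model:
# predicts fuel burn from (mass_kg, airspeed_mps, altitude_m).
def predict_fuel_burn(mass_kg, airspeed_mps, altitude_m):
    # Toy surrogate: fuel burn grows with mass and airspeed, shrinks with altitude.
    return 0.01 * mass_kg + 0.5 * airspeed_mps - 0.001 * altitude_m

def invariance_test(predict, x, perturb, tol=1e-3):
    """Prediction should be (near-)unchanged under a semantics-preserving perturbation."""
    return abs(predict(*x) - predict(*perturb(x))) <= tol

def directional_expectation_test(predict, x_low, x_high):
    """Prediction should increase when moving from x_low to x_high."""
    return predict(*x_high) > predict(*x_low)

base = (60000.0, 230.0, 10000.0)

# Invariance: a negligible altitude jitter must not materially change the output.
jitter = lambda x: (x[0], x[1], x[2] + 1e-7)
assert invariance_test(predict_fuel_burn, base, jitter)

# Directional expectation: a heavier aircraft should burn more fuel.
heavier = (70000.0, 230.0, 10000.0)
assert directional_expectation_test(predict_fuel_burn, base, heavier)
```

In practice the perturbation (e.g., sensor noise within tolerance) and the expected direction (e.g., fuel burn increasing with mass) come from domain knowledge, which is what makes these tests usable despite the Oracle Problem: they check properties of the output rather than exact expected values.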

Houssem Ben Braiek
Ph.D., M.Sc., Eng.

I am an ML Tech Lead with a background in software engineering, holding M.Sc. and Ph.D. degrees with distinction from Polytechnique Montreal. My role involves supervising and guiding the development of machine learning solutions for intelligent automation systems. As an active SEMLA member, I contribute to research projects in trustworthy AI, teach advanced technical courses on SE4ML and MLOps, and organize workshops.