MLOps for Responsible AI: Techniques for Ensuring AI Quality
With the rapid adoption of generative AI, more and more companies are infusing AI models and services into their products. However, many of these companies risk losing business and valuable revenue because they have not invested in MLOps. Organizations developing AI systems have typically relied on training metrics like accuracy, precision, and recall, but software quality goes beyond that. Now that the barrier to entry for AI tools is lower, we need to set quality standards, establish test practices, and think about AI ethics and safety. Ensuring the quality of AI goes beyond traditional metrics to attributes like usability and fairness, which need to be tested and measured using both manual exploratory and automated test strategies. Join Carlos Kidman as he covers the AI risks and biases that can arise throughout the development pipeline, demonstrates a few techniques for testing a model's behavior, security robustness, and fairness, and applies them to real-world scenarios and state-of-the-art models. Come and learn new ideas for defining and testing responsible AI systems using quality attributes that resonate from a customer's perspective.
Carlos Kidman is a Director of Engineering at Qualiti and was formerly an Engineering Manager at Adobe. He is also an instructor at Test Automation University, with courses on architecture, design, containerization, and machine learning. He is the founder of QA at the Point, the testing and quality community in Utah, and does consulting, workshops, and speaking events all over the world. He streams programming and other tech topics on Twitch, has a YouTube channel, builds open-source software like Pylenium and PyClinic, and is an ML/AI practitioner. He loves fútbol, anime, gaming, and spending time with his wife and kids.