Debiasing AI Using Amazon SageMaker
MP4, AVC, 1280x720, 30 fps | English, AAC, 2 Ch | 1h 42m | 270 MB
Instructor: Kesha Williams
Artificial intelligence (AI) can have deeply embedded bias. It's the job of data scientists and developers to ensure their algorithms are fair, transparent, and explainable. This responsibility is critically important when building models that may determine policy or shape the course of people's lives. In this course, award-winning software engineer Kesha Williams explains how to debias AI with Amazon SageMaker. She shows how to use SageMaker to build a predictive-policing machine learning model that integrates Amazon Rekognition and AWS DeepLens, resulting in a crime-fighting model that can "see" what's happening in a live scene. By following the development process, you can learn what goes into making a model that doesn't suffer from cultural prejudices. Kesha also discusses how to remove bias from training data, test a model for fairness, and build trust in AI by making models explainable.
Topics include:
Reviewing the crime-fighting case study
Amazon SageMaker basics
Preparing the data
Training the model
Evaluating the model
Deploying a face-detection model to AWS DeepLens
Retrieving data for the model with Amazon Rekognition
Sending data points to a SageMaker hosted model
Retrieving predictions (see the sketch after this list)
Making your models explainable
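The Rekognition, hosted-model, and prediction topics above come down to a small amount of glue code. The following is a minimal sketch, not taken from the course, of how face attributes detected by Amazon Rekognition could be turned into a feature row and sent to a SageMaker hosted model with boto3. The endpoint name, image path, and feature choices are hypothetical placeholders; detect_faces and invoke_endpoint are standard boto3 calls.

import boto3

rekognition = boto3.client("rekognition")
sagemaker_runtime = boto3.client("sagemaker-runtime")

# Retrieve data points from a still image with Amazon Rekognition.
with open("scene.jpg", "rb") as image_file:
    detected = rekognition.detect_faces(
        Image={"Bytes": image_file.read()},
        Attributes=["ALL"],
    )

# Turn the first detected face into a simple CSV feature row; the age-range
# midpoint and the gender confidence are illustrative features only.
face = detected["FaceDetails"][0]
age_mid = (face["AgeRange"]["Low"] + face["AgeRange"]["High"]) / 2
payload = "{},{:.2f}".format(age_mid, face["Gender"]["Confidence"])

# Send the data points to the SageMaker hosted model and retrieve the prediction.
response = sagemaker_runtime.invoke_endpoint(
    EndpointName="crime-prediction-endpoint",  # hypothetical endpoint name
    ContentType="text/csv",
    Body=payload,
)
print(response["Body"].read())  # raw prediction bytes; format depends on the deployed model

In the course's setup, the image would come from the AWS DeepLens device watching a live scene rather than from a local file, but the call pattern against the SageMaker endpoint is the same.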
Download link:
Only visible to registered users with a reply to the topic. Links are interchangeable - No password - Single extraction.