Elham Tabassi and Andrew Burt on a new NIST framework that addresses risks in the design, development, use, and evaluation of AI products, services, and systems.
This week’s guests are Elham Tabassi of the National Institute of Standards and Technology (NIST) and Andrew Burt, Managing Partner of BNH.ai, the first law firm focused on AI compliance, risk mitigation, and related topics. We discuss the new NIST framework – the “AI Risk Management Framework” – intended for voluntary use to manage risks in the design, development, and use of AI products and systems. To learn more, attend the free virtual workshop (Building the NIST AI Risk Management Framework) slated for March 29 – 31, 2022.
NIST has a track record of influencing how companies adopt and use technology. In the cybersecurity realm, a host of businesses and cybersecurity leaders have adopted the NIST Cybersecurity Framework, and many consider it the gold standard in that field. Consequently, I believe that this new NIST initiative will have a significant impact on how we manage AI risks in the future.
NIST just released another paper – “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence” – that data teams will also want to consult.
Highlights in the video version:
- Introduction to Elham Tabassi and Andrew Burt
- What is NIST? What is the AI Risk Management Framework Paper?
- What makes a good framework?
- What are practical implications of the AI Risk Management Framework Paper?
- An open, collaborative, and transparent process was used to develop the paper
- Who participated in the process?
- Are there two to three things data teams should be doing more of today?
- What are concrete things that data practitioners should be doing right away?
- Are there lessons from cybersecurity that map over to this world?
- Do other countries have their own AI Risk Management Frameworks?
- AI Risk Management Framework Workshop: March 29-31, 2022
- A video version of this conversation is available on our YouTube channel.
- Navigate the road to Responsible AI
- Most State-Of-The-Art AI Systems Are Trained With Extra Data
- Amit Sharma and Emre Kiciman: An open source and end-to-end library for causal inference
- Rayid Ghani and Andrew Burt: Auditing machine learning models for discrimination, bias, and other risks
- Rumman Chowdhury: The State of Responsible AI
- Christopher Nguyen: What is AI Engineering?
- Nicholas Boucher: Imperceptible NLP Attacks
Ben Lorica is an advisor to BNH.ai and other startups.