One thing we've noticed while working in the Responsible AI space is a tendency toward very long, complex assessments and tools: they are so difficult to understand that people give up on using them before they even get started.
We are trying a different approach: what would the absolute minimum look like, ethically speaking? The Ethically Aligned AI Risk Assessment MVE is our attempt to answer that question. We hope to gather feedback and iterate; at the very least, we hope our questions will spark a conversation!
Send us your feedback - firstname.lastname@example.org