Purpose
The purpose of EVARA is to systematically evaluate the risks that activities, decisions, or technologies pose to critical human values. Through comprehensive analysis and assessment, the tool identifies and prioritizes ethical risks, enabling stakeholders to make informed decisions and to mitigate potential harm to individuals, communities, and society as a whole. By integrating value-based considerations into risk assessment processes, EVARA promotes responsible decision-making and upholds fundamental human values across domains.
Applicability
EVARA is designed to be flexible and adaptable, making it applicable across the machine learning lifecycle, whether in the early stages of system design, during development and training, or in the deployment and monitoring phases. It is intended to be user-friendly and accessible to a wide range of stakeholders, including system developers, designers, and policymakers with varying levels of technical expertise. Its modular and customizable nature allows users to tailor the assessment process to their specific context and requirements, ensuring relevance and effectiveness across diverse applications and domains.
Pre-EVARA
It is the responsibility of the user to conduct in-depth research on the value violation under consideration and on the system being evaluated.
For more information, please read the full EVARA user guide.