The concept and criteria of the AIWS Ethics and Practices Index

The AIWS Ethics and Practices Index of the Michael Dukakis Institute measures the extent to which a government, in its AI activities, respects human values and contributes to the constructive use of AI.

The Index has four categories:

  1. Transparency: Substantially promotes and applies openness and transparency in the use and development of AI, including data sets, algorithms, intended impacts, goals, and purposes.
  2. Regulation: Has laws and regulations that require government agencies to use AI responsibly; that require private parties to use AI humanely and restrict their ability to engage in harmful AI practices; and that prohibit the use of AI by government to disadvantage political opponents.
  3. Promotion: Invests substantially in AI initiatives that promote shared human values; refrains from investing in harmful uses of AI (e.g., autonomous weapons, propaganda creation and dissemination).
  4. Implementation: Rigorously enforces its AI laws and regulations toward beneficial ends; respects and commits to widely accepted principles and rules of international law.

Methodology: Governments will be assessed in each category by the standards of the moment. AI is in an early stage, and governments are only beginning to address the issue through, for example, laws and regulations. Later on, as governments have more time to assess the implications of AI, more substantial efforts will be expected—for example, a more fully articulated set of AI-related laws and regulations.
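As a purely illustrative sketch, the category-by-category assessment could be aggregated into a single score. The four category names come from the Index itself; the 0–10 scale, the equal weighting, and the simple averaging are assumptions made for illustration, not the Index's published methodology.

```python
# Hypothetical sketch: combining per-category ratings into an overall
# index score. Category names are from the Index; the 0-10 scale,
# equal weights, and averaging are illustrative assumptions.

CATEGORIES = ["Transparency", "Regulation", "Promotion", "Implementation"]

def index_score(ratings: dict) -> float:
    """Average the four category ratings (each assumed on a 0-10 scale)."""
    missing = [c for c in CATEGORIES if c not in ratings]
    if missing:
        raise ValueError(f"Missing ratings for: {missing}")
    return sum(ratings[c] for c in CATEGORIES) / len(CATEGORIES)

example = {
    "Transparency": 7.0,
    "Regulation": 5.0,
    "Promotion": 6.0,
    "Implementation": 4.0,
}
print(index_score(example))  # 5.5
```

A weighted scheme, or per-category rubrics that tighten over time as the methodology paragraph anticipates, could replace the flat average without changing the overall shape of this sketch.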

The Index also sets out criteria for the evaluation and control of ethics in AI:

  • Data sets: how they are collected, where, from whom, for what purpose, and by whom. Data sets used for AI require accuracy, validation, and transparency
  • Algorithms: transparency, fairness, absence of bias
  • Intended impacts: for what, for whom, goals and purposes
  • Transparency in national resources
  • Refrains from investing in harmful uses of AI
  • Responsibility for mistakes
  • Transparency in decision making
  • Avoiding bias
  • Core ethical values
  • Data protection and IP
  • Mitigating social dislocation
  • Cybersecurity
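The criteria above could be treated as a simple checklist during an evaluation. The criterion names below are taken from the list; representing each as a satisfied/unsatisfied flag is an illustrative assumption, not part of the Index's stated procedure.

```python
# Hypothetical sketch: the Index's evaluation criteria as a checklist.
# Criterion names come from the list above; the boolean pass/fail
# structure is an illustrative assumption.

CRITERIA = [
    "Data sets",
    "Algorithms",
    "Intended impacts",
    "Transparency in national resources",
    "Refrains from harmful AI investment",
    "Responsibility for mistakes",
    "Transparency in decision making",
    "Avoiding bias",
    "Core ethical values",
    "Data protection and IP",
    "Mitigating social dislocation",
    "Cybersecurity",
]

def unmet_criteria(assessment: dict) -> list:
    """Return criteria not marked satisfied (missing entries count as unmet)."""
    return [c for c in CRITERIA if not assessment.get(c, False)]

sample = {c: True for c in CRITERIA}
sample["Cybersecurity"] = False
print(unmet_criteria(sample))  # ['Cybersecurity']
```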