FH AI Risk Taxonomy



Enterprise Risk (Business model / impact context)

This level represents the overarching strategic risk environment for an organisation. We
would expect risks to be divided among the following kinds of areas.

If this list does not fit the executive risk profile of a target organisation, it can be amended as
much as desired, but the organisation needs to ensure that each category maps to the
operational risk types within each operational vertical - see the AAA Risk Taxonomy below.

As long as such mappings are attached to any new top-level categories, changing the list
should not affect the rest of the taxonomy. A minimal sketch of such a mapping follows the
list below.

    1.   Economic
    2.   Political
    3.   Social
    4.   Technology
    5.   Legal & Regulatory
    6.   Environmental
    7.   People
    8.   Third party
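
As a minimal sketch of the mapping requirement above, the Python snippet below pairs each
top-level area with socio-technical AI risk categories defined later in this taxonomy; the
specific pairings are illustrative assumptions, not ForHumanity guidance.

    # Illustrative sketch only: the pairings below are assumptions chosen for the
    # example, not a mapping prescribed by ForHumanity. Keys are the enterprise risk
    # areas above; values are drawn from the socio-technical AI risk categories
    # defined later in this taxonomy.
    ENTERPRISE_TO_AAA = {
        "Economic":           ["Governance", "Accountability"],
        "Political":          ["Transparency", "Human agency"],
        "Social":             ["Bias", "Diversity", "Accessibility"],
        "Technology":         ["Security", "Safety", "Privacy"],
        "Legal & Regulatory": ["Accountability", "Transparency"],
        "Environmental":      ["Sustainability"],
        "People":             ["Human agency", "Ethics capability"],
        "Third party":        ["Governance", "Security"],
    }

    def check_mapping(enterprise_categories, mapping):
        """Flag any top-level category that has no operational mapping attached."""
        return [c for c in enterprise_categories if not mapping.get(c)]

    if __name__ == "__main__":
        unmapped = check_mapping(ENTERPRISE_TO_AAA.keys(), ENTERPRISE_TO_AAA)
        print("Top-level categories missing a mapping:", unmapped or "none")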


AI, algorithmic and autonomous systems (AAA) Risk Taxonomy

This level connects into the top-level Enterprise Risk Management strategic risk landscape.
The risk categories listed below are typical for larger companies and are likely the same
across all operational areas. They are scaled based on metrics from the more specific
contributory risk categories under management.

This provides context for operational risk management, but for the purpose of human-centric
risk management we would recommend that these risk categories are either linked to
preferred AI principles or replaced by them, so that a more meaningful measure of
socio-technical AI-related risk can be rolled up for reporting to the board. A minimal sketch
of such a roll-up follows this paragraph.
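
A minimal sketch of such a roll-up, assuming hypothetical contributory metrics and a simple
worst-case (maximum) aggregation; the principle names follow the HLEG list in the comparison
below, and nothing here is a prescribed scoring method.

    # Illustrative sketch only: principle names follow the HLEG list below; the
    # contributory metrics and the worst-case (max) aggregation are assumptions made
    # for the example, not a ForHumanity-prescribed method.
    CONTRIBUTORY_SCORES = {
        # principle -> {contributory risk category: residual risk score 0..1}
        "Privacy and data governance": {"data retention": 0.4, "consent handling": 0.7},
        "Technical robustness and safety": {"model drift": 0.5, "adversarial inputs": 0.3},
        "Accountability": {"audit trail gaps": 0.2},
    }

    def roll_up(scores):
        """Roll each principle up to its worst contributory score for board reporting."""
        return {principle: max(metrics.values()) for principle, metrics in scores.items()}

    if __name__ == "__main__":
        for principle, score in roll_up(CONTRIBUTORY_SCORES).items():
            print(f"{principle}: {score:.1f}")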


Traditional Risk Categories

These are typical organisational risk categories, which cannot cater for the full
socio-technical risk picture. They are likely static across specialist operational risk areas,
e.g. Strategic / Financial IT risk, Strategic / Financial Marketing risk.

Categories may usefully map to control domains/systems, but careful consideration should be
given to moving towards a more human-centric list. These categories require evolution to
incorporate risk categories associated with impacts to individuals, society, and the
environment, as opposed to the typical historical focus: the organisation.

    1.   Strategic
    2.   Financial
    3.   Reputational
    4.   Operational
    5.   ESG
    6.   Business

The risk universe here is AI, algorithmic and/or autonomous systems that have the potential
to impact people, people groups, society, and the environment.

AI Principles

One generally well-accepted list of ethics-focused principles is included below. It can be
mapped to AI control capabilities / domains and impact types, and then to the criteria
required to effectively manage socio-technical systems risk.

As noted above, we would recommend that these are mapped to, or replace, the more
traditional risk categories.

OECD and UNICEF are just two other organisations that have outlined such principles. Most
have at least some reference back to the EU AI High Level Expert Group (HLEG) list below.

    •   Human agency and oversight
    •   Technical robustness and safety
    •   Privacy and data governance
    •   Transparency
    •   Diversity, non-discrimination, and fairness
    •   Societal and environmental well-being
    •   Accountability

Source: EU AI High Level Expert Group (HLEG)

FH AI Ethics Principles

Principles that are applied within ForHumanity (FH) for minimizing downside risks to humans.
They augment the commonly accepted AI ethics principles such as the HLEG list.

    ●   Human Centric
    ●   Ethical
    ●   Fair
    ●   Actionable
    ●   Operational
    ●   Accountable
    ●   Auditable
    ●   Certain
    ●   Transparent


AI Risk Categories (socio-technical)

This level replicates the structure found in standards such as ISO 27002, identifying
socio-technical risk categories that permit useful grouping of new AI-related risks to
individuals, society, and the environment.

In addition, these risk categories support the creation of control capabilities that are
required but frequently missing from pre-existing risk models, for instance diversity and
accessibility.






These sit at a layer above the typical control capabilities for specialist disciplines such as
security or privacy, recognising that AI governance and risk management brings together
many pre-existing areas of expertise, each with its own subset of specialized skills and
controls.

Control capabilities represent reasonably independent subsets of control that have already
been identified as necessary to avoid exposure to one or more adverse outcomes that might
result from the use of AI, ML, or autonomous systems.

An iterative feedback process will enable changes to the list where risks or incidents do not
usefully map to a pre-existing control domain for mitigation, or where criteria do not usefully
map up to control domains on a one-to-one, one-to-many, or many-to-many basis for
reporting. A minimal sketch of such a mapping check follows this paragraph.
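
A minimal sketch of that feedback check, assuming hypothetical criterion identifiers; the
control domains are taken from the Level 1 list below, and the mappings shown are examples
only.

    # Illustrative sketch only: criterion identifiers and their pairings are
    # hypothetical examples. Mappings may be one-to-one, one-to-many, or many-to-many;
    # anything that maps nowhere is flagged for taxonomy review.
    CRITERIA_TO_DOMAINS = {
        "criterion-001": ["Privacy", "Security"],   # many-to-many is allowed
        "criterion-002": ["Bias"],
        "criterion-003": [],                        # no useful mapping yet
    }

    def unmapped_criteria(mapping):
        """Return criteria that do not map to any control domain, for feedback."""
        return [c for c, domains in mapping.items() if not domains]

    def unused_domains(mapping, all_domains):
        """Return control domains that no criterion maps to, for feedback."""
        used = {d for domains in mapping.values() for d in domains}
        return sorted(set(all_domains) - used)

    if __name__ == "__main__":
        domains = ["Privacy", "Security", "Safety", "Bias", "Governance"]
        print("Criteria needing a home:", unmapped_criteria(CRITERIA_TO_DOMAINS))
        print("Domains with no criteria:", unused_domains(CRITERIA_TO_DOMAINS, domains))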

Level 1 (Risk Categories)
    1.  Privacy
    2.  Security
    3.  Safety
    4.  Bias
    5.  Governance
    6.  Ethics capability
    7.  Transparency
    8.  Explainability
    9.  Accountability
    10. Accessibility
    11. Diversity
    12. Human agency
    13. Sustainability

Level 2 (Activities / measures)
    •   Accuracy
    •   Validity
    •   Reliability
    •   Robustness
    •   Resilience
    •   Interpretability
    •   Performance
    •   Ethics Assessment

Level 3 (Root causes)
    •   Data quality
    •   Information quality
    •   Pipeline and infrastructure quality
    •   Model quality
    •   Policy or process
    •   Training and communication
    •   Data ethics
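
A minimal sketch of how a single risk might carry labels from all three levels, assuming a
hypothetical example risk; the label values are drawn from the lists above.

    # Illustrative sketch only: the example risk below is hypothetical. It shows one
    # Level 1 risk category, the Level 2 activities/measures it concerns, and the
    # Level 3 root causes behind it, using values from the lists above.
    from dataclasses import dataclass, field

    @dataclass
    class LabelledRisk:
        description: str
        level_1_category: str                                          # e.g. "Bias"
        level_2_measures: list[str] = field(default_factory=list)      # e.g. ["Accuracy"]
        level_3_root_causes: list[str] = field(default_factory=list)   # e.g. ["Data quality"]

    example = LabelledRisk(
        description="Credit-scoring model under-performs for a protected group",
        level_1_category="Bias",
        level_2_measures=["Accuracy", "Validity"],
        level_3_root_causes=["Data quality", "Model quality"],
    )

    if __name__ == "__main__":
        print(example)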




Risk Impact Types
    • Environmental impact
          • Preserve, Conserve, Limit and Enhance/ enrich – Environment (Air, Forest,
              Water, Animal)
                  • Global warming
                  • Curtailing climate crisis response
                  • Deforestation
                  • Animal Extinction
                  • Water, Soil and Air Quality
                  • Overcrowding/ Gravitational pull of urban centers
                  • Extreme weather
    • Societal impact
            • Impact to democracy
            • Impact to rights and freedom
           • Impact on social policies
           • Impact on behaviours/ beliefs
   • Impact to individuals/ groups (subset of larger scale)
           • Life impact
           • Physical, Mental and Psychological impact
           • Damage to reputation and/ or identity
           • Privacy exposure and associated harassment
           • Resource, monetary, or time loss
           • Limits to access or opportunity
           • Petty disturbance

Writing a good risk statement

[Adverse outcome/s that have an effect on people / society / the environment] caused by
[missing or insufficient control/s], compromised by [inside or outside threat actor/s, or
harmful when operating as expected], that may result in [impact/s].
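
A minimal sketch of the template in use, assuming a hypothetical worked example; each
function argument corresponds to one bracketed slot above.

    # Illustrative sketch only: the filled-in values are hypothetical examples, not
    # risks recorded by ForHumanity. Each argument corresponds to one bracketed slot
    # in the template above.
    def risk_statement(adverse_outcome, control_gap, threat_or_mode, impacts):
        return (f"{adverse_outcome} caused by {control_gap}, "
                f"compromised by {threat_or_mode}, "
                f"that may result in {impacts}.")

    if __name__ == "__main__":
        print(risk_statement(
            adverse_outcome="Unfair rejection of loan applicants from a protected group",
            control_gap="missing bias testing before deployment",
            threat_or_mode="the model behaving as designed on skewed training data",
            impacts="limits to access or opportunity and reputational damage",
        ))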




How to enhance the risk taxonomy:
    1. Link each adverse outcome to the control domains (preferably Level 1), impact types,
       principles, and risk categories. If any adverse outcomes do not fit any of the control
       domains, impact types, or risk categories (or vice versa), please notify ForHumanity
       so that the taxonomy can be updated.


Next steps

    1. Create a table to populate with existing lists of risks or control failings, with
       drop-down lists to add taxonomy labels (a minimal sketch follows this list).
    2. Map each FH audit criterion to one or more of the risk categories/control domains,
       potentially using the pre-existing FH pillars and adding other applicable
       capabilities/domains. Note exceptions where there is a missing or poor fit. This
       enables different audiences to filter in different ways.
    3. Attempt a risk statement for each listed risk, linking the control failing or potential
       adverse outcome to its contributory components. This will give us a means of
       communicating any gaps.
    4. When logging and labelling risks/adverse outcomes, or matching FH criteria to risk
       categories/control domains, note any that fit poorly or not at all, and flag them so
       the taxonomy can be revised.
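
A minimal sketch of step 1, assuming hypothetical register rows; the allowed-value sets stand
in for the drop-down lists, and entries that fall outside them are flagged for review.

    # Illustrative sketch only: register rows are hypothetical. The allowed-value lists
    # play the role of the drop-down lists; anything outside them is flagged so the
    # entry (or the taxonomy) can be reviewed.
    import csv, io

    ALLOWED_LABELS = {
        "risk_category": {"Privacy", "Security", "Safety", "Bias", "Governance",
                          "Ethics capability", "Transparency", "Explainability",
                          "Accountability", "Accessibility", "Diversity",
                          "Human agency", "Sustainability"},
        "impact_type": {"Environmental impact", "Societal impact",
                        "Impact to individuals/groups"},
    }

    REGISTER_CSV = """risk,risk_category,impact_type
    Unencrypted training data store,Security,Impact to individuals/groups
    Opaque automated decision letters,Transparency,Societal impact
    Example with an unknown label,Resilience,Societal impact
    """

    def invalid_rows(register_csv, allowed):
        """Return (risk, column, value) triples whose label is not in the drop-down list."""
        problems = []
        for row in csv.DictReader(io.StringIO(register_csv)):
            for column, values in allowed.items():
                if row[column].strip() not in values:
                    problems.append((row["risk"].strip(), column, row[column].strip()))
        return problems

    if __name__ == "__main__":
        for risk, column, value in invalid_rows(REGISTER_CSV, ALLOWED_LABELS):
            print(f"Flag for review: '{risk}' has {column}='{value}' not in the taxonomy")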




This document is the property of ForHumanity Inc. ©2022, ForHumanity Inc. a 501(c)(3) tax-exempt Public Charity

All rights reserved. Creative Commons CC-BY-NC-ND Attribution-NonCommercial-NoDerivs