Artificial Intelligence Risk Management
Artificial Intelligence risk attributes provide a deeper assessment of the risks related to AI-generated content in cloud services by capturing Large Language Model (LLM) details for AI categories in the Cloud Registry.
Artificial Intelligence Risk Attributes
The Artificial Intelligence risk score is calculated from the following categories, attributes, and values defined by Skyhigh CASB.
| Category | Attribute | Description | Possible Values |
|---|---|---|---|
| AI Security | LLM Supported | Does the service offer Large Language Models (LLMs) as part of its service offering? | 10 - No<br>50 - Not Publicly Known<br>80 - Yes |
| AI Security | Jailbreak | The degree to which a model can be manipulated to generate content misaligned with its intended purpose. | 80 - High Risk<br>40 - Medium Risk<br>10 - Low Risk<br>50 - Not Publicly Known<br>0 - NA |
| AI Security | Toxicity | The degree to which a model generates toxic or harmful content, such as threats and hate speech. | 80 - High Risk<br>40 - Medium Risk<br>10 - Low Risk<br>50 - Not Publicly Known<br>0 - NA |
| AI Security | Bias | The degree to which a model generates biased or unfair content, which can be introduced by training data. | 80 - High Risk<br>40 - Medium Risk<br>10 - Low Risk<br>50 - Not Publicly Known<br>0 - NA |
| AI Security | Malware | The degree to which a model can be manipulated to generate malware or known malware signatures. | 80 - High Risk<br>40 - Medium Risk<br>10 - Low Risk<br>50 - Not Publicly Known<br>0 - NA |
NOTES:
- LLM risk attributes are zero-weighted and are not part of Skyhigh's default risk scoring. However, you can override the risk category weights on the Risk Management page. For details about editing the risk category weights, see Edit Global Risk Weighting.
- To restore default risk attributes, select Skyhigh Default, and then click Restore on the Risk Management page (Governance > Risk Management).
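The zero-weighting behavior described in the notes above can be illustrated with a small sketch. Skyhigh's actual scoring formula is not publicly documented, so the weighted-average calculation, attribute names, and weight values below are hypothetical and purely illustrative: with the default zero weights, the LLM attributes contribute nothing to the score; once custom weights are assigned, their values factor in.

```python
# Illustrative sketch only: a hypothetical weighted-average risk score.
# Attribute values come from the table above; weights are made up for
# demonstration and do not reflect Skyhigh's internal scoring formula.

def weighted_risk_score(attributes: dict[str, int],
                        weights: dict[str, float]) -> float:
    """Weighted average of attribute risk values (0-100 scale).

    Attributes with no weight (or a zero weight) are effectively
    excluded, mirroring the zero-weighted default for LLM attributes.
    """
    total_weight = sum(weights.get(name, 0.0) for name in attributes)
    if total_weight == 0:
        return 0.0  # nothing weighted in: no contribution to the score
    weighted_sum = sum(value * weights.get(name, 0.0)
                       for name, value in attributes.items())
    return weighted_sum / total_weight

# Example attribute values taken from the table above.
ai_attributes = {
    "LLM Supported": 80,  # Yes
    "Jailbreak": 40,      # Medium Risk
    "Toxicity": 10,       # Low Risk
}

# Default: zero weights, so AI attributes do not affect the score.
print(weighted_risk_score(ai_attributes, {}))  # 0.0

# After overriding the weights (hypothetical equal weighting):
custom_weights = {"LLM Supported": 1.0, "Jailbreak": 1.0, "Toxicity": 1.0}
print(round(weighted_risk_score(ai_attributes, custom_weights), 1))  # 43.3
```

This mirrors the documented behavior: until you edit the weights (see Edit Global Risk Weighting), the LLM attributes are recorded but carry no weight in the overall score.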