Authors: Savita Arya 1; Bharadwaja K 2; Lavanya Addepalli 3; Vidya Sagar S D 4; Jaime Lloret 3 and Bhavsingh Maloth 5
Affiliations: 1 St. Joseph's Degree and PG College, Hyderabad, India; 2 St. Ann's College for Women, Hyderabad, India; 3 Universitat Politècnica de València, Spain; 4 NITTE Meenakshi Institute of Technology, Bangalore, India; 5 Ashoka Women's Engineering College, Kurnool, India
Keyword(s):
Corporate Governance-Aware AI, Ethical Decision-Making, Trust Modelling, Explainable AI, Autonomous Systems, Responsible AI Deployment
Abstract:
As organizations rapidly adopt autonomous decision-making systems, aligning those systems with stakeholder ethics and building trust and compliance through governance has become an increasingly important challenge. In this paper we propose the GRAID Framework (Governance-Risk-Aligned Intelligent Decisioning), a novel multi-layered architecture that integrates AI decision-making with formal governance rules, ethical constraints, and trust modelling. We introduce a multi-objective loss function that penalizes ethical violations, governance risks, and trust deviations in real time, combined with a constraint-aware neural learning procedure that guards against constraint violations. To validate the framework, we developed a comprehensive synthetic dataset that simulates enterprise decisions in the HR, finance, and procurement domains. Experimental results show that while GRAID achieves competitive accuracy (81.5%), it outperforms baseline models such as Logistic Regression and Decision Trees in ethical compliance (+19.7%), governance risk reduction (−59.1%), and stakeholder trust (+36.9%). These findings indicate that GRAID is a robust, ethically aligned AI solution for enterprise-level autonomy in regulated environments.
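The multi-objective loss described in the abstract can be sketched as a weighted sum of a task loss and three penalty terms. This is a minimal illustrative sketch, not the paper's actual formulation: the function name, the individual penalty terms, and the weights are all assumptions introduced here for clarity.

```python
def graid_loss(task_loss: float,
               ethics_violation: float,
               governance_risk: float,
               trust_deviation: float,
               weights: tuple = (1.0, 0.5, 0.5, 0.5)) -> float:
    """Hypothetical multi-objective loss in the spirit of GRAID.

    Combines the base task loss with real-time penalties for ethical
    violations, governance risks, and trust deviations. The weight
    values are illustrative placeholders, not taken from the paper.
    """
    w_task, w_eth, w_gov, w_trust = weights
    return (w_task * task_loss
            + w_eth * ethics_violation
            + w_gov * governance_risk
            + w_trust * trust_deviation)
```

In a constraint-aware training loop, such penalty terms would be computed per batch from rule-checker outputs and added to the gradient signal, so that the optimizer trades raw accuracy against governance and trust objectives.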