From AI compliance to competitive advantage
Becoming responsible by design

About the research: We surveyed 850 C-suite executives across 17 geographies and 20 industries to understand their attitudes towards AI regulation and assess their readiness to embrace it. The following paper outlines our findings, paired with empirical learnings about how best to prepare for the regulation of AI.

Part 1: It pays to be responsible by design

Responsible by design
Being responsible by design means that organizations understand the importance of incorporating Responsible AI into their data and AI strategy from the start. They operate a responsible data and AI approach across the complete lifecycle of all of their models, enabling the organization to engender trust and scale AI with confidence.

The rewards of responsibility
There is a clear incentive to accelerate AI transformation, but the increasing regulatory attention on AI means that organizations need to proceed carefully. Being responsible by design can help them scale AI while better mitigating risks, meeting regulatory requirements, and creating sustainable value for themselves and their stakeholders.

In a recent report, The Art of AI Maturity, Accenture identified a small group (12%) of high-performing organizations that are using AI to create differentiated growth and outcomes[i]. These "AI Achievers" are already generating 50% more revenue growth than their peers, and they outperform other groups on customer experience (CX) and Environmental, Social and Governance (ESG) metrics[ii]. To understand what these Achievers are doing right, we built an AI model to examine their behaviors and pinpoint their key performance indicators. Among other success factors that have a combinatorial impact on business results, these Achievers are responsible by design: they are, on average, up to 53% more likely than others to apply Responsible AI practices from the start, and at scale[iii]. We also learned that the share of companies' revenue that is "AI-influenced" more than doubled between 2018 and 2021 and will likely triple by 2024[iv]. If Responsible AI is treated as an afterthought, not only are these benefits less likely, but organizations could end up causing real harm to (and eroding the trust of) workers, consumers and society.

The role of regulation
Today, only 35% of consumers trust how organizations are implementing AI[v]. Regulation is one way to help address that trust deficit.

Governments and regulators are considering how to supervise and set standards for the responsible development and use of AI. Countries such as the UK, Brazil, and China are already taking action, either by evolving existing requirements related to AI (for example, in regulation such as GDPR), or through the development of new regulatory policy. The...