How stakeholder capitalism and the ethics of AI go hand in hand
At the 2020 meeting of the World Economic Forum in Davos, Salesforce founder Marc Benioff said “capitalism as we have known it is dead.” In its place is stakeholder capitalism, a form of capitalism that Klaus Schwab, founder of the World Economic Forum, has championed for the past 50 years. As Benioff put it, stakeholder capitalism is “a fairer, more just, more equitable and more sustainable way of doing business that values all stakeholders, as well as all shareholders.”
Unlike shareholder capitalism, which is measured primarily by the monetary profit generated for a company’s shareholders alone, stakeholder capitalism requires that business activity benefit all stakeholders associated with the company. These stakeholders can include shareholders, employees, customers, the local community, the environment, and more. For example, Benioff’s approach counts homeless people in San Francisco as Salesforce stakeholders.
While supporters of stakeholder capitalism have been working on the idea for some time, an important milestone was reached in early 2021. Following discussions at the 2020 meeting chaired by Bank of America CEO Brian Moynihan, a formal set of ESG (environmental, social, and corporate governance) metrics that companies can report against was announced, organized around four pillars:
- Principles of governance
- Planet
- People
- Prosperity
These metrics are important because they make it easy to audit a company’s compliance with the principles of stakeholder capitalism.
Given the role that technology plays within businesses, it is impossible to ignore the growing impact that artificial intelligence will have on society and the parallels to the discussion of stakeholder capitalism. Many companies are moving from a goal of pure profit to the more inclusive and responsible goals of stakeholder capitalism. In AI, we are at the start of a similar transition – from maximizing pure accuracy to more inclusive and responsible goals. Indeed, given the prevalence of AI technologies in businesses, they will become essential elements of stakeholder capitalism.
IBM CEO Ginni Rometty was also present at the 2020 meeting and, when asked about stakeholder capitalism in the context of the Fourth Industrial Revolution, said it would be “the decade of trust.” It is essential that all stakeholders have confidence in companies and the technologies they use. When it comes to AI, Rometty said it is important to have a set of ethical principles (such as transparency, bias mitigation, and explainability) and to audit your business against them.
Not all organizations will have embraced the principles of stakeholder capitalism as openly and publicly as Benioff’s Salesforce. However, companies still have traditional CSR (corporate social responsibility) requirements, and in the context of AI, existing and proposed regulation contains themes similar to those discussed at the World Economic Forum meeting.
Shortly after the ESG metrics of stakeholder capitalism were announced in January this year, the US Department of Defense announced its Ethical Principles for AI in February. The European Union followed with a proposed AI regulation in April (which affects businesses both inside and outside the EU), and the UK published its guidance on the ethics of algorithmic decision-making in May. Look at these announcements (and the Algorithmic Accountability Act of 2019 in the US) and you will see many common requirements, including governance, transparency, and fairness – requirements that clearly align with the goals and metrics of stakeholder capitalism.
So, a little over a year into this decade of trust, what should businesses do? IBM has introduced the role of Chief AI Ethics Officer, and Deloitte has detailed what the role entails. Not every company will be ready for this role yet, but they can start by documenting their ethical principles. As Rometty pointed out, it is important to know what you stand for as a business. What are your values? Those values lead to a set of ethical principles, which in turn may lead you to create your own ethical AI framework (or adopt an existing one) for your business.
Again, drawing a parallel with the ESG metrics announced in January that shift stakeholder capitalism from rhetoric into verifiable action, you need to go beyond statements of principle: test and audit your AI systems against your framework, demonstrating compliance (or lack thereof) with hard metrics.
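As an illustration of what a hard metric might look like in practice, the sketch below computes a demographic parity gap over a model’s predictions and checks it against a policy threshold. The function name, the data, and the threshold are all hypothetical – the article does not prescribe any particular metric or framework.

```python
# Illustrative audit check: compare positive-prediction rates across two
# groups and fail the audit if the gap exceeds a policy threshold.
# All names, data, and the 0.2 threshold here are hypothetical examples.

def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    a, b = rates.values()
    return abs(a - b)

# Hypothetical binary predictions for applicants from groups "a" and "b".
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> gap of 0.50

POLICY_THRESHOLD = 0.2  # hypothetical limit set by the ethics framework
if gap > POLICY_THRESHOLD:
    print("Audit FAILED: disparity exceeds the policy threshold")
```

Running a check like this on every model release is one concrete way to turn an ethical AI framework into the kind of verifiable, metric-backed compliance the article describes.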
Strong and verifiable AI ethics should not be seen as contradicting your business goals. As Rometty said, “It’s not good for anyone if people don’t find the digital age to be an inclusive age where they see a better future for themselves.” Effective governance of AI ethics benefits all stakeholders – and that includes shareholders.
Stuart Battersby is Chief Technology Officer of Chatterbox Laboratories, where he leads a team of scientists and engineers who built the AI Model Insights platform from scratch. He holds a PhD in Cognitive Science from Queen Mary, University of London.