AI ESG Ethical Dangers: A Slippery Slope

By James Brusseau, Skytop Contributor / September 7th, 2021 

 

James Brusseau (PhD, Philosophy) is author of books, articles, and media in the history of philosophy and ethics. He has taught in Europe, Mexico, and currently at Pace University near his home in New York City. He explores the human experience of artificial intelligence in the areas of privacy, autonomy, and authenticity, and as applied to finance, healthcare, retail, and media. In cooperation with the Frankfurt (Germany) Big Data Lab, he has produced ethical evaluations of AI startups, and his current applied ethics project is AI Human Impact: AI-intensive companies rated in human and ethical terms for investment purposes.


Environmental, Social and Governance (ESG) investing was forged for the industrial economy with its polluting machines. Companies driven by artificial intelligence do not fit in. Part of the divergence is material – cement and smokestacks diverge from pixels and digital exhaust – but the significant difference is human. When Henry Ford promised customers they could have any color they wished so long as it was black, he was not proposing a color acceptable to every purchaser so much as eliminating individuality from purchasing. Ford did not want to know about customers’ unique preferences. The personal information was even counterproductive because making vehicles profitably depended on construing humans as monochromic and interchangeable, like the units rolling off the assembly line.  

The big data and predictive analytics economy reverses that logic. It runs on personalization. Netflix does not aspire to generic movie recommendations for homogenized demographic groups; it aims specific possibilities at individual viewers at targeted moments. The burgeoning field of dynamic insurance does not cover population segments over extended durations; it customizes coverage for unique clients and intervenes at critical junctures in their unrepeatable lives. AI coronary healthcare is less concerned with a patient’s age cohort than with tiny, personal heartbeat abnormalities that escape human eyes but not machine-learned analysis. In every case, identifying personal information propels innovation for companies employing machine learning at the core of their operations.

The propellant explains why privacy concerns have surged in public conversations and corporate meeting rooms. It also means that within AI-human interaction, the most tangible perils are not captured well by standard ESG criteria. They are not measured as environmental toxins or institutional corruption or poverty. Instead, the risk is our own dataset. It is the information defining who we are – our habits, anxieties, beliefs and desires – that may be engineered to provide gratifying experiences and opportunities, but that can also be twisted to control where we go and what we do. 

The paradigmatic case is predictive policing because of the question it asks: Is my data liberating, or confining? Will the personal information gathered about me invigorate my life, or restrict it? Whether the AI is stationed at an airport security kiosk, or on the LinkedIn career platform, or the Tinder romance site, or behind the screen of an Amazon purchase recommendation, or underlying a mortgage loan decision, or inside a hospital emergency room, the question is the same.  

The consequences are significant for investors because the meaning of responsible finance is tilting: the gathering risks – and opportunities – of big data are increasingly concentrating around the effects of decreasingly private information. According to the Geneva Association for the Study of Insurance Economics, continuous collection and analysis of behavioral data will allow dynamic risk assessment and a constant feedback loop between insurance providers and policyholders. Not only will digital monitoring enhance risk measurement, it will also provide real-time insights into the insured’s behavior, along with tailored incentives for risk reduction. Already today, these AI-powered enterprises are rolling out as peer-to-peer concepts (Bought by Many) and fully digital insurers (Oscar, InShared, Haven Life or Sherpa). What does this mean for users in real life? It means a skier standing atop a double black diamond run may wrestle with her vitality and her fear as she decides whether to descend, and, in the midst of the uncertainty, receive a text message reporting that her health insurance premiums will rise if she goes for the thrill.

Which is good and bad. AI-powered insurance does increase autonomy and self-determination by providing clients more control over their policies: they can literally raise and lower their own prices. But the reason we have health insurance in the first place is so that we can take risks, like skiing the double black diamond run, and it’s easier to go downhill when insurers aren’t monitoring and hectoring in the background. So, does dynamic AI increase freedom in our real lives, or constrict it?  

Moving on to the financial level and to investment decisions, there is a corresponding question: Are ethically engaged participants encouraged to invest in these companies because they succeed on the human level by providing insurance policy choices? Or is the dynamic insurance industry to be rejected as antihumanist because it frustrates our opportunities to experiment with our own lives? There is no clear answer.

What is certain is that the discussion is very different from the ones we have grown accustomed to associating with ESG finance. Instead of carbon production and the exploitation of laborers, there is big data, and there are users asymmetrically matched against the psychological force of predictive algorithms. The larger conclusion is that what makes AI humanism different from traditional ESG finance – and what requires a new and distinct model for ethical investing – is evaluation that begins with autonomy, unique persons, and our intimately identifying information.

None of this means that traditional ESG is outmoded or that the challenges it confronts have subsided, but as data and algorithms increasingly command the economy, responsible investors will need to adapt, just like the fossil fuel behemoths and sweatshop manufacturers of the previous generations.
