Nonfinancial Performance of AI Intensive Companies: A Decentralized Approach

By James Brusseau, Skytop Contributor / January 25th, 2022 

James Brusseau (PhD, Philosophy) is author of books, articles, and media in the history of philosophy and ethics. He has taught in Europe, Mexico, and currently at Pace University near his home in New York City. He explores the human experience of artificial intelligence in the areas of privacy, autonomy, and authenticity, and as applied to finance, healthcare, retail, and media. In cooperation with the Frankfurt (Germany) Big Data Lab, he has produced ethical evaluations of AI startups, and his current applied ethics project is AI Human Impact: AI-intensive companies rated in human and ethical terms for investment purposes. 


Complaints About Conventionally Derived ESG Ratings

Rating companies’ nonfinancial performance is now mainstream, commonly segmented as environmental, social, and governance (ESG) finance. Conventionally, ESG ratings are derived from corporate reports, due diligence questionnaires, and extensive human labor. The specific metrics normally derive from the United Nations Sustainable Development Goals (UN SDGs) or the Sustainability Accounting Standards Board (SASB). Regardless of the framework, the process is arduous, as teams of human analysts review voluminous and disparate information. Mistakes are unavoidable, and complaints about inconsistent scoring across ratings agencies are common.

To Decentralize and Objectify the Scoring 

To speed and normalize the process, innovators including Databricks, ESG Analytics, and Truvalue Labs (recently acquired by FactSet) are experimenting with natural language processing and machine learning. The common goal is to decentralize and objectify the scoring: instead of insular organizations analyzing a few preferred data sources, vast public information is gathered, organized, and rendered comprehensible by artificial intelligence. And instead of fallible human evaluation, the scoring is accomplished by the steady, constant workings of algorithms.

Divergence Between Traditional Practices and Artificial Intelligence 

At the same time that the methods of performing ESG ratings are diverging between traditional analysis and machine learning, the larger economy is splitting along the same line. Increasingly, our largest companies, and also our smaller but most ambitious, are driven by artificial intelligence. From Facebook and Amazon to dynamic insurance companies like Lemonade and healthcare innovators like Cardisio, our lives are increasingly subject to big data and algorithms. Consequently, the nonfinancial worries of the industrial economy are giving way to humanist technological preoccupations: concerns about factory labor conditions and toxic waste yield to debates about filter bubbles and privacy violations. Instead of the seventeen United Nations Sustainable Development Goals, what matters now is whether technologies promote human autonomy and protect user privacy, and whether they contribute to the values outlined in recent benchmark documents, including the European Commission’s Ethics Guidelines for Trustworthy AI and the Opinion of the German Data Ethics Commission.

Shifting Nonfinancial Corporate Performance Metrics 

The takeaway is that the metrics of nonfinancial corporate performance are shifting quickly as the fourth industrial revolution gathers steam. What has not changed, however, is the duality of approaches. For investors seeking nonfinancial performance information about companies operating in the AI economy, there are two kinds of sources. There is the traditional, top-down approach where expert human analysts grind through corporate reports and solicit feedback in due diligence questionnaires. And there are efforts modeled on the machine learning approach to traditional ESG investing. This strategy works from the ground up by using natural language processing to analyze voluminous public information.

The Difference Between Centralized and Decentralized Rating of AI-intensive Companies

Centralized AI rating works from the top down, from experts and their determinations down to users and their actions. An illustrative example emerged from the Frankfurt (Germany) Big Data Lab in 2020. A PhD-level team of philosophers, computer scientists, doctors, and lawyers united to approach AI-intensive startup companies in the field of medicine and to collaboratively explore the ethical aspects of their technological development. The group’s work began with lengthy deliberations guided by AI ethics principles and eventually concluded with case-study reports. These have been published in academic journals, where they may be accessed by ratings agencies and converted into a snapshot of a technology’s humanist, as opposed to purely financial, profile. So the process starts with high-level experts and works down to investors along a timeline of months and years.

Decentralized AI evaluation starts from the ground up. Instead of expert minds and high-level discussions, the process begins with common and public information. Companies routinely publish quarterly statements, which may include details about installed privacy protections or efforts to ensure that their products work fairly across diverse populations. There are also news reports and investigative journalism, which may reveal a platform’s sloppy privacy safeguards or oppressive censorship practices. Then there is the endless flow of social media, where users relate their own experiences with AI in medicine, banking, insurance, and entertainment. Regardless of the domain, the process starts with voluminous and accessible information about real technologies functioning for tangible human beings in diverse circumstances.

The second element of decentralized AI evaluation is universal scoring. Instead of a boutique service offered at certain times to score a select group of companies and products, the gauging of a technology’s nonfinancial performance occurs always and everywhere. Textual machine learning relentlessly filters unstructured public data for indicators that reveal how specific technologies are affecting human lives. And the process scales. Scoring any company’s nonfinancial performance becomes automatic: AI continuously applies AI humanist metrics to AI-intensive companies.
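The continuous, scalable scoring described above can be caricatured in a few lines of code. The sketch below is purely illustrative: the company names, indicator phrases, and the keyword "classifier" are all invented for this example, standing in for the trained language models a real system would use. What it shows is the structural idea of folding an unending stream of public text snippets into running per-company scores.

```python
from collections import defaultdict

# Hypothetical indicator phrases; a production system would use a trained
# NLP model rather than keyword lists.
POSITIVE = {"privacy protection", "fairness audit", "transparent"}
NEGATIVE = {"data breach", "dark pattern", "discriminatory"}

def score_snippet(text: str) -> int:
    """Toy stand-in for an NLP classifier: +1 per positive indicator,
    -1 per negative indicator found in the text."""
    text = text.lower()
    score = sum(1 for kw in POSITIVE if kw in text)
    score -= sum(1 for kw in NEGATIVE if kw in text)
    return score

def aggregate(stream):
    """Fold (company, snippet) pairs into running nonfinancial scores."""
    totals = defaultdict(int)
    for company, snippet in stream:
        totals[company] += score_snippet(snippet)
    return dict(totals)

# Invented example feed mixing quarterly-report and news-style snippets.
feed = [
    ("AcmeAI", "Quarterly report highlights new privacy protection tools."),
    ("AcmeAI", "Regulators investigate a data breach at AcmeAI."),
    ("BetaMed", "Independent fairness audit published for BetaMed's model."),
]
print(aggregate(feed))  # {'AcmeAI': 0, 'BetaMed': 1}
```

Because the aggregation is a simple fold over a stream, the same structure scales to any number of companies and sources, which is the point of "universal scoring": the pipeline never needs to know in advance which companies it will rate.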

The Challenge of Training Artificial Intelligence 

Training AI for this task is a challenge. Nevertheless, initial efforts began in September 2021 at the University of Trento, Italy. The project builds on six ethical principles widely recognized as well-tailored to the interface between artificial intelligence and human experience: individual autonomy, individual privacy, social wellbeing, social fairness, technological performance, and technological accountability. The idea is that machine learning can continuously and broadly scour public data and detect whether an AI is serving or detracting from these principles. The aspiration is that instead of human experts occasionally analyzing a specific technology, we have ethical information about every mainstream digital technology flowing all the time.
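The six principles give the detection task its shape: each piece of public text is scored against each principle separately. The following is a minimal sketch of that structure, with indicator lexicons invented for illustration; the Trento project's actual methods are not public in this detail, and a real system would rely on trained models rather than keyword matching.

```python
# Hypothetical per-principle indicator lexicons (illustrative only).
PRINCIPLES = {
    "autonomy":       {"user control", "opt out", "informed consent"},
    "privacy":        {"encryption", "data minimization", "anonymized"},
    "wellbeing":      {"accessibility", "public benefit"},
    "fairness":       {"bias audit", "diverse populations"},
    "performance":    {"clinically validated", "benchmark"},
    "accountability": {"external audit", "redress", "explainable"},
}

def principle_scores(text: str) -> dict:
    """Count indicator hits per principle in a piece of public text."""
    text = text.lower()
    return {p: sum(kw in text for kw in kws) for p, kws in PRINCIPLES.items()}

report = ("The startup's diagnostic tool is clinically validated, uses "
          "data minimization by design, and underwent an external audit.")
hits = principle_scores(report)
print({p: n for p, n in hits.items() if n})
# {'privacy': 1, 'performance': 1, 'accountability': 1}
```

The per-principle breakdown matters for the financial use case: an investor can weight, say, privacy and fairness differently from raw technological performance, rather than receiving a single opaque number.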

The Implementation of Decentralized AI Evaluation 

The third aspect of decentralized AI evaluation is implementation. As a financial tool, the premise is that sustainable economic success will accrue to those companies and technologies that serve human purposes as opposed to nudging, manipulating, or exploiting users. Concretely, humanist AI is technology that supports user autonomy, ensures data privacy, operates fairly, contributes to social wellbeing, and performs well and with accountability. These ethical qualities both cause and predict economic profit. Better still, the prediction becomes increasingly confident as accurate information about the ethical performance of technologies becomes more accessible.

Translating Machine Learning Analyses For Everyone  

While still in its formative stages, the aim of the project in Trento, as it extends to the AI Human Impact platform, is to translate the findings of machine learning analyses into results that are open to all, and into a format that is meaningful in financial terms. The result will be that individuals, including those managing their own money through fintechs, can directly and intelligently locate and then invest in those AI-intensive companies that are most promising because they are human-centered. So, as opposed to a top-down approach where a narrow range of experts and regulators shape our collective technological future by deciding what everyone else is allowed to do, the way that diverse people freely respond to the technology they find around them now shapes investment guidance, and future innovation.

The Trajectory of Future Investment and Development 

Measuring the nonfinancial performance of AI-powered companies can be accomplished through decentralized AI ethics. This means, first, that the source of the evaluation is not narrow human experts so much as common, public data. Second, the evaluation does not occur through arduous human discussion, but in real time, constantly updated by natural language processing and machine learning. Third, the implementation of humanist standards is not executed through governmental or regulatory authorities, but by independent users making informed, personal decisions that collectively shape the trajectory of future investment and development.
