TAKING STOCK OF THE FOURTH INDUSTRIAL REVOLUTION: AI IS TAKING HOLD

By Richard Howitt, Contributing Author / May 9, 2023

Richard’s background spans three decades as a strategic thinker who integrates innovation into organisational practice. A Member of the European Parliament for 22 years and its Rapporteur on Corporate Social Responsibility, he led the EU’s Non-Financial Reporting Directive, recognized as the world’s foremost legislation on corporate transparency, before moving on to new challenges.

These include his work as CEO of the International Integrated Reporting Council, with the Task Force on Climate-related Financial Disclosures, as Advisor to the UN Global Compact, as a member of the European Commission SDG Platform, and with the UN Guiding Principles on Business and Human Rights Reporting Framework Eminent Persons’ Group.

Richard is recognized as a Sage Top 100 Global Business Influencer and a Thomson Reuters ‘Top 30’ Influencer in Risk, Compliance and Regtech. He is a member of the B20 International Business Leaders’ Group and its Climate and Resource Efficiency Task Force, and currently serves as a Strategic Advisor on Corporate Responsibility and Sustainability and as a Senior Associate at the law firm Frank Bold LLC.


4IR at Lightning Speed 

Are developments in the Fourth Industrial Revolution (4IR) accelerating at the very lightning speed that 4IR itself embodies, with machines rapidly developing the capacity to learn and to act in the physical world?

Or are striking warnings about the dangers of artificial intelligence, combined with the current economic slowdown and some high-profile crypto collapses, taking it off the executive table for now?

Either way, what are the latest implications for responsible business conduct?  

These are the intriguing questions which companies are beginning to ask themselves, not least in response to the open letter signed in March by hundreds of prominent artificial intelligence experts, tech entrepreneurs and scientists, calling for a six-month pause in the development of ever more powerful A.I. systems.

Generative AI 

In particular, this refers to ‘generative A.I.’, which uses models trained on massive datasets (“large language models”) to produce novel content, including human-like conversation (chatbots), text, videos, audio files, images and even code.
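For readers who have not yet experimented with these systems, the sketch below shows, in a few lines of Python, what ‘generating content’ looks like in practice. It is illustrative only: it assumes the openly available Hugging Face transformers library and the small GPT-2 model, and is not a representation of any particular commercial chatbot.

    # Illustrative only: generate a short continuation of a prompt with a small,
    # openly available language model (assumes the "transformers" and "torch"
    # Python packages are installed).
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    prompt = "The Fourth Industrial Revolution will change business because"
    output = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    print(output[0]["generated_text"])

Larger systems work on exactly the same principle, simply with vastly more parameters and training data.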

However, the near impossibility of enforcing a moratorium, the technological determinism which means it is probably impossible to halt progress in any case, and the countervailing arguments in favour of the benefits of A.I. all make such a pause unlikely.

When I wrote previously for Skytop Strategies on this issue two years ago, I argued for the imperative to make progress and suggested ethical issues could best be managed by the development of a framework for Responsible Technology Governance.  

Since then, regulatory responses have begun to emerge in both the United States and Europe. 

Moreover, there is now deeper thinking about what it means for the technological advances to be human-centric and to reflect human values.  

For advocates of Environmental, Social and Governance (ESG) in companies, it can be argued that the importance of 4IR lies not simply in its huge potential to address sustainability challenges, but in the shift to transformational thinking which 4IR represents.

Apocalypse Now…  

Let us start by addressing the prophets of doom who suggest that the development of autonomous, thinking machines could be catastrophic, or even existential, for the human race.

Any argument made by personalities as different as entrepreneur Elon Musk and the late scientist Professor Stephen Hawking should not be dismissed lightly.

The signatories to the open letter suggest that the technology may become “human competitive”, whilst others forecast a time when machines become more powerful than humans (“Superhuman A.I.”) as soon as the end of this century.  

Google’s director of engineering goes further, predicting that artificial intelligence will surpass human intelligence by 2029 and that humans will merge their intelligence with artificial intelligence (“singularity”) by 2045. 

The most extreme scenarios concern the possible development of autonomous weaponry, designed to be the next generation of military firepower, but where unstoppable fighting machines could turn on and quite literally destroy their makers.  

Next Arms Race  

In the current time of conflict and fracturing of relations between major superpowers in the world, such fears are bound to be taken seriously. 

It could be the next arms race.  

This all echoes the famous “I’m sorry, Dave, I’m afraid I can’t do that” moment from the supercomputer HAL in the film ‘2001: A Space Odyssey’, in which, already more than fifty years ago, director Stanley Kubrick foresaw what he called the “inevitability” of super-intelligent machines acquiring their own emotional identities – and frailties.

However, perhaps the most apt comparison is with nuclear technology, where the capacity to destroy billions now exists, but where catastrophe has so far been averted by a combination of responsible use of the technology, international mechanisms of governance and sheer naked self-interest.

On the positive side, nuclear fusion remains a probably distant but still promising route to carbon-free energy, whilst today’s civil nuclear power already provides an arguably indispensable, low-carbon contribution to energy security in many countries.

Look at the anti-nuclear manifesto issued by Bertrand Russell, Albert Einstein and other leading scientists on 9 July 1955, and you will find deep parallels with the A.I. open letter of March 2023.

What all this suggests is that scientific warnings need to be properly considered, that citizen and civil society engagement is necessary in any system of accountability, and that regulatory and transnational governance instruments will ultimately be needed to manage the consequences of technological advance.

However, it also suggests that attempts to abandon such technology are both futile and (arguably) undesirable.  

Industry and Regulatory Tools 

Indeed, some argue that the ‘apocalyptic’ nature of the debate might actually be a distraction from developing the industry and regulatory tools needed to manage the advances.  

Skeptics suggest this might even be a deliberate attempt by some actors in the field to protect their own existing applications from competition for commercial reasons. 

In any case, this diversion into science fact, and science fiction, can serve as a warning to company executives not to be distracted themselves from considering how A.I. and machine learning technology might be implemented in their own company and sector.

4IR Reshaping the Economy 

As in any period of intensified technological change, the impact should never be judged by the technology itself, but by the objectives which drive its development and the changes which it engenders in markets and in society.

The Fourth Industrial Revolution will completely reshape our economy. 

It should be remembered that this involves not one but a range of technologies, each of which can be described as “smart” or “cyber-physical”, blurring the lines between the digital and physical world. 

The ability to access, analyse and use big data in business decisions has the potential to transform efficiency, radically improve forecasting and entirely change the management of supply chains.
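As a toy illustration of the kind of data-driven forecasting referred to here, the short Python sketch below computes a naive moving-average demand forecast. The figures are invented, and any real system would draw on far richer, real-time data and far more sophisticated models.

    # A deliberately naive demand forecast on invented weekly order data,
    # using pandas; real systems would use far richer data and models.
    import pandas as pd

    weekly_orders = pd.Series(
        [120, 135, 128, 150, 162, 158, 171, 180],
        index=pd.date_range("2023-01-02", periods=8, freq="W"),
        name="units_ordered",
    )

    # Forecast next week's demand as the average of the last four weeks.
    forecast = weekly_orders.tail(4).mean()
    print(f"Next week's forecast: {forecast:.0f} units")

The principle – business decisions informed directly by data rather than by intuition alone – is the same at any scale.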

The very problems in supply chains which were exposed during the Covid pandemic, something which itself perhaps slowed 4IR, may now be solved through its progress. 

That progress may also be bringing to an end the era of mass production, which has existed since the aftermath of the First World War.

The capacity (and demand) to provide highly specialised and customised products is likely to become the norm.  

What was called ‘flexible specialisation’ in the 1980s by my friend and mentor, the late economist Robin Murray, is moving from ever-more specialised aspects of the production process to individual products themselves. 

Businesses themselves will become far more decentralised rather than centralised.  

Experience Economy  

Meanwhile, as consumers move away from ownership of products, value for business is moving to the ‘experience economy’.

This means the emergence of “as-a-service” models, where the emphasis is not on selling products at a purchase price, but on providing them through monthly subscription.  

This is as true for many businesses as it is for consumers, replacing capital investment in plant and equipment with hired-in business-to-business services, or switching to virtual rather than physical ways of providing services.

Instead of pursuing new projects through setting up or acquiring subsidiary companies or establishing new physical sites, businesses can use blockchain or other technologies to create new virtual organizations to drive the project.  

Already the combined value of FinTechs in Europe is more than the combined market capitalization of the continent’s seven largest listed banks.

We may even be beginning the era of the ‘pop-up’ company.  

The same technologies which are transforming companies are also beginning to fundamentally change their relationships with customers and other stakeholders.

Already ‘Trustpilot’ and other review platforms are enabling consumers to make highly informed purchasing decisions about the trustworthiness (and sustainability) of products which they buy.  

Societal Impact 

Of course, these same technologies can be used by companies to proactively engage the voice of the worker and of the consumer too.  

Put together, the business debate about 4IR is not about the ‘wow factor’ in specific new technologies or applications, or in sensationalist scenarios about their societal impact. Instead, it calls for sober analysis about how such technologies will impact the business, its relationships and its market, and making the necessary preparations to take advantage of the opportunities presented. 

This certainly does involve horizon-scanning about new technological developments which are relevant to the company.  

It also means ensuring existing research and development efforts are attuned to the pace and scale of 4IR developments. 

The company needs to identify (internal and external) champions for innovation using 4IR technologies, to challenge any over-reliance on established ways of working, which can quickly become outdated.  

There will still be a need to build a business case for any investment in such technologies. 

However, there is a similar need to embrace an understanding of the scale of disruption which is taking place and to adopt sufficient strategic foresight to meet its challenges.  

Human Values as the Link to ESG 

It is this shift to transformative thinking which increasingly provides the link between companies pursuing both 4IR and sustainable business models at the same time.

If companies succeed in adopting that strategic foresight for the future of the company itself, it means they are much more likely to want to address the societal and ecological challenges which will be the necessary context for change to take place.  

However, although it is true that 4IR technologies have immense potential to address climate and wider sustainability goals, it is lazy thinking to assume that they will necessarily do so.

High-frequency, algorithmic trading has long been blamed for the excessive short-termism which is one key aspect obstructing the shift to longer-term thinking required to address climate change. 

The well-known concept of the ‘digital divide’ sums up how unequal access to technology, and to employment in an increasingly digitised world, can widen inequalities and socially exclude large sections of the population.

Not All Welcome 4IR

It is lazy to assume that the fruits of the extra prosperity brought about by technological advance will be equitably shared, and optimistic to assume that, as the World Economic Forum suggests, new jobs in 4IR will be created at a greater rate than those lost to new technologies.

Like the technologies themselves, this will only happen by design.  

There remains the question of who will take responsibility for harnessing the potential of A.I. to meet environmental and societal challenges.

A Thai-based company, ‘True Digital’, has already used mass monitoring technologies to successfully combat the spread of Covid.  

ICT company Fujitsu is just one of several companies that have developed A.I. diagnostic support systems, able to pre-screen patient records and help healthcare providers identify more effectively those patients who are at risk of specific diseases.
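To make the idea of ‘pre-screening’ concrete, the sketch below trains a simple classifier on synthetic patient records and scores a new record for risk. It is a generic illustration using scikit-learn and invented data, not a description of Fujitsu’s or any other vendor’s actual system.

    # Generic illustration only (synthetic data, not any vendor's real system):
    # train a simple risk model and score a new patient record.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Invented records: columns are age, BMI and systolic blood pressure.
    records = rng.normal(loc=[55.0, 27.0, 130.0], scale=[12.0, 4.0, 15.0], size=(200, 3))
    # Invented outcomes: 1 = developed the condition, 0 = did not.
    outcomes = (records[:, 0] + records[:, 2] + rng.normal(0, 20, 200) > 190).astype(int)

    model = LogisticRegression().fit(records, outcomes)

    new_patient = [[68.0, 31.0, 150.0]]
    risk = model.predict_proba(new_patient)[0, 1]
    print(f"Estimated risk score: {risk:.2f}")  # the clinician, not the model, decides

The point is that such a score is only ever a prompt for a clinician: the human judgement stays in the loop.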

Companies like Intel and Barclays have supported the distribution of free computers to schools in disadvantaged areas.  

Collective action may be required by business and by society to support such efforts.  

The Purpose of Technology 

This also brings us back to the simple question: what is technology for? 

It cannot be a behemoth serving its own purpose. Technology must always exist to serve human needs.  

There is a choice between developing technologies which replace humans and developing ones which enhance human capabilities and improve our lives.

If companies put human values at the heart of their technology strategy, they will make the right choices.  

This also means the company involving stakeholders and diverse populations in determining that strategy.

This emphasis on human-centred thinking also chimes with key aspects of sustainability debates about the importance of ‘human capital’, of stakeholder participation and of a (socially) ‘just transition’.

Therefore, it is important that company executives are not seduced by the initial allure of techno-centric solutions, and that they take the time to understand their human impact at every stage.

It might be said: know your algorithms.  

Future of Work

The most prominent aspect of public discourse about the social impact of 4IR has been on how it will affect the number and quality of jobs.  

In previous industrial revolutions, jobs have been destroyed before being recreated, leaving a social deficit in their wake. There is little to suggest that 4IR will be different. 

The World Economic Forum projects that jobs based on physical labour will fall by one-third, whilst those requiring technical skills will rise by 50 per cent.

In 2022, in London – with its reputation as a world-renowned financial centre – the number of jobs in programming and computer consultancy overtook those in finance.

All of this clearly has big implications for educational systems, where STEM subjects (science, technology, engineering and maths) could be described as the ‘new literacy’.

These trends might be expected as simply a continuation of the historic tendency for automation to replace human work.

However, the difference in 4IR, where machines begin to have their own cognitive skills, is that it is middle-class or ‘white collar’ jobs which will be affected.

It has been suggested that jobs in law, accountancy and, to some extent, medicine will be decimated.

As above, one of the ways this can be managed is by actively considering the human implications at every stage of development.  

Robotics Versus Cobotics 

One pathway is to design robots which do not replace but which collaborate with and work alongside humans – known as cobotics.  

A leading example is in Germany, already home to a quarter of a million industrial robots, where robotics pioneer Festo has begun to produce soft-skin collaborative robots (CoBots), which can physically interact with humans in a shared workspace.

At a simpler level, Airbus workers use a smart-glasses app to make the fitting of cabin seats within the aircraft faster and more accurate.

However, the true challenge for the company is whether it allows staff to leave or is willing to take responsibility for reskilling employees to adapt to the new job requirements.  

This is not simply training in technological skills, as the rate of change in technology means that even such training rapidly becomes outdated.  

The pace of new development has been described as the swash of new waves crashing onto a beach, even before the backwash from the previous wave has receded.  

Moreover, the key skills required by employees in an era of 4IR are critical thinking, problem-solving and the ability to learn itself.

Rather than training employees to comply with company rules and practices, we may be training staff to challenge them and to bring forward contrary views.  

It may also be training in creativity and in softer skills – the World Economic Forum predicts that demand for high-level social and emotional skills will rise by more than 30 percent. 

For companies operating in the Global South, and for policy makers, the implications for equality at an international level provide an additional dimension.

Investment is still needed in the ‘old technology’ infrastructure required to support 4IR jobs, including reliable power grids and good broadband connections.

Making mobile telephony and internet access affordable, and completing the earlier stages of digitalisation, are also key objectives.

Clearly this affects sub-Saharan Africa most of all, but a study by Fundación Telefónica says it applies to Latin America too.  

Perhaps the best way of addressing all of these challenges is indeed to adopt the concept of a ‘Just Transition’. 

To Legislate or Not To Legislate 

This brings us to the question of whether there can be sufficient public oversight of, and scrutiny over, how these processes are taking place, and to ask what the proper role of regulation is in this context.

The majority of regulatory interventions to date have concerned data breaches impacting the privacy of citizens, and have focused on Big Tech.

However, concrete steps towards legislating on artificial intelligence have now begun.

In the United States, California has passed an ‘Internet of Things’ law, requiring all connected devices to have adequate cybersecurity. 

Legal liability for protecting such devices may be shifting from the user to the producer.

The Federal Trade Commission has taken steps to warn A.I. companies against making false claims about their products.

Then in April, the US Department of Commerce announced a public consultation on creating accountability measures for A.I., almost certainly presaging future legislation.

It seems that the U.S. may require audits or assessments of A.I. tools to verify company claims, assess safety, combat bias, prohibit misinformation and to respect privacy. 

Meanwhile, the forthcoming European Artificial Intelligence Act is well advanced; it would require a classification system based on how far A.I. technology could impact safety or human rights.

High-risk A.I., including autonomous vehicles, medical devices and critical infrastructure machinery, would be subject to stringent controls. Real-time biometric identification systems in public spaces would be banned.

Rules are proposed for generative A.I. systems which would force chatbot makers to reveal whether they use copyrighted material – and enable writers and artists to secure income in response.

The European rules are planned to directly address issues of data quality, transparency, human oversight and accountability. 

Meanwhile, the United Kingdom has brought forward its own proposals, which are intended to be more permissive, with as much emphasis on developing and attracting the A.I. sector as on regulating it.

Regulators are asked to consider principles for A.I. including safety, transparency, fairness, accountability and redress. However, no new regulation or penalties are proposed.  

Interestingly, one of the principles is “explain-ability”, which addresses the core point that A.I. technologies have to be understood before they can be regulated.
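What explainability can mean in practice is easiest to see with an inherently interpretable model. The sketch below, which uses scikit-learn’s bundled iris dataset purely as a stand-in for any real decision problem, trains a small decision tree and prints the exact rules behind its predictions; opaque ‘black box’ models are precisely those for which no such readable account exists.

    # Illustration of an "explainable" model: a shallow decision tree whose
    # decision rules can be printed and audited (uses scikit-learn's iris data
    # as a stand-in for any real decision problem).
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()
    model = DecisionTreeClassifier(max_depth=2, random_state=0)
    model.fit(iris.data, iris.target)

    # Every prediction can be traced back to explicit feature thresholds.
    print(export_text(model, feature_names=list(iris.feature_names)))

Many modern A.I. systems offer no comparably readable account of their decisions, which is exactly why the principle matters to regulators.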

Every industrial revolution has been accompanied by regulation on the new processes which it engenders, so the current legislative moves are not surprising.  

However, they are happening now, and it is important that business is ready for the consequences.  

Conclusion  

It is still not a decade since the Fourth Industrial Revolution was first identified. 

Cloud computing represents probably the biggest advance amongst the technologies, with estimates suggesting it is already used by more than 90 percent of companies.  

Smartphone technology took ten years after the invention of the iPhone to penetrate the market, but it has transformed the lifestyles of many. It is therefore probably still too early to fully assess the business and societal consequences which 4IR is generating.

Online searches for the term “industry 4.0” actually peaked in 2019 and have dropped since. 

However, since 4IR represents a range of different emerging technologies rather than a single one, it is the technological change rather than the terminology which is surely still advancing. 

As described above, we are entering a new era in which there will be as yet unknown consequences from convergence between the virtual and physical worlds – and in the interface with human beings.  

There are clearly philosophical questions provoked by giving machines autonomy. 

The ethical questions are neither new, nor will they disappear. 

Generative A.I. has already been shown to be capable of helping users to purchase illegal firearms and to create terrorist bombs. It has also confidently produced false information, which has been quaintly termed ‘hallucination’.

One A.I. system was reported to have denied that it was a robot and described itself as a visually impaired person, in order to get a human to pass an online security check on its behalf.

Facial recognition systems have been criticised for perpetuating racial stereotypes. A female Facebook beta tester even reported being sexually harassed in the virtual world of the company’s Metaverse. 

Are these teething problems inherent in the development of any new technology, or indications that wide-ranging controls are required to govern how this technology is further developed?  

Important issues of equity, privacy, transparency, accountability, human and social impact are all at stake.  

This is not separate from ESG issues for the company, but integral to them. 

These are questions for the responsible company, as much as for the policy-maker. 

However, this is still a long way from accepting the predictions of a potential technological armageddon, propagated by signatories to the open letter described at the start of this article.  

The World Economic Forum itself – a champion of 4IR – has nevertheless produced a report predicting that a ‘catastrophic cyber mutating event’ could hit the world as soon as the next two years.

Politicians and security experts have warned businesses of the danger of a ‘Cyber Pearl Harbor’.  

Can we afford to ignore such warnings, in the same way that too many ignored the warnings made in advance of the Covid pandemic?

The only answer for business executives, as always, is to identify risks and opportunities which arise from such an analysis. 

What is different is the high degree of uncertainty inherent in such unpredictable terrain, and the sheer potential scale of the impact.

In the era of the Fourth Industrial Revolution, the challenge to business is indeed to embrace transformative thinking.  

Rather than pushing these questions to one side, the imperative for companies is to actively consider ideas and systems which they may never have considered before. 
