Navigating The Evolving Landscape of Digital Risk and Protectionism
“Computers are useless. They can only give you answers” — Pablo Picasso
Like many of you, as a fervent enthusiast of technology, I’ve witnessed its transformative impact on our lives, streamlining daily activities and fostering convenience. Technology’s continuous evolution, particularly in areas such as healthcare, has revolutionised our societal well-being and longevity. Professionally — at Orange (now EE), at the Internet Advertising Bureau UK (IAB UK) and the Trustworthy Accountability Group (TAG) — I’ve had the privilege of collaborating with major and emerging technology companies, always striving to build trust — a cornerstone of our society.
But let’s cut to the chase. As technology evolves, so do our concerns. The honeymoon phase is over, and we’re facing a reality check. To name just a few issues: there is unease about the impact on mental well-being (particularly amongst the younger generation), the unbridled expansion of Artificial Intelligence (AI), algorithm-driven polarisation, and the alarming rise of misinformation and disinformation (particularly in 2024, when half the world will vote in democratic elections). What might initially have been seen as separate concerns now form a collective anxiety, infiltrating every corner of our societal landscape.
To articulate my musings, I’ve embarked on a series of thought pieces — “ByteWise Insights” — with this being the inaugural entry. I welcome all feedback, comments, and debate.
Cyber-Patriotism: Defending the Digital Realm
As digital risks surge, so does ‘cyber-patriotism’, or ‘digital protectionism’. Think of it as the drawing of national or regional borders in the digital realm.
Governments are increasingly inclined to adopt leadership and regulatory measures to safeguard their national interests. This invariably results in a fragmented landscape of disparate approaches (data protection, for example), adding strain to technology providers and degrading the customer experience. It not only fosters an uneven playing field but also hands some players a competitive advantage in the relentless technology ‘arms race’, as exemplified by the regulation of AI.
The internet transcends geographical boundaries, and while its development, advantages and impact may differ across regions and markets, its influence is ultimately global.
Why, then, is there no international framework to facilitate a collaborative global regulatory approach? There are broader questions we must ask as well, each underscoring the heightened significance of adopting an international perspective: Are we witnessing the repercussions of the digital environment in which we reside, akin to the awareness surrounding climate change? Are we nearing a technological ‘Rubicon’ moment? Is there a risk of succumbing to the sentiment of “upgrading machines but downgrading humanity,” as articulated by the Center for Humane Technology?
Embracing the Velocity of Technological Transformation
The breakneck speed of technological change may no longer be just progress; it’s a lifestyle.
Well-intentioned policymakers and legislators often struggle to keep pace. Understandably, consumer risks are prioritised over potential criminal misuse — the use of people’s data, for example, versus the spread of business fraud or malware — but this may have adverse long-term effects on society. In this era of digital protectionism, a collaborative approach is the need of the hour.
The industry must come together and strive for global good practice, or it risks facing regulations that could impede innovation and the potential benefits of emerging technologies. The current UK Prime Minister, Rishi Sunak, is on record as saying that “only governments can tackle the risks posed by Artificial Intelligence.” This is a political perspective; there are alternative approaches.
Fostering Global Collaboration
The fragmented regulation of the internet, favouring or disadvantaging certain regions or markets, prompts consideration of a global approach.
International structures already exist, such as the Common Agenda of the United Nations and the World Economic Forum’s Global Coalition for Digital Safety. Three decades of COP meetings on climate change show that such structures can be brought to bear on critical issues (although some might argue they are not as effective as they could be).
Geopolitics aside, the UK’s AI Safety Summit in November 2023 could be a game-changer and a potential template if broadened to address other digital risks. The summit featured delegates from the Chinese government, a noteworthy stride despite substantial differences in culture and approach. The idea of a ‘United Nations of the Internet’ has also been suggested and could provide a framework for global collaboration that reshapes the digital landscape.
Throughout 2024, there will be a heightened focus on ethical technology design and regulation. This discourse is expected to gain momentum across the globe, with numerous experts contributing perspectives that give rise to innovative ideas and suggestions.
But a global approach is not just a suggestion; it’s a necessity.
Nick Stringer, a prominent global technology, public policy, and regulatory affairs adviser, has contributed significantly to the international application of brand safety standards. His extensive experience includes serving as the former Director of Regulatory Affairs at the UK Internet Advertising Bureau (IAB UK). Follow him for more insights on LinkedIn, Twitter/X, Medium, Threads, or here on Substack.


