
Using AI and emerging tech to battle disinformation & protect digital identities

Aileen Schultz  Senior Manager / Labs Programs – Global / Thomson Reuters Labs

· 5 minute read

While AI & emerging tech have made it easier to spread disinformation, these same technologies can also help detect disinformation & secure digital identities

Digital communications have put a spotlight on the prevalence of disinformation, including deep fakes, and the potential for political upheaval they carry. As industries turn to virtual means of conducting business, they have opened a whole new era of cybersecurity considerations and uncharted territory for corporations and, of course, their legal departments.

With the specter of disinformation and fake news so prevalent, especially during this time of pandemic, economic crisis, and political and societal unrest, when misinformation and rumors spread more easily, many rightly fear that bad actors and even bad governments will use these tactics amid the promise of technological progress. Although those concerns are certainly valid, there are ways in which new technologies could actually impede the ability of governments or other bad actors to promote disinformation and foment corruption. Advances in detecting disinformation and in building more secure identity infrastructure, for example, are becoming more prevalent.

Detection of disinformation

Disinformation scandals seem increasingly common, and disinformation has been used in political campaigns to sway voter decisions and to manipulate news reports. Indeed, the number of countries that have experienced disinformation campaigns using social media rose 150% from 2017 to 2019, according to a 2019 study conducted by Oxford University's Internet Institute. Further, the complexity of and interest in deep fakes (machine-generated or manipulated media) are rising, with 150 academic papers published on the topic in 2019, compared with just three in 2017.


Join national security expert Clint Watts and Gina Jurva of Thomson Reuters for an exclusive panel discussion, Social Media Manipulators & the Future of Influence and Trust, available on-demand now.


Fortunately, artificial intelligence (AI) engineers and researchers are working toward solutions to combat these issues. For example, significant headway has been made in algorithmic detection of disinformation across social media. One of the most common sources of disinformation is fake profiles on social media, and the social media giants use machine learning (ML) to combat this issue. Facebook, for example, removed more than 1 billion accounts in 2019 that were determined to be fake. It did this through the use of an ML model called Deep Entity Classification (DEC), which learned to detect fake accounts by assessing the patterns in how these profiles built connections across Facebook. Reportedly, Facebook has been able to keep fake accounts to less than 10% of monthly active users. The team at Thomson Reuters also is working to improve its ability to detect disinformation, particularly deep fakes in video content.
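
To make the general idea concrete, the short Python sketch below shows how a classifier might score accounts as likely fake based on connection-pattern features. This is only an illustrative sketch, not Facebook's actual DEC system; the feature names and toy data are entirely hypothetical.

```python
# Minimal, illustrative sketch of fake-account scoring from connection-pattern
# features. NOT Facebook's DEC; feature names and data are hypothetical.
from sklearn.ensemble import RandomForestClassifier

# Each row: [friend_requests_sent_per_day, request_accept_ratio,
#            avg_connection_account_age_days, groups_joined_per_day]
training_features = [
    [400, 0.05, 30, 12],    # behavior typical of a spam/fake account
    [350, 0.08, 45, 9],
    [12, 0.90, 900, 0.1],   # behavior typical of a genuine account
    [20, 0.85, 1200, 0.2],
]
training_labels = [1, 1, 0, 0]  # 1 = fake, 0 = genuine

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(training_features, training_labels)

# Score a newly observed account's connection behavior
new_account = [[380, 0.06, 40, 10]]
fake_probability = model.predict_proba(new_account)[0][1]
print(f"Estimated probability the account is fake: {fake_probability:.2f}")
```

In practice, systems like DEC reportedly operate on far richer signals drawn from an account's graph of connections, but the pattern is the same: learn from labeled examples, then score new accounts for review or removal.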

Another excellent example is how Twitter is incorporating misinformation guidance into its user experience design. In May, Twitter announced feature enhancements to combat the mass misinformation circulating about the pandemic. The interface additions help users identify when content doesn't align with trusted sources and point them toward more information on the topic. While these improvements are specific to misinformation about the pandemic, they highlight some of the ways Twitter and other social media platforms can continue to support the fight against misinformation.

Blockchain & digital identity systems

Protecting an individual’s digital identity is another area of heightened awareness when it comes to thwarting bad actors and rogue governments, and it’s another area where technology is helping. Interestingly, though not surprisingly, developing nations are leading the way in digital identity adoption and infrastructure. As of 2016, most developing nations had digital identity systems in place, according to the World Bank. This makes sense when we consider why digital identities are favored over traditional identity systems, with benefits like heightened security, decreased identity fraud, greater service accessibility, operational efficiency, and cost effectiveness. These systems are not without their problems, however, often including data privacy concerns, a lack of trust in governments, and a lack of interoperability among frameworks. New technologies, like blockchain, are offering promising solutions to these problems.

Indeed, distributed ledger technologies (DLTs), including blockchains, can do a lot more than enable the crypto-economy. In fact, one of the most promising impacts these technologies could have is enabling better government infrastructure, for example by increasing security and interoperability. One of the best-known use cases of intergovernmental blockchain adoption is in supply-chain management, where it helps ensure the privacy, immutability, and auditability of data processed across these vastly complex networks.
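
As a rough illustration of why a hash-chained ledger provides immutability and auditability, the Python sketch below chains supply-chain records together so that tampering with any earlier record invalidates every later hash. The record fields are hypothetical, and the sketch is not tied to any specific blockchain platform or government system.

```python
# Toy hash-chained ledger: each entry's hash depends on the previous entry,
# so altering any earlier record breaks the whole chain on audit.
import hashlib
import json

def record_hash(record: dict, previous_hash: str) -> str:
    """Hash a record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + previous_hash
    return hashlib.sha256(payload.encode()).hexdigest()

ledger = []
prev = "0" * 64  # genesis value
for event in [
    {"shipment": "A-1001", "step": "manufactured", "location": "Plant 7"},
    {"shipment": "A-1001", "step": "customs cleared", "location": "Rotterdam"},
    {"shipment": "A-1001", "step": "delivered", "location": "Warehouse 3"},
]:
    prev = record_hash(event, prev)
    ledger.append({"record": event, "hash": prev})

# An auditor recomputes the chain; any tampering with an earlier record
# changes every subsequent hash and is immediately detectable.
check = "0" * 64
for entry in ledger:
    check = record_hash(entry["record"], check)
    assert check == entry["hash"], "Ledger has been tampered with"
print("Audit passed: ledger is internally consistent")
```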

Some governments are considering the use of blockchain as the basis of their digital identity frameworks, and at least one country, Estonia, is completely blockchain-enabled. In fact, Estonia has been leading the way in digital government transformation for some time now. Last year, Catalonia, Spain, reportedly solidified plans to create decentralized identity infrastructure, seemingly to facilitate its separatist movement. While the politics of this particular case may be fraught with complexities on both sides, the example does help illustrate the promise of these technologies to transform government systems.
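
The core mechanism behind many decentralized identity proposals is a signed claim that any verifier can check against an issuer's published public key, without querying a central database. The sketch below illustrates that idea in Python; it assumes the third-party cryptography package, uses hypothetical claim fields, and is not a description of Estonia's or any other government's actual system.

```python
# Illustrative sketch of signed identity claims (hypothetical fields; assumes
# the third-party `cryptography` package). Not any government's real system.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# An issuer (e.g., a government registry) creates a key pair once and
# publishes the public key, for example by anchoring it on a ledger.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

# The issuer signs a claim about a subject's identity.
claim = json.dumps(
    {"subject": "did:example:1234", "over_18": True}, sort_keys=True
).encode()
signature = issuer_key.sign(claim)

# Any verifier holding the public key can check the claim offline.
try:
    issuer_public_key.verify(signature, claim)
    print("Claim is authentic and unmodified")
except InvalidSignature:
    print("Claim was tampered with or not issued by this authority")
```

The design point is that trust rests on widely distributed public keys rather than on a single database that could be breached or quietly altered.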

Still, even though much progress remains to be made in these domains, there have been promising developments. Most governments, for example, have adopted or are planning to adopt data protection frameworks similar to the European Union’s General Data Protection Regulation (GDPR), and many governments have created task forces dedicated to the research, development, and adoption of new technologies such as AI and DLTs.

While regulation and well-informed strategies are still nascent, if this progress continues as it appears it will, we may be approaching a future in which new technological innovations help sideline bad actors and rogue governments, greatly transforming information and identity protection for the better.
