Misinformation and Disinformation and their impact on the future of PR and Communication – Your Questions Answered

AMEC hosted an Expert Panel discussion during AMEC Measurement Month 2024 looking at the challenges and impact of misinformation and disinformation on the future of PR and Communication, with Massimo Moriconi (Omnicom PR Group Italy and ICCO), Nesin Veli (CEO, Identrics), and Rafi Mendelsohn (VP Marketing, Cyabra). The panel was moderated by AMEC CEO Johna Burke.

Rafi Mendelsohn, VP Marketing at Cyabra, and Nesin Veli, CEO of Identrics, have responded to some of the attendee questions we did not have time to cover in the discussion. A summary and a link to the original recording can be found below.

Your Questions Answered

  • Q: Cyber-attacks are a growing threat now, along with disinformation campaigns, and this calls for specific resilience building. What would be your recommendation for preparedness and reaction?

Rafi Mendelsohn said “Being prepared for disinformation campaigns that impact brands requires a different approach compared to other communications challenges. Particularly for medium-sized and larger companies with an established brand presence, the key is recognizing that attacks can come swiftly and at scale. Even the best social media teams or crisis comms experts may struggle to identify and respond to such attacks quickly. Bad actors are often one step ahead, leveraging advanced planning and AI tools.

The good news is that uncovering and fighting such threats is possible, and we advise taking a few initial proactive steps:

  • Monitor, in real time, mentions of your brand, your products, subsidiaries, and executives, as well as wider issues related to the business or sector.
  • While monitoring, track movements in sentiment and engagement, watching in particular for spikes.
  • For each of the topics you are tracking, pay close attention to the number of fake accounts involved in the conversation, as inauthentic accounts or content are a key indicator and an early warning sign.
  • When analyzing inauthentic accounts, behaviors, or content, pay particular attention to whether other fake accounts are also engaged in the conversation. Even if these accounts haven’t yet achieved high levels of engagement, any sign of coordination is an indicator of dangerous intent that should, at the very least, be monitored closely and may even require taking action.

Even a small number of fake profiles can manipulate narratives, sway public opinion, and harm a brand’s reputation, but proactive monitoring and early detection are essential to better reaction strategies.

With detection comes empowerment. A brand can decide whether it wants to be more proactive, such as reporting harmful activity directly to social media platforms or including evidence of disinformation in its public response. Conversely, communications pros can be more confident in deciding to ignore or continue to monitor if they haven’t yet seen enough evidence of inauthentic coordination. Brands can also be more confident when engaging with customers online by responding only to authentic accounts. Why give bots attention they don’t deserve?

By leveraging AI-driven tools and focusing on authentic voices, you can protect your organization’s reputation while staying one step ahead of bad actors.”
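For teams that want to operationalise the monitoring steps above, here is a minimal sketch of spike detection and fake-account-share tracking. It assumes you can export, per time interval, a total mention count and a count of mentions from accounts already flagged as inauthentic; the `IntervalStats` shape, the threshold values, and the alert logic are illustrative assumptions, not the API of any tool mentioned in the discussion.

```python
# Minimal sketch: flag mention-volume spikes and a rising share of
# inauthentic accounts in brand-mention data. The data shape and the
# thresholds are illustrative assumptions, not any vendor's real API.
from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class IntervalStats:
    mentions: int       # total brand mentions in the interval
    fake_mentions: int  # mentions from accounts flagged as inauthentic


def spike_alerts(history: list[IntervalStats],
                 spike_sigma: float = 3.0,
                 fake_share_threshold: float = 0.2) -> list[str]:
    """Return warnings for the most recent interval in `history`."""
    alerts: list[str] = []
    latest = history[-1]
    baseline = [h.mentions for h in history[:-1]]

    # Volume spike: latest mention count far above the historical mean.
    if len(baseline) >= 2 and stdev(baseline) > 0:
        z = (latest.mentions - mean(baseline)) / stdev(baseline)
        if z > spike_sigma:
            alerts.append(f"mention volume spike (z-score {z:.1f})")

    # Early-warning sign: inauthentic accounts driving the conversation.
    if latest.mentions:
        share = latest.fake_mentions / latest.mentions
        if share > fake_share_threshold:
            alerts.append(f"fake accounts drive {share:.0%} of mentions")

    return alerts


# Example: a quiet baseline week followed by a suspicious burst.
week = [IntervalStats(120, 6), IntervalStats(130, 7), IntervalStats(110, 5),
        IntervalStats(125, 8), IntervalStats(900, 400)]
print(spike_alerts(week))  # both alerts fire for the final interval
```

In practice the hard-coded thresholds would be tuned to a brand’s normal conversation volume, and the flagging of inauthentic accounts would come from a dedicated detection tool rather than a simple count.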

Nesin Veli said “The technology is new, but the underlying principle – not so much. Security experts have always accounted for competitive build-up, which is now starting to gain wider awareness as high-risk narratives target societal issues.

And herein lies the asymmetry of LLM-powered AI hacking – the social aspect will be a much bigger menace at first and will amplify malignant social-engineering efforts, as these models will only get better at mimicking human interactions.

Application in system exploits will remain in the narrower area of professional security, but a wider front of collaboration for awareness and knowledge exchange should be a priority. As in most cases with current AI, we have to solve for scale rather than for novelty.”

  • Q: Do you see a solution in pre-bunking disinformation as an instrument for countering FIMI?

Rafi’s response to this question was “Pre-bunking disinformation is undoubtedly a valuable method for addressing online manipulation and interference. However, it’s important to recognize its limitations. Often, by the time a false narrative has been identified, it has already spread across multiple social media platforms, including within communities and groups that may not be visible through monitoring tools.

As the resurgence of fake news (such as vaccine misinformation and conspiracy theories) demonstrates, a false narrative can persist indefinitely and resurface at any time. This underscores the importance of complementing pre-bunking and debunking efforts with broader strategies.

When addressing Foreign Information Manipulation and Interference (FIMI), it is critical to:

  • Detect and analyze the fake profiles operated by bad actors who manipulate online discourse.
  • Evaluate their impact on conversations, particularly on authentic audiences, to understand the scope of their influence.
  • Develop comprehensive counter-strategies that go beyond merely reacting to false narratives.”
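As a companion to the first two of these steps, the hypothetical sketch below shows one way an analyst might quantify the impact of fake profiles on a conversation: the share of total engagement they generate, and which suspected fakes repeatedly appear together across threads (a crude coordination signal). The `Post` fields and the `is_fake` flag stand in for whatever profile-analysis output your tooling provides; they are assumptions for illustration only.

```python
# Hypothetical sketch: measure how much of a topic's engagement comes from
# suspected fake profiles, and find pairs of fakes that co-occur in more
# than one thread (a crude coordination signal). The Post fields and the
# is_fake flag are illustrative assumptions, not a real profile-analysis API.
from dataclasses import dataclass
from itertools import combinations


@dataclass
class Post:
    author: str
    engagements: int  # likes + shares + replies
    is_fake: bool     # author flagged as inauthentic


def fake_engagement_share(posts: list[Post]) -> float:
    """Fraction of all engagement generated by suspected fake profiles."""
    total = sum(p.engagements for p in posts)
    fake = sum(p.engagements for p in posts if p.is_fake)
    return fake / total if total else 0.0


def coordinated_pairs(threads: dict[str, list[Post]]) -> set[tuple[str, str]]:
    """Pairs of fake authors that show up together in more than one thread."""
    counts: dict[tuple[str, str], int] = {}
    for posts in threads.values():
        fakes = sorted({p.author for p in posts if p.is_fake})
        for pair in combinations(fakes, 2):
            counts[pair] = counts.get(pair, 0) + 1
    return {pair for pair, n in counts.items() if n > 1}


# Example: two fake accounts dominating engagement across two threads.
thread_a = [Post("bot1", 50, True), Post("bot2", 40, True), Post("ann", 5, False)]
thread_b = [Post("bot1", 30, True), Post("bot2", 20, True)]
print(fake_engagement_share(thread_a + thread_b))         # ~0.97
print(coordinated_pairs({"a": thread_a, "b": thread_b}))  # {('bot1', 'bot2')}
```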

Nesin Veli responded “Sure, this is another example of a technique that allows governments, private-sector PR and communication experts, and the media to take the initiative by proactively communicating with their populace and audiences.

Here I would be more wary about the shared terminology of the communication and the structure of the data, which has to encompass different public- and private-sector players working towards the same goal.”

  • Q: Whilst elections are concluded, is there still the prospect of disinformation/AI poisoning democratic discourse? What should we look for?

Rafi answered “Fake campaigns and influence operations don’t end when the election is over. In fact, they often begin years in advance. An effective campaign designed to influence elections is a long-term strategy.

While state actors may publicly favor one candidate over another, their ultimate goal is far broader: to sow doubt, confusion, anger, and mistrust in public institutions. Their success doesn’t hinge on any single candidate winning. Instead, it’s measured by their ability to weaken trust in the very foundations of society.

The adoption of AI tools by bad actors has made creating fake campaigns exponentially easier, cheaper, more automated, and, most importantly, harder to detect. As these tools evolve, disinformation and manipulation are only expected to grow in scale and sophistication in the coming years.

Thankfully, we also use AI for good, equipping brands and governments with the tools they need to identify influence operations and respond effectively. The fight against disinformation cannot wait – it demands proactive, AI-driven solutions to protect public trust and the integrity of democratic processes and online discourse.”

Nesin responded “Elections are hardly concluded for some nations, especially those on the carousel of unstable administrations. And they are an example of the disconnect between private companies, which know the information space and bring forth empirical evidence of information campaigns taking place, and a political status quo with a half-life of six months, unwilling to risk alienating potential voters in the elections around the corner by speaking about potentially polarising topics.

Such impasses have historically been solved by broad coalitions, often aided by civil society. So I think close collaboration with such groups, in drawing attention to the risks and realities of AI-led interference in the national discourse, is important.

One tangible thing is to know when and where AI is used, which cannot happen without regulation, and I’m looking forward to seeing how this will be implemented in the EU.”

Our sincere thanks to the panelists who contributed to this fascinating discussion on a critical topic, and special thanks to Rafi and Nesin for taking the time to answer the additional attendee questions.

You can now view the Misinformation and Disinformation webinar on demand here. A summary of what the panel of experts covered is below; it is a not-to-be-missed session full of insights.

  • The panelists discussed the challenges that communications and PR professionals face in dealing with misinformation and disinformation, which can significantly impact brands and organizations.
  • There was a debate around whether it’s better to use the terms “misinformation” and “disinformation” versus simply calling out “lies.” The panelists noted the nuances and complexities in distinguishing between these terms.
  • The role of PR and communications professionals in navigating this landscape was emphasized. They need to be proactive, leverage technology and tools, and collaborate across the information ecosystem to combat misinformation.
  • Specific recommendations included:
      • Developing preparedness plans to quickly respond to misinformation attacks
      • Leveraging media monitoring and analysis tools to understand the source and spread of false narratives
      • Engaging in educational campaigns to improve media literacy among stakeholders
      • Coordinating with other organizations, fact-checkers, and platforms to counter misinformation
  • The panelists shared examples of how misinformation and disinformation tactics have evolved, including the use of AI-generated content, deceptive imagery, and coordinated bot networks to amplify false narratives.
  • Overall, the discussion highlighted the critical role of PR and communications professionals in defending against the growing threat of misinformation, which requires a multi-faceted, collaborative approach.