I’ve spent half my life looking at code and the other half looking at how people break it. But in recent years, the "code" isn't just C++ or Python; it’s the fabric of our social reality. Being from Ukraine, I don’t view information warfare as a theoretical concept found in textbooks or PhD dissertations. I see it when I open my news feed, when I talk to relatives, and when I look at the server logs of attacked institutions. The noise is constant. It’s calculated. And if you don't know what you're looking for, it’s invisible.
We are moving past the era of simple "fake news." We are dealing with complex, automated architectures designed to dismantle trust. To fight back, we need better tools and a sharper mindset. If you are serious about protecting your organization or your country, you need to look at solutions like Osavul, which are built to handle this scale. But tools are only as good as the operator. You need to understand the threat first.
What does FIMI stand for?
In the security community, we love our acronyms. But this one matters. FIMI, short for Foreign Information Manipulation and Interference, refers to a pattern of behaviour that threatens or has the potential to negatively impact values, procedures, and political processes. It is manipulative, intentional, and coordinated.
Notice I didn't say "lies".
That is the biggest mistake people make. They think FIMI disinformation is just about checking if a fact is true or false. It’s not. FIMI is about behaviour. An actor might use perfectly true information but release it in a coordinated way, out of context, using fake accounts to amplify it at a critical political moment. That is FIMI.
The European External Action Service (EEAS) defines it clearly: it involves state or non-state actors, often acting as proxies. It’s not just a teenager in a basement; it’s a factory. When we analyze FIMI disinformation, we aren't just fact-checking; we are hunting for the network, the intent, and the coordination behind the message.
Is foreign information manipulation and interference a major security threat to the EU?
Absolutely. And not just to the EU—it’s a global fire. But the EU has become a primary testing ground. According to the latest data I’ve reviewed from the 3rd EEAS Report, there were 505 distinct FIMI incidents recorded just between late 2023 and late 2024.
Think about the scale of that.
We aren't talking about a few stray tweets. We are talking about 38,000 unique channels involved in these attacks. The targets are precise: Ukraine remains the biggest target, obviously, facing almost half of these incidents. But France, Germany, and the United States are getting hammered too.
Foreign Information Manipulation and Interference is used as a prelude to kinetic action—real-world violence. We saw this in my home country. We see it in the Sahel region of Africa, where Russia uses these tactics to displace Western influence and legitimise its military presence. It targets elections, like the vote in Moldova, and it targets institutions like NATO and the EU itself. So, is it a major threat? It’s a security crisis. It drives wedges between allies and radicalises local populations. If we ignore FIMI disinformation, we are essentially leaving our back door open during a burglary.
To give you a better perspective on how the defence against this threat has been built over the years, I have compiled a timeline of the EU's strategic response. This shows we are not standing still, but the adversary is moving fast.

Timeline - A decade of building defence
2015 — Establishment of the East StratCom Task Force (ESTF): The first dedicated EU body created specifically to address Russia's ongoing disinformation campaigns.
2022 — Adoption of the Strategic Compass: Introduced the "FIMI Toolbox," a catalogue of instruments to detect, analyze, and respond to FIMI.
2023 — 1st EEAS Report on FIMI Threats: Established a standardised Methodology for investigating FIMI disinformation activities based on open-source analysis.
2024 — 2nd EEAS Report on FIMI Threats: Launched the Response Framework to coordinate evidence-based responses among EU institutions and partners.
2024 — First EU Sanctions for FIMI: In December, the EU imposed the first-ever sanctions specifically for this behaviour, moving from analysis to punitive action.
2025 — 3rd EEAS Report on FIMI Threats: Introduced the "FIMI Exposure Matrix" to attribute covert networks and map the full architecture of Foreign Information Manipulation and Interference operations.
The mechanics of the machine
To catch these actors, you have to think like them. You have to understand their infrastructure. They don't just post from one official account. They build what we call a "FIMI architecture."
Recent investigations have exposed massive digital arsenals. For instance, we’ve seen the "Doppelgänger" campaign, which clones legitimate media sites to trick readers. We’ve seen "Portal Kombat," a network flooding the zone with automated pro-Kremlin content. These aren't random; they are structural.
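To make the clone-spotting idea concrete, here is a minimal Python sketch of how you might flag Doppelgänger-style lookalike domains by comparing newly observed names against a watchlist of real outlets. Every domain and the threshold are invented for illustration, not data from any actual investigation.

```python
from difflib import SequenceMatcher

# Hypothetical watchlist of legitimate outlets (illustrative only).
LEGITIMATE = ["spiegel.de", "lemonde.fr", "theguardian.com"]

# Hypothetical newly registered domains pulled from a crawl.
OBSERVED = ["spiegel.ltd", "lemonde.ltd", "weather-update.io"]

def name_similarity(a: str, b: str) -> float:
    """Compare the registrable names, ignoring the TLD swap clones rely on."""
    return SequenceMatcher(None, a.rsplit(".", 1)[0], b.rsplit(".", 1)[0]).ratio()

for candidate in OBSERVED:
    for legit in LEGITIMATE:
        score = name_similarity(candidate, legit)
        # 0.8 is an arbitrary demo threshold; a real pipeline would tune it
        # and confirm with content diffs and TLS certificate metadata.
        if candidate != legit and score >= 0.8:
            print(f"possible clone: {candidate} -> mimics {legit} ({score:.2f})")
```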
When I analyze a potential FIMI disinformation campaign, I look for the "kill chain"—the entire sequence of events required to launch an attack. From the creation of fake assets to the laundering of information through different layers of the web, every step leaves a trace. We track technical signatures like IP addresses and metadata fingerprints, and we track behavioural patterns like coordinated posting times.
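Here is an equally stripped-down sketch of the infrastructure side, assuming you already hold resolved (domain, IP) pairs from passive DNS or a crawler; the records below are fabricated for the sketch.

```python
from collections import defaultdict

# Hypothetical (domain, resolved IP) pairs from passive DNS or a crawler.
records = [
    ("daily-truth-a.example", "203.0.113.7"),
    ("regional-voice-b.example", "203.0.113.7"),
    ("free-press-c.example", "203.0.113.7"),
    ("unrelated-site.example", "198.51.100.2"),
]

domains_by_ip = defaultdict(list)
for domain, ip in records:
    domains_by_ip[ip].append(domain)

# Several "independent" outlets sitting on one box is a classic tell.
for ip, domains in domains_by_ip.items():
    if len(domains) >= 3:
        print(f"shared infrastructure at {ip}: {domains}")
```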

What are the 4 types of threats?
When we map out the adversary, we shouldn't just look at "bad guys." We need to categorise them by their relationship to the state actor. The new "FIMI Exposure Matrix" breaks this down into four distinct blocks or types of threat actors within the architecture:
- Official State Channels. These are the overt ones. The Ministries of Foreign Affairs, the embassies, and the official spokespeople. They are the "white" propaganda. They set the narrative openly.
- State-Controlled Outlets. These masquerade as media but are funded and editorially directed by the state. Think RT, Sputnik, or CCTV. They have a massive budget and a global footprint, yet they pretend to be journalistic enterprises.
- State-Linked Channels. This is where it gets murky. These operate under state oversight but hide it. They might be run by intelligence services or proxies. We often catch these through financial records or shared backend infrastructure that they forgot to scrub.
- State-Aligned Channels. These are the "grey" zone. They aren't officially on the payroll (that we can prove yet), but their behaviour is perfectly synchronized with the state. They amplify the FIMI disinformation narratives because of ideology or opportunistic alignment.
Understanding these four types is critical because Foreign Information Manipulation and Interference relies on the interplay between them. A lie might start in a State-Linked channel, get amplified by State-Aligned bots, and finally be quoted by an Official State Channel as "what people are saying." It’s information laundering.
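If it helps to see the taxonomy as data, here is a toy Python model of the four blocks and one hypothetical laundering chain. The channel names and the chain itself are invented for illustration.

```python
from enum import Enum

class ActorType(Enum):
    """The four blocks of the FIMI Exposure Matrix."""
    OFFICIAL_STATE = "official state channel"
    STATE_CONTROLLED = "state-controlled outlet"
    STATE_LINKED = "state-linked channel"
    STATE_ALIGNED = "state-aligned channel"

# A hypothetical laundering chain: where one narrative surfaces, in order.
chain = [
    ("obscure-blog.example", ActorType.STATE_LINKED),   # deniable origin
    ("@amplifier_net_0042", ActorType.STATE_ALIGNED),   # bot amplification
    ("GlobalNewsToday", ActorType.STATE_CONTROLLED),    # "media" pickup
    ("MFA spokesperson", ActorType.OFFICIAL_STATE),     # "what people are saying"
]

for step, (channel, actor) in enumerate(chain, start=1):
    print(f"step {step}: {channel} [{actor.value}]")
```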
Technical vs. Behavioural Detection
How do we actually catch them? In my work, I rely on two buckets of evidence: technical and behavioural.
Technical evidence is the "smoking gun." It’s the hard data. I’m looking for shared hosting services, identical Google Analytics codes on supposedly different news sites, or sequential account creation dates. For example, in the "HaiEnergy" campaign linked to China, we found networks of inauthentic websites all tied to the same PR firms through technical links. You can't argue with an IP address.
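As a simplified illustration of that kind of correlation (not the method used in the HaiEnergy investigation itself), here is a sketch that pulls analytics IDs out of raw HTML with a regex and groups sites that share one. The pages and tag values are fabricated.

```python
import re
from collections import defaultdict

# Hypothetical fetched HTML keyed by site (fabricated for the sketch).
pages = {
    "site-a.example": '<script>gtag("config", "G-ABC123XYZ");</script>',
    "site-b.example": '<script>gtag("config", "G-ABC123XYZ");</script>',
    "site-c.example": '<script>gtag("config", "G-ZZZ999AAA");</script>',
}

# Matches Google Analytics measurement IDs (G-...) and legacy UA-... IDs.
TAG_RE = re.compile(r"\b(G-[A-Z0-9]{6,}|UA-\d{4,}-\d+)\b")

sites_by_tag = defaultdict(set)
for site, html in pages.items():
    for tag in TAG_RE.findall(html):
        sites_by_tag[tag].add(site)

# "Different" outlets sharing one analytics account is hard to explain away.
for tag, sites in sites_by_tag.items():
    if len(sites) > 1:
        print(f"shared analytics ID {tag}: {sorted(sites)}")
```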
Behavioural evidence is softer but equally damning. This is where Foreign Information Manipulation and Interference reveals itself through patterns. I look for "coordinated inauthentic behaviour" (CIB). Are 500 accounts posting the exact same phrase within one minute of each other? That’s not a coincidence; that’s a script. Are their translations riddled with the same AI errors? We call this "content recycling" or "flooding".
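A bare-bones version of that burst check might look like this; the posts, window, and threshold are all invented for the sketch.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical posts: (account, timestamp, text), fabricated for the sketch.
posts = [
    ("acct_001", datetime(2024, 6, 1, 12, 0, 5), "The election was STOLEN!"),
    ("acct_002", datetime(2024, 6, 1, 12, 0, 17), "the election was stolen"),
    ("acct_003", datetime(2024, 6, 1, 12, 0, 42), "The election was stolen."),
    ("acct_004", datetime(2024, 6, 1, 15, 30, 0), "Nice weather today."),
]

WINDOW = timedelta(minutes=1)
MIN_ACCOUNTS = 3  # demo threshold; real systems tune this per platform

def normalise(text: str) -> str:
    """Crude normalisation: lowercase, strip punctuation, collapse spaces."""
    kept = "".join(c for c in text.lower() if c.isalnum() or c.isspace())
    return " ".join(kept.split())

by_phrase = defaultdict(list)
for account, ts, text in posts:
    by_phrase[normalise(text)].append((ts, account))

for phrase, hits in by_phrase.items():
    hits.sort()  # chronological
    for start_ts, _ in hits:
        # Distinct accounts pushing the same phrase inside one window.
        burst = {acct for ts, acct in hits if start_ts <= ts <= start_ts + WINDOW}
        if len(burst) >= MIN_ACCOUNTS:
            print(f"coordinated burst: {phrase!r} pushed by {sorted(burst)}")
            break
```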
We also look for "cross-platform" activity. A campaign might start on one social platform, jump to a fake news website, and then get blasted back out across other platforms. If you aren’t monitoring cross-platform signals, you’re missing the big picture.
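One cheap way to start correlating across platforms is to hash normalised content and intersect the fingerprints between feeds, as in this sketch (platform names and items invented):

```python
import hashlib

def fingerprint(text: str) -> str:
    """Normalise whitespace and case, then take a short stable hash."""
    norm = " ".join(text.lower().split())
    return hashlib.sha256(norm.encode()).hexdigest()[:12]

# Hypothetical per-platform feeds (all items invented).
feeds = {
    "platform_x": ["NATO labs leaked documents prove everything"],
    "fake_news_site": ["NATO labs leaked   documents prove everything"],
    "platform_y": ["Local football results",
                   "nato labs leaked documents prove everything"],
}

seen_on = {}
for platform, items in feeds.items():
    for item in items:
        seen_on.setdefault(fingerprint(item), set()).add(platform)

# A fingerprint surfacing on multiple platforms marks a travelling narrative.
for fp, platforms in seen_on.items():
    if len(platforms) > 1:
        print(f"narrative {fp} seen on: {sorted(platforms)}")
```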

The Human Element
We can have all the AI detection models in the world—and trust me, AI is changing the game for both attackers and defenders—but it comes down to the human analyst.
We need to be able to attribute these attacks. Attribution is hard. It’s a mix of technical forensics and political judgment. But we are getting better at it. We can now link specific campaigns, like the "False Façade" operation, directly to Russian actors. We know that Foreign Information Manipulation and Interference isn't just random noise; it is a weapon system.
The attackers are evolving. They are using Generative AI to create deepfakes and massive volumes of text content cheaply. They are using "influence-for-hire" firms to hide their tracks. But every time they move, they leave a footprint.
Conclusion
Detecting FIMI disinformation requires a shift in perspective. You stop looking at the content as "news" and start looking at it as "payload." You stop looking at the poster as a "person" and start looking at them as a "node."
It is a grind. It is technically demanding. But for those of us in the field, it is the only way to keep the lights on in our democracies. We watch the networks. We map the connections. And we expose the architecture. Because in this war, sunlight really is the best disinfectant.