Ukrainian AI startup Osavul, which specializes in cyber security, has attracted $1 million in investment

DOU
3 May 2023
9 min read
The article was published in the Ukrainian media outlet DOU, the biggest developer community in Eastern Europe. Here is the link to the article.

Below you may find the English version:

Fighting provocations, disinformation, and propaganda.
How the Ukrainian startup Osavul, backed by Oleksandr Kosovan's fund and partners, works.


With the start of Russia's full-scale invasion, many Ukrainian entrepreneurs began using their skills and capabilities to help the country fight the enemy. Among them are businessmen Dmytro Pleshakov and Dmytro Bilash, who have worked together for ten years on the digital agency Septa and the marketing startup Captain Growth.

In early 2022, the entrepreneurs started volunteering and helping government institutions combat Russian propaganda and disinformation. They later formed a team and launched the startup Osavul, which specializes in cybersecurity and in analyzing and assessing the information environment. We spoke with Osavul's CEO, Dmytro Pleshakov, about the project's history, how to combat Russian disinformation, and how to identify IPSO (informational-psychological operations) at an early stage.

"Our project is highly R&D-intensive and has an unstable sales cycle."

The story of Osavul began with the start of the full-scale invasion when we gathered our volunteer team. Dmytro Bilash and I were eager to be useful and find our place in the volunteer movement to help the country. Since we already had contacts and experience in machine learning, people from various government organizations and teams responsible for information security started approaching us.

After February 24, the number of information attacks and so-called "provocations" against our country increased significantly. As a result, many institutions, both public and private, started working in the field of information security, at first in a chaotic manner and later in a more systematic way.

In February of this year, we started operating as a commercial startup. Some members of the team were people who initially volunteered with us, while others joined later.

Dmytro and I are experienced entrepreneurs. This is not our first startup, so we understood that building a company requires investment. Our project is highly R&D-intensive and has an unstable sales cycle: the path from first meeting a potential client to signing a contract can take years. We entered the fundraising stage, talked to funds, including the Ukrainian Startup Fund, and found a partner who shared our vision and values. That partner was the Ukrainian venture fund SMRK [founded by MacPaw CEO Oleksandr Kosovan, together with entrepreneurs Andriy Dovzhenko and Vlad Tislenko - ed.]. In May 2023, we announced raising one million dollars in a seed round.

Currently, Osavul has three main products:
1. CommSecure - a software platform for assessing the information space and for detecting and analyzing narratives and informational threats in the media environment.
2. CIB Guard - a module for analyzing coordinated inauthentic behavior across accounts in media, messengers, and social networks.
3. InfoOps - an integrated service in which the company provides the software and also handles analytics and counter-disinformation work. It tracks the dynamics of narrative spread, its sources, and dissemination channels.

"We help identify and evaluate potential information threats."

Since February 24 of last year, we have been collaborating with a number of government agencies and institutions. For security reasons, we cannot disclose the names of all of them, but we can publicly mention the Center for Countering Disinformation (CCD) under the National Security and Defense Council of Ukraine and the Ministry of Defense of Ukraine - the latter has a separate department dealing with situational awareness and information space analysis.

What do we do? We help identify and evaluate potential information threats.

In the cycle of combating information threats, there are three main steps:
1. Threat identification: Analyzing the information space and responding to signals that are markers of its probable emergence.
2. Threat assessment: Analyzing threat parameters, primary sources, organization, coordination, and bot involvement.
3. Reaction: the final step, handled directly by humans. This can involve communication to counter a specific "plant" or regulation at the state and platform levels.

When it comes to threat assessment, it is important to understand in a timely manner how influential the threat is. This is serious work that cannot rely solely on subjective impressions. Sometimes a problem may seem massive because bloggers, journalists, and influencers have written about it, while its actual impact is much smaller. Conversely, some threats have not yet surfaced but already exist in the "information underground": in chats, Telegram communities, Viber, Discord. Such threats can be very dangerous.

We help search for and identify these threats. Our platform is used by analysts from organizations, companies, and institutions who, based on the information they receive about threats and markers of their danger, report this data to decision-makers. Osavul helps analysts work more efficiently and quickly.

The platform analyzes the information space across various sources, including social media, the web, and Telegram; we are currently working on integrating Discord as well. We also cover sources that are semi-closed [Telegram channels, private forums, etc. - ed.] and less transparent than websites and social networks - we gather everything that technical capabilities and APIs allow.
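
To give a sense of how such multi-source collection can be organized, here is a minimal, hypothetical Python sketch of a shared ingestion interface. The class and method names are invented for illustration; in practice each fetch() would wrap whatever official API a platform exposes, and this is not Osavul's actual code.

```python
# Hypothetical sketch: a common interface for pulling posts from several
# source types (Telegram, web, social networks). Names are illustrative.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Post:
    source: str
    author: str
    text: str
    published_at: datetime


class Source(ABC):
    @abstractmethod
    def fetch(self, since: datetime) -> list[Post]:
        """Return posts published after `since`."""


class TelegramSource(Source):
    def fetch(self, since: datetime) -> list[Post]:
        # Would call the Telegram API for the monitored channels.
        return []


class WebSource(Source):
    def fetch(self, since: datetime) -> list[Post]:
        # Would crawl or poll monitored sites and feeds.
        return []


def collect(sources: list[Source], since: datetime) -> list[Post]:
    # One ingestion step: merge everything the available APIs allow.
    posts: list[Post] = []
    for source in sources:
        posts.extend(source.fetch(since))
    return posts
```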

Our platform makes it possible to measure quantitative and qualitative indicators of potential threats, find primary sources, and analyze signs of coordination. Organization and coordination are crucial indicators of an information threat. Analysts need to understand how the information originated and spread, who is behind it, who promotes it, and whether resources are invested in its dissemination. Our platform helps do this quickly and highlights the important indicators in a form every user can understand: synchronized behavior over time, the spread of similar content, and the interconnectedness of accounts in social traffic.
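
As an illustration of the first two indicators, synchronized timing and near-identical content, here is a minimal, hypothetical Python sketch; the thresholds, field names, and sample data are invented and do not reflect Osavul's implementation.

```python
# Hypothetical sketch: flag pairs of accounts that post near-identical text
# within a short time window - two of the coordination signals named above.
from datetime import datetime, timedelta
from difflib import SequenceMatcher
from itertools import combinations

posts = [
    {"account": "acc_1", "time": datetime(2022, 12, 1, 10, 0),
     "text": "NATO sent infected donor blood to Ukraine"},
    {"account": "acc_2", "time": datetime(2022, 12, 1, 10, 3),
     "text": "NATO sent infected donor blood to Ukraine!!"},
    {"account": "acc_3", "time": datetime(2022, 12, 1, 18, 40),
     "text": "Weather in Kyiv is sunny today"},
]

WINDOW = timedelta(minutes=10)   # "synchronized behavior over time"
SIMILARITY = 0.9                 # "spread of similar content"


def similar(a: str, b: str) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= SIMILARITY


suspicious_pairs = [
    (p["account"], q["account"])
    for p, q in combinations(posts, 2)
    if abs(p["time"] - q["time"]) <= WINDOW and similar(p["text"], q["text"])
]

print(suspicious_pairs)  # [('acc_1', 'acc_2')]
```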

Later on, if, for example, it turns out that the "plant" was coordinated by someone, the user can gather all the evidence and forward it to platforms like Meta or Twitter for the removal of posts and suspicious accounts.

Artificial intelligence can find everything necessary, point out patterns and suspicious activity, but beyond that, it's up to humans. Fighting information threats cannot be 100% automated.

However, we provide a wide range of tools that significantly facilitate the process. Firstly, by collecting all possible data and analyzing volumes of information that a person cannot process on their own. Secondly, by automatically identifying patterns that indicate a probable threat. For example, AI cannot definitively determine whether an information campaign was coordinated, but it can surface the indicators on which an analyst bases that conclusion.

"It is important to suppress misinformation at early stages."

Artificial intelligence in the Osavul platform operates in two main directions: content search and analysis of behavioral characteristics.

Currently, we mainly work with textual data; in our view, harmful narratives are predominantly spread in text format. We have all the tools to work with such data and extract narratives, emotions, sentiments, viewpoints, and more. Our technology helps users navigate this content by searching not only by keywords but also by semantic narratives.
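
As a rough illustration of searching by narrative rather than by keyword, the sketch below uses an off-the-shelf sentence-embedding model from the sentence-transformers library and cosine similarity; Osavul's actual models and pipeline are not public, so treat this only as the general idea.

```python
# Hypothetical sketch: rank posts by how closely they echo a narrative
# described in plain language, even when they share few keywords with it.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

narrative = "NATO allegedly supplied Ukraine with contaminated donor blood"

posts = [
    "Shocking leak: alliance blood shipments to Kyiv were infected",
    "Weather forecast for Kyiv: sunny, 21 degrees",
    "Document shows Western donor blood sent to the front was unsafe",
]

narrative_vec = model.encode(narrative, convert_to_tensor=True)
post_vecs = model.encode(posts, convert_to_tensor=True)

scores = util.cos_sim(narrative_vec, post_vecs)[0]
for post, score in zip(posts, scores):
    print(f"{float(score):.2f}  {post}")
```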

As for behavioral characteristics, this again concerns the patterns of content dissemination. There are many indicators of coordinated action that we identify, such as the timing of message dissemination, account connectivity, and so on. We anticipate which texts have the potential to become an information threat and pass this information to the user.
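
One simple way to picture the account-connectivity signal is a co-sharing graph: accounts that repeatedly spread the same item become linked, and dense groups are candidates for coordination. The sketch below, using networkx and invented data, shows only the general idea, not Osavul's implementation.

```python
# Hypothetical sketch: build a graph linking accounts that shared the same
# item, then surface connected groups as possible coordinated clusters.
from collections import defaultdict
import networkx as nx

# (account, shared_item) observations, e.g. a forwarded link or message id.
shares = [
    ("acc_1", "doc_leak_post"),
    ("acc_2", "doc_leak_post"),
    ("acc_3", "doc_leak_post"),
    ("acc_4", "weather_post"),
]

by_item = defaultdict(set)
for account, item in shares:
    by_item[item].add(account)

graph = nx.Graph()
for accounts in by_item.values():
    accounts = sorted(accounts)
    for i in range(len(accounts)):
        for j in range(i + 1, len(accounts)):
            a, b = accounts[i], accounts[j]
            # Edge weight counts how many items the pair has shared in common.
            weight = graph[a][b]["weight"] + 1 if graph.has_edge(a, b) else 1
            graph.add_edge(a, b, weight=weight)

for cluster in nx.connected_components(graph):
    if len(cluster) >= 3:
        print("possible coordinated cluster:", sorted(cluster))
```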

Unfortunately, we cannot publicly disclose much information about the attacks that we have successfully stopped using our platform. However, we can share one case.

At the end of last year, Russia attempted to launch another information campaign, claiming that Russians had obtained a document stating that NATO had supplied Ukraine with virus-infected donor blood.

This was a case we handled in partnership with the Center for Countering Disinformation under the National Security and Defense Council. We helped detect the attack at an early stage, when the pseudo-document had been mentioned online only 4-5 times. Colleagues from the National Security and Defense Council quickly analyzed the spread dynamics, prepared recommendations for government institutions, and promptly issued a response based on them. It was crucial to act within hours, as people tend to believe what they see first rather than later refutations.
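
For illustration only, early-stage detection can be thought of as alerting on the very first mentions of a tracked narrative; the threshold and data layout in the sketch below are invented and do not describe the actual system used in this case.

```python
# Hypothetical sketch: raise an alert once a tracked narrative collects its
# first handful of mentions within a recent time window.
from collections import Counter
from datetime import datetime, timedelta

ALERT_AT = 5                    # e.g. react after just 4-5 mentions
WINDOW = timedelta(hours=6)

# (narrative_id, mention_time) pairs produced by an upstream matching step.
mentions = [
    ("nato_blood_document", datetime(2022, 12, 1, 9, 40)),
    ("nato_blood_document", datetime(2022, 12, 1, 10, 5)),
    ("nato_blood_document", datetime(2022, 12, 1, 10, 30)),
    ("nato_blood_document", datetime(2022, 12, 1, 11, 15)),
    ("nato_blood_document", datetime(2022, 12, 1, 12, 0)),
]

now = datetime(2022, 12, 1, 12, 30)
recent = Counter(n for n, t in mentions if now - t <= WINDOW)

for narrative, count in recent.items():
    if count >= ALERT_AT:
        print(f"ALERT: '{narrative}' mentioned {count} times in the last {WINDOW}")
```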

We often find information that circulates among Russians within the Russian Federation, initially in their chats and groups, before spreading to larger players until it reaches the likes of propagandist Vladimir Solovyov. Only later does this information spill over into the Ukrainian information space.

"Ukraine has made a significant leap in countering disinformation."

In Ukraine, there are analytical centers that help the government understand the direction of Russian propaganda, and we collaborate with some of them. These processes are decentralized, like much of what happens in Ukraine, and that makes fighting threats more effective. That is why Ukraine has made such a significant leap in countering Russian propaganda, even compared to 2022. Western partners learn from our experience, and our colleagues regularly conduct training sessions. In a sense, this process is a model of ideal cooperation between the state and companies.

We have many plans. Currently, we only cover a small percentage of what we want to achieve. Of course, we are making a difference, but it is still insufficient.

One area of development is multilingualism. We want to support and analyze more of the world's languages, both widely used and less common. This will help us monitor Asian markets, which operate by their own rules. We don't want to work solely with Russian information threats, although they are currently the most important. Eventually, we will expand into the global space.

We live in a time when what seemed impossible yesterday becomes possible today. We monitor everything that happens with artificial intelligence and machine learning, adapt, and evolve. We want to do everything in a way that allows Osavul users to get the fullest picture and answers to all their questions, so that processes and research take days, not weeks.

Our team is not very large: 14 people. Currently, we are looking for analytical specialists. We also plan to conduct our own investigations and publicly discuss important matters, so we are expanding the team with people who have experience in politics and understand the information space.