At Antara, we had the unique opportunity to compare the results of two different strategies in the Digitization of the Intelligence Function, developed in two industrial companies. Let's see how these strategies differ, their results, and our conclusions.
In the Intelligence Function, a large volume of information is itself noise, and it makes it difficult to find the information that is key for our organization (unless we are simply aggregating data). We therefore need a mechanism to filter the information, so that we face only signals of specific interest. Designing the "critical intelligence factors" (or "hypotheses", in the case of Mussol by Antara) allows us to define what we want to know.
In a sieve, the total size and the granularity of the mesh determine how much material it filters. In the case of Mussol's hypotheses, the two factors that determine the amount of information are the number of hypotheses and their design, or specificity: the design should be adjusted to the needs of the company, but hypotheses can be more or less specific, gathering larger or smaller sets of information.
The larger the volume of information captured, the more likely it is that uninteresting information passes the filter, and we will then have to filter it manually. This is something we should avoid, because our time is limited: manual filtering takes precious time away from the analysis task, the one that adds value to the organization. But it is not only about improving process efficiency, saving thousands of working hours per year; it is also about maximizing the impact of the Intelligence Function on the business, as we will see in the comparison.
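The sieve idea can be illustrated with a toy sketch. This is not Mussol's actual implementation (its internals are not described here); it simply models each hypothesis as a set of thesaurus terms and lets a document through only if it matches at least one hypothesis, so that more hypotheses, or less specific ones, widen the mesh:

```python
# Toy illustration of hypothesis-style filtering (NOT Mussol's actual
# mechanism): each hypothesis is a named set of thesaurus terms, and a
# document passes the sieve if it matches any hypothesis.
hypotheses = {
    "competitor_pricing": {"price cut", "discount", "tariff"},
    "new_entrants": {"startup", "market entry", "funding round"},
}

def passes_sieve(document: str) -> list[str]:
    """Return the hypotheses a document matches (empty list = filtered out)."""
    text = document.lower()
    return [name for name, terms in hypotheses.items()
            if any(term in text for term in terms)]

docs = [
    "Rival announces a 10% price cut on industrial sensors",
    "Local sports team wins championship",
]
for doc in docs:
    print(doc, "->", passes_sieve(doc) or "filtered out")
```

Adding more hypotheses, or replacing specific terms with broader ones, lets more documents through, which is exactly the trade-off between specificity and volume described above.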
Let's analyze the results of two different strategies in the Digitization of the Competitive Intelligence Function, developed by companies that are fully comparable according to the following parameters:
The two companies decided to follow different approaches, even though they received the same advice. In one case, Company A, the number of knowledge areas under surveillance was considerably higher (three times as many), and it also uses a thesaurus five times the size of Company B's.
Company A | Concept | Company B
5 | Number of users | 5
80 | Number of hypotheses | 27
4,900 | Thesaurus terms | 1,000
44,300 | Processed documents | 1,109
9 | Readings recommended by the analysts | 58
3 | Identified opportunities | 29
0 | Identified threats | 5
As the table shows, although Company A automatically processed a volume of information forty times greater, the impact on the business (i.e. the opportunities and threats identified by analysts) was seven times lower than in Company B.
It has been found that Company B's analysts read nine times more information than Company A's; relative to the total volume of information processed by the system, that is about thirty times more. This difference cannot be explained simply by analysts having more time for analysis, since both companies involved people with similar roles and responsibilities, and the companies are of similar size. Nor have significant differences been detected in the culture of internal collaboration.
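The headline ratio can be checked against the table. This is just an illustrative sanity check using the table's figures, not part of the original study:

```python
# Figures taken from the comparison table above.
docs_a, docs_b = 44_300, 1_109    # documents processed automatically
reads_a, reads_b = 9, 58          # readings recommended by the analysts

# Company A processed roughly forty times more documents than Company B.
volume_ratio = docs_a / docs_b
print(f"Processed-volume ratio (A/B): {volume_ratio:.1f}x")

# Share of the processed documents that analysts actually read: Company B's
# analysts engaged with a far larger fraction of what the system delivered.
print(f"Read rate A: {reads_a / docs_a:.3%}")
print(f"Read rate B: {reads_b / docs_b:.3%}")
```

The contrast in read rates is the mechanism behind the analysis that follows: when almost nothing that reaches the analysts is worth reading, they stop reading.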
We must bear in mind that the two implementations were not made at the same time. After startup, the automatic filtering process evolves with the analysts' feedback and becomes more efficient. Since Company A performed its implementation months before Company B, one would expect Company A to have the greater impact on the business, and yet this is not the case. This fact makes the data even more striking.
The explanation lies in the fact that the design and number of Company A's information filters force analysts to filter the information manually, spending precious time on a task that adds no value. The company's eagerness to monitor its surroundings dumps too much information on a small team of people whose main duty is not reading information, but selling or developing a product. Recall that in a Collaborative Intelligence environment, analysts are people who devote a small part of their time to monitoring the environment within the area in which they are experts.
The undesired effect of such an unspecific volume of information is that the analyst stops participating in the collaborative analysis of information. The staff involved are simply reluctant to "waste time" weeding out news, and after an initial period they do not even open the alert bulletins. As an immediate effect, the company becomes blind in practice in the knowledge areas assigned to these people, and the return on the technology investment drops significantly.
Whilst Company B is considering extending the Intelligence Function to other areas of the company, the team at Company A will struggle to persuade the rest of the organization to follow its example.
In conclusion, we must adapt the intelligence objectives to the size of the team involved. Info-obesity will make it difficult to exploit Intelligence Automation. Choking the team with information can paralyze the Intelligence Function, or the C-level may decide to cancel the Function in the next round of cost cutting because it is not adding enough visible value. It is therefore the responsibility of Intelligence leaders to implement a good design, appropriate to the needs and size of the available resources. If the company has greater ambition, it must involve a larger number of people participating part-time in the Collaborative Intelligence Function.
(Gargantua & Pantagruel is a series of novels by François Rabelais, written in the 16th century.)
Antara undertakes that the published content is created by its own team, customers or partners. Antara never outsources content generation. The opinions of the authors reflect their own views and not those of the company.