
AI in Competitive Intelligence: The Year We Innovated Dangerously

Written by Miguel Borràs | Jan 28, 2026

We’ve been wading through puddles with generative AI innovation for many months — years, really. In my view we’re suffering from the law of the instrument: you’ve been handed a hammer, and suddenly everything looks like a nail. Except some of those “nails” are, in fact, screws.

A few preliminaries

Many a true word is spoken in jest, and experience counts for more than theories. Back in the Neolithic ’80s I was training speech recognition for dictation (on 16-bit processors) and dabbling in systolic processors; in the ’90s I led my first industrial AI project, for the design of chemical compounds. Last year I retrained in the new wave of AI technology; high time, frankly.

With impostor syndrome patched up, let’s get to the analysis.

First, let’s be clear: not all AI is generative. Beyond the large language models (LLMs) behind tools like ChatGPT, there is a whole range of alternative technologies and methods that let a machine learn and help us with tasks, whether that’s classifying customers or detecting cancer.

But let’s focus on generative AI and LLMs, because they’re the ones turning the path into a bog.

LLMs are “confidently wrong stochastic parrots”. Meaning:

  • Parrots, because they can only repeat what they have read at some point.
  • Stochastic, because their output isn’t deterministic: ask the same question twice and you may get two different answers. Under the hood they are only computing the probability that one word follows another, and then sampling from it; probability means indeterminacy (see the sketch after this list).
  • Confidently wrong, because they are designed to answer, and to justify the answer, not to tell you the truth.
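
To make the stochastic point tangible, here’s a toy sketch of what “choosing the next word” amounts to; the vocabulary and probabilities are invented for illustration, and no real model is involved.

```python
# Toy next-word sampling: a language model only supplies a probability
# distribution over candidate next words; the reply is then drawn at
# random from it, which is why two runs can give different answers.
import random

candidates = {"rises": 0.55, "falls": 0.30, "stagnates": 0.15}  # invented numbers

def next_word(temperature: float = 1.0) -> str:
    # Temperature reshapes the distribution: low = more predictable,
    # high = more erratic. random.choices normalises the weights itself.
    weights = [p ** (1.0 / temperature) for p in candidates.values()]
    return random.choices(list(candidates), weights=weights, k=1)[0]

print([next_word() for _ in range(5)])  # e.g. ['rises', 'falls', 'rises', ...]
```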

Thanks to increased compute capacity (something we owe, at least in part, to gamers and NVIDIA chips), researchers could train on more data, and capabilities emerged that they hadn’t anticipated. That’s where the success of LLMs lies. It’s as if, by putting enough neurons together, consciousness emerges (for those who don’t believe in the human soul). These emergent capabilities, which have huge advantages in language work, have serious drawbacks elsewhere. Back to the hammer problem.

For example, an Anthropic researcher recently showed how genAI compares two numbers internally: it represents each number as a helix and rotates one against the other to compare them. Is that really the most efficient and exact way to compare two numbers? Of course not.

Generative AI in the Competitive Intelligence function

We’ve already said it: language models are designed to respond, not to be correct. If there’s information they don’t have, they tend to make it up. That’s what we call “hallucinations”. Although there are simple and complex techniques to minimise the issue, it can’t be eliminated completely because it’s rooted in the fundamentals of the technology.
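
Among the simple techniques, grounding is a good example: give the model only vetted sources and explicit permission to abstain. A minimal sketch of the idea (the prompt wording is mine); it reduces the issue rather than eliminating it.

```python
# Grounded prompting: the model may only use the snippets we supply,
# and is told exactly what to say when they don't contain the answer.

def build_grounded_prompt(question: str, snippets: list[str]) -> str:
    sources = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer using ONLY the numbered sources below, citing them as [n]. "
        "If the sources do not contain the answer, reply exactly: "
        "'Not in the provided sources.'\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

# The resulting string goes to whatever LLM API you use; the abstention
# instruction is followed most of the time, not always.
```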

And that’s a problem when we apply an LLM to support business decisions — as in the intelligence function.

I recommend using generative AI, for instance, to draft the State of the Art (SOTA) report for an innovation project. Tools like Gemini or Perplexity can be a real help in facing the blank page. Of course, knowing what we actually want to know is key. The questions say more than the answers. And the results depend on them.

ChatGPT can also help us develop our Intelligence Directive: the company’s guide that sets out what each person needs to know to do their job well. Once the objectives are defined, we can break each one down into hypotheses to monitor the market.

But beyond these preliminary steps, we need to consider the potential role of genAI in each of the core tasks of competitive intelligence:

  1. Description
  2. Diagnosis
  3. Prediction
  4. Prescription

Let’s look at the impact on each.

The descriptive task in Competitive Intelligence

LLMs are excellent at descriptive intelligence. You provide the information, and they can structure and summarise it. NotebookLM, for example, is a practical tool for this task.
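
As an illustration of “you provide the information, it structures it”, here’s the kind of structuring step that works well in descriptive work. The schema and the call_llm stub are hypothetical placeholders, not any specific product’s API.

```python
import json

# A fixed schema keeps descriptive output comparable across items.
SCHEMA = {"company": "", "event_type": "", "date": "", "summary": ""}

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your provider's API here")  # hypothetical stub

def describe(raw_item: str) -> dict:
    prompt = (
        "Extract these fields from the text as JSON, copying facts verbatim "
        f"where possible: {json.dumps(SCHEMA)}\n\nText:\n{raw_item}"
    )
    return json.loads(call_llm(prompt))
```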

Not much more to add.

GenAI in the diagnostic task of Competitive Intelligence

With diagnosis, we start to run into trouble. The LLM has to connect concepts and draw conclusions, and that is exactly where hallucinations bite. In an exercise I did with students, one of the best-known LLMs assured us that Napoleon had discovered America. Remember: confidently wrong.

We’ve all heard that LLMs come with so-called guardrails to prevent these situations, and that there are measures we can apply as users, just as vendors also claim to have addressed biases around race, sex, and security. Yet at a session with lecturers and students we challenged ourselves to bypass the security guardrails, and with a single prompt I got past the limits even in one of the more capable LLMs. Guardrails are not a guarantee: in critical environments you have to assume they will fail and design controls around them.
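
What “designing controls” can look like in practice: checks that live outside the model and run regardless of what it was tricked into generating. A minimal sketch with invented toy rules; a real deployment would use proper PII detection and an independent second checker.

```python
import re

# Independent release gate: applied after generation, immune to prompt tricks.
BLOCKED_PATTERNS = [
    r"\b\d{16}\b",             # card-number-like strings (toy rule)
    r"(?i)internal use only",  # leaked-document marker (toy rule)
]

def release(model_output: str) -> str:
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, model_output):
            return "[withheld: output failed an independent control]"
    return model_output

print(release("Q3 margins are internal use only"))  # -> withheld
```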

What we can do is use genAI for “augmented analysis”: a person, or a team, uses AI to support their work. But not to have it do the work for them.

Look: I’ve seen listed companies outsource market monitoring to students doing their final degree projects, rotating every three or four months. Giving an intern responsibility for diagnosis in the competitive or technological intelligence function, while expecting generative AI to make up for their missing sector knowledge and grasp of company strategy, is worse than driving the business with your eyes and ears covered.

Prediction with generative AI in the Competitive Intelligence function

In the predictive phase, what we want is to anticipate what will happen in the short or medium term, based on the data available. The problem is that predictions are built on… the past! AI can uncover hidden patterns, but it lacks human inferential capacity (for now).

An LLM can interpolate more or less well, but it’s much harder for it to extrapolate and outperform a human. Once again, it’s the expert’s intuition and know-how that must guide the extrapolation, using generative AI as support.
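
A toy numerical analogy, deliberately not an LLM, of why interpolation is easy and extrapolation is not: fit a flexible model to past data, then ask it about points inside and outside that past.

```python
# Fit a flexible curve to "the past" (x in [0, 10]) and query it
# inside and outside that range.
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = np.sin(x) + rng.normal(0, 0.05, x.size)   # noisy historical data

model = Polynomial.fit(x, y, deg=9)

print("interpolation, x=5: ", model(5.0), " true:", np.sin(5.0))    # close
print("extrapolation, x=14:", model(14.0), " true:", np.sin(14.0))  # way off
# The fit is excellent inside the data range and diverges wildly outside it.
```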

What’s more, in recent months we’ve seen LLM services add a layer where the AI itself decides which model to use to answer. But costs spiral out of control (spending 15 USD for every 1 USD charged per query), and in my experience these routers tend to “try the laziest model first”. In fact, I spent a weekend processing data this way, and because I already knew roughly what the results should look like, when I didn’t get what I expected I started twisting the AI’s arm, until it admitted it was using a minimal model. Not only was it making the result up; it even said it was “doing a manual processing of the data”. Manual?! Utterly surreal.

You need to be an expert in a domain to properly audit a genAI answer. An engineer told me a few weeks ago that when you’re an expert in a field you realise the AI “is rubbish” (sic), and that AI’s success has shown her there aren’t that many true experts in any given field. Is that just subjective? Recently Scale AI published a study that tested leading LLMs on real remote freelance tasks, from product design to writing scientific articles or programming videogames. The results are sobering: only 2.1% of the jobs were acceptable to a reasonable client, and not even 1% were acceptable to a domain professional.

Prescribing actions with generative AI in the Competitive Intelligence function

In the prescriptive phase, after diagnosis and scenario prediction, we prescribe the actions the company should take, based on its strategy.

I remember a conversation with a Marketing Director at a company outside the EU who asked whether the AI in a competitive intelligence system could propose the company’s strategic marketing plan. I told him, with a smile, to be careful what he wished for: the day it could, he’d be out of a job. He’s no longer with the company, though I doubt it was because of that, because no company in its right mind would leave strategic marketing in the hands of genAI.

What if we let AI learn from us?

A little over a week ago an innovation manager asked me why we don’t let the system learn from colleagues’ preferences in their monitoring work.

The answer is that this kind of machine learning in market monitoring is dangerous, because the world evolves and generates disruptions. By narrowing the focus and responding only to what is already known, we minimise or discard the early signals of those disruptions, and we gradually enlarge our blind spots, as if we were making the car’s mirrors and windows smaller and smaller.
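
One way to keep those blind spots from growing is to never give the learned filter the last word: always pass a random slice of “irrelevant” items to a human analyst anyway. A minimal sketch of the idea; the threshold and the 10% exploration rate are invented placeholders.

```python
import random

def triage(items, score_relevance, keep_threshold=0.7, explore_rate=0.10):
    """Learned relevance filter plus a deliberate exploration slice.

    score_relevance: any learned model mapping an item to [0, 1].
    explore_rate: fraction of 'irrelevant' items let through anyway,
    so early signals of disruption can still reach a human analyst.
    """
    return [
        item for item in items
        if score_relevance(item) >= keep_threshold or random.random() < explore_rate
    ]
```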

In any case, that adaptive learning process shouldn’t be based on LLM technology.

And finally, the forgotten counterintelligence in corporate AI use

The next step after monitoring the market is to ask how to practise counterintelligence: how to stop others monitoring us — or at least how to practise privacy. AI is opening doors that lead straight into puddles. Deep ones, at that…

Start with the basics. LLM chats can be shared, but be careful with public links: the information can end up indexed by Google. And we don’t want our project showing up as an answer to anyone’s Google query.

We’ve now opened Pandora’s box with AI browsers (like OpenAI’s Atlas, or Perplexity’s Comet). Until now, you were the one consciously uploading data into an application (I say “consciously”, although sometimes it’s anything but). With the new browsers, they continually see what you see, and even what you don’t see but happens to sit somewhere inside an open application. For example, if you have your CRM open, the browser sees and learns who your customer is and how much they pay you for which product. You no longer control that information. What’s more, it is now public knowledge that these browsers are not recommended in business environments due to the risk of prompt injection. In a company we have a duty of care: we need an explicit policy, and we should use a sandbox, a constrained test environment.

Even more recent: these days we’re in the grip of the Clawdbot fever, an open-source assistant you install on your computer, connect to an LLM, and ask to carry out tasks. Installing this kind of technology casually not only puts your own resources at risk; it also opens the door to a privacy disaster. Let’s be careful and use the sandbox.

Epilogue

We’ve seen some of the biggest puddles on the road to productivity amplification with generative AI. All that’s left is to answer the recurring question: “Do you use AI in your company?”

Answer: “Yes. And we also use electricity and drinking water.”

It’s a commodity.

-----

"The Year of Living Dangerously" (1982) is a film directed by Peter Weir.