In my work, I see tons of AI products at various stages: from ideas and prototypes to working startups and large enterprise solutions. They all fall into one of two types:
- Just a wrapper around an LLM
- A balance of deterministic algorithms and GenAI components
You wouldn’t believe how often people reach for something like the OpenAI API to solve tasks that could be handled with a simple regex, a classic algorithm, or a compact ML model from a library like spaCy.
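To make the scale mismatch concrete, here is a minimal sketch of that kind of task. The scenario (pulling email addresses out of free text) is a hypothetical example of mine, not a case from the post, but it is exactly the sort of job that needs one regex rather than a completion call:

```python
import re

# Hypothetical example: extracting email addresses from free text.
# One compiled regex handles this instantly, deterministically, and for free --
# no tokens, no latency, no prompt to maintain.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def extract_emails(text: str) -> list[str]:
    return EMAIL_RE.findall(text)

print(extract_emails("Contact us at sales@example.com or support@example.org"))
# ['sales@example.com', 'support@example.org']
```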
What’s the Key Difference?
Approach 1: Just an LLM Wrapper
- Request → mega-prompt → ✨ (magic) → Result
- Almost zero uniqueness (this product can be replicated in a couple of days)
- Expensive: lots of tokens, especially when files are involved. Last week I saw someone feeding an entire HTML page, styles and base64 images included, into the completion API just to extract meta tags (a parser-based sketch follows this list)
- Slow and unstable (responses are long, unpredictable, and hard to control)
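For the meta-tag case above, the cheap alternative looks roughly like this. It is a sketch built on Python's standard `html.parser`, not the code from the product I saw; the point is that a few lines of deterministic parsing replace an entire page's worth of tokens:

```python
from html.parser import HTMLParser

class MetaTagExtractor(HTMLParser):
    """Collects <meta> name/property -> content pairs from an HTML document."""

    def __init__(self):
        super().__init__()
        self.meta: dict[str, str] = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        key = attrs.get("name") or attrs.get("property")
        if key and attrs.get("content") is not None:
            self.meta[key] = attrs["content"]

def extract_meta_tags(html: str) -> dict[str, str]:
    parser = MetaTagExtractor()
    parser.feed(html)
    return parser.meta

html_doc = (
    '<html><head>'
    '<meta name="description" content="A demo page">'
    '<meta property="og:title" content="Demo">'
    '</head><body>...</body></html>'
)
print(extract_meta_tags(html_doc))
# {'description': 'A demo page', 'og:title': 'Demo'}
```

No styles, no base64 blobs, no tokens: the whole page never leaves your process.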
Approach 2: Balance of Specialized Tools and AI
- Do as much as possible with regular, fast, cheap, and reliable code
- Use generative models only where you actually need flexibility, creativity, or natural language understanding (see the sketch after this list)
- Break everything into maximally testable and narrowly focused blocks/tools/subsystems
- Prefer specialized classical ML models where possible
- Result: cheap, fast, unique, and genuinely hard to replicate
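Here is a rough sketch of what that split can look like in practice. The scenario (support-ticket triage) and all names in it are hypothetical, chosen only to show deterministic code doing the bulk of the work while the generative model is injected for the single step that needs language understanding:

```python
import re

# Deterministic, exact, unit-testable parts of the pipeline.
ORDER_ID_RE = re.compile(r"\bORD-\d{6}\b")
REFUND_RE = re.compile(r"\brefund\b", re.IGNORECASE)

def route_ticket(ticket_text: str, summarize) -> dict:
    """Triage a support ticket.

    `summarize` is any callable wrapping your LLM of choice. Injecting it keeps
    the deterministic 90% of the pipeline testable without network calls.
    """
    order_ids = ORDER_ID_RE.findall(ticket_text)            # cheap and exact
    queue = "billing" if REFUND_RE.search(ticket_text) else "general"
    return {
        "order_ids": order_ids,
        "queue": queue,
        # The only generative step: a short summary for the human agent.
        "summary": summarize(ticket_text),
    }

# Usage: plug in a real chat-completion wrapper in production,
# and a trivial stub in tests.
print(route_ticket(
    "Please refund order ORD-123456, it arrived broken.",
    summarize=lambda text: text[:60],
))
```

The design choice is the point: every narrowly focused block here can be tested in isolation, and the model sees only the small slice of the problem where it genuinely adds value.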
The Takeaway
Don’t drag LLMs into everything! Always check first whether the task can be solved with regular tools: simpler, faster, and cheaper. Leave AI as little room as possible, and the magic will emerge in the small but critically important areas where you apply it wisely.