Obvious disclaimer: AI is a rapidly evolving topic. Everything I am talking about here is in the context of LLMs in 2025 and early 2026.

LLMs are hands down the best way to learn about programming concepts and how to code - I am not even joking. An LLM can act as your personal mentor - it can show you pathways to learn concepts, it can clarify all your silly and insightful doubts, and you can even paste in code snippets and ask questions about them. You may say LLMs sometimes hallucinate confidently - but probing whether what they say is true is also part of the learning process.

I learned Haskell - a pure functional, mathematical (and very beautiful) language - thanks to Grok. Although I have been programming for many years now, I had no functional background. I had never even heard of the mathematical concepts (Type theory, Category theory, Functor, Monad, …) that are the core of Haskell, but now I am able to understand them in depth and write elegant Haskell code of my own¹!
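To give a taste of what those concepts look like in practice, here is a minimal sketch of the Functor and Monad abstractions using `Maybe`, Haskell's standard "computation that may fail" type (the `safeDiv` helper is my own illustration, not code from any project mentioned here):

```haskell
-- Division that fails gracefully instead of crashing on zero.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

main :: IO ()
main = do
  -- fmap (Functor): apply a pure function inside the Maybe context.
  print (fmap (+ 1) (safeDiv 10 2))       -- Just 6
  -- >>= (Monad): chain computations that may each fail.
  print (safeDiv 10 2 >>= safeDiv 100)    -- Just 20
  -- Any failure in the chain short-circuits the whole computation.
  print (safeDiv 10 0 >>= safeDiv 100)    -- Nothing
```

The elegance is that failure handling is woven through the chain automatically - no explicit null checks anywhere.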

Knowledge has never been this accessible before! In my opinion, LLMs as of 2025 and early 2026 are tools that enhance developers by providing them with in-depth knowledge, helping them understand code, and automating some trivial tasks.

Yes, “Vibe coding” is fine for trivial stuff, prototyping, writing personal automations, or for things where efficiency doesn’t matter as long as it works and gets done quickly. But for serious stuff? Hell nah! We are way too early for that!

Personally, I gave "Vibe coding" as fair a chance as I could - I can't go in fully blind, because I already know how to code. Trivial tasks aside, I feel it is faster to implement things myself than to do it agentically - waiting for the agent, understanding its half-baked code, and fixing the mess to meet my standards. Even though I gave clear documentation of the project, prompted every single detail of how I wanted it done, and even let it ask me questions iteratively before letting it run wild - it still makes a lot of mistakes! Agents can "feel" like they get you from start to finish really quickly, but people don't count the time and cost spent fixing the slop they generate and the things they break.

I am not gonna make the bold claim that LLMs can never write quality software. Rather, current LLMs have some fundamental problems. The first obvious problem is context size. As the logic gets bigger and bigger, the LLM starts making decisions that are really good in the microcosm but really bad in the macrocosm. This results in a lot of unnecessary computation or a poor implementation overall.

The second problem is cost. People don't realize that LLMs are being sold at their cheapest price right now, with tons of investment money thrown in to keep them affordable. If you think I'm lying, go download a small model using Ollama and see how much hardware it needs and how miserably slowly it runs on a typical machine (especially without a dedicated GPU) - and you're telling me ChatGPT offers state-of-the-art models that respond instantly to millions of users for free?

The whole situation is full of mess and noise right now. I don't blame people for labeling skeptics "too pessimistic" - that's how intense the marketing and propaganda for AI is. The whole scene is chaotic and unclear for society as a whole. But after the dust settles, people will realize that we are very early, and that as of now LLMs are tools for developers - not replacements.

-Siddeshwar

Footnotes:

  1. I wrote a simple Markdown-to-HTML converter called Handoc just a few days into Haskell.