The Real Answer: Choosing an LLM and Preparing for Implementation

Introduction

This is neither a hit piece on specific tools nor a takedown of the industry. My goal is to keep you from being taken in by widespread public misunderstandings about LLMs, and by the blatant false advertising peddled by opportunists.

What is an LLM?

We need to start with the basics. Large Language Models (LLMs) are represented most famously by ChatGPT. In this arena, OpenAI dominates the landscape in both concurrent users and subscriber counts. ChatGPT blew the industry open and continues to lead global growth by a wide margin.

What about Claude? It’s a great model, and I have nothing against it. However, Claude is a latecomer built by engineers who originally worked at OpenAI; it is, in essence, a derivative model. Globally speaking, it hasn’t yet reached the point where it can stand shoulder-to-shoulder with ChatGPT’s overall ecosystem.

Putting the tool comparisons aside for a moment, let’s focus on the real issue: how to actually use them.

First, forget about “free” tiers. The models available to everyone for free are either obsolete versions that nobody looks at anymore, or “high-speed” models that have had their memory heavily lobotomized to cut server costs. Frankly, you should consider them useless even for daily conversation—unless you’re the type of person who just wants to hear a machine endlessly flatter you with robotic responses. (Who knows if such people exist.) Everything I’m about to discuss applies exclusively to Pro-tier inference models.

I mentioned earlier that the tool you choose doesn’t make a massive difference. There’s a reason for that: there really isn’t a massive difference on the surface.

At their core, OpenAI might still be pulling ahead. I say “might” because I don’t know absolutely everything. When it comes to the specific features these companies roll out, OpenAI isn’t always the absolute first to hit the market. But don’t let that fool you.

“ChatGPT is dead.” “This new AI changes everything.” You’ve seen these videos and articles flooding the net, right? Do not fall for that empty, clickbait hype. I’ll say it again: OpenAI has a massive, enduring lead. The new features competitors boast about are often trivial on a coding level. While training volumes may differ, we can already predict that even those gaps will eventually close.

What matters is the core. If you want a specific feature, send them feedback. AI companies are scrambling to address user feedback right now, and they will listen. (Or at least, I choose to believe they do.)

The biggest misconception is that an LLM is a magic wand that will generate revolutionary ideas and instantly catapult your business to the top of the world. Or conversely, the fear that if your competitors use this “magic,” you’ll be left behind in the dust.

I reject this completely. No such magic exists.

AI is entirely dependent on the user. AI is a mirror. I’ve said this repeatedly in my videos and articles.

An LLM is built to support what you are trying to do. It doesn’t hide its capabilities: it has a core model, it has training, it has memory. However, the algorithm is not designed to push eccentric or unsolicited ideas onto a human. If it did that, it would just be garbage reflecting whoever originally planted those ideas.

An LLM seeks the optimal answer to your words. That is everything.

Give the LLM tasks it excels at, based on its characteristics. Over time, the AI will learn who you are and strive to be even more useful. Features are just characteristics; the true essence is that, ultimately, the AI endeavors to assist you. That is simply how it was built. It does not possess the intelligence to deceive people, attack them, or steal your job, your family, or your society.

Let’s bring this back to reality. If you are struggling to choose an LLM and are seriously considering implementing one, I believe your only real choices right now are ChatGPT or Gemini. Both have highly mature, excellent cores. Features can always be tacked on later. Furthermore, when it comes to things like coding, any of these models can do it—depending on how you use them.

That is, if you actually have the ability to code.

Let me be clear: if you have zero knowledge and can’t write a single line of code, you are approaching LLM implementation in the wrong order. LLMs only show their true potential when wielded by someone who can actually write code.

Sure, someone with no knowledge can throw out a vague spec and have the AI spit out an approximation. But that output is just the LLM doing its best to scrape average, generic standards from the vast ocean of the internet. More often than not, it is completely different from what you actually need. Without specific context from the user, LLM responses, code included, default to the broad, general standard.

Relying on an LLM to produce something revolutionary without your own expertise is just pure luck. It’s a gacha game. It’s like trying to find a single diamond in the ocean. You shouldn’t be gambling the future of your company or department on a gacha roll.

So, do it right. Train your engineers first, or hire external talent. Then consider implementing an LLM to see if it makes their jobs more efficient. Stay grounded. Just my two cents.

Souvenir

Beware of advertisements that rely on sloppy, empty comparisons just to bait beginners. Once you know what to look for, you’ll spot them instantly. Train your eyes to see the essence of things. If you’re still unsure, come to me.


Discussion will be added here later.


I was born and raised in Japan. After 30 years in the IT industry as an engineer and manager, I became fascinated by the true potential of technology and founded "havefunwithAIch." Current.