LLM Thoughts and Experimentations
For the past ~4 months, I have been using large language models (LLMs) for different types of tasks. Meanwhile, the industry kept marching at its own pace, with people being made redundant and LLMs cited as the reason. My personal experience tells me that is not the real reason.
In this post, I want to share loose thoughts on several of the points the current state of AI touches. This isn’t a structured piece, just some thoughts I felt were worth sharing.
For those interested, I have been using Claude Code and Gemini Pro. I have tried others, like the infamous ChatGPT and Grok, but the first two proved far superior in the tasks I gave them.
The results
For quite some time I held back from using these tools. ChatGPT was becoming popular and Cursor had just come out, but both looked like really poor smart auto-complete engines. With ChatGPT, I tried a few times to do some coding tasks, but I always ended up rewriting most of it on my own. In subjects where I knew what I wanted but didn’t know how to do it, it worked, but I could not tell whether the implementation was any good. I only used it in personal projects, by the way. Professionally, I have always preferred to understand what is going on, and I never settle for anything I don’t really understand - even now that I use LLMs on the job daily.
In short, LLMs were pretty bad in the beginning for most tasks I needed. Perhaps for some really specific tasks they were okay - like “write me a CSV parser for these headers” (a sketch of what I mean is below) - but I could already do the same myself at a fairly decent speed. So, frankly, it wasn’t worth it for me. Cursor and Copilot are two approaches I never really liked, to be honest, as they feel invasive, unnecessary, and don’t really provide that many productivity gains.
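To give an idea of the kind of “really specific task” I mean, here is a minimal sketch of such a parser in Python. The header names are invented for the example, not taken from any real project:

```python
import csv

# Hypothetical header names, invented for this example; the real task
# would specify whichever headers the file actually has.
EXPECTED_HEADERS = ["id", "name", "email"]


def parse_csv(path: str) -> list[dict[str, str]]:
    """Parse a CSV file into a list of row dicts, checking the headers first."""
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        if reader.fieldnames != EXPECTED_HEADERS:
            raise ValueError(f"Unexpected headers: {reader.fieldnames}")
        return list(reader)
```

Small, self-contained, and boring - exactly the sort of thing the early tools could produce, and exactly the sort of thing I could already write quickly myself.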
To this day, Cursor kinda sucks, in my opinion. It’s mostly an autocomplete engine, but an annoying one, constantly suggesting things you don’t want, interrupting your thought process, and often suggesting the wrong things. They have now added chat, and thankfully you can configure agents in there, which changes things a bit. But the interaction still isn’t great. Claude Code, however… a totally different breed of tool, a paradigm shift.
I ended up using Claude Code over everything else for two main reasons:
- Simple, yet great terminal design. I get to keep my tools without having to install third-party GUIs and what-not. Since I use vim for most of my development needs, it’s a really great integration.
- Claude Code gives you the ability to change its train of thought halfway through. You prompt something, and when you see it going off track or hallucinating (a sign your prompt isn’t great and is yielding poor results), you stop it, fix it by prompting again, and repeat until you are satisfied.
On the flip side, my only dislike is that it always tries to automate long-running tasks, which is not always desired. That’s something you can tell it not to do in the prompt, but I confess I almost always forget.
To summarize, LLMs can be great tools, but you need to know what you are doing. They still hallucinate, still come up with the wrong conventions or the wrong implementation, and the only way to really get the most out of them is to understand what you need, what they are doing, what they are implementing, and the impact that code will have. Otherwise, you are totally clueless about what’s happening; you’ll be happy as long as the end result works, only to eventually spend days debugging code you have zero clue about.
My workflow is typically this:
- Simplify the problem I need it to solve. Provide as much context and as many examples of what I want as possible.
- Ensure it will plan first before making changes. Adjust the plan as needed.
- Implement things one by one. I rarely use “auto-accept” - I really don’t like this option. When I do use it, I stay there watching what’s happening, ready to cancel if it goes haywire.
You can and should include a CLAUDE.md file in your repositories too, with things like the standards to follow, the conventions to respect, the steps to take for a specific task, and so much more.
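For illustration, here is a minimal sketch of what such a CLAUDE.md could contain. The specific tools, commands, and rules below are invented for the example; yours would reflect your own repository:

```markdown
# CLAUDE.md

## Conventions
- Python 3.12, formatted with black, linted with ruff (example tool choices).
- Keep functions small and focused; follow the existing module layout.

## Workflow
- Always propose a plan before editing files and wait for approval.
- Run `pytest` after every change; add or update tests alongside the code they cover.
- Never run destructive git commands (force push, reset --hard).
```

The idea is that conventions you would otherwise have to repeat in every prompt live in one place.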
It’s like a personal code monkey that can write code way faster than me (even though I can easily average 100+ words per minute :P) and that I can guide through tasks and challenges. But I also keep the practice of reviewing everything it writes, making sure I understand every line of code it wrote. The cool part is that you can ask it to review its own code and modify things as you see necessary.
But this workflow has the underlying assumption that I know what I am doing.
Recently, I’ve been working on a reverse engineering project, and I’m not a good reverser (that’s why I joined it in the first place). I saw people online recommending Gemini for reversing tasks. I don’t know what makes it so good at reversing, but I decided to give it a try - especially because it’s free and, for RE specifically, yields the best results of all the LLMs out there (not as great for programming). But I don’t know what I’m doing. I can’t tell when the LLM starts hallucinating, so I often end up in trial and error: I basically try the responses and see what works and what doesn’t. It’s still a great companion, but it’s not nearly as much of an enhancement as when you know what you are doing. The tl;dr of this first part is this:
LLMs are absolutely great if you are a subject matter expert and have a very precise objective for them. For exploratory work, or when you don’t know what you’re doing, they are dangerous: you will never actually learn the topic, nor will you understand when you’re messing up. Use them wisely, and don’t use LLMs as a search engine - what a terrible idea that is.
Impact on Education
The conclusion in the last section leads to this one: the impact on Education.
In the current day and age, I find many young people piggybacking on LLM tools to do their college and school assignments. I have mixed feelings about this, but for the most part I feel younger generations are getting used to cutting corners with LLMs, and they don’t really care whether they understand the topic. Maybe it is a generational thing, maybe we are passing the wrong message to them? As their student life goes on, this crutch isn’t really helping them improve and gain the skills needed for the workforce; instead they become less and less knowledgeable in their areas. It is concerning to me to think the next batch of junior developers or security people will not have the foundational skills required to grow.
On the flip side, I believe schools and colleges should really use technology and teach kids how to leverage it. But I grew up in the 2000s when the internet came around, and I know how schools adapted (terribly). It took them forever, they tried forbidding computers and the internet except for specific assignments, and the end result was that I was never taught how to use the internet, its search engines, and technology properly - aside from weird Word documents or Excel basics. Nowadays it is getting better, but just as schools found an equilibrium that works, they’re challenged again with a totally different tool - LLMs. Is the answer forbidding these tools? Or is the path something like Khanmigo - which, by the way, is an amazing tool.
Thinking about this holistically, what the aviation industry does is probably the best approach. When people get their pilot licenses, they usually start with small, manual aircraft. They learn theory, gain experience, and progressively include “big-boy plane” technology in their curriculum. When students want to move on to commercial flights, they learn everything with all the technology and resources available. The difference is that they spent hours in manual flight mode first. If a commercial flight has a technical problem and a pilot needs to fly or even land the plane without the technology, they’ll be able to do it - even though they’re not experienced at landing an immense plane manually, they know the process, they know how things work, they have the training. It definitely won’t be smooth, but at least the plane should land safely.
Younger generations are cutting corners, mostly “smartassing” their way through, and they’re skipping the basics. If one day they’re challenged without LLMs, or need to solve a problem that LLMs caused, they will be missing the “manual experience” pilots have. We can apply this to other industries too: doctors who rely on LLMs instead of searching information online won’t know the best sources of information; lawyers won’t know legal conventions and reasoning; engineers won’t know the foundations and basics.
Education should find a way to include technology in its curricula, and teach and build foundational skills on how to use these tools, the internet, and more to enhance students’ output. Forbidding will not work - as a matter of fact, it makes things worse - but we should be concerned about the lack of foundational skills if students keep using this crutch to get through their school and college work.
AI Doomsday
One really annoying thing at the time of writing, or rather ever since these tools started coming out, has been the constant doomsday talk from all sides.
At first it was that AI would replace all jobs. Then the narrative shifted to the AI bubble being about to burst.
As with any other technological or industrial revolution, there’s a bubble that eventually bursts. The dotcom bubble is probably one of the most notable examples. A lot of companies shifted to internet businesses, nobody knew where the industry was heading, and eventually things settled, with lots of businesses failing due to a lack of value proposition.
The same is happening now (2025). LLMs are starting to settle as a technology, and people are understanding where they’re useful and where they’re not. Pricing models of some providers are changing too, lots of people still blindly advocate for these tools and dump large sums of money into them, and we see massive layoffs around the globe in the name of “replacing devs with LLMs”… I personally think there are two reasons for this, though, and neither is about LLMs:
- Companies are financially adjusting for years of inefficiencies and need to improve their cost structure - laying off people is one option. Additionally, it’s not clear that companies ever fully recovered from the COVID era; some have been struggling since, laying people off every now and then to readjust.
- A crazy number of managers and C-levels are advocating for this replacement. I see two sides of the spectrum:
- Most people seem to forget that those advocating for LLMs likely have an interest in that. They work for or lead companies running multi-million dollar investments in AI. Of course they deeply believe all humans will be replaced. Of course they think you should use it.
- There are an awful lot of people simply regurgitating the speech and opinions of those high-profile people. They don’t care if it’s right, wrong, or whether it’s actually the solution. They just copy whatever Silicon Valley proposes. We’ve seen this with many topics before. When COVID came, tech companies were advocating for remote work, so everybody offered remote work. Then they realised they were dumping large amounts of cash into empty offices, so they started pulling people back into the office; and the herd followed.
At the current state of the technology, I don’t think it’s really going to replace anyone. At most, it will enhance individuals - which is crazy to think about. One thing I reflect on most is the speed at which humans operate in the 21st century. The technological advancements since the 90s have probably been exponential. We all do so much, we all achieve so much (yet it’s never enough :D), and yet the world keeps pressuring for more. With LLMs, work that once took weeks, if not months, now gets done in days or hours (keep the first section in mind, though). That’s amazing - and the downside isn’t that I’m going to get replaced, but instead that I’m going to have to deliver much more work, at a way higher speed, than before.