Is there anyone else out there like me who has yet to use AI? I find it interesting, but I have no strong feelings about it. Aside from a few images generated with early image generators, I haven't used anything else, not even ChatGPT. The one exception is AI search results, which I mostly scroll past.
I was an LLM skeptic for a long time. I still have a hard time trusting it to the same extent that most HN'ers appear to. (I would never use an LLM as a substitute for my own "voice" when writing, or put any AI-generated code into production.) But I think I have reached a middle ground: I basically use it as a first approximation when I am exploring something entirely new to me. For example, if I'm learning a programming language, I might ask it for ways to unpack an array into separate variables. Or if I'm reading an ingredients list, I'll ask what psyllium husk is. Basically, anything that's moderately easy to verify in case I suspect the LLM is hallucinating again.
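To illustrate the kind of easily verifiable question I mean: for the array-unpacking example, here's roughly the answer I'd expect (in Python), which takes seconds to confirm in a REPL:

```python
# Basic iterable unpacking: assign list elements to separate variables.
first, second, third = [1, 2, 3]
print(first, second, third)  # 1 2 3

# A starred target collects the remaining elements into a list.
head, *rest = [1, 2, 3, 4]
print(head)  # 1
print(rest)  # [2, 3, 4]
```

That's the middle ground: let the LLM point the way, then run the two lines myself before trusting them.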
These are things that I _used_ to simply ask a search engine, before Google results became 99% SEO-optimized blogspam and therefore useless for actual knowledge-seeking.
I think an important part of overcoming AI skepticism is to understand (at a very high level) how it all works so that you understand its limitations and know when you can and cannot trust it.
I haven't used any LLMs and I'm not missing anything. At some point in the last couple of years I entered a couple of prompts into ChatGPT using a friend's account, just to see it with my own eyes (and, funnily enough, got slop and hallucinations back, before "slop" was even the word for it). I also ran a prompt through llama.cpp just to see if it would work. It did. Great. I don't really care.
I've done a few experiments on a project I've been putting off; one involves parsing email. There are certainly some interesting use cases, but I've yet to figure out how to actually go about deploying a solution.
It does seem, to me, that the benefits are there, but not to the extent that the AI companies would have me believe.
I consider myself principally a software engineer, and like you I have avoided using AI aside from a few "haha look at that" images produced with DALL-E. Very rarely, the forced AI content at the top of a search results page has helped me refine the term or concept I was trying to look up.
I like doing the things that AI is supposedly good at. I like learning, I like understanding what I am doing, I like the satisfaction of finally getting to a solution after banging my head against the wall for hours. I like the occasional sensation of being hopelessly lost and finding my way to the light. I like writing text and code. I'm not terribly bored by boilerplate. I like going down rabbit holes and experiencing happy accidents. I can't in good conscience sign my name to something that I did not create and do not understand deeply enough to explain to anybody who asks. I like doing the hard thing.
The immediate response I'm sure that I will get is some variant of "well everybody else is using it as a force multiplier and the way you do it is making the job take too long." And maybe that's true. Maybe I don't care. I am a person who takes pride in my craft. I enjoy the act of making things. Some people don't, I guess.
The day may come where I am viewed as a dinosaur, where the way I work becomes fundamentally incompatible with the way the industry works. And if that day comes, bluntly, fuck the industry. I'll go fix air conditioners instead.