
Ask Slashdot: Do You Use AI - and Is It Actually Helpful?
"I wonder who actually uses AI and why," writes Slashdot reader VertosCay:
Out of pure curiosity, I have asked various AI models to create: simple Arduino code, business letters, real estate listing descriptions, and 3D models/vector art for various methods of manufacturing (3D printing, laser printing, CNC machining). None of it has been what I would call "turnkey". Everything required some form of correction or editing before it was usable.
So what's the point?
Their original submission includes more AI-related questions for Slashdot readers ("Do you use it? Why?") But their biggest question seems to be: "Do you have to correct it?"
And if that's the case, then when you add up all that correction time... "Is it actually helpful?"
Share your own thoughts and experiences in the comments. Do you use AI — and is it actually helpful?
Yes (Score:5, Interesting)
My second biggest use is to generate art. Flowers love you [deepai.org]. Time flies like an arrow [deepai.org].
Re:Yes (Score:5, Informative)
My experience with ChatGPT is that once things become really interesting, it fails to provide sources. That is not good at all.
Re:Yes (Score:4, Insightful)
Re: (Score:2)
It was for something I had been looking for, unsuccessfully, for a while. So all ChatGPT did was add to my frustration. I do not even know whether what it gave me was a hallucination or the truth. Pathetic.
As for Wikipedia, quality is usually high and you get tons of sources if you do not trust it.
Re: (Score:2)
It gets worse. You'll find the sources they do provide very often don't exist or don't support the nonsense the chatbot smeared on your screen.
Re: (Score:2)
Exactly. That is why the result was useless (or worse): High probability of hallucination and no way to verify.
Re: (Score:2)
While I think the high-end models are actually very impressive, the one on top of Google search results is a drooling idiot and regularly just makes up nonsense. Just yesterday I searched for information about radiation suits in the "Dune Awakening" game (very fun), and it utterly hallucinated a big bunch of nonsense about it being three-piece, with exchangeable breathing apparatuses and the like. None of that is true. I *think* it was cribbing details from Fallout. (It mentioned "Radaway" tablets, which are from Fallout.)
Re: (Score:2)
It can't even get the names of A-List Hollywood actors right if you give it the movie.
Re: (Score:2)
You're almost there. What you've discovered is that they don't actually summarize text. (They can't.) They don't do most of the things people think they do.
It's not a good implementation of generative AI.
They're all a waste of time. Call me an optimist, but I'm hoping that the nonsense generator at the top of Google's search results will teach more people to think critically and not just blindly accept whatever nonsense a chatbot tells them.
Re: (Score:2)
My biggest use is as a search engine. Sometimes I'll just use the AI answer Google automatically posts in the search results, or sometimes I'll ask ChatGPT specifically. I don't rely on the answer, I click on the link citations provided, which are sometimes good, sometimes bad. Google AI is still garbage tier, but it does provide link references.
This is also how I use AI: not for generation, but as an efficient first-step replacement for Google Search and Wikipedia. I never understood the criticism of ChatGPT for hallucinations, because every source of information I've ever come across in my life has some non-zero probability of inaccuracy, and intelligent reading requires questioning everything that I read. I suppose that the people who blindly accept all output from ChatGPT are the same ones that are easily swayed by anything they read in webpages, social media, books, newspapers, news shows, etc.
Re:Yes (Score:4, Insightful)
every source of information I've ever come across in my life has some non-zero probability of inaccuracy
Was that chance >60%? Because that's what you're getting with these silly chatbots.
Further, the nature and type of mistakes you'll find in other sources are fundamentally different from the kind chatbots produce. As for people's trust in their output, they look to the average person like an all-knowing guru. They blindly trust the output because the output looks credible, especially when it includes a "source" and because of all the media hype telling them how amazing they are.
intelligent reading requires questioning everything that I read
Nonsense. That would be paralyzing. We all take shortcuts. Even you.
the same ones that are easily swayed by anything they read in webpages, social media, books, newspapers, news shows, etc.
That's everyone. Again, including you. Pretending that you aren't likely makes you more susceptible, not less.
Re:Yes (Score:5, Informative)
My biggest use is as a search engine. Sometimes I'll just use the AI answer Google automatically posts in the search results, or sometimes I'll ask ChatGPT specifically. I don't rely on the answer, I click on the link citations provided, which are sometimes good, sometimes bad. Google AI is still garbage tier, but it does provide link references.
I was just about to post the same. Search engines, at least the traditional ones, are less than worthless.
As an example, I had wanted to find details of a man and wife who drove off a bridge because his GPS told him to. This was 10 or more years ago. Old-school search gave page after page after page after page of a family suing Google because a man drove off a bridge in 2023. Wasting my time.
DDG AI found it using the same search terms in one hit, with two links.
Re: (Score:2)
DDG AI found it using the same search terms in one hit, with two links.
Ah, I didn't realize DuckDuckGo has AI search now. Nice tip, I'll check it out, thanks.
Re: (Score:2)
100% of your searches reside on local storage only, but who knows what the chatbot API servers keep?
Re:Yes (Score:4, Informative)
Not to be that guy, but I put the phrase you wrote, "details of a man and wife who drove off a bridge because his GPS told him to", into Google, and the second result was about the accident in Chicago in 2015. YMMV.
Re: (Score:3)
Use the time modifier for your search. I find it helps dramatically when searching for older content. It seems newer content has a higher PageRank by default or something.
OpenAI CEO Altman says generations vary in use (Score:3)
https://tech.yahoo.com/ai/arti... [yahoo.com]
""Gross oversimplification, but like older people use ChatGPT as a Google replacement. Maybe people in their 20s and 30s use it as like a life advisor, and then, like people in college use it as an operating system," Altman said at Sequoia Capital's AI Ascent event earlier this month."
Not a surprise then that a lot of Slashdotters (who tend to be on the older side) emphasize search engine use.
Insightful video on other options for using AI:
"Most of Us Are Using AI Backwards -
Re: (Score:2)
Re: (Score:2)
I sometimes read the AI summaries from Qwant. It works by summarising the first page of search results, so these are the sources.
Re: (Score:3)
You do know that these things can't actually summarize, right? Start checking the "sources" against the output. It's astonishing just how much these things get wrong or fabricate completely. It's best to ignore the AI slop at the top and scroll down to the actual results.
Re: (Score:2)
For now I use it out of curiosity, before following the links, and I have nothing to complain about. There is a field to report mistakes; I think I used it twice in the last year. Also, I misrepresented Qwant: it actually shows the exact list of sources for the summary under "detailed answer".
Re: AI = glorified search engine (Score:2)
Re: AI = glorified search engine (Score:2)
Useful If Verified (Score:5, Interesting)
So I'm not a good programmer. At all. I know just enough to be dangerous, and am significantly slower than someone who would know what they're doing.
I've been using LLMs (no such thing as AI!) to help me write code for the past two weeks or so. I've been wanting to do these tasks for years but it would have taken me days or weeks to get to a solution to any one of them. I have revolutionized several rote tasks that I did on a regular basis and will save myself tons of time in the future. And I'm still working on more of them.
The real trick is that you have to verify everything that comes out of it. It's never right the first time, even if you do a good job of describing what you want. I've had to tweak the code directly in some cases where it just won't get it right. And I've seen it get stuck in loops where it just breaks worse and worse. Then it's necessary to grab the last working version of the code, start a new chat, and paste it in with the latest request.
You can't just say "write code to do x." You have to have some idea of what it's doing and be able to thoroughly test it and validate its results. Do that and it can be useful.
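The "verify everything" step the poster describes can be lightweight. A minimal sketch of the test-before-trust workflow, using a hypothetical "LLM-generated" helper (the function name and its spec are invented here for illustration):

```python
# Hypothetical LLM-generated helper: parse "HH:MM" into minutes since
# midnight. (Illustrative only; not actual model output.)
def to_minutes(hhmm: str) -> int:
    hours, minutes = hhmm.split(":")
    h, m = int(hours), int(minutes)
    if not (0 <= h < 24 and 0 <= m < 60):
        raise ValueError(f"invalid time: {hhmm!r}")
    return h * 60 + m

# Don't trust it: exercise the normal cases and the edges before using it.
assert to_minutes("00:00") == 0
assert to_minutes("23:59") == 1439
assert to_minutes("09:05") == 545
try:
    to_minutes("24:00")
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for out-of-range hour")
```

A handful of assertions like these takes a minute to write and catches most of the "never right the first time" failures before the code goes anywhere important.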
Re: (Score:2)
Don't try that approach with any (somewhat) advanced code. Above a very low complexity bound, checking code becomes a lot harder than writing it correctly from scratch. This has been known for ages.
Re:Useful If Verified (Score:5, Informative)
The point is, for me anyway, that I don't have to do the typing!
I *am* a good programmer, but there's a lot of boilerplate that sucks when you have to type it all out (or copy/paste from an old project or example then customize to fit the current project). I can tell it HOW to make a program work and get it to do all the boring parts. Sure, it's just going out there and copy/pasting whatever, but it's also doing a search and replace for me.
The only time I've had trouble with AI programming is when I ask it to do something clever. It doesn't do clever very well. But if I give it my "clever" code and ask it to do something boring and mundane with it? It does a great job with that.
Re: Useful If Verified (Score:3)
This. Claude is flat out great at the boilerplate. Certainly better than me: it comes up with more thorough solutions than I would ('cuz I'm lazy), and in instances where I'd have to dig into arcane platform docs (multiplatform stuff), it just knows.
Lets me get to the _real_ work of a project far quicker.
Re: Useful If Verified (Score:5, Insightful)
I think the point here is that it can be useful for someone who isn't really a coder, to produce just about usable simple things. But if you are a coder it's not really faster. Maybe in the short term, but in the long term it's worse.
As somebody who's been coding for 40 years, in multiple languages, I already have my own libraries full of the vast majority of stuff. These libraries are not just fully tested already, I understand them thoroughly and so can write the small changes to them needed for each project. I only need to test the changes, not the main body of code. If I use LLMs for it, I need to test every single bit of it every single time, and because I'm not learning the algorithm myself I make myself dependent on the LLM going forward. Even on those occasions where I don't already have something similar in a library it's better to write it and understand it myself rather than rely on an LLM, just in case I need something similar again in the future. And if I do the code I've saved will be fully tested in advance.
So in summary an LLM can be useful if you don't code often, and can speed up work in the short term. But using it will prevent you becoming fully experienced and will mean that you are always slower than someone like me no matter how many years' experience you get, because using it prevents you from gaining that experience. And it will always be useless for larger, more systems level projects because it is too unreliable and if you don't have my level of experience you won't be able to spot the thousands of subtle bugs it would put in such a project.
Not that most systems level projects aren't already full of these subtle bugs, but that's a whole different problem caused by Companies not wanting to pay people like me what I'm worth.
Re: Useful If Verified (Score:3)
With great respect for your skills and your long tenure as a coder, a couple other ideas here:
1. We're all (me included) looking at this at a moment in time, at "AI", however it's construed, in mid-2025.
Have LLMs and their recent ingestion of the Internet reached the point of diminishing returns, and have we reached a stasis? Or are we soon going to see more punctuated equilibrium, and AIs that are advanced enough to do the whole thing buglessly from a product description?
I really don't know, and I've seen pos
Re: (Score:2)
I'm as certain as it's possible to be that LLMs are not the path to more accurate AI. They will be a part of it, but statistics just doesn't work that way. They are as accurate as they will ever be. Something else is needed to correct their errors, and to the best of my knowledge nobody knows what that is yet. These companies keep claiming they've found it, but every time, their claims don't stand up to any scrutiny. That's not to say it won't happen in the near future, nobody can predict when that sort of
Re: Useful If Verified (Score:4, Informative)
Good and general thoughts.
I've come to mine from the position of having at least two prior careers ended by the march of technology. Not bitter about 'em, but they were kind of a scramble (for the first) and frustrating (for the second).
I used to make beautiful and challenging special effects slide work on what has been called variously a "Rostrum Camera", an "Animation Stand" or an "Optical Printer". I'd have held my skills up to anyone else's in my large metro at the time. Then came "Genigraphics" - GE's computer-generated slides. They looked frankly, like crap. Those in my profession thought "OK, but they'll never be so good they'll displace us". But they turned out to be "good enough" for most business people, much faster than what we were doing. And eventually I saw the initial images coming out of Photoshop and realized I had to scramble for a new career.
That career was rich computer-based multimedia (on CD-ROMs in the marketplace, but not exclusively). I took my creative skills, retooled with software for making images and 3D animations and multimedia programming. I worked happily enough in that for maybe another decade before the Web really started pushing on me. "OK, but the web will never be so good it displaces rich multimedia". But in fact, it was obviously good enough for everyone else. And it eventually became media-rich enough that the distinction just doesn't even apply.
In both examples (speed in the first, connectedness in the second), the advantage I didn't completely understand was what undermined my craft as a career. I was just as good on day N as on day N+1. But I came to understand that it's vanity to think I couldn't be replaced by something "lesser", even if it had an advantage I didn't.
"Oh sure, but they won't have..." - yes, it may take a while, but eventually, they will.
I've just gotten used to my cheese moving (and am happily no longer in the workforce during what's bound to be a turbulent time).
Worry for my kids, though.
Re: Useful If Verified (Score:5, Informative)
Dunno if you're a programmer or not, but if you're not extensively testing and verifying what you wrote before you put it in production, you're doing it wrong.
You have to verify and test *all* code. LLMs are great for producing a bunch of boilerplate code that would take a long time to write and is easily testable. The claim that LLMs are useless for programming flies in the face of everything happening in the ivoriest of towers of programming these days. Professionals in every major shop in the world use it now as appropriate. Sorry that makes you mad. I'm not young either. I've been producing C++ on embedded systems used by millions of people for 20+ years. Nobody doing serious programming takes the "LLMs are useless" opinion seriously anymore.
Re: (Score:2)
They're not useless, but they are overblown. I use them, of course. A nontrivial amount of the time, though, I wonder whether they have actually saved me time. They are good at API search, though I have been in the position of trying to use one to help with a particularly strange API and having the LLM cheerfully hallucinate a much better API.
I do like being able to give a vague description or an off-the-cuff code snippet and get back the correct API call, or land close enough that I can tear through the docs etc. for the rest
Re: (Score:2)
Which is exactly what makes "A.I." completely useless. If everything has to be extensively tested and verified, what is the point? If you can't trust the output then you might as well just do the work yourself.
Why would that make it useless? You cannot assume a human would do things right either. Unless you are doing something not that critical, you need to extensively test and verify anyway.
Furthermore, not everything needs the same level of "correctness". As an example, AI can implement prototypes or proofs of concept, which can have different requirements in terms of "correctness" and can be valuable even if not completely "right".
Re: (Score:3)
You cannot assume a human would do things right either.
That is the dumbest take. The kinds of mistakes that humans make are fundamentally different from the kinds of "mistakes" that LLMs make. Humans are also capable of evaluating their work and making appropriate changes and corrections. LLMs are not.
Why would that make it useless?
You wouldn't tolerate a 1% error rate from any other kind of program, let alone >60%. Using an LLM to write code requires more effort than just doing it yourself, not less. That makes it useless.
A lot (Score:2, Interesting)
I use ChatGPT daily, not for work, but to ask questions about topics I'm curious about that I'm sure experts don't have time for.
I also discuss topics that I find overly hyped, skewed, oversimplified, tribal, or full of agendas or propaganda in modern discourse -- and that's pretty much everything you read about online. For instance, I recently discussed the film Anora, which won five Oscars. It was horrible -- a poor B-movie at best. I have no idea what happened to the Academy, and ChatGPT agreed with me.
Re: A lot (Score:4, Interesting)
Re: A lot (Score:2)
I also discuss topics that I find overly hyped, skewed, oversimplified, tribal, or full of agendas or propaganda in modern discourse
Apologies, I took this to mean that you use AI to discuss topics that you find overly hyped, skewed, oversimplified, tribal, or full of agendas or propaganda in modern discourse. Not sure what led me to believe that.
I was wrong (Score:5, Interesting)
Re: I was wrong (Score:3)
Re: (Score:3)
Yep, I've got an example.
I work with a number of MS SQL databases that contain XML blobs. MS SQL knows how to parse XML blobs, allowing you to do things like locate a tag via XPath, change or delete XML elements, or turn the XML into a joined virtual table. The syntax is arcane and difficult, and lacks many features a programmer would expect. For example, in most situations you can't use a variable or a database field value as part of your XPath query or other XML manipulation command.
With Copilot, you can
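For readers outside T-SQL, the XPath idea above can be sketched in a few lines of Python; the stdlib ElementTree module supports a small XPath subset, and (unlike the T-SQL limitation described above) the predicate can be built from a runtime variable. The XML blob and names below are invented for illustration:

```python
# Locating an element by an attribute predicate, the same XPath idea
# the T-SQL XML methods expose. Data and names are hypothetical.
import xml.etree.ElementTree as ET

blob = """
<order id="42">
  <item sku="A1" qty="2"/>
  <item sku="B7" qty="5"/>
</order>
"""

root = ET.fromstring(blob)
wanted_sku = "B7"  # in Python the predicate can simply interpolate a variable
item = root.find(f".//item[@sku='{wanted_sku}']")
assert item is not None
print(item.get("qty"))  # prints 5
```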
Re: (Score:3)
CSS syntax is a great example. If you're trying to set styles, the syntax can be cryptic. Spaces are significant, so .style1 .style2 is different from .style1.style2. AI is really good at generating CSS given an English prompt.
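Spelled out, the space sensitivity mentioned above looks like this (class names are illustrative):

```css
/* Descendant combinator (with space): matches a .style2 element
   nested anywhere inside a .style1 element. */
.style1 .style2 { color: red; }

/* Compound selector (no space): matches a single element carrying
   both classes at once, e.g. <div class="style1 style2">. */
.style1.style2 { color: blue; }
```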
No, not useful (Score:3)
I asked for a paragraph about my home village. The AI told me very enthusiastically about my parish church, St Andrews.
In truth, that church has been there for 800 years and has NEVER been named St Andrews.
Writing it myself is quicker and more likely to be what I want.
I became a 100% vibe coder (Score:2, Interesting)
I use tools like Lovable all day long now; 100% of the code I produce is generated. I do not even type anymore, I use speech-to-text. I read the commits and suggest refactorings. The future of development is no more human teams, just one senior dev who leads a team of software agents.
Re: (Score:2)
Hmm. Sarcasm, troll or moron? Hard to tell.
ChatGPT has been a lifesaver for me (Score:4, Interesting)
Re: (Score:2)
If I were a young recent graduate, I would be very concerned about my future opportunities, and I remain very concerned about how such tools will have a detrimental effect on society, given how well AI can replace what previously required significant expertise and experience.
That would be true if all the graduate expected to do was write code; what the graduate needs to think is "how can I use what I've learned to identify solutions to problems by understanding what is needed, and then use the tools to deliver that." The jobs in danger are all those cheap coding shops that employ a bunch of people to churn out code; companies will be able to do more of that in-house or with shops that can understand the need and use tools to deliver it.
Re: (Score:2)
One generated by an LLM, I'm sure.
Absolutely not (Score:5, Interesting)
Classic AI, sure. LLMs: absolutely not. I avoid it on principle.
The crime of stealing people's works for "AI training" has also effectively stopped me from publishing any of my personal source code or CAD drawings on-line.
Not only do I not want it copied without my consent, I also have some source code that is very very ugly, optimised for specific compilers for embedded systems -- and which should never ever be used as a template by someone else, who is probably not very skilled ... or who is under more stress because their boss has got the delusion that fewer people would be able to do the same amount of work.
Here in the EU, there is a law that says a content creator has the right to opt out of "AI training" (except for the "for research" loophole).
However, even if there were well-established file formats and protocols for expressing opt-out (there aren't), the techbros have already shown over and over again that they don't give a sith -- they have even used pirated works.
Before there are proper non-AI licenses, and infrastructure in place restricting access to those who are bound by EU jurisdiction and are well-behaved, I don't see how the situation could improve.
Re:Absolutely not (Score:4, Interesting)
The crime of stealing people's works
LLMs don't need to be based on stealing people's work. The fact that a few models are shouldn't mean you avoid a technology on principle.
the techbros have already shown over and over again that they don't give a sith
If it is determined that they are actually "stealing" people's work, then they will learn that lesson painfully very soon. Especially now that the Mouse is involved, and it does look like Disney may actually have competent lawyers (unlike the group who sued Midjourney and didn't argue a case for either piracy or indirect infringement).
Re: (Score:2)
Re: (Score:2)
The reason that LLMs do these things is because it has been trained (for the overwhelming part) on human output.
No. Doing this is orthogonal to what it's trained on. Even if you trained AI only on exceptional output which was all created by software it would still hallucinate things that don't exist. This is a fundamental limit to the technology and until it's addressed somehow (whether stopping it, making it recognize it and recalculate, or something else — whatever solution actually is feasible) even feeding it 100% high-quality input will still result in hallucinatory output.
Re: (Score:2)
Indeed. Hallucinations are a primary characteristic of any LLM and _cannot_ be avoided.
AI for debugging (Score:2)
Sometimes I use AI to write quick functions with bounds checking, it does a decent job, but it always needs tweaking.
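As a concrete, hypothetical example of the kind of bounds-checked helper described here (the name and the clamping behavior are invented for illustration):

```python
# A minimal sketch of an LLM-style bounds-checked helper: clamp
# out-of-range arguments instead of raising. Names are hypothetical.
def safe_slice(data: list, start: int, count: int) -> list:
    if count <= 0 or start >= len(data):
        return []
    start = max(start, 0)
    return data[start:start + count]

assert safe_slice([1, 2, 3, 4], 1, 2) == [2, 3]
assert safe_slice([1, 2, 3], 2, 10) == [3]     # count clamped to the end
assert safe_slice([1, 2, 3], -5, 2) == [1, 2]  # negative start clamped to 0
assert safe_slice([1, 2, 3], 5, 2) == []       # start past the end
```

The "always needs tweaking" part usually shows up in exactly these clamping decisions: whether out-of-range input should raise, clamp, or return empty.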
AI shines for debugging, when I have something that "should" work, I copy/paste the errors or problem description into AI and ask "How to Fix ELI5"
Once, AI even suggested there was a bug in a library, and it was right.
Note: I pay for Google Gemini 2.5 Pro and find it the best for coding compared to Chat-GPT.
Gemini does a good job of breaking everything down.
Here is an example from a mysql ses
I use it plenty (Score:5, Interesting)
Re: (Score:2)
Just be careful: https://slashdot.org/story/25/... [slashdot.org]
Example (Score:4, Interesting)
Yesterday, I wanted an example of a PIO program to generate high-resolution, arbitrary-waveform (variable frequency) PWM output using DMA on an RP2350 MCU. Gemini 2.5 Pro generated a correct, working, basic example. I refined it further by changing and adding requirements to deal with end state, corner cases, and the deficiencies in the generated code. The final result works perfectly. Guessing here: it took perhaps 25% of the time to accomplish this that it would have without "AI." And while PIO appears simple, there is actually a lot of subtlety in the hardware that a new PIO programmer, without an AI, would likely either not know provides useful capabilities or would overlook, yielding less-than-optimal results and/or actual flaws. By using Gemini, I believe the code is on par with what a PIO expert would have produced.
So yes, it is actually helpful. And no, I don't believe this makes me or others like me obsolete: non-technical people cannot achieve the same results in reasonable amounts of time because they don't even know what to ask for, much less how to evaluate the answers.
Re: (Score:2)
I also think of AI chats as very Socratic; it's really about the quality and refinement of your questions. The AI dialog or conversation is how you refine what it is you are asking. Once you get to that good question, AI will get you a good answer for it.
Awe *hit - Microsoft was Right with "Co-Pilot" (Score:3)
Search string: "chatgpt.com/?q=%s&hints=search&model=gpt-4o" (where model is your preferred model)
2: Coding. All-of-the-things: C+/VB/VS, shell scripts, Perl, WordPress plugins, PHP, JS, HTML, and even slumming with CSS. For short stuff it rarely needs "correcting", but sometimes it does. It eliminates all the big issues when laying out a task. Its guidance turns a day job into an hour job.
Where it shines for me is in languages that I use very rarely but a project requires. (I loathe JS with the fire of a thousand suns; ChatGPT makes it tolerable.)
3: Ideation. It's so good at this. It will think of stuff you miss. It has been way, way overbaked, but MS did have it right when they said AI was a co-pilot. I can't count the times I've had a phone meeting in 5 minutes and asked ChatGPT to give me things to talk about and ask.
4: Content. Yes, we generate all sorts of content with it from blog posts to schedules, to project overviews.
Obviously, everything needs human oversight, but sure, I've asked it to write stuff and dropped it in for a test without looking at it line-for-line.
Last week I asked it to lay out a modest WordPress plugin, and without prompting specifically for code, it generated it, and it ran and worked the first go-around.
sonnet (Score:2)
I once asked chatgpt to write a sonnet about constipation. I remember being amused by the result.
Mostly, though, I try to avoid the brain atrophy device.
Re: (Score:2)
Mostly, though, I try to avoid the brain atrophy device.
Indeed. These things are dangerous. Use only when needed and then carefully is the name of the game.
Yes, I do. For what it's good at. (Score:2)
Let's start with a disclaimer: this is about LLMs, not AI, which is a very large field of stuff that existed way before LLMs became the latest craze, and which will hopefully keep existing until we get something impressive out of it.
Also, some issues with LLMs stem not from their output but from the economic models around them, privacy issues, licensing issues, etc. To address some of those, most of our daily stuff is done on locally running models on cheap hardware, so no 400B-parameter stuff.
There's four
Re: (Score:2)
I also dipped into so-called "vibe coding" using commercial offers (my small 12B model would not have been fair in that regard). I spent a few hours trying to make something I would consider basic, easy to find many examples of, and relatively useful: a browser extension that intercepts a specific download URL and replaces it with something else. At every step of the way, it did progress. However, it was a mess. None of the initial suggestions were OK by themselves; even the initial scaffolding (a modern browser extension is made of a JSON manifest and a mostly blank script) would not load without me putting more info into the "discussion". And even pointing out the issues (non-existent constants, invalid JSON properties, mismatched settings, broken code) would not always lead to a proper fix until I spelled it out. To make it short: it wasn't impressive at all. And I'm deeply worried that people find this kind of fumbling acceptable. I basically ended up telling the tool "write this, call this, do this, do that", which is in no way more useful than writing the stuff myself. At best it can be an accessibility thing for people who have a hard time typing, but it's not worth consideration if someone's looking for a "dev" of some sort.
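For reference, the scaffolding really is that small. A minimal, hypothetical Manifest V3 manifest for this kind of extension might look something like the following (the exact permissions depend on how the interception is done, e.g. via declarativeNetRequest redirect rules; all names here are placeholders):

```json
{
  "manifest_version": 3,
  "name": "URL Replacer (sketch)",
  "version": "0.1",
  "description": "Hypothetical skeleton: intercept one download URL.",
  "permissions": ["declarativeNetRequest"],
  "host_permissions": ["https://example.com/*"],
  "background": { "service_worker": "background.js" }
}
```

That a model can fumble even a skeleton this small, emitting invalid properties, is the point of the complaint above.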
Disappointing, but expected. And a generic URL replacer is not even a "hard" project by any means. I recently talked to somebody with a similar experience. First, the model omitted 8 of the 12 steps the solution would have needed and, when asked, claimed that this was correct. And after finally and laboriously being coerced into solving the full problem and then being asked for test code, it provided test code for the first 2 (!) of the 12 steps and then claimed this was 100% test coverage. In the end, this may be somewhat helpful
Rarely and mostly no (Score:2)
The one thing I found it useful for is searching for something when I did not know the specific term. Then I can ask, do a real search afterwards with the term, and get context, references, limitations, etc. For anything else, AI is a waste of time. The one time I found something I had searched for conventionally without results, the AI was unable to provide an original reference, i.e. the result was worthless.
Let's face it, this AI hype is a dud. Much the same as all previous ones. Something will come out of it,
Yes and Yes (Score:4, Interesting)
I think that the mistake people make is that they assume that if ChatGPT can write a good 10 line function, then it can write a good 1000 line suite of functions. It cannot.
It's a tool, and it does very well when it is used in the context in which it performs well. The wrong tool will do poorly at any task.
I would estimate that ChatGPT saves me about 5-6 hours a week. Time that I can spend on my higher skills rather than my grunt code-monkey skills.
All the time (Score:2)
I use LLMs - primarily ChatGPT - in programming all the time.
It's a tool. I don't care if it "really" understands anything, that's not what I use tools for.
Yes, it really saves me time overall. I can't help it if it somehow doesn't suit your problem space (possible), or if you don't know how to or refuse to learn how to use it properly (likely).
It's an excellent writing assistant / editor (Score:3)
I routinely use LLMs for cleaning up prose in various reports. As a proofreader and editor, it's the best tool I've ever found.
For creating reports from scratch, you have to be careful. It's not perfect, but it will get you 85% to 95% of the way there on a first cut once you feed it the data. It's no replacement for a human, but it does save a lot of time.
I also use it for email. When I have an important email to send out that must be "perfect", I'll run my draft through ChatGPT and ask it for a review, and to show me what it changed, and why. More than once, it has caught a missing word or a clumsy phrase.
So yes, LLMs are not a gimmick, and they do increase productivity if used correctly.
Yes, of course you need to iterate/correct (Score:2)
Even if we had full-on human level AGI (which we don't), you'd still need to iterate and correct.
You wouldn't expect to give a non-trivial programming task to another human without having to iterate on understanding of requirements and corner cases, making changes after code review, bugs needing to be caught by unit/system tests, etc.
If you hired a graphic artist to create a graphic to illustrate an article, a book, or an advertising campaign, you also wouldn't expect them to get it right the first time.
Ai = Cognitive Mirror (Score:2)
What I find current LLMs the most useful at is to review my work to give a different and useful perspective.
The LLM doesn't generate any of the work that goes into my output; it augments my own work to make it better, while I determine what to use (I am in the driver's seat).
The LLM has been trained on a huge quantity of good-quality human-generated text, and it surprises me how good it can be at offering associations I had not considered.
AI is great for project localization (Score:5, Interesting)
With some prompt refinements, corrections, etc, done in 2 hours. Saved me 2 weeks of manual work.
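A minimal sketch of that kind of batch localization, with the actual LLM call stubbed out (the helper names and prompt strategy here are assumptions, not the poster's actual setup):

```python
import json

def llm_translate(text: str, target_lang: str) -> str:
    """Stub standing in for a real LLM API call, e.g. one prompt per string
    asking for a translation that preserves placeholders like {name}."""
    return f"[{target_lang}] {text}"  # placeholder output

def localize(strings: dict, target_lang: str) -> dict:
    """Translate every UI string while keeping the keys stable, so the
    localized file drops into the project unchanged."""
    return {key: llm_translate(value, target_lang) for key, value in strings.items()}

en = {"greeting": "Hello, {name}!", "quit": "Quit"}
de = localize(en, "de")
print(json.dumps(de, indent=2))
```

The prompt-refinement loop the poster mentions would sit inside `llm_translate`: re-prompting when a placeholder gets mangled, then spot-checking the output.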
Re: (Score:2)
Great use case.
What's the point? (Score:3)
Correcting something is quite often far less work than creating it from scratch. Obviously this will depend on the context of what the AI is being asked to do.
Yep, super helpful (Score:2)
I do not (Score:4, Interesting)
I go out of my way to avoid AI-generated slop, going as far as searching with -AI tags. This is mainly because I do not want to be exposed to bias that pollutes AI models (it is black Nazis all the way down) and it is way too much effort to validate each output to filter these out.
Code examples (Score:2)
My #1 use for ChatGPT is "show me an example of some C code that implements functionality (X)".
Then I can read that example, research the APIs it is calling (to make sure they actually exist and are appropriate for what I'm trying to accomplish), and use it to write my own function that does something similar. This is often much faster than my previous approach (googling, asking for advice on StackOverflow, trial and error).
Yes and kinda (Score:2, Interesting)
I use LLMs for my own amusement, they are useful for that.
I have little to no visual memory so I struggle to draw even simple things. I can do drafting-style sketches OK because they are logical, but just remembering the shape of a curve even between looking at the thing and looking at the paper is difficult. So I use AI to generate images and feel zero remorse about it, since it lets me do something I cannot otherwise do — envision a concept I can imagine, but cannot picture.
For answering questions,
Yes and no (Score:4, Informative)
I use AI for the following tasks:
(1) Generating cartoon-like illustrations for my web site, because I have no artistic talent and the output of AI is good enough for my tiny personal site.
(2) Transcribing speech to text to generate video captions (using whisper-cpp [github.com]).
(3) Generating speech from text with Piper TTS [ttstool.com] because it generates really high-quality speech.
(4) Removing the backgrounds from images with RemBG [github.com] because it does a decent job with very little effort.
All of the processing is done locally on my computer (except for the image generation in point 1.) I do not use LLMs such as ChatGPT or coding assistants because I find them useless and untrustworthy.
I use it when programming (Score:2)
I use it much like a community to get help with programming, asking questions such as "what does this statement do?", "what are ways to do X?", "what is wrong with this code...?", or "can I accomplish X with this code...?" I don't use it to simply 'vibe code' but to help me get over a hurdle when I can't figure something out; much like how a community works, but with instant responses.
I view it as an adjunct and learning tool; not to simply produce cut and paste code. If it generates code
Use it sometimes for work (Score:2)
The last time, I thought: let's see if ChatGPT has some extra suggestions on the topic. I gave it the text of my presentation. It answered, "your presentation is already reasonably well structured, and I will make the text flow a bit better". Not being a native English speaker, I appreciated the edits; it then suggested some extra slides, as I had asked. I checked these with ChatGPT and with Google, and found them niche yet quite well
Treat AI as your junior / intern (Score:2)
It will think outside the box, and maybe even come up with novel methods.
But none of this is of any use unless you are senior / experienced enough to ask the right questions
and guide your junior/intern towards optimal solutions.
In short, it's a tool, but it's a useless tool if the operator has little or no domain knowledge in the areas for which the tech is being used.
I'm writing a novel (Score:4, Interesting)
Running analysis (Score:4, Interesting)
We are in chemical manufacturing. We use it to answer questions like "what are possible causes of variation among all lots of product code X". It analyzes all lots of ingredients and process conditions, and gives us options for further examination.
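A crude version of that lot-to-lot screen can be sketched with the standard library alone; the lot records and field names below are invented for illustration, not the poster's actual data:

```python
from statistics import pvariance

# Hypothetical lot records: process conditions logged per lot of product X.
lots = [
    {"lot": "A1", "temp_c": 71.2, "mix_min": 30, "ingredient_batch": "B7"},
    {"lot": "A2", "temp_c": 74.8, "mix_min": 31, "ingredient_batch": "B7"},
    {"lot": "A3", "temp_c": 70.9, "mix_min": 45, "ingredient_batch": "B9"},
]

# Rank numeric process conditions by variance across lots: the noisiest
# variables are the first candidates for further examination.
numeric = ["temp_c", "mix_min"]
ranked = sorted(numeric, key=lambda f: pvariance([lot[f] for lot in lots]),
                reverse=True)
print(ranked)  # -> ['mix_min', 'temp_c'] for this toy data
```

An LLM adds value on top of this sort of screen by suggesting *causes* worth checking (ingredient batch changes, seasonal effects), which plain statistics won't volunteer.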
Used it to configure my home server (Score:2)
Using the Warp terminal, it's actually nice for a non-admin to ask Claude questions and get some really helpful work done.
I don't know every in and out of Linux server config, and my day job doesn't depend on my knowing them. So I can connect and ask Claude, "is this service running?" or "my Plex server isn't responding, can we run some diagnostics?"
Is it perfect? No. Is it better than me? Oh god, yes. Is my system a mission-critical server? Not in the slightest.
But it's fun, and I actually can get a working docker serve
sure, it needs editing/oversight (Score:2)
Yeah, needs a ton of oversight, etc, I have to read it carefully...
So I wanted to do a thing, and I could easily have made a working prototype of this thing in a couple of days, maybe a week tops. So I had claude do it and spent nearly an entire afternoon fixing things or making claude fix them because I understood the problem better.
But at the end of the day, I'd spent an afternoon doing something I would have expected to take a week.
Useful for targeted tasks (Score:2)
I've used AI for two tasks where I found it very useful, and some very minor ones as well.
1) I had to write some code to invert a matrix in C. I knew the code was out there, but Google's search is so polluted today I could not find it. ChatGPT immediately returned working code. I noticed it did not calculate the determinant, so I asked it for that, and it modified the code to do so. As I say, I know that code is out there somewhere in a book, probably a dozen books, but I can no longer find older topics bec
I Use It Git Commit (Score:3)
Tried it. (Score:2)
I try every now and then (Score:2)
It is completely worthless for anything that isn't completely trivial.
Even the glorified pay-for-use versions.
Software, electronic hardware, history questions - no matter what you ask it, the "AI" is worthless.
There is limited utility in some machine learning methods when sifting through bad data from physics experiments, but even there the applications need triple checking.
The only thing where the "AI" excels is creating trolling posts for maga-like readers.
Outsourced Junior Developer (Score:2)
Like the title says, it's effectively an outsourced junior dev. It has no experience of its own and is just parroting what it read, imperfectly. Plus, you probably aren't generating any institutional knowledge.
Source code + compiler = reliable output; prompts + LLM-du-jour = inconsistency. Vibe coding is going to leave a lot of people with zero actual knowledge of their product, as if they had outsourced the development.
Because that's what they did.
So with that out of the way... it's really good at first-dra
Yes, but... (Score:2)
I don't use AI in cases of creative expression. Using AI for that feels downright dishonest, lazy, and unsatisfying.
However, when I need information that is not straightforward, I use it, but only if search engines don't give relevant information from which I can extrapolate things on my own.
I prefer to use my own brains.
Test it occasionally (Score:2)
So I could see them being useful for a complete beginner (as long as the beginner checks the results) or as a way to fill in some common boilerplate stuff, but it's not really useful for anything beyond that.
Most of the code LLMs have given me (in Bash,
Very helpful if the subject is also your expertise (Score:3)
Yes, I use AI some (Score:2)
Great for genetics. (Score:2)
ChatGPT Pro Deep Research has been very useful when researching my own genome. I use chatgpt to write the prompts for Deep Research, and AFAICT the results are accurate. From time to time, I ask Grok to check the output for correctness.
Useless at finding media (Score:2)
I read a great deal, particularly speculative fiction. I've tried repeatedly to use various AI tools to track down a short story or book where I can remember details of the story, and perhaps the rough publishing date, but not the title, author, or publisher.
AI appears to be utterly useless for this. It will either come up empty and make vague suggestions, like "Look at fantasy recommendations for the date range you've provided". Or worse, it will focus on the wrong book (say, one published in the 2020s, wh
Speech (Score:2)
Is there anything better for TTS, STT, and translation?
Problem I recently worked on:
Here is 20,000 hours of audio. Make it queryable.
Back in the 90's when I was doing some grad classes in Information Retrieval this would have been considered nearly insurmountable.
On the other hand I had 16MB then, now this takes 128GB of RAM.
That's mostly Python being obscene with RAM.
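Once the speech-to-text pass is done, one bare-bones way to make that much audio queryable is a plain inverted index from words to (file, timestamp) postings; the transcript segments below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical output of a speech-to-text pass: timestamped segments.
segments = [
    ("tape01.wav", 12.5, "the quarterly budget review"),
    ("tape01.wav", 97.0, "budget cuts in engineering"),
    ("tape02.wav", 3.2, "vacation policy review"),
]

# Word -> list of (file, start_seconds) postings.
index = defaultdict(list)
for fname, start, text in segments:
    for word in set(text.lower().split()):
        index[word].append((fname, start))

def query(word):
    """Return every (file, timestamp) where the word was spoken."""
    return sorted(index.get(word.lower(), []))

print(query("budget"))  # -> [('tape01.wav', 12.5), ('tape01.wav', 97.0)]
print(query("review"))  # -> [('tape01.wav', 12.5), ('tape02.wav', 3.2)]
```

At 20,000 hours the index itself stays small; the RAM problem the poster hits is more likely the transcription models and Python object overhead than the retrieval structure.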
Normal (Score:2)
"Everything required some form of correction or editing before it was usable."
So just like a human assistant.
I'm using it for Open Source Programming (Score:3)
Re: its just auto predict (Score:2)
Re: (Score:2)
Probably an AI editor.