
Ask Slashdot: Do You Use AI - and Is It Actually Helpful? 248

"I wonder who actually uses AI and why," writes Slashdot reader VertosCay: Out of pure curiosity, I have asked various AI models to create: simple Arduino code, business letters, real estate listing descriptions, and 3D models/vector art for various methods of manufacturing (3D printing, laser printing, CNC machining). None of it has been what I would call "turnkey". Everything required some form of correction or editing before it was usable.

So what's the point?

Their original submission includes more AI-related questions for Slashdot readers ("Do you use it? Why?"). But their biggest question seems to be: "Do you have to correct it?"

And if that's the case, then when you add up all that correction time... "Is it actually helpful?"

Share your own thoughts and experiences in the comments. Do you use AI — and is it actually helpful?
This discussion has been archived. No new comments can be posted.

  • Yes (Score:5, Interesting)

    by phantomfive ( 622387 ) on Sunday June 29, 2025 @07:46AM (#65483742) Journal
    My biggest use is as a search engine. Sometimes I'll just use the AI answer Google automatically posts in the search results, or sometimes I'll ask ChatGPT specifically. I don't rely on the answer, I click on the link citations provided, which are sometimes good, sometimes bad. Google AI is still garbage tier, but it does provide link references.

    My second biggest use is to generate art. Flowers love you [deepai.org]. Time flies like an arrow [deepai.org].
    • Re:Yes (Score:5, Informative)

      by gweihir ( 88907 ) on Sunday June 29, 2025 @08:53AM (#65483852)

      My experience with ChatGPT is that once things become really interesting, it fails to provide sources. That is not good at all.

      • Re:Yes (Score:4, Insightful)

        by phantomfive ( 622387 ) on Sunday June 29, 2025 @09:09AM (#65483884) Journal
        If that happens, you just have to look elsewhere. It can't be trusted. (To be fair, Wikipedia has the same limitation).
        • by gweihir ( 88907 )

          It was for something I had been looking for, unsuccessfully, for a while. So all ChatGPT did was add to my frustration. I do not even know whether what it gave me was a hallucination or the truth. Pathetic.

          As for Wikipedia, quality is usually high and you get tons of sources if you do not trust it.

      • by narcc ( 412956 )

        It gets worse. You'll find the sources they do provide very often don't exist or don't support the nonsense the chatbot smeared on your screen.

        • by gweihir ( 88907 )

          Exactly. That is why the result was useless (or worse): High probability of hallucination and no way to verify.

    • While I think the high-end models are actually very impressive, the one on top of Google search results is a drooling idiot and regularly just makes up nonsense. Just yesterday I searched for information about radiation suits in the "Dune Awakening" game (very fun), and it utterly hallucinated a big bunch of nonsense about it being three-piece, with exchangeable breathing apparatuses and the like. None of that is true. I *think* it was cribbing details from Fallout. (It mentioned "Radaway" tablets, which are fro

      • Yeah Google's Search AI is the worst...

        It can't even get the names of A-List Hollywood actors right if you give it the movie.
      • by narcc ( 412956 )

        You're almost there. What you've discovered is that they don't actually summarize text. (They can't.) They don't do most of the things people think they do.

        It's not a good implementation of generative AI.

        They're all a waste of time. Call me an optimist, but I'm hoping that the nonsense generator at the top of Google's search results will teach more people to think critically and not just blindly accept whatever nonsense a chatbot tells them.

    • My biggest use is as a search engine. Sometimes I'll just use the AI answer Google automatically posts in the search results, or sometimes I'll ask ChatGPT specifically. I don't rely on the answer, I click on the link citations provided, which are sometimes good, sometimes bad. Google AI is still garbage tier, but it does provide link references.

      This is also how I use AI: not for generation, but as an efficient first-step replacement for Google Search and Wikipedia. I never understood the criticism of ChatGPT for hallucinations, because every source of information I've ever come across in my life has some non-zero probability of inaccuracy, and intelligent reading requires questioning everything that I read. I suppose that the people who blindly accept all output from ChatGPT are the same ones that are easily swayed by anything they read in webpages, social media, books, newspapers, news shows, etc.

      • Re:Yes (Score:4, Insightful)

        by narcc ( 412956 ) on Sunday June 29, 2025 @10:48AM (#65484082) Journal

        every source of information I've ever come across in my life has some non-zero probability of inaccuracy

        Was that chance >60%? Because that's what you're getting with silly chatbots.

        Further, the nature and type of mistakes you'll find in other sources are fundamentally different from the kind chatbots produce. As for people's trust in their output, they look to the average person like an all-knowing guru. They blindly trust the output because the output looks credible, especially when it includes a "source" and because of all the media hype telling them how amazing they are.

        intelligent reading requires questioning everything that I read

        Nonsense. That would be paralyzing. We all take shortcuts. Even you.

        the same ones that are easily swayed by anything they read in webpages, social media, books, newspapers, news shows, etc.

        That's everyone. Again, including you. Pretending that you aren't likely makes you more susceptible, not less.

    • Re:Yes (Score:5, Informative)

      by Ol Olsoc ( 1175323 ) on Sunday June 29, 2025 @09:17AM (#65483896)

      My biggest use is as a search engine. Sometimes I'll just use the AI answer Google automatically posts in the search results, or sometimes I'll ask ChatGPT specifically. I don't rely on the answer, I click on the link citations provided, which are sometimes good, sometimes bad. Google AI is still garbage tier, but it does provide link references.

      I was just about to post the same. Search engines, the traditional ones at least, are less than worthless.

      As an example, I wanted to find details of a man and wife who drove off a bridge because his GPS told him to, 10 or more years ago. Old-school search gave me page after page after page of a family suing Google because a man drove off a bridge in 2023. Wasting my time.

      DDG AI found it using the same search terms in one hit, with two links.

      • DDG AI found it using the same search terms in one hit, with two links.

        Ah, I didn't realize DuckDuckGo has AI search now. Nice tip, I'll check it out, thanks.

        • https://duckduckgo.com/chat

          100% of your searches reside on local storage only, but who knows what the chatbot API servers keep?
      • Re:Yes (Score:4, Informative)

        by fredrikv ( 521394 ) on Sunday June 29, 2025 @11:24AM (#65484156)

        Not to be that guy, but I put the phrase you wrote, "details of a man and wife who drove off a bridge because his GPS told him to", into Google, and the second result was about the accident in Chicago in 2015. YMMV.

      • Use the time modifier for your search. I find it helps dramatically when searching for older content. It seems newer content has a higher PageRank by default or something.

    • https://tech.yahoo.com/ai/arti... [yahoo.com]
      "Gross oversimplification, but like older people use ChatGPT as a Google replacement. Maybe people in their 20s and 30s use it as like a life advisor, and then, like people in college use it as an operating system," Altman said at Sequoia Capital's AI Ascent event earlier this month.

      Not a surprise then that a lot of Slashdotters (who tend to be on the older side) emphasize search engine use.

      Insightful video on other options for using AI:
      "Most of Us Are Using AI Backwards -

  • Useful If Verified (Score:5, Interesting)

    by Trip Ericson ( 864747 ) on Sunday June 29, 2025 @07:53AM (#65483746) Homepage

    So I'm not a good programmer. At all. I know just enough to be dangerous, and am significantly slower than someone who would know what they're doing.

    I've been using LLMs (no such thing as AI!) to help me write code for the past two weeks or so. I've been wanting to do these tasks for years but it would have taken me days or weeks to get to a solution to any one of them. I have revolutionized several rote tasks that I did on a regular basis and will save myself tons of time in the future. And I'm still working on more of them.

    The real trick is that you have to verify everything that comes out of it. It's never right the first time, even if you do a good job of describing what you want. I've had to tweak the code directly in some cases where it just won't get it right. And I've seen it get stuck in loops where it just breaks worse and worse. Then it's necessary to grab the last working version of the code, start a new chat, and paste it in with the latest request.

    You can't just say "write code to do x." You have to have some idea of what it's doing and be able to thoroughly test it and validate its results. Do that and it can be useful.

    • by gweihir ( 88907 )

      Don't try that approach with any (somewhat) advanced code. Above a very low complexity bound, checking code is a lot harder than writing it correctly from scratch. This has been known for ages.

  • A lot (Score:2, Interesting)

    I use ChatGPT daily, not for work, but to ask questions about topics I'm curious about that I'm sure experts don't have time for.

    I also discuss topics that I find overly hyped, skewed, oversimplified, tribal, or full of agendas or propaganda in modern discourse -- and that's pretty much everything you read about online. For instance, I recently discussed the film Anora, which won five Oscars. It was horrible -- a poor B-movie at best. I have no idea what happened to the Academy, and ChatGPT agreed with me.

    • Re: A lot (Score:4, Interesting)

      by fluffernutter ( 1411889 ) on Sunday June 29, 2025 @08:00AM (#65483756)
    Well sure, it has unlimited patience and knows the entire Internet. Though I'm concerned you use it for confirmation and value judgements, because that can easily be manipulated by the vendor of the AI. Eventually I'm sure Pepsi and Coke will be paying to have AI consider either one the best soft drink in the world and answer accordingly.
  • I was wrong (Score:5, Interesting)

    by fluffernutter ( 1411889 ) on Sunday June 29, 2025 @07:55AM (#65483750)
    I was originally an AI hater, but I have to admit I have saved hours with it. It doesn't do my coding for me per se, but asking it to 'provide an example of x' or 'how would I do x' has got me, in a single answer, what would have taken an hour or more with Google.
  • by CaptainOfSpray ( 1229754 ) on Sunday June 29, 2025 @07:56AM (#65483754)
    It is never OK to use the AI output directly without checking it thoroughly. There are factual errors far too often.

    I asked for a paragraph about my home village. The AI told me very enthusiastically about my parish church, St Andrews.

    In truth, that church has been there for 800 years and has NEVER been named St Andrews.

    Writing it myself is quicker and more likely to be what I want.
  • I use tools like Lovable all day long now; 100% of the code I produce is generated. I do not even type anymore, I use speech-to-text. I read the commits and suggest refactorings. The future of development is no more human teams, just one senior dev who leads a team of software agents.

  • by dhjdhj ( 1355079 ) on Sunday June 29, 2025 @08:07AM (#65483764)
    This tool has both saved me time and also rescued me from some emergencies. Examples: I used it to write GUI functions. It's not that I don't know how to do such things, but it can do in 1 minute what would take me a half hour, so it's a massive time saver. I just have to check the code and sometimes tweak it a bit. I consider this to be "busywork", but that alone saves thousands of dollars a month in developer costs.

    Last year I deleted an entry in a SQL database table, not realizing that cascading delete was enabled, and I ended up deleting license keys for about 2,000 customers. There was no backup, because the person who implemented the backup had recently died and his backup implementation had started failing due to changes in SSH keys. However, the information was available in a database managed by our payment fulfillment company via a pretty complicated REST API. I was able to instruct ChatGPT to "write a python function to retrieve customer information from that company and generate a SQL insert query to merge it back into our own database". After a little tweaking, the damn thing gave me a function that just worked. Since I rarely use Python and SQL, it would have taken me hours if not days to both decipher that REST API, figure out the right SQL query, and write the python code. Major customer catastrophe averted.

    Separately from the above, I've found it very useful for quick how-to tips when using applications that I don't use much, e.g. how do I do "X" in Photoshop, etc.

    If I was a young recent graduate, I would be very concerned about my future opportunities, and I remain very concerned about how such tools will have a detrimental effect on society, given how well AI can replace what previously required significant expertise and experience.
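    The recovery described above can be sketched in Python. This is a minimal illustration, not the poster's actual script: the API endpoint, authentication scheme, field names, and table layout are all hypothetical stand-ins.

```python
import json
import urllib.request

API_BASE = "https://fulfillment.example.com/api/v1"  # hypothetical endpoint


def fetch_customers(api_key: str) -> list[dict]:
    """Pull customer records from the fulfillment company's REST API."""
    req = urllib.request.Request(
        f"{API_BASE}/customers",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["customers"]


def sql_quote(value: str) -> str:
    """Minimal SQL string escaping: double any embedded single quotes."""
    return "'" + str(value).replace("'", "''") + "'"


def build_insert(customer: dict) -> str:
    """Generate an INSERT that merges one recovered record back into a
    (hypothetical) local licenses table, updating on key collision."""
    cols = ("email", "license_key", "purchase_date")
    values = ", ".join(sql_quote(customer[c]) for c in cols)
    return (
        f"INSERT INTO licenses ({', '.join(cols)}) VALUES ({values}) "
        f"ON DUPLICATE KEY UPDATE license_key = VALUES(license_key);"
    )
```

    Even with a sketch like this, the poster's caveat holds: you would review the generated SQL before running it against a production database.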
    • If I was a young recent graduate, I would be very concerned about my future opportunities, and I remain very concerned about how such tools will have a detrimental effect on society, given how well AI can replace what previously required significant expertise and experience.

      That would be true if all the graduate expected to do was write code; what the graduate needs to think is "how can I use what I've learned to identify solutions to problems, by understanding what is needed and then using the tools to deliver that?" The jobs in danger are all those cheap coding shops that employ a bunch of people to churn out code; companies will be able to do more of that in-house, or with shops that can understand the need and use tools to deliver it.

  • Absolutely not (Score:5, Interesting)

    by Misagon ( 1135 ) on Sunday June 29, 2025 @08:09AM (#65483768)

    Classic AI, sure. LLMs: absolutely not. I avoid them on principle.

    The crime of stealing people's works for "AI training" has also effectively stopped me from publishing any of my personal source code or CAD drawings on-line.
    Not only do I not want it copied without my consent, I also have some source code that is very very ugly, optimised for specific compilers for embedded systems -- and which should never ever be used as a template by someone else, who is probably not very skilled ... or who is under more stress because their boss has got the delusion that fewer people would be able to do the same amount of work.

    Here in the EU, there is a law that says a content creator has the right to opt out of "AI training" (except for the "for research" loophole).
    However, even if there were well-established file formats and protocols for expressing opt-out (there aren't), the techbros have already shown over and over again that they don't give a sith -- they have even used pirated works.
    Until there are proper non-AI licenses, and infrastructure in place restricting access to those who are bound to EU jurisdiction and are well-behaving, I don't see how the situation could improve.

    • Re:Absolutely not (Score:4, Interesting)

      by thegarbz ( 1787294 ) on Sunday June 29, 2025 @09:33AM (#65483926)

      The crime of stealing people's works

      LLMs don't need to be based on stealing people's work. The fact that a few models are shouldn't mean you avoid a technology on principle.

      the techbros have already shown over and over again that they don't give a sith

      If it is determined that they are actually "stealing" people's work, then they will learn that lesson painfully very soon. Especially now that the Mouse is involved, and it does look like Disney may actually have competent lawyers (unlike the group who sued Midjourney and didn't argue a case either for piracy or for indirect infringement).

    • So in that case you shouldn't get advice from humans either (or hire humans to write code, books, summaries, etc.); they a) learn from other people's work, b) are fallible, and c) from time to time some of them will exploit others' work and claim it as their own. The reason that LLMs do these things is that they have been trained (for the overwhelming part) on human output.
      • The reason that LLMs do these things is that they have been trained (for the overwhelming part) on human output.

        No. Doing this is orthogonal to what it's trained on. Even if you trained AI only on exceptional output which was all created by software it would still hallucinate things that don't exist. This is a fundamental limit to the technology and until it's addressed somehow (whether stopping it, making it recognize it and recalculate, or something else — whatever solution actually is feasible) even feeding it 100% high-quality input will still result in hallucinatory output.

        • by gweihir ( 88907 )

          Indeed. Hallucinations are a primary characteristic of any LLM and _cannot_ be avoided.

  • Sometimes I use AI to write quick functions with bounds checking, it does a decent job, but it always needs tweaking.

    AI shines for debugging, when I have something that "should" work, I copy/paste the errors or problem description into AI and ask "How to Fix ELI5"

    Once, AI even suggested there was a bug in a library, and it was right.

    Note: I pay for Google Gemini 2.5 Pro and find it the best for coding compared to Chat-GPT.

    Gemini does a good job of breaking everything down.

    Here is an example from a mysql ses

  • I use it plenty (Score:5, Interesting)

    by bjoast ( 1310293 ) on Sunday June 29, 2025 @08:21AM (#65483782)
    I use ChatGPT, but mostly just talk to it. I discuss matters with it and basically use it as a 24/7 tutor. I find it very helpful. Using it for code generation has been so-so, however. It is so agreeable that when you communicate your own misunderstandings to it regarding certain constructs, it will hallucinate a universe where that misunderstanding is true, and derive the rest from it. Not all programs it generates work, and most don't fit my very opinionated code style. I still prefer to write my own code completely from scratch, and don't even use LLMs for drafts or boilerplate. I think it's about mindset. You have to treat it like a very knowledgeable, easy-to-work-with co-worker, who may be wrong sometimes.
  • Example (Score:4, Interesting)

    by Tailhook ( 98486 ) on Sunday June 29, 2025 @08:22AM (#65483786)

    Yesterday, I wanted an example of a PIO program to generate high-resolution, arbitrary-waveform (variable frequency) PWM output using DMA on an RP2350 MCU. Gemini 2.5 Pro generated a correct, working, basic example. I refined it further by changing and adding requirements to deal with end state, corner cases, and the deficiencies in the generated code. The final result works perfectly. Guessing here: it took perhaps 25% of the time to accomplish this that it would have without "AI." And while PIO appears simple, there is actually a lot of subtlety in the hardware that a new PIO programmer, without an AI, would likely either not know provides useful capabilities or would overlook, yielding less than optimal results and/or actual flaws. By using Gemini, I believe the code is on par with what a PIO expert would have produced.

    So yes, it is actually helpful. And no, I don't believe this makes me or others like me obsolete: non-technical people cannot achieve the same results in reasonable amounts of time because they don't even know what to ask for, much less how to evaluate the answers.

    • Yes, this is my experience also, AI will help you tackle something you might not have attempted on your own because you didn't know how to begin. But once you see a good example (geared toward your own goal) you can see how to finish it off.

      I also think of AI chats as very Socratic; it's really about the quality and refinement of your questions. The AI dialog or conversation is how you refine what it is you are asking. Once you get to that good question, AI will get you a good answer for it.
  • 1: My default search engine. It rarely needs "correcting". Very little query/prompt refinement as a search engine. It is always miles ahead of Google (which seems to need refining more often than not).

    Search string: "chatgpt.com/?q=%s&hints=search&model=gpt-4o" (where model is your preferred model)

    2: Coding. All-of-the-things: C+/VB/VS, shell scripts, Perl, WordPress plugins, PHP, JS, HTML, and even slumming with CSS. For short stuff it rarely needs "correcting", but sometimes it does. It eliminates all the big issues when laying out a task. Its guidance turns a day job into an hour job.

    Where it shines for me is in languages that I use very rarely but a project requires. (I loathe JS with the fire of a thousand suns; ChatGPT makes it tolerable.)

    3: Ideation. It's so good at this. It will think of stuff you miss. It has been way, way overbaked, but MS did have it right when they said AI was a co-pilot. I can't count the times I've had a phone meeting in 5 minutes and asked ChatGPT to give me things to talk about and ask.

    4: Content. Yes, we generate all sorts of content with it from blog posts to schedules, to project overviews.

    Obviously, everything needs human oversight, but sure, I've asked it to write stuff and dropped it in for a test without looking at it line-for-line.

    Last week I asked it to lay out a modest WordPress plugin, and without prompting specifically for code, it generated it, and it ran and worked the first go-around.

  • I once asked ChatGPT to write a sonnet about constipation. I remember being amused by the result.

    Mostly, though, I try to avoid the brain-atrophy device.

    • by gweihir ( 88907 )

      Mostly, though, I try to avoid the brain-atrophy device.

      Indeed. These things are dangerous. Use only when needed and then carefully is the name of the game.

  • Let's start with a disclaimer: this is about LLMs, not AI, which is a very large field of stuff that existed way before LLMs became the latest craze, and hopefully will keep existing until we get something impressive out of it.

    Also, some issues with LLMs stem not from their output, but from the economic models around them, privacy issues, licensing issues, etc. To address some of those, most of our daily stuff is done on locally running models on cheap hardware, so no 400B-parameter stuff.

    There's four

    • by gweihir ( 88907 )

      I also dipped into so-called "vibe coding" using commercial offers (my small 12B model would not have been fair in that regard). I spent a few hours trying to make something I would consider basic, easy to find many examples of, and relatively useful: a browser extension that intercepts a specific download URL and replaces it with something else. At every step of the way, it did progress. However, it was a mess. None of the initial suggestions were OK by themselves; even the initial scaffolding (a modern browser extension is made of a JSON manifest and a mostly blank script) would not load without me putting more info into the "discussion". And even pointing out the issues (non-existent constants, invalid JSON properties, mismatched settings, broken code) would not always lead to a proper fix until I spelled it out. To make it short: it wasn't impressive at all. And I'm deeply worried that people find this kind of fumbling acceptable. I basically ended up telling the tool "write this, call this, do this, do that", which is in no way more useful than writing the stuff myself. At best it can be an accessibility thing for people who have a hard time typing, but it's not worth consideration if someone's looking for a "dev" of some sort.

      Disappointing, but expected. And a generic URL replacer is not even a "hard" project by any means. I recently talked to somebody with a similar experience. First, the model omitted 8 of the 12 steps the solution would have needed and, when asked, claimed that this was correct. And after finally and laboriously being coerced into solving the full problem and then being asked for test code, it provided test code for the first 2 (!) of the 12 steps and then claimed this was 100% test coverage. In the end, this may be somewhat hel

  • The one thing I found it useful for is searching for something when I do not know the specific term. Then I can ask, do a real search afterwards with the term, and get context, references, limitations, etc. For anything else, AI is a waste of time. The one time I found something I had searched for conventionally without results, the AI was unable to provide an original reference, i.e. the result was worthless.

    Let's face it, this AI hype is a dud. Much the same as all previous ones. Something will come out of it,

  • Yes and Yes (Score:4, Interesting)

    by Wolfling1 ( 1808594 ) on Sunday June 29, 2025 @09:05AM (#65483876) Journal
    I use ChatGPT almost daily. I use it to write short scripts that have very well defined behaviours. Sure, it makes mistakes, and I have to check its code, but it saves me a heap of time looking up obscure functions. And it comments its code quite nicely. Sometimes, the comments are a bit inane, but I've seen so much uncommented code in my life, seeing any comments at all is a breath of fresh air.

    I think that the mistake people make is that they assume that if ChatGPT can write a good 10 line function, then it can write a good 1000 line suite of functions. It cannot.

    It's a tool, and it does very well when it is used in the context in which it performs. The wrong tool will do poorly at any task.

    I would estimate that ChatGPT saves me about 5-6 hours a week. Time that I can spend on my higher skills rather than my grunt code-monkey skills.
  • I use LLMs - primarily ChatGPT - in programming all the time.

    It's a tool. I don't care if it "really" understands anything, that's not what I use tools for.

    Yes, it really saves me time overall. I can't help it if it somehow doesn't suit your problem space (possible), or if you don't know how to or refuse to learn how to use it properly (likely).

  • by timholman ( 71886 ) on Sunday June 29, 2025 @09:15AM (#65483892)

    I routinely use LLMs for cleaning up prose in various reports. As a proofreader and editor, it's the best tool I've ever found.

    For creating reports from scratch, you have to be careful. It's not perfect, but it will get you 85% to 95% of the way there on a first cut once you feed it the data. It's no replacement for a human, but it does save a lot of time.

    I also use it for email. When I have an important email to send out that must be "perfect", I'll run my draft through ChatGPT and ask it for a review, and to show me what it changed, and why. More than once, it has caught a missing word or a clumsy phrase.

    So yes, LLMs are not a gimmick, and they do increase productivity if used correctly.

  • Even if we had full-on human level AGI (which we don't), you'd still need to iterate and correct.

    You wouldn't expect to give a non-trivial programming task to another human without having to iterate on understanding of requirements and corner cases, making changes after code review, bugs needing to be caught by unit/system tests, etc.

    If you hired a graphic artist to create you a graphic to illustrate an article, or book, or some advertising campaign, then you also wouldn't expect to get it right first time.

  • What I find current LLMs the most useful at is to review my work to give a different and useful perspective.

    The LLM doesn't generate any work that goes into my output; it augments my own work to make it better, while I determine what to use (I am in the driver's seat).

    The LLM has been trained on a huge quantity of good-quality human-generated text, and it surprises me how good it can be at offering associations that I had not considered.

  • by dremon ( 735466 ) on Sunday June 29, 2025 @09:27AM (#65483912)
    One area where it shines is localization. "Hey bot, find all English strings in the code enclosed in double quotes, for each string create a key in the form 'category-description', for example error-not-found, create a translation file in Fluent format in the form of "key = value" for all those keys and their values. Do that for the following locales: ..."

    With some prompt refinements, corrections, etc., it was done in 2 hours. Saved me 2 weeks of manual work.
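    The mechanical half of that localization job (finding the quoted strings and emitting Fluent "key = value" lines) can be sketched in a few lines of Python. This is a rough illustration of the workflow described above, not the poster's actual prompt output; the key-naming scheme and the regex are assumptions, and real code would need smarter handling of escapes and non-UI strings.

```python
import re


def slugify(text: str, category: str) -> str:
    """Build a 'category-description' Fluent key from a string's first words."""
    words = re.findall(r"[A-Za-z]+", text.lower())[:3]
    return category + "-" + "-".join(words)


def extract_strings(source: str) -> list[str]:
    """Find double-quoted string literals in the source code (naive: no escapes)."""
    return re.findall(r'"([^"\\]+)"', source)


def to_fluent(source: str, category: str = "ui") -> str:
    """Emit a Fluent translation file: one 'key = value' line per string."""
    lines = [f"{slugify(s, category)} = {s}" for s in extract_strings(source)]
    return "\n".join(lines)
```

    Run against a snippet like `show_error("File not found")`, this would emit a line such as `error-file-not-found = File not found`, which is the shape the commenter describes feeding to the translation locales.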

  • by thegarbz ( 1787294 ) on Sunday June 29, 2025 @09:27AM (#65483914)

    Correcting something is quite often far less work/effort than creating it from scratch. Obviously this will depend on the context of what the AI is being asked to do.

  • LLMs are thorough and merciless critics/code reviewers. Once I got past my ego and let a machine pass judgment on my code/prose/whatever, I gained an invaluable sounding board.
  • I do not (Score:4, Interesting)

    by sinij ( 911942 ) on Sunday June 29, 2025 @09:37AM (#65483934)
    My view is that AIs are both irredeemably poisoned and also a privacy nightmare designed to profile users at an unprecedented level of detail.

    I go out of my way to avoid AI-generated slop, going as far as searching with -AI tags. This is mainly because I do not want to be exposed to bias that pollutes AI models (it is black Nazis all the way down) and it is way too much effort to validate each output to filter these out.
  • My #1 use for ChatGPT is "show me an example of some C code that implements functionality (X)".

    Then I can read that example, research the APIs it is calling (to make sure they actually exist and are appropriate for what I'm trying to accomplish), and use it to write my own function that does something similar. This is often much faster than my previous approach (googling, asking for advice on StackOverflow, trial and error).

  • Yes and kinda (Score:2, Interesting)

    by drinkypoo ( 153816 )

    I use LLMs for my own amusement, they are useful for that.

    I have little to no visual memory so I struggle to draw even simple things. I can do drafting-style sketches OK because they are logical, but just remembering the shape of a curve even between looking at the thing and looking at the paper is difficult. So I use AI to generate images and feel zero remorse about it, since it lets me do something I cannot otherwise do — envision a concept I can imagine, but cannot picture.

    For answering questions,

  • Yes and no (Score:4, Informative)

    by dskoll ( 99328 ) on Sunday June 29, 2025 @09:50AM (#65483952) Homepage

    I use AI for the following tasks:

    (1) Generating cartoon-like illustrations for my web site, because I have no artistic talent and the output of AI is good enough for my tiny personal site.

    (2) Transcribing speech to text to generate video captions (using whisper-cpp [github.com]).

    (3) Generating speech from text with Piper TTS [ttstool.com] because it generates really high-quality speech.

    (4) Removing the backgrounds from images with RemBG [github.com] because it does a decent job with very little effort.

    All of the processing is done locally on my computer (except for the image generation in point 1.) I do not use LLMs such as ChatGPT or coding assistants because I find them useless and untrustworthy.

  • I use it much like a community to get help with programming, asking questions such as "what does this statement do?", "what are ways to do X?", "what is wrong with this code...?", or "can I accomplish X using this code...?" I don't use it to simply 'vibe code' but to help me get over a hurdle when I can't figure something out; much like how a community works, but with instant responses.

    I view it as an adjunct and learning tool, not a way to simply produce cut-and-paste code. If it generates code

  • I needed to do a few presentations on technical topics over the last few months.
    The last time, I thought: let's see if ChatGPT has some extra suggestions on the topic. I gave it the text of my presentation. It answered, "your presentation is already reasonably well structured, and I will make the text flow a bit better"; not being a native English speaker, I appreciated the edits. Then it suggested some extra slides, as I had asked. I checked these with ChatGPT and Google, and found them niche yet quite well
  • It will think outside the box, and maybe even come up with novel methods.
    But none of this is of any use unless you are senior / experienced enough to ask the right questions
    and guide your junior/intern towards optimal solutions.

    In short, it's a tool, but it's a useless tool if the operator has little or no
    domain knowledge in the areas for which the tech is being used.

  • I'm writing a novel (Score:4, Interesting)

    by Iamthecheese ( 1264298 ) on Sunday June 29, 2025 @10:05AM (#65483984)
    AI is absolutely useless for writing a decent story. But it's brilliant at smashing through writer's block (suggesting places my story could go next), and its research features are great at synthesizing the collective body of writing advice from across the internet.
  • Running analysis (Score:4, Interesting)

    by djp2204 ( 713741 ) on Sunday June 29, 2025 @10:08AM (#65483994)

    We are in chemical manufacturing. We use it to answer questions like "what are possible causes of variation among all lots of product code X?" It analyzes all the ingredient lots and process conditions and gives us options for further examination.

  • Using the Warp terminal, it's actually nice for a non-admin to ask Claude questions and get some really helpful work done.

    I do not know every in and out of Linux server config; my day job doesn't depend on it. So I can connect up and ask Claude "is this service running?" or "my Plex server isn't responding, can we run some diagnostics?"

    Is it perfect? No. Is it better than me? Oh god yes. Is my system a mission-critical server? Not in the slightest.

    But it's fun, and I actually can get a working docker serve

  • Yeah, needs a ton of oversight, etc, I have to read it carefully...

    So I wanted to do a thing, and I could easily have made a working prototype of this thing in a couple of days, maybe a week tops. So I had Claude do it, and I spent nearly an entire afternoon fixing things, or making Claude fix them because I understood the problem better.

    But at the end of the day, I'd spent an afternoon doing something I would have expected to take a week.

  • I've used AI for two tasks where I found it very useful, and some very minor ones as well.

    1) I had to write some code to invert a matrix in C. I knew the code was out there, but Google's search is so polluted today I could not find it. ChatGPT immediately returned working code. I noticed it did not calculate the determinant, so I asked it for that, and it modified the code to do so. As I say, I know that code is out there somewhere in a book, probably a dozen books, but I can no longer find older topics bec

  • by charles05663 ( 675485 ) on Sunday June 29, 2025 @10:19AM (#65484020) Homepage
    Started using it for Git commits. Saves me the frustration of figuring out what to say.
  • I've tried it for a variety of things, mostly when I couldn't figure them out myself. It tends to hallucinate when it doesn't know the answer. It would be more useful if it just said, "I don't know" on occasion.
  • It is completely worthless for anything that isn't completely trivial.

    Even the glorified pay-for-use versions.

    Software, electronic hardware, history questions - no matter what you ask it, the "AI" is worthless.

    There is limited utility in some machine learning methods when sifting through bad data from physics experiments, but even there the applications need triple checking.

    The only thing where the "AI" excels is creating trolling posts for maga-like readers.

  • Like the title says, it's effectively an outsourced junior dev. It has no experience of its own and is just parroting what it read, imperfectly. Plus, you probably aren't generating any institutional knowledge.

    Source code + compiler = reliable output; prompts + LLM-du-jour = inconsistency. Vibe-coding is going to leave a lot of people with zero actual knowledge of their product, as if they had outsourced the development.

    Because that's what they did.

    So with that out of the way... it's really good at first-dra

  • I don't use AI in cases of creative expression. Using AI for that feels downright dishonest, lazy, and unsatisfying.

    However, when I need some information that is not straightforward, I use it, though only if search engines don't give relevant information from which I can extrapolate things on my own.

    I prefer to use my own brains.

  • I've tested a variety of chat bots on a regular basis to see how they perform in terms of coding or suggesting logic solutions. I've tried ChatGPT, Copilot, and Llama. Most of the time they are pretty bad, particularly doing anything above entry-level.

    So I could see them being useful for a complete beginner (as long as the beginner checks the results) or as a way to fill in some common boilerplate stuff, but it's not really useful for anything beyond that.

    Most of the code LLMs have given me (in Bash,
  • As a Linux sysadmin, I've been using Google's Gemini a lot recently, asking it many Ansible-related questions, for example. Its solutions are sometimes pretty far off the mark, but more often than not its answers are accurate and very helpful, sometimes offering solutions I would not have thought of. It's kind of like having a flawed savant as your coding assistant: you can't always take what it gives you too seriously, but it's certainly useful as long as you keep that in mind.
  • I noticed DuckDuckGo has an AI-generated answer feature on their search engine, and I used it a few times with good results. I think AI has its uses, such as a smarter search engine for inquiries. But as the previous thread here noted, it is not healthy to lean too heavily on AI, especially on a personal level, as it has made some people go off the rails into crazyland.
  • ChatGPT Pro Deep Research has been very useful when researching my own genome. I use chatgpt to write the prompts for Deep Research, and AFAICT the results are accurate. From time to time, I ask Grok to check the output for correctness.

  • I read a great deal, particularly speculative fiction. I've tried repeatedly to use various AI tools to track down a short story or book where I can remember details of the story, and perhaps the rough publishing date, but not the title, author, or publisher.

    AI appears to be utterly useless for this. It will either come up empty and make vague suggestions, like "Look at fantasy recommendations for the date range you've provided". Or worse, it will focus on the wrong book (say, one published in the 2020s, wh

  • Is there anything better for TTS, STT, and translation?

    Problem I recently worked on:

    Here is 20,000 hours of audio. Make it queryable.

    Back in the 90's when I was doing some grad classes in Information Retrieval this would have been considered nearly insurmountable.

    On the other hand I had 16MB then, now this takes 128GB of RAM.

    That's mostly Python being obscene with RAM.

  • "Everything required some form of correction or editing before it was usable."

    So just like a human assistant.

  • by DMJC ( 682799 ) on Monday June 30, 2025 @01:50AM (#65485310)
    I'm making years of progress in minutes, developing tools for open source game development: asset viewers, asset editors, etc. ChatGPT can output a 3D model viewer in seconds when you tell it what API you want to use. My API of choice has been Gtkmm-3.0. I describe the UI I want, usually with a tree view or canvas view. Once I've defined the UI, I define what content I want in it, and the system generates it. It is working incredibly well. I've also developed a working audio control panel for GNUstep that seamlessly integrates into SystemPreferences.app and wraps pipewire; next will be Network-Manager and Wayland-randr/X11-Randr control panels for monitor setup. It's bringing the dream of a Mac-like open source desktop closer to reality.

"Sometimes insanity is the only alternative" -- button at a Science Fiction convention.

Working...