• @turtlesareneat@discuss.online
        3
        20 days ago

        Depends on the implementation.

        Just about everyone I know loves how iPhones can take a picture and readily identify a plant or animal. That’s actually delightful. Some AI tech is great.

        Now put an LLM chatbox where people expect a search box, and see what happens… yeah that shit sucks.

      • @Hudell@lemmy.dbzer0.com
        2
        edit-2
        20 days ago

        Whenever I ask random people who are not in IT, they either don’t know about it or they love it.

        People who don’t know what it is are often amazed by how much it looks like a real person and don’t even think about the answers it gives being right or not.

        • @RedditIsDeddit@lemmy.world
          -1
          19 days ago

          I work in IT and have recently been having a lot of fun leveraging AI in my home lab to program things, as well as doing audio/video generation (which is a blast, honestly). So… I mean, I think it really depends on how it’s integrated and used.

          • @froztbyte@awful.systems
            1
            19 days ago

            “I work in IT” says the rando, rapaciously switching between support tickets in their web browser and their shadow-IT personal browser

            “I’ve been having a lot of fun” continues the rando, in a picture-perfect replica of every other fucking promptfan posting the same selfish egoist bullshit

            “So… I mean, I think it really depends on how it’s integrated and used” says the fuckwit, who can’t think two words beyond their own fucking nose

  • @RvTV95XBeo@sh.itjust.works
    3
    21 days ago

    Maybe I’m just getting old, but I honestly can’t think of any practical use case for AI in my day-to-day routine.

    ML algorithms are just fancy statistics machines, and to that end, I can see plenty of research and industry applications where large datasets need to be assessed (weather, medicine, …) with human oversight.

    But for me in my day to day?

    I don’t need a statistics bot making decisions for me at work, because if it was that easy I wouldn’t be getting paid to do it.

    I don’t need a giant calculator telling me when to eat or sleep or what game to play.

    I don’t need a Roomba with a graphics card automatically replying to my text messages.

    Handing over my entire life’s data just so an ML algorithm might be able to tell me what that one website I visited 3 years ago that sold kangaroo testicles was isn’t a filing system. There’s nothing I care about losing enough to go to the effort of setting up copilot, but not enough to just, you know, bookmark it or save it with a clear enough file name.

    Long rant, but really, what does copilot actually do for me?

    • @sem@lemmy.blahaj.zone
      2
      20 days ago

      Before ChatGPT was invented, everyone kind of liked how you could type “bird” into Google Photos, and it would show you some of your photos that had birds.

    • @Don_alForno@feddit.org
      0
      21 days ago

      Our boss all but ordered us to have IT set this shit up on our PCs. So far I’ve been stalling, but I don’t know how long I can keep doing it.

    • @ByteJunk@lemmy.world
      0
      21 days ago

      I use it to speed up my work.

      For example, I can give it a database schema and describe what I need to achieve, and most of the time it will throw out a pretty good approximation or even get it right on the first go, depending on complexity and how well I phrase the request. I could write these myself, of course, but not in 2 seconds.

      Same with text formatting, for example. I regularly need to format long strings in specific ways, adding brackets and changing upper/lower capitalization. It does it in a second, and really well.
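
      For comparison, assuming the formatting task really is just “wrap it in brackets and change the case” (the exact format isn’t spelled out above, so this is a hypothetical stand-in), the whole job is a couple of lines of Python:

      ```python
      # Hypothetical stand-in for the formatting task described above:
      # wrap each string in square brackets and uppercase it.
      def format_strings(strings):
          return ["[" + s.strip().upper() + "]" for s in strings]

      print(format_strings(["foo bar", "baz qux"]))  # ['[FOO BAR]', '[BAZ QUX]']
      ```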

      Then there’s just convenience things. At what date and time will something end if it starts in two weeks and takes 400h to do? There’s tools for that, or I could figure it out myself, but I mean the AI is just there and does it in a sec…
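
      For the record, that last calculation is the sort of thing the standard library answers exactly; a minimal sketch (the “two weeks” and “400h” are just the numbers from the example above, treated as continuous wall-clock hours):

      ```python
      from datetime import datetime, timedelta

      # Numbers from the example above: start in two weeks, the task takes 400 hours.
      # Assumes the 400h are continuous wall-clock hours, not working hours.
      start = datetime.now() + timedelta(weeks=2)
      end = start + timedelta(hours=400)
      print("starts:", start)
      print("ends:  ", end)
      ```

      If “400h” actually means working hours the answer changes, which is exactly the sort of ambiguity worth pinning down before trusting anyone’s answer, the model’s included.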

      • @self@awful.systems
        2
        21 days ago

        it’s really embarrassing when the promptfans come here to brag about how they’re using the technology that’s burning the earth and it’s just basic editor shit they never learned. and then you watch these fuckers “work” and it’s miserably slow cause they’re prompting the piece of shit model in English, waiting for the cloud service to burn enough methane to generate a response, correcting the output and re-prompting, all to do the same task that’s just a fucking key combo.

        Same with text formatting, for example. I regularly need to format long strings in specific ways, adding brackets and changing upper/lower capitalization. It does it in a second, and really well.

        how in fuck do you work with strings and have this shit not be muscle memory or an editor macro? oh yeah, by giving the fuck up.

        • CarrotsHaveEars
          2
          edit-2
          20 days ago

          (100% natural rant)

          I can change a whole fucking sentence to FUCKING UPPERCASE by just pressing vf.gU in fucking vim, using a fraction of the energy it takes to run a fucking marathon, which in turn is only a fraction of the energy the fucking AI cloud cluster uses to spit out the same shit. The comparison is like a ping pong ball to the Earth, then to the fucking sun!

          Alright, bros, listen up. All these great tasks you claim AI does faster and better? I can write up a script or something to do them even faster and better. Fucking A! This surge of high when you use AI comes from you not knowing how to do it, or whether it’s even possible. You!

          You prompt bros are blasting shit tons of energy just to achieve the same quality of work, if not worse, in a much fucking longer time.

          And somehow these executives claim AI improves fucking productivity‽

          • @self@awful.systems
            1
            20 days ago

            exactly. in Doom Emacs (and an appropriately configured vim), you can surround the word under the cursor with brackets with ysiw] where the last character is the bracket you want. it’s incredibly fast (especially combined with motion commands, you can do these faster than you can think) and very easy to learn, if you know vim.

            and I think that last bit is where the educational branch of our industry massively fucked up. a good editor that works exactly how you like (and I like the vim command language for realtime control and lisp for configuration) is like an electrician’s screwdriver or another semi-specialized tool. there’s a million things you can do with it, but we don’t teach any of them to programmers. there’s no vim or emacs class, and I’ve seen the quality of your average bootcamp’s vscode material. your average programmer bounces between fad editors depending on what’s being marketed at the time, and right now LLMs are it. learning to use your tools is considered a snobby elitist thing, but it really shouldn’t be — I’d gladly trade all of my freshman CS classes for a couple semesters learning how to make vim and emacs sing and dance.

            and now we’re trapped in this industry where our professionals never learned to use a screwdriver properly, so instead they bring their nephew to test for live voltage by licking the wires. and when you tell them to stop electrocuting their nephew and get the fuck out of your house, they get this faraway look in their eyes and start mumbling about how you’re just jealous that their nephew is going to become god first, because of course it’s also a weirdo cult underneath it all, that’s what happens when you vilify the concept of knowing fuck all about anything.

      • @Hudell@lemmy.dbzer0.com
        0
        edit-2
        20 days ago

        I use it to parse log files, compare logs from successful and failed requests and that sort of stuff. Other than that and searching, I haven’t found much use for it.
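
        If the logs are plain text, the comparison half of that is also doable without a model; a rough sketch (the file names are made up) that diffs the log of a successful request against a failed one:

        ```python
        import difflib

        # Hypothetical file names: one log from a successful request, one from a failed one.
        with open("request_success.log") as ok, open("request_failed.log") as bad:
            diff = difflib.unified_diff(
                ok.read().splitlines(),
                bad.read().splitlines(),
                fromfile="success",
                tofile="failed",
                lineterm="",
            )

        print("\n".join(diff))
        ```

        Where an LLM genuinely helps is when the log format is messy or undocumented, which sounds like the actual use case here.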

      • @sem@lemmy.blahaj.zone
        0
        20 days ago

        The first two examples I really like, since you’re able to verify them easily before using them, but for the math one, how do you know it gave you the right answer?

      • @morbidcactus@lemmy.ca
        0
        20 days ago

        Gotta be real, LLMs for queries make me uneasy. We’re already in a place where data modeling isn’t as common and people don’t put indexes or relationships between tables (and some tools didn’t really support those either). LLMs might be alright at describing tables (Databricks has it baked in, for better or worse, and it’s usually pretty good at a quick summary of what a table is for), but throwing an LLM on top of that doesn’t really inspire confidence.

        If your data model is highly normalised, with FKs everywhere, good naming and solid documentation, then yeah, totally, I could see that helping. But if that’s the case, you already have good governance practices (which all ML tools benefit from, AFAIK). Without that, I’m dreading the queries: people are already totally capable of generating stuff that gives DBAs a headache. Simple cases, yeah, maybe, but complex queries? Idk, I’m not sold.

        Data understanding is part of the job anyhow, and that’s largely conceptual, which maybe LLMs could work as an extension for, but I really wouldn’t trust them to generate full-on queries in most of the environments I’ve seen. Data is overwhelmingly super messy, and orgs don’t love putting effort towards governance.

        • @jacksilver@lemmy.world
          0
          20 days ago

          I’ve done some work on natural language to SQL, both with older (like Bert) and current LLMs. It can do alright if there is a good schema and reasonable column names, but otherwise it can break down pretty quickly.

          That’s before you get into the fact that SQL dialects are a really big issue for LLMs to begin with. They all look so similar that I’ve found it common for models to switch between dialects without warning.
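
          To make the dialect problem concrete, here is the same “first ten rows” query in the flavours that commonly get mixed up (illustration only; the table and column names are made up):

          ```python
          # The same "first 10 customers by name" query across common SQL dialects.
          # A model trained on all of them will sometimes emit the wrong form for your engine.
          TOP_10_BY_NAME = {
              "postgres / mysql / sqlite": "SELECT name FROM customers ORDER BY name LIMIT 10;",
              "sql server": "SELECT TOP 10 name FROM customers ORDER BY name;",
              "oracle / standard sql": "SELECT name FROM customers ORDER BY name FETCH FIRST 10 ROWS ONLY;",
          }

          for dialect, query in TOP_10_BY_NAME.items():
              print(f"{dialect}: {query}")
          ```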

          • @morbidcactus@lemmy.ca
            0
            19 days ago

            Yeah, I can totally understand that. Genie is Databricks’ one, and apparently it’s surprisingly decent at that, but it has access to a governance platform that traces column lineage on top of whatever descriptions and other metadata you give it. I was pretty surprised by the accuracy of some of its auto-generated descriptions, though.

            • @jacksilver@lemmy.world
              0
              19 days ago

              Yeah, the more data you have around the database the better, but that’s always been the issue with data governance - you need to stay on top of that or things start to degrade quickly.

              When the governance is good, the LLM may be able to keep up, but will you know when things start to slip?

    • @Ledericas@lemm.ee
      0
      21 days ago

      Same here, I mostly don’t even use it on the phone. My bro is into it though, thinking AI-generated pictures are good.

      • @RvTV95XBeo@sh.itjust.works
        0
        21 days ago

        It’s a fun party trick for like a second, but at no point today did I need a picture of a goat in a sweater smoking three cigarettes while playing tic-tac-toe with a llama dressed as the Dalai Lama.

          • @meowMix2525@lemm.ee
            1
            20 days ago

            That wasn’t that hard to do in the first place, and certainly isn’t worth the drinking water to cool whatever computer made that calculation for you.

    • @Flipper@feddit.org
      0
      21 days ago

      Apparently it’s useful for extracting information out of a text into a format you specify. A friend is using it to extract transactions out of 500-year-old texts. However, to get rid of hallucinations the temperature needs to be 0, so the only way is to self-host.
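
      For anyone curious what that setup looks like in practice, a minimal sketch, assuming a self-hosted, OpenAI-compatible endpoint (llama.cpp’s llama-server or Ollama style) on localhost; the URL, model name, input file and target fields are all placeholders:

      ```python
      import json

      import requests

      # Placeholders throughout: adjust the URL, model name, source file and fields
      # to whatever your self-hosted setup actually exposes.
      prompt = (
          "Extract every transaction from the text below as a JSON list of objects "
          "with the keys date, buyer, seller and amount. Output only JSON.\n\n"
          + open("source_text.txt", encoding="utf-8").read()
      )

      resp = requests.post(
          "http://localhost:8080/v1/chat/completions",
          json={
              "model": "local-model",
              "messages": [{"role": "user", "content": prompt}],
              "temperature": 0,  # greedy decoding: the same input gives the same output
          },
          timeout=600,
      )
      transactions = json.loads(resp.json()["choices"][0]["message"]["content"])
      print(transactions)
      ```

      Worth noting that temperature 0 only makes the output deterministic; it doesn’t stop the model from confidently extracting things that aren’t in the text, so spot-checking against the source is still needed.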

      • @daellat@lemmy.world
        1
        21 days ago

        Well, LLMs are capable (but hallucination-prone) and cost an absolute fuckton of energy. There have been purpose-trained, efficient ML models that we’ve used for years. Document Understanding and Computer Vision are great, just don’t use an LLM for them.

      • @zurohki@aussie.zone
        3
        20 days ago

        I tried feeding Japanese audio to an LLM to generate English subs and it started translating silence and music as requests to donate to anime fansubbers.

        No, really. Fansubbed anime would put their donation message over the intro music or when there wasn’t any speech to sub and the LLM learned that.
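
        For reference, the usual tool for this is a dedicated speech model rather than a chatbot; a sketch with the openai-whisper package (model size and file name are placeholders), where turning off conditioning on previous text and relying on the no-speech detection is the standard way to cut down on exactly that “translated the silence” failure:

        ```python
        import whisper  # the openai-whisper package

        # Model size and file name are placeholders.
        model = whisper.load_model("medium")
        result = model.transcribe(
            "episode01.wav",
            language="ja",
            task="translate",                  # Japanese speech -> English text
            condition_on_previous_text=False,  # don't let earlier output steer later segments
            no_speech_threshold=0.6,           # skip segments detected as silence/music
        )

        for seg in result["segments"]:
            print(f"[{seg['start']:7.2f} -> {seg['end']:7.2f}] {seg['text']}")
        ```

        That reduces the problem rather than eliminating it, since that kind of hallucination is baked into the training data.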

      • @Dragonstaff@leminal.space
        2
        20 days ago

        We’ve had speech to text since the 90s. Current iterations have improved, like most technology has improved since the 90s. But, no, I wouldn’t buy a new computer with glaring privacy concerns for real time subtitles in movies.

      • @Bytemeister@lemmy.world
        1
        20 days ago

        You’re thinking too small. AI could automatically dub the entire movie while mimicking the actors’ voices and simultaneously moving their lips and mouths to form the words correctly.

        It would just take your daily home power usage to do a single 2hr movie.

  • @yarr@feddit.nl
    2
    19 days ago

    These “AI Computers” are a solution looking for a problem. The marketing people naming these “AI” computers think that AI is just some magic fairy dust term you can add to a product and it will increase demand.

    What’s the “killer features” of these new laptops, and what % price increase is it worth?

    • @bitofhope@awful.systems
      1
      19 days ago

      What’s the “killer features” of these new laptops

      LLM

      and what % price increase is it worth?

      negative eighty, tops

  • @yesman@lemmy.world
    2
    21 days ago

    Even non tech people I talk to know AI is bad because the companies are pushing it so hard. They intuit that if the product was good, they wouldn’t be giving it away, much less begging you to use it.

    • @lev@slrpnk.net
      1
      21 days ago

      You’re right - and even if the user is not conscious of this observation, many are subconsciously behaving in accordance with it. Having AI shoved into everything is offputting.

      • @k0e3@lemmy.ca
        1
        21 days ago

        Speaking of off-putting, that friggin copilot logo floating around on my Word document is so annoying. And the menu that pops up when I paste text — wtf does “paste with Copilot” even mean?

    • @jonhendry@awful.systems
      1
      19 days ago

      It’s partly that and partly a mad dash for market share in case they get it to work usefully. Although this is kind of pointless because AI isn’t very sticky. There’s not much to keep you from using another company’s AI service. And only the early adopter nerds are figuring out how to run it on their own hardware.

    • @dreugeworst@lemmy.ml
      1
      edit-2
      21 days ago

    afaict they’re computers with a GPU that has some hardware dedicated to the kind of matrix multiplication common in neural network inference. pure marketing BS, because most GPUs come with that these days, and some will still not be powerful enough to be useful

  • @TommySoda@lemmy.world
    1
    21 days ago

    I don’t even want Windows 11 specifically because of AI. It’s intrusive, unnecessary, and the average person has no use for it. The only time I have used AI for anything productive was when I needed to ask some very obscure questions for Linux since I’m trying to get rid of Windows entirely.

  • Cyrus Draegur
    1
    21 days ago

    Oh we care alright. We care about keeping it OUT of our FUCKING LIVES.

  • @TheThrillOfTime@lemmy.ml
    1
    21 days ago

    AI is going to be this era’s Betamax, HD-DVD, or 3D TV glasses. It doesn’t do what was promised and nobody gives a shit.

    • snooggums
      1
      21 days ago

      Betamax had better image and sound, but was limited by running time, and then VHS doubled down with even lower quality to increase how many hours would fit on a tape. VHS was simply more convenient without being that much lower quality at normal tape lengths.

      HD-DVD was comparable to Blu-ray and just happened to lose out because the industry wouldn’t allow two similar technologies to exist at the same time.

      Neither failed to do what they promised. They were both perfectly fine technologies that lost in a competition that only allows a single winner.

    • @blarth@thelemmy.club
      0
      20 days ago

      No, I’m sorry. It is very useful and isn’t going away. This thread is either full of Luddites or disingenuous people.

      • @self@awful.systems
        1
        20 days ago

        nobody asked you to post in this thread. you came and posted this shit in here because the thread is very popular, because lots and lots of people correctly fucking hate generative AI

        so I guess please enjoy being the only “non-disingenuous” bootlicker you know outside of work, where everyone’s required (under implicit threat to their livelihood) to love this shitty fucking technology

        but most of all: don’t fucking come back, none of us Luddites need your mid ass

  • @merdaverse@lemmy.world
    1
    19 days ago

    What is even the point of an AI coprocessor for an end user (excluding ML devs)? Most of the AI features run in the cloud, and even if they could run locally, companies are very happy to charge you rent for services and keep you vendor locked in.

  • lettruthout
    1
    21 days ago

    No thanks. I’m perfectly capable of coming up with incorrect answers on my own.

    • David GerardOPM
      3
      19 days ago

      sometimes a thread breaks containment, the “all” algorithm feeds it to even more people, and we see that Lemmy really does replace Reddit

  • magnetosphere
    1
    21 days ago

    One of the mistakes they made with AI was introducing it before it was ready (I’m making a generous assumption by suggesting that “ready” is even possible). It will be extremely difficult for any AI product to shake the reputation that AI is half-baked and makes absurd, nonsensical mistakes.

    This is a great example of capitalism working against itself. Investors want a return on their investment now, and advertisers/salespeople made unrealistic claims. AI simply isn’t ready for prime time. Now they’ll be fighting a bad reputation for years. Because of the situation tech companies created for themselves, getting users to trust AI will be an uphill battle.

    • luciole (he/him)
      0
      21 days ago

      I’m making a generous assumption by suggesting that “ready” is even possible

      To be honest it feels more and more like this is simply not possible, especially regarding the chatbots. Under those are LLMs, which are built by training neural networks, and for the pudding to stick there absolutely needs to be some emergent magic going on where sense spontaneously appears. Because any entity lining up words into sentences will charm unsuspecting folks horribly efficiently, it’s easy to be fooled into believing that’s happened. But whenever, in a moment of despair, I try to get Copilot to do any sort of task, it becomes abundantly clear it’s unable to reliably respect any form of requirement or directive. It just regurgitates some word soup loosely connected to whatever I’m rambling about. LLMs have been shoehorned into an ill-fitted use case. Their sole proven usefulness so far is fraud.

      • @Soyweiser@awful.systems
        1
        21 days ago

        There was research showing that every linear jump in capabilities needed exponentially more data fed into the models, so it seems likely they aren’t going to be able to get where they want to go.
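
        Taking that relationship at face value (linear gains, exponential data), the arithmetic is what makes it a dead end; purely illustrative numbers:

        ```python
        # Purely illustrative: if each extra "capability point" costs k times more data
        # (the linear-gain / exponential-cost relationship described above), the totals
        # blow up almost immediately.
        k = 10              # hypothetical data multiplier per point of capability
        base_tokens = 1e12  # hypothetical training-set size for the current model

        for extra_points in range(1, 5):
            print(f"+{extra_points} points -> {base_tokens * k ** extra_points:.0e} tokens")
        ```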

        • David GerardOPM
          1
          21 days ago

          OpenAI admitted as much with o1! They included graphs directly showing gains taking exponential effort.

  • @Rin@lemm.ee
    1
    20 days ago

    AI on phones peaked with MS Cortana on W10 Mobile circa 2014. “Remind me to jack off when I’m home”. And it fucking did what I wanted. I didn’t even have to say words, I could type it into a text box… it also worked offline.

    • @morbidcactus@lemmy.ca
      1
      20 days ago

      Seriously missed an opportunity to bring that back as their agent.

      Legitimately though, Cortana was pretty great. There was a feature to help plan commutes (before I went more or less full remote); all it really did was watch traffic and adjust a suggested time to depart, but it was pretty nice.

      I say it every time someone mentions WP7/8/10: those Lumia devices were fantastic and I totally miss mine. The 1020 had a fantastic camera on it, especially for the early 2010s.