It’s not care. It’s want. We don’t want AI.
FR I think more people actively dislike it, which is a form of care.
Depends on the implementation.
Just about everyone I know loves how iPhones can take a picture and readily identify a plant or animal. That’s actually delightful. Some AI tech is great.
Now put an LLM chatbox where people expect a search box, and see what happens… yeah that shit sucks.
Whenever I ask random people who are not in IT, they either don’t know about it or they love it.
People who don’t know what it is are often amazed by how much it looks like a real person and don’t even think about the answers it gives being right or not.
That’s a boring perspective fuck you for sharing.
I work in IT and have recently been having a lot of fun leveraging AI in my home lab to program things as well as doing audio/video generation (which is a blast, honestly). So… I mean, I think it really depends on how it’s integrated and used.
“I work in IT” says the rando, rapaciously switching between support tickets in their web browser and their shadow-IT personal browser
“I’ve been having a lot of fun” continues the rando, in a picture-perfect replica of every other fucking promptfan posting the same selfish egoist bullshit
“So… I mean, I think it really depends on how it’s integrated and used” says the fuckwit, who can’t think two words beyond their own fucking nose
Maybe I’m just getting old, but I honestly can’t think of any practical use case for AI in my day-to-day routine.
ML algorithms are just fancy statistics machines, and to that end, I can see plenty of research and industry applications where large datasets need to be assessed (weather, medicine, …) with human oversight.
But for me in my day to day?
I don’t need a statistics bot making decisions for me at work, because if it was that easy I wouldn’t be getting paid to do it.
I don’t need a giant calculator telling me when to eat or sleep or what game to play.
I don’t need a Roomba with a graphics card automatically replying to my text messages.
Handing over my entire life’s data just so an ML algorithm might be able to tell me what that one website I visited 3 years ago that sold kangaroo testicles was isn’t a filing system. There’s nothing I care about losing enough to go to the effort of setting up Copilot, but not enough to just, you know, bookmark it or save it with a clear enough file name.
Long rant, but really, what does copilot actually do for me?
Before ChatGPT was invented, everyone kind of liked how you could type in “bird” into Google Photos, and it would show you some of your photos that had birds.
Our boss all but ordered us to have IT set this shit up on our PCs. So far I’ve been stalling, but I don’t know how long I can keep doing it.
Set it up. People have to find out by themselves.
I use it to speed up my work.
For example, I can give it a database schema and ask it for what I need to achieve and most of the time it will throw out a pretty good approximation or even get it right on the first go, depending on complexity and how well I phrase the request. I could write these myself, of course, but not in 2 seconds.
Same with text formatting, for example. I regularly need to format long strings in specific ways, adding brackets and changing upper/lower capitalization. It does it in a second, and really well.
Then there’s just convenience things. At what date and time will something end if it starts in two weeks and takes 400h to do? There’s tools for that, or I could figure it out myself, but I mean the AI is just there and does it in a sec…
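For comparison, that date arithmetic is a couple of lines of stdlib Python; the start date here is made up, since the comment doesn’t give one:

```python
from datetime import datetime, timedelta

# Hypothetical kickoff: "starts in two weeks and takes 400h"
start = datetime(2025, 1, 1, 9, 0) + timedelta(weeks=2)
end = start + timedelta(hours=400)
print(end)  # 2025-02-01 01:00:00
```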
it’s really embarrassing when the promptfans come here to brag about how they’re using the technology that’s burning the earth and it’s just basic editor shit they never learned. and then you watch these fuckers “work” and it’s miserably slow cause they’re prompting the piece of shit model in English, waiting for the cloud service to burn enough methane to generate a response, correcting the output and re-prompting, all to do the same task that’s just a fucking key combo.
Same with text formatting, for example. I regularly need to format long strings in specific ways, adding brackets and changing upper/lower capitalization. It does it in a second, and really well.
how in fuck do you work with strings and have this shit not be muscle memory or an editor macro? oh yeah, by giving the fuck up.
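for the record, that whole “format a string with brackets and fix the case” job is a one-liner. a sketch in Python with a made-up input string, since the actual format rules are whatever the original commenter needs:

```python
# Hypothetical input; the real format rules vary per task
s = "order id 12345 from acme corp"
formatted = f"[{s.upper()}]"
print(formatted)  # [ORDER ID 12345 FROM ACME CORP]
```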
(100% natural rant)
I can change a whole fucking sentence to FUCKING UPPERCASE by just pressing
vf.gU
in fucking vim, with a fraction of the energy it takes to run a fucking marathon, which in turn is only a fraction of the energy the fucking AI cloud cluster uses to spit out the same shit. The comparison is like a ping pong ball to the Earth, then to the fucking sun!

Alright, bros, listen up. All these great tasks you claim AI does faster and better, I can write up a script or something to do even faster and better. Fucking A! This surge of high when you use AI comes from you not knowing how to do it, or whether it’s even possible. You!
You prompt bros are blasting shit tons of energy just to achieve the same quality of work, if not worse, in a much fucking longer time.
And somehow these executives claim AI improves fucking productivity‽
exactly. in Doom Emacs (and an appropriately configured vim), you can surround the word under the cursor with brackets with
ysiw]
where the last character is the bracket you want. it’s incredibly fast (especially combined with motion commands, you can do these faster than you can think) and very easy to learn, if you know vim.

and I think that last bit is where the educational branch of our industry massively fucked up. a good editor that works exactly how you like (and I like the vim command language for realtime control and lisp for configuration) is like an electrician’s screwdriver or another semi-specialized tool. there’s a million things you can do with it, but we don’t teach any of them to programmers. there’s no vim or emacs class, and I’ve seen the quality of your average bootcamp’s vscode material. your average programmer bounces between fad editors depending on what’s being marketed at the time, and right now LLMs are it. learning to use your tools is considered a snobby elitist thing, but it really shouldn’t be; I’d gladly trade all of my freshman CS classes for a couple semesters learning how to make vim and emacs sing and dance.
and now we’re trapped in this industry where our professionals never learned to use a screwdriver properly, so instead they bring their nephew to test for live voltage by licking the wires. and when you tell them to stop electrocuting their nephew and get the fuck out of your house, they get this faraway look in their eyes and start mumbling about how you’re just jealous that their nephew is going to become god first, because of course it’s also a weirdo cult underneath it all, that’s what happens when you vilify the concept of knowing fuck all about anything.
presumably everyone who has to work with you spits in your coffee/tea, too?
I use it to parse log files, compare logs from successful and failed requests and that sort of stuff. Other than that and searching, I haven’t found much use for it.
and now we’re up to inaccurate, stochastic diff. fucking marvelous. Stay tuned for inaccurate, stochastic ls.
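for reference, the deterministic version of that log-comparison workflow is a few lines of stdlib Python (the log lines here are made up):

```python
import difflib

# Hypothetical log excerpts from a successful and a failed request
ok_log = ["GET /api/v1/thing 200", "auth ok", "request done"]
bad_log = ["GET /api/v1/thing 200", "auth failed", "retrying", "request done"]

# unified_diff is exact and repeatable, unlike an LLM's summary of two logs
for line in difflib.unified_diff(ok_log, bad_log, lineterm=""):
    print(line)
```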
The first two examples I really like since you’re able to verify them easily before using them, but for the math one, how do you know it gave you the right answer?
they don’t verify any of it
Gotta be real, LLMs for queries make me uneasy. We’re already in a place where data modeling isn’t as common and people don’t put indexes or relationships between tables (and some tools didn’t really support those either). They might be alright at describing tables (Databricks has it baked in, for better or worse; it’s usually pretty good at a quick summary of what a table is for), but throwing an LLM on top of that doesn’t really inspire confidence.
If your data model is highly normalised, with fks everywhere, good naming and well documented, yeah totally I could see that helping, but if that’s the case you already have good governance practices (which all ML tools benefit from AFAIK). Without that, I’m totally dreading the queries, people already are totally capable of generating stuff that gives DBAs a headache, simple cases yeah maybe, but complex queries idk I’m not sold.
Data understanding is part of the job anyhow, that’s largely conceptual which maybe LLMs could work as an extension for, but I really wouldn’t trust it to generate full on queries in most of the environments I’ve seen, data is overwhelmingly super messy and orgs don’t love putting effort towards governance.
I’ve done some work on natural language to SQL, both with older (like Bert) and current LLMs. It can do alright if there is a good schema and reasonable column names, but otherwise it can break down pretty quickly.
That’s before you get into the fact that SQL dialects are a really big issue for LLMs to begin with. They all look so similar that I’ve found it common for them to switch between them without warning.
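One cheap guardrail is to parse the generated SQL against the target engine before running it. A minimal sketch with stdlib sqlite3 and a hypothetical table, where SQLite stands in for whatever dialect you actually target:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, total REAL)")

# T-SQL-style output a model might emit; SQLite wants LIMIT, not TOP
candidate = "SELECT TOP 5 * FROM orders"
try:
    # EXPLAIN forces a parse/plan without executing the query
    con.execute("EXPLAIN " + candidate)
    print("parses in this dialect")
except sqlite3.OperationalError:
    print("dialect mismatch, reject it")  # this branch fires under SQLite
```

This only catches syntax-level dialect drift, not semantically wrong queries, but it’s free and deterministic.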
Yeah, I can totally understand that. Genie is Databricks’ one, and apparently it’s surprisingly decent at that, but it has access to a governance platform that traces column lineage on top of whatever descriptions and other metadata you give it. I was pretty surprised with the accuracy of some of its auto-generated descriptions, though.
Yeah, the more data you have around the database the better, but that’s always been the issue with data governance - you need to stay on top of that or things start to degrade quickly.
When the governance is good, the LLM may be able to keep up, but will you know when things start to slip?
what in the utter fuck is this post
same here, i mostly don’t even use it on my phone. my bro is into it though, thinking AI generated pictures are good.
It’s a fun party trick for like a second, but at no point today did I need a picture of a goat in a sweater smoking three cigarettes while playing tic-tac-toe with a llama dressed as the Dalai Lama.
It’s great if you want to do a kids party invitation or something like that
That wasn’t that hard to do in the first place, and certainly isn’t worth the drinking water to cool whatever computer made that calculation for you.
Apparently it’s useful for extracting information out of a text into a format you specify. A friend is using it to extract transactions out of 500-year-old texts. However, to get rid of hallucinations the temperature needs to be 0. So the only way is to self-host.
Well, LLMs are capable (but prone to hallucination) and cost an absolute fuckton of energy. There have been purpose-trained, efficient ML models that we’ve used for years. Document Understanding and Computer Vision are great; just don’t use an LLM for them.
How about real-time subtitles on movies in any language you want that are always synced?
VLC is working on that with the use of LLMs
I tried feeding Japanese audio to an LLM to generate English subs and it started translating silence and music as requests to donate to anime fansubbers.
No, really. Fansubbed anime would put their donation message over the intro music or when there wasn’t any speech to sub and the LLM learned that.
All according to k-AI-kaku!
We’ve had speech to text since the 90s. Current iterations have improved, like most technology has improved since the 90s. But, no, I wouldn’t buy a new computer with glaring privacy concerns for real time subtitles in movies.
You’re thinking too small. AI could automatically dub the entire movie while mimicking the actors voice while simultaneously moving their lips and mouth to form the words correctly.
It would just take your daily home power usage to do a single 2hr movie.
These “AI Computers” are a solution looking for a problem. The marketing people naming these “AI” computers think that AI is just some magic fairy dust term you can add to a product and it will increase demand.
What’s the “killer features” of these new laptops, and what % price increase is it worth?
What’s the “killer features” of these new laptops
LLM
and what % price increase is it worth?
negative eighty, tops
Even non tech people I talk to know AI is bad because the companies are pushing it so hard. They intuit that if the product was good, they wouldn’t be giving it away, much less begging you to use it.
You’re right - and even if the user is not conscious of this observation, many are subconsciously behaving in accordance with it. Having AI shoved into everything is offputting.
Speaking of off-putting, that friggin copilot logo floating around on my Word document is so annoying. And the menu that pops up when I paste text — wtf does “paste with Copilot” even mean?
It’s partly that and partly a mad dash for market share in case they get it to work usefully. Although this is kind of pointless because AI isn’t very sticky. There’s not much to keep you from using another company’s AI service. And only the early adopter nerds are figuring out how to run it on their own hardware.
WTF is an AI computer? Is that some marketing bullshit?
@Matriks404 @dgerard got it in one! It’s MS’s marketing campaign for PCs with a certain amount of “AI” FLOPS
afaict they’re computers with a GPU that has some hardware dedicated to the kind of matrix multiplication common in inference in current neural networks. pure marketing BS because most GPUs come with that these days, and some will still not be powerful enough to be useful
“Y2k ready” vibes.
I don’t even want Windows 11 specifically because of AI. It’s intrusive, unnecessary, and the average person has no use for it. The only time I have used AI for anything productive was when I needed to ask some very obscure questions for Linux since I’m trying to get rid of Windows entirely.
Oh we care alright. We care about keeping it OUT of our FUCKING LIVES.
AI is going to be this era’s Betamax, HD-DVD, or 3D TV glasses. It doesn’t do what was promised and nobody gives a shit.
Betamax had better image and sound, but was limited by running time and then VHS doubled down with even lower quality to increase how many hours would fit on a tape. VHS was simply more convenient without being that much lower quality for normal tape length.
HD-DVD was comparable to BluRay and just happened to lose out because the industry won’t allow two similar technologies to exist at the same time.
Neither failed to do what they promised. They were both perfectly fine technologies that lost in a competition that only allows a single winner.
No, I’m sorry. It is very useful and isn’t going away. This thread is either full of Luddites or disingenuous people.
nobody asked you to post in this thread. you came and posted this shit in here because the thread is very popular, because lots and lots of people correctly fucking hate generative AI
so I guess please enjoy being the only “non-disingenuous” bootlicker you know outside of work, where everyone’s required (under implicit threat to their livelihood) to love this shitty fucking technology
but most of all: don’t fucking come back, none of us Luddites need your mid ass
What is even the point of an AI coprocessor for an end user (excluding ML devs)? Most of the AI features run in the cloud and even if they could run locally, companies are very happy to ask you rent for services and keep you vendor locked in.
No thanks. I’m perfectly capable of coming up with incorrect answers on my own.
Year of Linux
How did this thread blow up so much?
sometimes a thread breaks containment, the “all” algorithm feeds it to even more people, and we see that Lemmy really does replace Reddit
One of the mistakes they made with AI was introducing it before it was ready (I’m making a generous assumption by suggesting that “ready” is even possible). It will be extremely difficult for any AI product to shake the reputation that AI is half-baked and makes absurd, nonsensical mistakes.
This is a great example of capitalism working against itself. Investors want a return on their investment now, and advertisers/salespeople made unrealistic claims. AI simply isn’t ready for prime time. Now they’ll be fighting a bad reputation for years. Because of the situation tech companies created for themselves, getting users to trust AI will be an uphill battle.
I’m making a generous assumption by suggesting that “ready” is even possible
To be honest it feels more and more like this is simply not possible, especially regarding the chatbots. Under those are LLMs, which are built by training neural networks, and for the pudding to stick there absolutely needs to be this emergent magic going on where sense spontaneously generates. Because any entity lining up words into sentences will charm unsuspecting folks horribly efficiently, it’s easy to be fooled into believing it’s happened. But whenever in a moment of despair I try to get Copilot to do any sort of task, it becomes abundantly clear it’s unable to reliably respect any form of requirement or directive. It just regurgitates some word soup loosely connected to whatever I’m rambling about. LLMs have been shoehorned into an ill-fitting use case. Their sole proven usefulness so far is fraud.
There was research showing that every linear jump in capabilities needed exponentially more data fed into the models, so seems likely it isn’t going to be possible to get where they want to go.
OpenAI admitted that with o1! they included graphs directly showing gains taking exponential effort
The battle is easy. Buy out and collude with the competition so the customer has no choice but to purchase an AI device.
This would only work for a service that customers want or need
AI on phones peaked with MS Cortana on W10 Mobile circa 2014. “Remind me to jack off when I’m home.” And it fucking did what I wanted. I didn’t even have to say words, I could type it into a text box… it also worked offline.
Seriously missed an opportunity to bring that back as their agent.
Legitimately though, Cortana was pretty great. There was a feature to help plan commutes (before I went more or less full remote); all it really did was watch traffic and adjust a suggested time to depart, but it was pretty nice.
I say it every time someone mentions WP7/8/10: those Lumia devices were fantastic and I totally miss mine. The 1020 had a fantastic camera on it, especially for the early 2010s.
The only real purpose of AI is to get sweet VC money. Beyond that…
The fuck does Microsoft need VC money for?
They don’t; it’s data mining.