I’ve found that AI has done literally nothing to improve my life in any way and has really just caused endless frustrations. From the enshittification of journalism to ruining pretty much all tech support and customer service, what is the point of this shit?
I work on the Salesforce platform and now I have their dumbass account managers harassing my team to buy into their stupid AI customer service agents. Really, the only AI highlight that I have seen is the guy that made the tool to spam job applications to combat worthless AI job recruiters and HR tools.
I use it to answer the dumber questions I have about math and coding concepts.
I use it to write scripts.
I used it to interpret my rental lease, calculate penalties, and see what’s covered by my landlord vs. myself.
Because of the way it’s trained on internet data, large models like ChatGPT can actually work pretty well as a sort of first-line search engine. My girlfriend uses it like that all the time, especially for obscure stuff in one of her legal classes; it can bring up the right details to point you towards googling the correct document rather than muddling through really shitty library case page searches.
ChatGPT can be useful or fun every now and then but besides that no.
AI is used extensively in science to sift through gigantic data sets. Crowdsourcing projects like Galaxy Zoo are used to label the data that trains the algorithms. And scientists can use it to look at everything in more detail.
Apart from that AI is just plain fun to play around with. And with the rapid advancements it will probably keep getting more fun.
Personally I hope to one day have an easy and quick way to sort all the images I have taken over the years. I probably only need a GPU in my server for that one.
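One way such a sorter might work is embedding each image with a vision model (that’s where the GPU would come in) and then grouping images whose embeddings are similar. Here’s a rough sketch in pure Python; the vectors are made-up stand-ins for real model output, and `group_by_similarity` is just a hypothetical greedy grouping, not any particular library’s API:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def group_by_similarity(embeddings, threshold=0.9):
    """Greedily group images whose embeddings are similar enough.

    `embeddings` maps filename -> vector. In a real setup the vectors
    would come from a vision model running on the GPU; here they are
    made-up three-dimensional numbers just to show the idea.
    """
    groups = []
    for name, vec in embeddings.items():
        for group in groups:
            rep = embeddings[group[0]]  # compare against the group's first member
            if cosine_similarity(vec, rep) >= threshold:
                group.append(name)
                break
        else:
            groups.append([name])  # nothing similar found, start a new group
    return groups

# Made-up "embeddings": two beach photos that should cluster, one cat photo.
fake = {
    "beach1.jpg": [0.9, 0.1, 0.0],
    "beach2.jpg": [0.85, 0.15, 0.05],
    "cat.jpg": [0.0, 0.2, 0.95],
}
print(group_by_similarity(fake))  # [['beach1.jpg', 'beach2.jpg'], ['cat.jpg']]
```

The greedy pass is the simplest possible clustering; a real photo-library tool would likely use something smarter, but the embed-then-compare structure is the same.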
anyone who uses machine learning like that would probably take issue with it being called AI too
Meh, language evolves. Can’t fight it, might as well join them.
I love ChatGPT, and am dumbfounded by all the AI hate on Lemmy. I use it for work. It’s not perfect, but it helps immensely with snippets of code, as well as learning STEM concepts. Sometimes I’ve already written some code that I remember vaguely, but it was a long time ago and I need to do it again. The time it would take to either go find my old code, or just research it completely again, is WAY longer than just asking ChatGPT. It’s extremely helpful, and definitely faster for what I’d already have to do.
I guess it depends on what you use it for ¯\_(ツ)_/¯.
I hope it continues to improve. I hope we get full open source. If I could “teach” it to do certain tasks someday, that would be friggin awesome.
I created a funny AI voice recording of Ben Shapiro talking about cat girls.
I have found ChatGPT to be better than Google for random questions I have and for general advice on a whole bunch of things, though I know when to go to other sources. I also use it to extrapolate data, come up with scheduling for work (I organise some volunteer shifts), and for lots of Excel formulae.
Sometimes it’s easier to check ChatGPT’s answers, ask follow-up questions, look at the sources it provides, and live with the occasional hallucinations than to sift through the garbage pile that Google search has become.
I thought it was pretty fun to play around with making limericks and rap battles with friends, but I haven’t found a particularly useful use case for LLMs.
ChatGPT enabled me to automate a small portion of my former job. So that was nice.
I use it often for grammar and syntax checking.
I tried to give it a fair shake at this, but it didn’t quite cut it for my purposes. I might be pushing it out of its wheelhouse though. My problem is that, while it can rhyme more or less adequately, it seems to have trouble with meter, and when I do this kind of thing, it revolves around rhyme/meter perfectionism. Of course, if I were trying to actually get something done with it instead of just seeing if it’ll come up with something accidentally cool, it would be reasonable to take what it manages to do and refine it. I do understand to some extent how LLMs work, in terms of what tokens are and why this means it can’t play Wordle, etc., and I can imagine this also has something to do with why it’s bad at tightly lining up syllable counts and stress patterns.
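The token issue can be shown with a toy example. The subword split below is made up for illustration, not any real model’s tokenizer, but the point stands: a model that sees three token IDs for “perfectionism” has no direct view of its thirteen letters or its syllable boundaries, which is exactly what meter depends on:

```python
# Toy illustration: LLMs operate on subword tokens, not letters or syllables.
# This split is a hypothetical example, not a real tokenizer's output.
TOY_VOCAB_SPLIT = {
    "perfectionism": ["perfect", "ion", "ism"],
    "microphone": ["micro", "phone"],
}

def toy_tokenize(word):
    """Return a hypothetical subword split for a word (whole word if unknown)."""
    return TOY_VOCAB_SPLIT.get(word, [word])

word = "perfectionism"
tokens = toy_tokenize(word)
print(len(word))    # 13 letters the model never sees individually
print(len(tokens))  # just 3 opaque tokens from the model's point of view
```

Counting syllables or stresses requires reasoning about units that simply aren’t in the model’s input, which is also why letter-guessing games like Wordle trip it up.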
That said, I’ve had LLMs come up with some pretty dank shit when given the chance: https://vgy.me/album/EJ3yPvM0
Most of it is either the LLMs shitting themselves or GPT doing that masturbatory optimism thing. Da Vinci’s “Suspicious mind…” in the second image is a little bit heavyish though. And those last two (“Gangsterland” and “My name is B-Rabbit, I’m down with M.C.s, and I’m on the microphone spittin’ hot shit”) are god damn funny.
I like asking ChatGPT for movie recommendations. Sometimes it makes some shit up, but it usually comes through; I’ve already watched a few flicks I really like that I never would’ve heard of otherwise.
Until it makes shit up that the original work never said.
The services I use, Kagi’s autosummarizer and DeepL, haven’t done that when I’ve checked. The downside of the summarizer is that it might remove some subtle things sometimes that I’d have liked it to keep. I imagine that would occur if I had a human summarize too, though. DeepL has been very accurate.
LLMs are especially bad at summarization for the use case of presenting search results. The source is just as critical a piece of information for search as the content itself, and LLMs obfuscate this critical source information and combine results from multiple sources together…
tl;dr?
LLMs are TERRIBLE at summarization
Downvoters need to read some peer-reviewed studies and not lap up whatever BS comes from OpenAI, who are selling you a bogus product lmao. I too was excited for the summarization use case of AI when LLMs were the new shiny toy, until people actually started testing it and got a big reality check.
Might want to rethink the summarization part.
AI also hasn’t made any huge improvements in machine translation AFAIK. Translators still get hired because AI can’t do the job as well.
The AI summaries were judged significantly weaker across all five metrics used by the evaluators, including coherency/consistency, length, and focus on ASIC references. Across the five documents, the AI summaries scored an average total of seven points (on ASIC’s five-category, 15-point scale), compared to 12.2 points for the human summaries.
The focus on the (now-outdated) Llama2-70B also means that “the results do not necessarily reflect how other models may perform” the authors warn.
to assess the capability of Generative AI (Gen AI) to summarise a sample of public submissions made to an external Parliamentary Joint Committee inquiry, looking into audit and consultancy firms
In the final assessment ASIC assessors generally agreed that AI outputs could potentially create more work if used (in current state), due to the need to fact check outputs, or because the original source material actually presented information better. The assessments showed that one of the most significant issues with the model was its limited ability to pick-up the nuance or context required to analyse submissions.
The duration of the PoC was relatively short and allowed limited time for optimisation of the LLM.
So basically this study concludes that Llama2-70B with basic prompting is not as good as humans at summarizing documents submitted to the Australian government by businesses, and its summaries are not good enough to be useful for that purpose. But there are some pretty significant caveats here, most notably the relative weakness of the model they used (I like Llama2-70B because I can run it locally on my computer but it’s definitely a lot dumber than ChatGPT), and how summarization of government/business documents is likely a harder and less forgiving task than some other things you might want a generated summary of.
Please share any studies you have showing AI is better than a person at summarizing complex information.
If it wasn’t clear, I am not claiming that AI is better than a person at summarizing complex information.
My bad for misunderstanding you.
Thank you for pointing that out. I don’t use it for anything critical, and it’s been very useful because Kagi’s summarizer works on things like YouTube videos friends link which I don’t care enough to watch. I speak the language pair I use DeepL on, but DeepL often writes more natively than I can. In my anecdotal experience, LLMs have greatly improved the quality of machine translation.
You know those people who have no creative skills or drive, but want to be thought of as a creative?
You know those people who have this really neat idea for an app, but they don’t plan on making it themself because they’re “just an ideas guy”?
You know those people who will offer to pay in exposure? I mean, do you really need to be paid just to draw some pictures anyway?
You know those guys who send you a picture they got from Google Images and claim it to be a girl they know?
That’s the vast majority of the AI audience. I could probably sum that up with the word “parasite”, but I wanted to be thorough.
I usually keep abreast of the scene so I’ll give a lot of stuff a try. Entertainment wise, making music and images or playing dnd with it is fun but the novelty tends to wear off. Image gen can be useful for personal projects.
Work wise, I mostly use it to do deep dives into things like datasheets and libraries, or doing the boring coding bits. I verify the info and use it in conjunction with regular research but it makes things a lot easier.
Oh, also tts is fun. The actor who played Dumbledore reads me the news and Emma Watson tells me what exercise is next during my workout, although some might frown on using their voices without consent.
You can whip up a whole album of aggressively mid music just cyberbullying the shit out of one person.
New social fear unlocked.
I went for a routine dental cleaning today and my dentist integrated a specialized AI tool to help identify cavities and estimate the progress of decay. Comparing my x-rays between the raw image and the overlay from the AI, we saw a total of 5 cavities. Without the AI, my dentist would have wanted to fill all of them. With the AI, it was narrowed down to 2 that need attention, and the others are early enough that they can be maintained.
I’m all for these types of specialized AIs, and hope to see even further advances in the future.
Umm, it’s very much standard ML + vision that has been around for a decade. Companies are now just marketing it like crazy, trying to ride the AI hype.
Maybe, maybe not. Neither you nor I are familiar enough with it to say one way or another.
There’s someone I sometimes encounter in a Discord I’m in who makes a hobby of doing stuff with them. From what I gather seeing it, they do more with it than just asking for a prompt and leaving it at that, at least partly because it doesn’t generally give them something they’re happy with initially, and they end up having to ask the thing to edit specific bits of it in different ways over and over until it does. I don’t really understand what exactly this entails, as what they seem to most like making it do is code “shaders” for them that create unrecognizable abstract patterns, but they spend a lot of time talking at length about the technical parameters of various models and what they like and don’t like about them, so I assume the guy must find something enjoyable in it all. That being said, using it as a sort of strange toy isn’t really the most useful use case.
Even before AI, corporations have been following a strategy of understaffing with the idea that software will make up for it, and it hasn’t. It’s beyond the pale, the work I have to do now for almost anything related to the private sector (as their customer, not as an employee).