After 2 years it’s quite clear that LLMs still don’t have any killer feature. Industry marketing was already talking about skyrocketing productivity, but in reality very few jobs have changed in any noticeable way, and LLMs are mostly used for boring or bureaucratic tasks, which usually makes those tasks even more boring or useless.
Personally I have subscribed to Kagi Ultimate, which gives access to an assistant based on various LLMs, and I use it to generate snippets of code that I use for doing labs (training) - like AWS policies, or building commands from CLI flags, small things like that. For code it goes wrong very quickly, and anyway I find it much harder to re-read and unpack verbose code generated by others than to simply write my own. I don’t use it for anything that has to do with communication; I find it unnecessary and disrespectful, since it’s quite clear when the output is from an LLM.
For these reasons, I generally think it’s a potentially useful nice-to-have tool, nothing revolutionary at all. Considering the environmental harm it causes, I am really skeptical the value is worth the damage. I am categorically against those people in my company who want to introduce “AI” (currently banned) for anything other than documentation lookup and similar tasks. In particular, I really don’t understand how obtuse people can be in thinking that email and presentations are good use cases for LLMs. The last thing we need is useless communication made even longer, with LLMs on both sides producing or summarizing bullshit. I can totally see, though, that some people find it easier to envision shortcutting bullshit processes via LLMs than simply changing or removing them.
As a software developer, the one use case where it has been really useful for me is analyzing long and complex error logs and finding possible causes of the error. Getting it to write code sometimes works okay-ish, but more often than not it’s pretty crap. I don’t see any use for it in my personal life.
I think its influence is negative overall. Right now it might be useful for programming questions, but only because it’s fed with human-generated content from sites like Stack Overflow. Those sites are now slowly dying out because people use ChatGPT instead, which will have the inverse effect: in the future, AI will have less useful training data and so become less useful for future problems, while having effectively killed those useful sites in the process.
Looking outside of my work bubble, its effect on academia and learning seems pretty devastating. People can now cheat themselves towards a diploma with ease. We might face a significant erosion of knowledge and talent with the next generation of scientists.
Impact?
My company sells services to companies trying to implement it. I have a job due to this.
Actual use of it? Just wasted time. The verifiable answers are wrong, the unverifiable answers don’t get me anywhere on my projects.
Thank you for your honest answer from this perspective.
I genuinely appreciate being able to word my questions differently than with old Google, and to dig deeper into my doubts than a simple keyword search allows.
It’s great for delving into unknown topics, then researching the results and verifying them. I’ve been trying to get an intuitive understanding of cooking ingredients, how they interact with each other, and how that relates to the body, ayurvedically.
I think it’s a great way to self-educate, personally.
I’m a coding hobbyist; it’s been very helpful in analyzing bugs, giving quick info about syntax, and converting formatting for long sections where manually typing would be time-intensive.
Point taken from someone else here saying continued use of AI may mean decreased functionality for Stack Exchange et al. That said, the advantage of AI is that it answers your question specifically, instead of you spending time sifting through semi-related answers.
Outside of code it’s good at aping the form of various genres. So if I need to answer an RFP question in a sales proposal, I might feed it the prompt to get a starting point. It always needs editing, since it doesn’t know the details of our business and because its writing style is bland, but it’s helpful to get a first draft.
ChatGPT has had absolutely zero impact on my work or personal life. I do not have any useful case for it whatsoever. I have used it for goofs before. That’s about it. I cannot see it as a positive or negative influence…as it has had zero influence. I do get annoyed that every company and their mother is peddling worthless AI shit that most people have no use case for.
My last job was making training/reference manuals. Management started pushing ChatGPT as a way to increase our productivity and forced us all to incorporate AI tools. I immediately began to notice my coworkers’ work decline in quality, with all sorts of bizarre phrasings and instructions that were outright wrong. They weren’t even checking the shit before sending it out. Part of my job was to review and critique their work, and I started having to send way more back than before. I tried it out but found that it took more time to fix all of its mistakes than to just write it myself, so I continued to work with my brain instead. The only thing I used AI for was when I had to make videos with narration. I have a bad stutter that made voiceover hard, so ElevenLabs voices ended up narrating my last few videos before I quit.
Luckily we don’t need accurate info for training reference manuals, it’s not like safety is involved! …oh wait
It’s useful when you want to write some algorithm using specific versions of libraries. It first craps out wrong functions, but after 1 or 2 redirects it usually shoots out something that I then adapt to my use case. I usually try googling it first, but when most fucking guides use the new way of coding and I’m forced to use fixed versions due to company regulations, it gets frustrating to check whether every function of a known algorithm is available in the version I’m using and, if it’s not, which replacement would be appropriate.
It might hallucinate from time to time but it usually gives me good enough ideas/alternatives for me to be able to work around it.
I also use it to format emails and for obscure hardware debugging. It’s pretty bad, but pretty bad is better than, again, 99% of Google results suggesting the same thing. GPT suggests a different thing once you tell it you tried the first one.
As always, it’s a tool and knowing that the answers aren’t 100% accurate and you need to cross-check them is enough to make it useful.
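That version-checking chore can also be scripted instead of done by hand. A minimal sketch in Python, assuming the pinned package is importable (the module and function names below are purely illustrative):

```python
import importlib


def has_function(module_name: str, func_name: str) -> bool:
    """Return True if the installed module exposes a callable with that name,
    which is a quick way to probe whether a pinned version has an API
    that newer guides assume exists."""
    module = importlib.import_module(module_name)
    return callable(getattr(module, func_name, None))


# Illustrative probes against the stdlib: json.dumps exists everywhere,
# json.frobnicate does not.
print(has_function("json", "dumps"))       # True
print(has_function("json", "frobnicate"))  # False
```

Looping this over the functions an LLM-suggested snippet uses catches missing APIs before you waste time adapting the code.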
I use it as a glorified google search for excel formulas and excel troubleshooting. That’s about it. ChatGPT is the most overhyped bullshit ever. My company made a huge push to implement it into fucking everything and then seemingly abandoned it when the hype died down.
A game changer in helping me find out more about topics whose wisdom is buried in threads of forum posts. Great for figuring out things where I have only fuzzy ideas, or vague keywords that might be inaccurate. Great at explaining things, and I can follow up with questions about details. Great at finding equations I need, but I don’t trust it one bit to do the calculations for me. The latest gen also gives me sources on request, so I can double-check and learn more directly from the horse’s mouth.
More things I come to think of: great for finding specs that have been wiped from a manufacturer’s site. Great for making summaries and comparisons, filtering data, and making tables to my requests. Great at rubberducking when I try to fix something obscure in Linux, though the documentation it refers to is often outdated; it still works well for giving me flow and ideas of how to move on. Great at compiling user experiences for comparisons, say for varieties of yeasts or ingredients for home-brewing. This ties into my first comment about it being a game changer for information in old forum threads.
I’ve implemented two features at work using their api. Aside from some trial-and-error prompt “engineering” and extra safeguards around checking the output, it’s been similar to any other api. It’s good at solving the types of problems we use it for (categorization and converting plain text into a screen reader compliant (WCAG 2.1) document). Our ambitions were greater initially, but after many failures we’ve settled on these use cases and the C-Suite couldn’t be happier about the way it’s working.
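The “extra safeguards around checking the output” part is the transferable lesson here. A minimal sketch of that pattern for the categorization use case, assuming a hypothetical `call_llm` callable and a made-up label set (nothing below is the commenter’s actual implementation):

```python
# Hypothetical safeguard around an LLM categorization call: the model's
# free-text reply is only trusted if it maps onto a known label.
ALLOWED_LABELS = {"billing", "technical", "account", "other"}


def safe_categorize(text: str, call_llm) -> str:
    """Ask the model for a category, but never let an unexpected
    answer propagate downstream; fall back to 'other' instead."""
    raw = call_llm(
        f"Categorize this ticket as one of {sorted(ALLOWED_LABELS)}: {text}"
    )
    label = raw.strip().lower()
    return label if label in ALLOWED_LABELS else "other"


# Taking the model client as a plain callable makes this easy to
# unit-test with a stub instead of a live API:
print(safe_categorize("My invoice is wrong", lambda p: "Billing"))   # billing
print(safe_categorize("hello", lambda p: "I cannot classify this"))  # other
```

Constraining the model to a closed label set, and validating against it, is what turns a flaky chat reply into something an ordinary pipeline can consume.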
It’s made my professional life way worse, because it was seen as an indication that every hack-a-thon attempt to put a stupid chatbot in everything is great, actually.
The most impact it has is in my work life. I do design reviews and suddenly AI/ML projects became priorities and stuff has to be ready for the next customer showcase, which is tomorrow. One thing I remember from a conference I attended was an AI talk where the presenter said something along the lines of: If you think devs are bad with testing code in production, wait till you meet data scientists who want to test using live data.
Been using Copilot instead of ChatGPT, but I’m sure it’s mostly the same.
It adds comments and suggestions in PRs that are mostly useful and correct; I don’t think it’s found any actual bugs in PRs though.
I used it to create one or two functions in Golang, since I didn’t want to learn its syntax.
The most use I’ve gotten out of it is as a replacement for Google or Bing search. It’s especially good at finding more obscure things in documentation that are hard to google for.
I’ve also started to use it personally for the same thing. Recently I’ve been wanting to start up The Witcher 3 and remembered that there was something missable right at the beginning. Google results were returning videos that I didn’t want to watch and lists of missable quests that I didn’t want to parse through. Copilot gave me the answer without issue.
Perhaps that’s why Google and MS are so excited about AI: it fixes their shitty search results.
Google used to be fantastic at the same kinds of searches that AI is mediocre at now; it went to crap because of search engine optimization, and their AI search isn’t any better. Even if AI eventually improves for searching, search-AI optimization will end up trashing that as well.