I would love it if helpdesks moved to ChatGPT. Phone support these days is based on a rigid script that is about as helpful as a 2000s chatbot. For example, the other day I was talking to AT&T support, and the lady asked me what version of Windows I was running. I said I'm running Ubuntu. She repeated the question. I said I'm not running Windows, it's Linux. She repeated the question. I asked why it mattered for my internet connection. She repeated the question. Finally, I lied and said I was using Windows 10, and we were able to move on to the next part of the script. ChatGPT would have been a lot better.
Or ChatGPT would have hallucinated options to check.
The last four chats I had with ChatGPT (not GPT-4) were a constant flow of non-existent API functions, with new hallucinations after each correction, until we came full circle.
AT&T level 1 support is dumber than a box of rocks; the problem is that AI isn't going to help here. The AI is going to be taught to be just as dumb.
Years ago I had a business DSL customer with a router and a static IP. From everything in my testing, it appeared that traffic broke somewhere at the local telco, not at my modem. It took 8 straight hours of arguing with L1 that no, it is not my Windows. No, we have a router and it's not a computer issue. No, it's not the router (we could put the router in DHCP mode and it would work); it was an issue with the static IP.
The next day we finally broke out of the stupid loop and got to IP services, who were just as confused. Eventually they were on the phone with people on the floor of the local office. A card of some type had been pulled and put back in the wrong slot. Ooof.
Well, I didn't say that support today is always good. But by construction, ChatGPT will never be able to answer a question that wasn't written down and trained on (unless it hallucinates one, and many times that answer will be completely wrong).
I can read the website, I don't need a fake person to give me the information available on the website. When I contact support, it's because I need to talk to a human.
I work as an ethical hacker, so I'm well aware of the phishing and impersonation possibilities. But the net positive is so, so much bigger for society that I'm sure we'll figure it out.
And yes, in 20 years you can tell your kids that 'back in my day' support consisted of real people. But truthfully, as someone who worked on an ISP helpdesk, it's much better for society if these people move on to more productive areas.
> But truthfully, as someone who worked on an ISP helpdesk, it's much better for society if these people move on to more productive areas.
But is it, though? I started my career in customer support for a server hosting company and eventually worked my way up to sysadmin-type work. I would not have been qualified at the start for the position I eventually moved into; I learned on the job. Is it really better for society if all these entry-level jobs get automated, leaving only those with higher barriers to entry?
Historically this exact same thing has happened, it was one of the bigger arguments against the abolition of child labour. "How will they grow up to be workers if they're not doing these jobs where they can learn the skills they'll need?"
The answer then was extending schooling, so that people (children at the time) could learn those skills without having their labour exploited. I would argue we should consider the same today: extend mandatory free schooling. The economic purpose of education is that at the end of it a person should be able to hold a job; removing entry-level jobs doesn't change that purpose, so extend education until the person is able to hold a job at the end of it again.
The social purpose of schooling is to make good members of society, and I don't think that cause would be significantly harmed by extending schooling in order for students to have learned enough to be more capable than an LLM in the job market.
> But the net positive is so, so much bigger for society that I'm sure we'll figure it out.
Considering that democratic backsliding across the globe is coincidentally happening at the same time as the rise of social media and echo chambers, are we sure about that? LLMs have the potential to create a handcrafted echo chamber for every person on this planet, which is quite risky in an environment where almost every democracy on the planet is fighting radical forces trying to abolish it.
I don’t think we know how these net out. AFAICT the negative use cases are a lot more real than the positive ones.
People like to suppose that these will help discover drugs, design buildings, and whatnot, but what we actually know they're capable of doing is littering our information environment at massive scale.
I find this very interesting. If you work as an ethical hacker, I believe you see the blackhat potential there.
But you don't see the positive; you just have faith. That's beautiful in a way, but dangerous too. Just like the common idea that "I have faith that somebody will find a technological solution to climate change." When the risk is that high, I think we should take a step back and not bet our survival on faith.
If that's the only downside that you see... I guess enhanced phishing/impersonation and all the blackhat stuff that comes with it don't count.
I for one already miss the time when companies had support teams made of actual people.