• 0 Posts
  • 4 Comments
Joined 7 months ago
Cake day: December 7th, 2023


  • In a way, I’m glad people are slowly starting to come around and pay attention to this. For years, any time I complained publicly about Amazon customer service, it was very common for people to be completely dismissive or even to blame me. I’d hear statements like “sure, Amazon sucks, but they have great customer service,” and I’d think to myself: just wait until it’s your turn to find out that the customer service isn’t what you think it is.

    Long story short, the item arrived with a broken part. It should have been quick and easy to rectify: send a replacement part, send a replacement unit, or refund the purchase. The seller was completely unhelpful, and Amazon customer service would not intervene, insisting that I keep fruitlessly corresponding with the vendor even though Amazon advertises an “A-to-Z” money-back guarantee for when something goes wrong. It literally took months of back and forth between me, the vendor, and Amazon customer service before the purchase was finally refunded in full.

    So, basically, I gave them another chance, and they showed that things hadn’t improved a bit.



  • Anecdotally speaking, I’d already suspected this was happening with code-related AI, since I’ve noticed a pretty steep decline in the quality of the code suggestions various AI tools provide.

    Some of these tools, like GitHub’s AI product, are trained on GitHub’s own code repositories. As more developers use AI to help generate code, and especially as more novice developers rely on AI to learn new technologies, more of that AI-generated code is (in theory) getting added to the very repos used to train the AI. Not all AI-generated code is garbage, but in my experience enough of it is that, without human correction and oversight, I suspect it will be a garbage-in, garbage-out affair. And as far as I can tell, these tools currently don’t use good metrics to rate whether the code they are training on is high quality, or even whether it works at all.

    More and more often I’m getting ungrounded output (the new term for hallucinations) when it comes to code, rather than the genuinely helpful and relevant suggestions that had me so excited when I first started using these products. I worry that it’s going to get worse. I hope not, of course, but it is concerning when the AI tools so consistently provide useless or broken suggestions.
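
    To illustrate the kind of “does it even work” gate I mean, here’s a toy sketch: before admitting a snippet to a training set, you could at minimum check that it parses. (This is my own hypothetical example, not how any vendor actually filters training data, and the helper name is made up; a real pipeline would also need to execute snippets in a sandbox and run their tests.)

    ```python
    import ast

    def looks_runnable(snippet: str) -> bool:
        """Crude quality gate: keep a snippet only if it parses as valid Python.
        Parsing says nothing about correctness, but it filters the worst garbage."""
        try:
            ast.parse(snippet)
            return True
        except SyntaxError:
            return False

    candidates = [
        "def add(a, b):\n    return a + b\n",   # parses fine, kept
        "def add(a, b)\n    return a + b\n",    # missing colon, rejected
    ]
    kept = [s for s in candidates if looks_runnable(s)]
    ```

    Even a filter this crude would catch some of the broken suggestions I’ve been seeing; the harder part is judging quality, not just validity.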