> ChatGPT is excellent at writing "fluffy" pieces full of empathy, compassion, PR talk, politician speech,
I think you're putting a bunch of completely different things in the same basket.
Making the text more fluffy will not make it more empathetic, not only (but mainly) because there isn't anyone empathising with the respondent. Our bullshit receptors are good at spotting dishonesty, so it'll just sound cringe, putting it at the same level as PR talk. But that's not empathy or compassion, just cheap and obvious packaging.
> As an engineer who is unable to produce such writing, this tool is quite helpful!
I genuinely thought you were being sarcastic here. Of course you can produce such writing, and you don't need more filler words to do it.
I do think that there's value in using GPT for training purposes here, that is, learning how one could express oneself in a more context-appropriate way. This is not much different from stylistic advice, e.g. in many languages a one-word yes/no response to a question is considered neutral, whereas in English it can be considered rude/abrupt, so people use question tags more often.
> Our bullshit receptors are good at spotting dishonesty
Is this a bad joke? What exactly leads you to that conclusion? People fall for dishonesty and straight-up lies all the time. We're awful at recognizing it. Maybe you're the one in a billion who can spot a lie from a million miles away, but the vast majority can't.
>Making the text more fluffy will not make it more empathetic,
Of course not, but empathy in communication (not in action) is full of fluff. It almost requires it.
>Our bullshit receptors are good at spotting dishonesty, so it'll just sound cringe
All "genuine" affirmation for the sake of empathy sounds cringe to me. I'd rather the doctor devoted their time to doing their job well instead of trying to make someone feel heard and validated - especially in an ER scenario where they are juggling critical patients. This tech can help with that.
> Of course not, but empathy in communication (not in action) is full of fluff. It almost requires it.
Empathy doesn't require fluff at all. (Think of all the short, poignant messages you've sent or received when someone is upset.)
The corporate need to not give ground on a complaint is where all that reassuring, repetitive, empty BS comes from.
> All "genuine" affirmation for the sake of empathy sounds cringe to me. I'd rather the doctor devoted their time to doing their job well instead of trying to make someone feel heard and validated - especially in an ER scenario where they are juggling critical patients. This tech can help with that.
Honestly with "for the sake of empathy" it sounds like you really don't place a high value on empathy (which is not unusual, or necessarily wrong). But if that is the case you're quite obviously not the right person to assess whether ChatGPT and the like can "help with that" in that context! :-)
>(Think of all the short, poignant messages you've sent or received when someone is upset.)
If it is short and not fluffy enough, it risks sounding dismissive. Those short messages generally pave the way for a deeper conversation about the subject.
>Honestly with "for the sake of empathy" it sounds like you really don't place a high value on empathy (which is not unusual, or necessarily wrong).
Not really. I think empathy is important in the right setting, but it is not the most important thing. Certainly not in the ER, where the doctor is overworked and has lives at stake. If they have the bandwidth, sure. If not, you can't blame them.
>But if that is the case you're quite obviously not the right person to assess whether ChatGPT and the like can "help with that" in that context! :-)
Disagree. I know how to sound empathetic for those that need it. Some people need words of affirmation and validation to be lifted. I am not one of them, but I understand. It is not that hard. Modern LLMs are more than capable of creating the prose for that and more. There is a time and place for everything, though. My empathy generally drives me to action and solving problems.
> I know how to sound empathetic for those that need it.
But that isn't actually empathy, and people can tell the difference.
> Some people need words of affirmations and validation to be lifted.
Not the words. The understanding and sharing that underpins the words.
My point is this: if you think empathy can be faked successfully, you simply aren't the right sort of person to decide whether the results of automated faking with an LLM are valuable to the listener.
Because people can very often tell when empathy is being faked. And when they do discover that empathy is being faked, you are not going to be easily forgiven.
Empathy implicitly involves exposing someone's feelings to the air, as it were, in order to identify that you understand and share them. So faked empathy is variously experienced as insulting, patronising, hurtful, derisive etc.
Using an LLM to create verbose fluffy fake empathy is going to stick out like a sore thumb.
If this isn't something you find easy to understand at a level of feeling, don't fake empathy, especially at volume. Stick to something very simple and an offer of contextually useful help.
> My empathy generally drives me to action and solving problems.
I think this is noble and valuable, and I would in your shoes stick to this. Offers of assistance are a kindness.
But you should never pretend to share someone's feelings if you don't share their feelings. Especially not in volume.
Depending on the situation, it can make things actively harder as well. Do I spend the time to verbally fluff up the relative of a patient, or go and tend to 5+ other patients that need critical care right now (the example in this article)? If the first one is expected of me, care suffers and the job is harder. I am not dismissing the needs the relative of a patient has, don't get me wrong. But in an ER setting it rarely is the priority. If some tech makes it easier to more effectively give that need some additional bandwidth, that is a good thing.