This really feels farcical, kind of like VC bros hyping up a product. Considering this was a serious, life-and-death issue for a patient, it was surprising how it read like a prank, or like the scene from Harold and Kumar where Kumar plays a fake doctor and saves an ER patient through sheer luck.
I know some doctors are full of baloney, but c'mon. I really hope we create better doctors who have a lot of empathy and are able to wield technology to make themselves 10x better, like we're doing with programming. But this is not the way.
Edit: I realized it's fake... I think; it has to be. It doesn't make sense, as others mention. You're not reading a five-paragraph speech to a patient's family while they are being treated. And also, the article says to double-check and verify what is being said. You'd better proofread that too! Or it might invent something absolutely crazy, and you may be liable.
The author is a doctor, but also an executive at a health-focused VC fund with several AI companies in its portfolio. Helpful context to have before reading his article, I think.
I am currently looking for a new Team Lead. I interviewed a guy yesterday who performed well in a soft-skills interview, so this was his tech interview with a senior dev and me. He was doing OK at first. As the questions got a little more nuanced, there were these pauses. Eventually, he got flustered and asked to resume later.
We agreed to this since he said he'd been dealing with a production issue for several hours before the scheduled time (I had offered to reschedule at no detriment to him before we began but he insisted on doing it then).
The developer and I met after this first interview and discussed his answers. We both thought he was "googling" (we are older people, stuck in the past).
Later that day, back on another call with this guy, my senior dev starts typing my questions into ChatGPT and getting back essentially the answers he's giving us. I start with more nuanced questions and he's done. He just kept reiterating platitudes and could not explain why he was saying them, etc.
I felt bad for him, he was trying to get a job beyond his technical knowledge. But ultimately, this was dishonest and I would have found out very quickly he was not ready for the job.
If anyone is wondering, I gave him a chance to tell me why he was pausing for so long before answering and why his answers didn't seem to directly address my questions and he just insisted he was NOT pausing (he was, for up to 30 seconds at a time). It was generally just a really disappointing experience for all of us, I would imagine.
> ChatGPT is excellent at writing "fluffy" pieces full of empathy, compassion, PR talk, politician speech,
I think you're putting a bunch of completely different things in the same basket.
Making the text more fluffy will not make it more empathetic, not only (but mainly) because there isn't anyone to empathise with the respondent. Our bullshit receptors are good at spotting dishonesty, so it'll just sound cringe, putting it at the same level as PR talk. But that's not empathy or compassion, just cheap and obvious packaging.
> As an engineer who is unable to produce such writing, this tool is quite helpful!
I genuinely thought you were being sarcastic here. Of course you are able to produce such writing, and you don't need to use more filler words for that.
I do think that there's value in using GPT for training purposes here, that is, learning how one could express oneself in a more context-appropriate way. This is not much different from stylistic advice; e.g. in many languages a one-word yes/no response to a question is considered neutral, whereas in English it can be considered rude/abrupt, so people use question tags more often.
> Our bullshit receptors are good at spotting dishonesty
Is this a bad joke? What exactly leads you to that conclusion? People fall for dishonesty and straight up lies all the time. We're awful at recognizing it. Maybe you're the one in a billion who can spot a lie at a million miles, but the vast majority can't.
>Making the text more fluffy will not make it more empathetic,
Of course not, but empathy in communication (not in action) is full of fluff. It almost requires it.
>Our bullshit receptors are good at spotting dishonesty, so it'll just sound cringe
All "genuine" affirmation for the sake of empathy sounds cringe to me. I'd rather the doctor devoted their time to doing their job well instead of trying to make someone feel heard and validated - especially in an ER scenario where they are juggling critical patients. This tech can help with that.
> Of course not, but empathy in communication (not in action) is full of fluff. It almost requires it.
Empathy doesn't require fluff at all. (Think of all the short, poignant messages you've sent or received when someone is upset.)
The corporate need to not give ground on a complaint is where all that reassuring, repetitive, empty BS comes from.
> All "genuine" affirmation for the sake of empathy sounds cringe to me. I'd rather the doctor devoted their time to doing their job well instead of trying to make someone feel heard and validated - especially in an ER scenario where they are juggling critical patients. This tech can help with that.
Honestly with "for the sake of empathy" it sounds like you really don't place a high value on empathy (which is not unusual, or necessarily wrong). But if that is the case you're quite obviously not the right person to assess whether ChatGPT and the like can "help with that" in that context! :-)
>(Think of all the short, poignant messages you've sent or received when someone is upset.)
If it is short and not fluffy enough, it risks sounding dismissive. Those short messages generally pave the way for a deeper conversation about the subject.
>Honestly with "for the sake of empathy" it sounds like you really don't place a high value on empathy (which is not unusual, or necessarily wrong).
Not really. I think empathy is important in the right setting, but it is not the most important thing. Certainly not in the ER, where the doctor is overworked and has lives at stake. If they have the bandwidth, sure. If not, you can't blame them.
>But if that is the case you're quite obviously not the right person to assess whether ChatGPT and the like can "help with that" in that context! :-)
Disagree. I know how to sound empathetic for those that need it. Some people need words of affirmation and validation to be lifted. I am not one of them, but I understand. It is not that hard. Modern LLMs are more than capable of creating the prose for that and more. There is a time and place for everything though. My empathy generally drives me to action and solving problems.
> I know how to sound empathetic for those that need it.
But that isn't actually empathy, and people can tell the difference.
> Some people need words of affirmation and validation to be lifted.
Not the words. The understanding and sharing that underpins the words.
My point is this: if you think empathy can be faked successfully, you simply aren't the right sort of person to decide whether the results of automated faking with an LLM are valuable to the listener.
Because people can very often tell when empathy is being faked. And when they do discover that empathy is being faked, you are not going to be easily forgiven.
Empathy implicitly involves exposing someone's feelings to the air, as it were, in order to identify that you understand and share them. So faked empathy is variously experienced as insulting, patronising, hurtful, derisive etc.
Using an LLM to create verbose fluffy fake empathy is going to stick out like a sore thumb.
If this isn't something you find easy to understand at a level of feeling, don't fake empathy, especially at volume. Stick to something very simple and an offer of contextually useful help.
> My empathy generally drives me to action and solving problems.
I think this is noble and valuable, and I would in your shoes stick to this. Offers of assistance are a kindness.
But you should never pretend to share someone's feelings if you don't share their feelings. Especially not in volume.
Depending on the situation, it can make the job actively harder as well. Do I spend the time verbally fluffing up the relative of a patient, or go and tend to 5+ other patients who need critical care right now (the example in this article)? If the first one is expected of me, care suffers and the job is harder. I am not dismissing the needs of the patient's relative, don't get me wrong. But in an ER setting it rarely is the priority. If some tech makes it easier to give that need some additional bandwidth, that is a good thing.
This touches on something: the same reason that "I'm sorry" apologies should not be written by chatbots - for some things, it matters more that a human being did it than what exactly they did.
When a big mistake has happened and you need to apologize, it had better be coming from you yourself. If you think you can improve it by delegating it to a fancy autocorrect software, you're missing the point of why we apologize. Or if you think a bot is more 'empathetic' than a human paying attention, you've lost touch with what it means to be empathetic. A bot can't even feel, let alone feel your situation vicariously.
Many comments are saying it's helpful. But what happens when it's widespread and everyone is crafting Hallmark-type empathic messages to everyone else? Once everyone has bots, most likely these messages will all be sent straight to spam or something. We will be inundated with such messaging.
"Chatgpt, write an email based on these bullet points!"
"Chatgpt, summarize this email into a few bullet points!"
Somehow, I doubt we'll notice. The reason we don't send the bullet-points email today is that the receiver will think we're angry with them. In university the engineers all needed alcohol as a social lubricant; if ChatGPT replaces it, I expect we'll trade hangovers for higher electricity bills and carry on.
I look forward to AI-generated sappy Hallmark-like messages, because they're bound to be better written than the same types of messages most humans write today.
The sentiment will be no less fake. It'll just be better written.
We already do that during normal communication - people rarely say what they mean in the simplest way possible. We couch the message in all kinds of extraneous details. The tediousness of which is part of why some people want to use LLMs to generate messages for them in the first place.
Stripping that out might ironically reduce the need: if people come to expect their messages to be automatically stripped of fluff, why include it in the first place?
It's exactly because most of us can write concisely that there is a perceived benefit to using ChatGPT and similar tools to expand a concise instruction into a longer message that meets social expectations. Those social expectations are what often prevent us from sending those concise messages to people today.
I presume the issue is detecting whether the meaning is intended or not - like a doctor using it to communicate their care vs. a salesman using it to get a deal?
It's not actual empathy. It's writing that seems empathic.
The problem is that, exposed to enough writing that seems empathic but isn't, a person will learn that empathic writing is not sincere. We lose that as a society in the same way that if I check my email and see a "YOU'VE WON" subject I don't get excited, because I know it's fake.
Do we really want that? A world in which when you see an empathic message, the first thing you think of is that it's fake? I find the idea to be quite sad.
You say that as if it is a known fact, but I don't think that is a given at all.
The converse could be true: maybe being inundated with fake empathy is almost as good as being inundated with the real thing, a la the placebo effect (or the studies which have shown that just forcing yourself to smile can actually improve your mood).
Empathy and compassion aren't in the words. They're in the intent behind them.
Faked empathy literally isn't empathy. (Fake compassion is a little more arguable; there are clearly situations where faked compassion is better than no compassion.)
Not everyone needs this or can do this on the same scale. I have really had to learn to find empathy and be unafraid of it, and I definitely have nearly zero tact (if you have no tact, you practise thinking good things).
But if you need to be able to practise empathy and compassion regularly, and you are incapable of it, find someone who can do that for you. Celebrate their ability to do it and you'll benefit from it yourself. Trade them something they don't do so well.
Helpful until someone reading your “writing” realises it’s AI and that you’re a robot who needs AI to write something with empathy or compassion. I’d much rather hear someone’s real voice in their writing than something false. It reminds me of people in certain places who are fake nice and fake positive 100% of the time. It’s draining to be around them.
It is also draining to be around people who always require feeling heard and validated or else they get crabby. Two sides to everything. The doc in this story probably does not require AI to write something compassionate. He simply does not have the bandwidth to do it while juggling critical patients and a family that is not taking no for an answer and keeps insisting on inappropriate treatment.
Isn't that why there are nurses? Or maybe we need in-between positions: some people who actually do the diagnosis and surgery, and others to explain them. Wait, those are nurses?
In a more scheduled hospital / care setting, yes. In an ER setting (the example in the article), it is a lot more chaotic. Disseminating care information to people further down / up the hierarchy takes time that they might not have.
That makes sense assuming engineers don't need to practice this skill very often. But doctors need to do fluffy, compassionate communication every shift.
Nothing about the story makes sense. Doctors and nurses say these kinds of things to patients every day (I'm an EMT, I've watched them do it and I do it myself). I'm seriously wondering if the relatives were responding to the fact that the clinician was visibly reading from a piece of paper and that somehow made it more 'official'.
PR talk and political speech, I agree. I wouldn't call this fabricated shit "empathy" or "compassion"; for both, you need a little more than an LLM fine-tuned by SV lefties. In fact, whenever ChatGPT tries to be empathic, it feels very pathetic.
Ironically, your comment about the state of doctors is quite lacking in empathy for the doctors themselves.
Regardless of whether or not this article is a fake, I think you don't quite understand how overworked most doctors are, and how little time they are given per patient to take care of what needs to be done.
Remember that the AMA has been limiting the supply of doctors in order to maintain high wages, and the working conditions for interns/residents amounts to hazing.
That’s not how that works. The AMA doesn’t control the number of physicians. 20+ years ago the AMA supported congress reducing the number of residencies because at the time there was a predicted glut of doctors.
However the AMA has reversed that position and supports increasing the number. Again though they don’t control the number of physicians.
Putting aside the question of whether the linked article is fake, I don't necessarily fault medical workers for handling their work dryly.
Those doctors and nurses and technicians have to deal with all manner of disgust, biohazards, sadness, and most significantly death each and every single day. I cannot in good faith demand that they treat everyone with empathy; that might as well be psychological torture for the medical workers.
This isn't to say they shouldn't be courteous, professional, and kind to their patients, that should go without saying.
I understand this point. Many doctors I see are always very social and smiling in public, even though they have seen horrors in the ER. I kind of get the callous, emotionally stable, always cheery and positive, but not too emotionally deep demeanor they need to have to be able to operate like a robot, ironically enough (think of the doctor in movies who gives the bad news with a straight face).
But maybe, instead of what this article proposes, we do the opposite: we make our doctors more empathetic, and leave the robot to do the grunt work, machinery, surgery, etc. A lot of comments are saying they find the empathy useful. I'm not sure if I will be able to tell when someone sends me a crafted message, but I don't like the idea of a message being sent by ChatGPT that is meant to artificially create empathy; to me it's fake empathy.
This is all theory, but I don't think robots creating fake empathy would resonate with humans once this is widespread. Maybe it creates a sort of disconnection, where people just blatantly avoid falling for text messages and paragraphs that sound empathetic.
There are some similarities too with the movie Big Hero 6 and the robot Baymax, which was created as a care robot. Initially, Hiro is very annoyed by it because of its rote "artificial empathy" voice and messages. But its intelligence is what brings him around, when it understands things like context better.
> Maybe it creates a sort of disconnection, where people just blatantly avoid falling for text messages and paragraphs that sound empathetic.
That would be good to a degree. Right now, people are constantly falling for maliciously crafted empathetic/emotional messaging coming from the mouths and from under the pens of journalists, salesmen, politicians, advertisers, pundits, and social media influencers.
In some sense, it's really saddening that people find issue with emotional text written by a bot, while they don't seem to find any problem with being constantly subjected to malicious emotional messages from aforementioned ill-intentioned parties.
ER nurses doing triage are some tough mother*ers.
You walk in, bleeding and pretty sure you will die soon. The nurse takes one look at you and is not at all impressed: "Yeah, take a number, and keep pressure on the wound while you wait," or "We are really busy right now. You will have to wait for many hours. You don't really need a doctor. Just do ... ... ... and it will be fine."
One thing I have learned in life is that if you are at the ER and you have to wait a long time, you are lucky. It is when you are rushed into the back right away that you know that whatever has happened, it is severe, and you should be scared.
I'm going to go out on a limb and suggest that most people on HN have some experience of the patient family side of medical problems, some of them no doubt significant and regular.
We're not doubting the medical situation that the ER doctor agrees sounds plausible, that GPT could have produced that response, or that doctors finding it difficult to communicate with family members in intelligible and empathetic terms is a real problem.
But if you're the sort of person who would "melt into calm agreeability" when basically the same explanation was offered with generic corpspeak appended (i.e. you'd trust the doctor more if he answered your question by reading from a script so non-specific and non-empathetic that it finished with "if you have any questions or concerns please contact the medical team"!), and who would also be delighted to get the same script read out to you each time you asked a different staff member the same question, I think you're very much in the minority. Certainly, in my limited experience, I can assure you that I was more reassured by the empathy levels of professionals completely misunderstanding my question and thinking I was threatening to make a complaint about their standard of care than I would have been if they had pulled out an index card and read that they were all doing their best, the treatment is [boilerplate], please do not hesitate to contact the medical team in case of any questions, and we will reread the index card to you.
At best, I suppose, my reaction might be to interpret the index card boilerplate repetition as a polite way of telling me to fuck off and not offer any followup questions. You could even make an argument that this is medically useful; a more specific version of "here's a leaflet" so they can get on with doing their job.
But as written, where everybody loves the GPT boilerplate, the story reads like a classic of the LinkedIn/politician "people miraculously came round to supporting me and everybody cheered" genre.
My wife, an ER doctor, doesn’t believe that the doctor read the script to the patient. She thought that was completely absurd. She started laughing when she got to that part of the article.
For what it's worth, I'm a doctor and I find this story hard to believe. It does sound to me like wishful thinking by an AI hype guy.
In my experience, these sorts of situations happen when two things come together:
1. The patient's family has just enough medical knowledge to fall onto the wrong part of a Dunning-Kruger curve.
2. The family has certain beliefs about the medical establishment. Namely, that medical staff (especially doctors) are trained in medical school to have an inflexible way of thinking and that the doctors are dismissive of the family's proposed treatment because they are married to textbook thinking and rigidly following hospital protocols.
#1 happens all the time - maybe even the majority of the time - but it isn't enough to cause conflict on its own. #2 is the special sauce that makes the situation boil over.
What is the absolute last thing you want to do given #2? Probably something along the lines of handing every member of the staff a pre-written script to recite every time a family member asks questions. This will make them feel like they are being stonewalled, not like they are being listened to. This will confirm their fears about point #2, not ameliorate them.
I can't say I've ever seen someone print off a script quite like this before, but I have seen some doctors piss off patients/families by relying too much on pat phrases or repeatedly pointing to a particular clinical guideline or hospital policy as the rationale for their decision.
> The family has certain beliefs about the medical establishment.
> Namely, that medical staff (especially doctors) are trained in medical school to have an inflexible way of thinking
This belief saved my mother's life.
We came in with complaints of lung issues; for some reason the doctors decided allergies were most likely, but we wanted an X-ray. Months later they finally did a scan and found lung cancer.
I don't know if they are following a flow chart, but when dealing with NHS doctors I sometimes feel like I am talking to a bot.
They have the same routines; for example, they will never ever check for Vitamin D deficiency, etc.
I can't imagine what will happen once real bots / ChatGPT become widespread.
Well as with most beliefs, sometimes it's wrong and sometimes it's right.
I don't know anything about your mother's case, and I don’t personally have any experience with the NHS, so I can't say if it would have been appropriate to get a CT earlier in her case. Maybe it would have been.
I do feel that we as doctors can be too rigid in our decision making sometimes, but my thoughts on this are quite nuanced and probably very different from the average layperson's. It's probably also a multi-page writeup, so I won't get into it here.
In general though, CT scans cause cancer in about 1 out of every 1000 cases [1], so if we ordered them on everyone who asked, we could very well cause more cancer than we diagnosed.
In the US, we already have a system that is much more patient-driven and much more aggressive on testing than most of western Europe, and we still have worse health outcomes.
I agree with your concerns about ChatGPT. Patients want face-to-face time and I think they deserve that. Unfortunately, that's low-hanging fruit for C-suite executives to pick at when cutting costs.
I don’t think that first one is an ER doctor (having ER experience is not being an ER doctor - at least until COVID, every medical student had 4 weeks of “experience”); you’d just say you’re an ED physician.
But more pertinently, on reading it and a reply, I think what is being deemed realistic is the scenario (i.e. the patient’s family arguing about treatment), but that’s not terribly controversial; the follow-up comment even admits that they would not feel good if they were presented with an AI readout: “If they are not processing the information then you have to allow for that, but I do think some relatives would react very badly if given a printout from an AI to explain a situation”.
So I don’t think this makes your case as strongly as you think.
The other one is hearsay agreeing with the “premise” which is a charitable way of saying the story itself is likely bullshit - and in the end disputes that ChatGPT is useful here (go actually read the comment).
For what it’s worth I think the meat of the story (ie the use of ChatGPT) is farcical.
One, I’ve been on the other side even before medical training; this idea that no doctor could imagine what it’s like to be on the other side is stupid. When I had questions about my dying family member prior to any medical training, if the response had been to pull out a canned script of what are clearly corp-speak platitudes, I would have requested a different provider or a transfer.
And finally, pulmonary edema. Good example for an uninformed audience that doesn’t know better, but this is waaay too common - an ED doc in a medium busy service can easily see 10 presentations a day. This is so basic that this type of conversation happens many many times over (commonly multiple times a day for this same presentation). Answering family questions and addressing objections over routine presentations is something that an attending should have been doing for nearly a decade at least.
In academic medicine we talk about how to do this better but I see no evidence this ChatGPT nonsense presented has anything new to add.
I believe it. I've used chatgpt for inspiration on how to craft difficult messages with empathy. I've been managing people for a long time and could've done it independently, but it's good to get help. It can be tough to express empathy when exhausted after a long day of urgent demands and responsibilities.
This is no different than asking chatgpt to draft code that I could otherwise write from scratch. It's like having an assistant.
I get the approach of stylistic advice. I think that people in this thread are having two separate conversations, one about an empathetic style and one about empathy.
Empathy means sharing (often uncomfortable) emotion with another person.
If what the text represents is true, i.e. the model helped you express how you really feel but you're too tired to write it, that sounds like a net positive.
Although in my experience, if I'm too tired to sound empathetic on my own, I'd rather avoid messaging people at all, or put whatever message I have for them in context, e.g.: "Hey, it's been a long day and I'm knackered, so I'll be brief: ...". People appreciate directness and honesty. But I don't work much in places like large corps any more, where communication is already very formalised and "templated empathy" just follows the existing practices (not saying that you are, just giving an example here).
Right now it feels kind of neat, and maybe it works while others are reading well-crafted messages. But what happens when everyone has a bot, and a bot is talking to a bot? Because I, for one, don't want to be sent, or fall for, messages that are emotionally manipulative or that try to sound empathetic - maybe even are empathetic.
When this type of messaging is so widespread, I wonder what will happen. I think people will ignore most of it. Maybe we'll evolve to not be emotional anymore, when emotions are everywhere? I don't know, but it's an interesting hypothetical.
The person sending the message is accountable for its content, no matter how it was authored.
Maybe I want to be tactful around a delicate issue, but am struggling to find the right words. An LLM can help me. Alternatively, someone who wants to be manipulative and deceitful might also optimize their message using an LLM.
The tool is of course not inherently good or evil. But one can use the tool for different moral outcomes.
But we can quite easily feel the difference between these ChatGPT messages which are too much and a genuine message.
We know corporate BS when we see it.
And in the example in this post, I think the doctor said more or less the same thing to the relatives as the ChatGPT message; the biggest difference was that with the ChatGPT message he sat down with everyone around and read it out loud. Had he stopped, taken a few seconds/minutes with all the relatives around him, and talked in his own words, he would have achieved the same effect.
The other effect is that it was written down. Written text seems more authoritative than words that just come out of a mouth, so reading from a paper sounds more factual than just speaking. He could have used any paper, or prepared papers for the most common questions.
Will it really be a difference? Corporate speak is already a thing, people already feign emotions and sympathy. Sure, this is more, but it feels to me like more of the same; I don't think it even magnifies the problem that much.
I don’t see how these are different from any of the usual platitudes everyone commonly uses. I’d worry about everyone sounding the same but nevermind because we all use the same turns of phrase already.
I only believe it as a cynical marketing piece from an executive at a health-focused VC fund with several AI companies in its portfolio.
This could spectacularly have backfired: "<expletives>, what kind of doctor are you? Having to use ChatGPT to treat my mother. Do you even have a medical degree?"
When communicating difficult news, you have to be concise. The ChatGPT-generated response is very fluffy.
ChatGPT can also hallucinate, so you must be extremely careful when using it at the end of your shift.
Fake or not, people are incorporating ChatGPT into their work. We had a pipeline break today, and when we looked at the PR that introduced the change, it was in the PR notes that the tooling configuration was derived from a ChatGPT prompt. What do you even say or do about that?
I've been embarrassed by a Copilot code snippet that I proofread to seem correct, but was incorrect due to some subtle comparison logic.
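For what it's worth, a hypothetical sketch of the kind of thing I mean (this is an illustration I made up, not the actual snippet) - a comparison that reads as correct on a quick proofread but isn't:

    // Intended rule: review orders that are high-value AND (international OR expedited).
    function needsReview(order) {
      // Bug: && binds tighter than ||, so this evaluates as
      //   (order.total > 1000 && order.isInternational) || order.isExpedited
      // and every expedited order gets flagged, regardless of its total.
      return order.total > 1000 && order.isInternational || order.isExpedited;
    }

    // What was actually meant:
    function needsReviewFixed(order) {
      return order.total > 1000 && (order.isInternational || order.isExpedited);
    }

Both versions look plausible at a glance, which is exactly why this class of mistake survives a proofread.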
I can only imagine the risks in professions that aren't dealing with logic that has a very finite list of options, or where the answers AI produces require researched effort to validate.
It's probably worse in law if you're given a case reference as you're professionally supposed to read the document in substance to not be muddled by selective quoting, then if it's from a foreign jurisdiction you have to also check if it's applicable to you.
> I can only imagine the risks in professions that aren't dealing with logic that has a very finite list of options, or where the answers AI produces require researched effort to validate
This can be done the same way as it’s done in the non-AI-assisted process: mistakes are made, then the trained professionals correct the course of action. The problem is with people dealing and selling it like it’s a simple logical procedure. Then the layman will think that since a machine crafted the answer, and since machines are logical, the answer must be right.
Some software mistakes can cost millions... I'm not saying it happens all the time, but it can happen. We SHOULD try to be as accurate and bug-free as we can in this business; it's no joke.
Yeah, sometimes it's just an HTML div that looks funny (that can also cost the company, depending on how embarrassing it looked and how much traffic was impacted). But sometimes it's backend logic that screws up important data, or who knows what, with very real reputational or financial consequences for the business.
This is a good question. There are no tests for tooling, but the tooling can result in pipeline failures. Theoretically the pipeline should've failed outright when they went to merge. Instead it failed on specific conditions the developer didn't account for, because the configuration wasn't something they were familiar with - they hadn't built it from the docs.
Some specialties attract different personalities. If you’ve ever been in a hospital setting around a lot of neuro residents, many of them will present… oddly. It’s a feature. Likewise, a lot of ER docs are kinda adrenaline people who are great at triage and quickly stabilizing you, but maybe not so much good at relating with people. That’s a feature — you don’t need a soft touch dealing with trauma.
If ChatGPT can help someone perform better in their area of weakness, great. If it can help super smart people get information out of their heads, great.
I have worked with many poor communicators and outright idiots, but there is virtually no one graduating from a decent medical school with native-speaker skills who can't explain in simple terms why giving more fluids to someone who is overloaded with fluid is a bad idea (oh, and without using any jargon like edema or diuresis - another thing missing in this whole farce; any 1st year, let alone a decent attending, will explain clearly and simply what is being done. You’re not giving fluids, then what the fuck is your plan, doctor?).
I’ve yet to meet the combo of poor communicator/idiot that would believe reading a canned “empathy” script would somehow be helpful.
Seriously is there anyone here that would feel confident in their doctor if they came reading a plan off a sheet like a teleprompter?
This guy is trying too hard and assumes most of his audience won’t realize how idiotic this sounds.
I disagree with the first part, mostly because I believe doctor-speak is a skill medical professionals use to deal with patients who, in their desperation, think they know more than they do.
Yet I fully agree with the second paragraph. The idea of the teleprompter doctor working is so absurd it would be rejected immediately if not for the ChatGPT hype.
The teleprompter bit makes no sense. Do you feel less confident in your doctor after learning that any kind of serious treatment usually involves a checklist?
Following a script isn't a sign of lacking skills - it's a sign of being wise enough to recognize human limitations. Medical care isn't improv comedy.
Looking at notes or a paper is one thing and checklists and timeouts serve an important role in safety, but being unable to hold a conversation about a fundamental concept without a script is quite another, and it is lacking a skill by definition.
Really depends on the situation. I have no doubt a typical doctor could easily ELI5[0] you anything about the procedures or treatments they're administering - once well fed, well rested, and somewhat relaxed. But if you're catching them while they're overworked and asking them to context-switch on the spot, well... I'd expect it to work just as well as it would with software engineers. That is, if I was subject to random ELI5 requests during a busy work period, you'd bet I'd start preparing notes up front (and probably put them on a Wiki, and then give the people asking me a link to that wiki, and politely tell them to RTFM and GTFO).
--
[0] - "Explain Like I'm 5 [years old]". It's Reddit-speak, but describes the concept quite well, and there isn't a proper single word alternative for it.
First, it really depends on the specialty and the emphasis on clinical practice within that specialty.
But I'll address "I'd expect it work just as well as it would with software engineers."
This is not a great comparison. Orally presenting patients to peers and explaining things to non-experts are core skills of medical training and fundamental to clinical practice. Understanding the essential fundamentals of a handful of extremely common conditions, and explaining them in simple terms, is something that the majority of doctors will have to do thousands of times throughout residency, possibly at the tail end of a 24.
[I'm actually not in favor of it - because it often borders on, or is, hazing with little verified educational value - but a historically common and still present practice in some places is making the intern present critically ill patients to the day team and attending as they are coming off of a 24-hour shift, without any notes. Speaking of context switches, they will often have 5 or more patients like this. Even if not this extreme, the point is, doctor and software engineer training have little overlap (e.g. why numerous fools trying to make the next Stack Overflow for MDs have never succeeded).]
As for this example, I can't overemphasize how common pulmonary edema and volume overload are presenting findings in the ED. This is like an experienced programmer going to ChatGPT to explain the addition operator in Javascript, which you could do, but would it be unfair to expect someone to come up with an explanation on the spot? Maybe, but then probably medicine is not a career for you. It does emphasize a different set of skill sets. And yes picked that example on purpose, because maybe one doesn't remember all the stupid implicit type conversion rules but one can still come up with a basic explanation.
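To make that concrete, a minimal sketch of the "+" corner cases I have in mind (standard JavaScript coercion behaviour, nothing exotic):

    console.log(1 + 2);      // 3      - two numbers: numeric addition
    console.log(1 + "2");    // "12"   - one string operand: the number is coerced, then concatenated
    console.log(true + 1);   // 2      - true coerces to 1
    console.log([1, 2] + 3); // "1,23" - the array becomes the string "1,2" first

You don't need to recall every one of those rules to give a perfectly serviceable basic explanation of what "+" does - which is the point.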
As a doctor with a mixed socio-economic patient population, if I just say "your family member has pulmonary edema" I'm already getting a fucking blank stare much of the time, especially if it is new. I might as well just tell them they have dropsy. After 10 years of doing this for a living I can also tailor patient education to the audience; maybe someone wants an ELI6 (it is often the tech people - some of them are alright, though that crowd tends to want the iamverysmart explanation, or, like many on this site, to just explain why I'm an idiot who knows their field less than they do).
As for specialty, people go into pathology and radiology for instance to avoid all this once done with medical school, but that is only a portion of doctors. ELI5 is relative too - consulting subspecialists and radiologists must ELI5 to their more generalist colleagues.
> asking them to context-switch on the spot
I mean, this is part and parcel of hospital medicine. Five years out of residency one can easily run through a list and hand off 20 patients with a few notes on a single sheet of paper, whereas a medical student will often have an awkward folding clipboard with a ream of notes for their 3 patients.
> If I was subject to random ELI5 requests during a busy work period, you'd bet I'd start preparing notes up front
I need a few notes to remind me of a patient's clinical status in reference to their core problems. I absolutely do not need notes on how to converse with patients, which is 99% of the time the same things over and over. Despite what's on TV, medicine is overwhelmingly routine - the drama is usually the social issues.
> and probably put them on a Wiki, and then give the people asking me a link to that wiki, and politely tell them to RTFM and GTFO
Yeah, the starting point of this approach for the typical doctor and the typical techie are unsurprisingly starkly different. Telling patients to "RTFM" isn't particularly winning - and there is a whole field of academic study simply related to Patient Education.
I’m calling bullshit on the scenario where a distressed family member who is low-key claiming you’re mishandling the patient is suddenly calmed when confronted by a doctor reading a script filled with claims of empathy and care.
Reading the story really lays bare the real issue: he’s an important doctor guy who is too busy to help a couple of 70-somethings understand what he knows. I say that both in the cynical “what an asshole” sense and in a real sense - there’s a guy having a stroke 4 bays over. That’s a sort of paradox of medicine.
I think the canned script will be handed to a less important PA or NP who will use it as a conversation guide.
It only sounds idiotic with sufficient levels of cynicism coupled with a lack of empathy.
Two years ago I rushed to the ER suffering from pulmonary edema caused by misdiagnosed endocarditis which required emergency open-heart surgery. The most qualified of the surgeons were very terse and very busy with, you know, prying people’s rib cages open and such. Do the math and think about what else the ER could have been dealing with at that time…
Alright, I'm going to miss the point entirely here, but I'm swinging anyway: if I recall correctly, Kumar had basically studied and become a doctor, but was scared to take the final test due to his father's pressure, yada yada, something like that, and it wasn't through sheer luck that he saved the patient.
Yes, well summarized. I understood this too, and replied in another comment: this is how doctors many times come across as callous and really robotic people, but it's because they have to be ready to take on another case. The people who are sensitive don't stay for long, or make it all the way.
On one hand, I feel many developed that way as a means of adjusting to their profession. I'm not sure they are necessarily the happiest versions of themselves if they are like that. What I mean is, if they could choose to be more empathic without suffering, would they? I'm thinking they would, as it's a human thing to feel more. Reducing the suffering is another point to solve. Essentially, I don't know how comforting AIs would be as opposed to another human.
In that case, how can we make robots/AI do the dirty work and make humans do the more empathic work? Maybe have separate doctors who just do the communicating part, while doctors plus specialty AI robots (in the short-term transition, until it's fully autonomous robots?) do the surgeries, etc. that require steady, stable minds and lower empathy.