This is a big flashy headline that isn’t as big of a deal as it presents itself. AI is still extremely far from assisting doctors, let alone replacing them.
“Diagnoses a 1 in 100,000 condition in seconds” is an absolutely meaningless statement.
What was the condition? Does it present with vague, difficult-to-assess symptoms, does it have a pathognomonic clinical sign that identifies it immediately, or is it somewhere in between? Did the AI diagnose it correctly, and if so, was it on the first try? Is it repeatable: could it diagnose it again? How prone is it to false positives? Can we be sure it wouldn’t flag a healthy patient, or a patient with a similarly presenting problem? What about false negatives? It caught it this time, but do we know how many times it has missed it? What about a treatment plan? Does it know how best to treat the condition, and can it personalize a treatment to fit that specific patient, with any comorbidities or conflicting medications taken into account? When planning treatments, does it stick strictly to the drug label, or does it factor in published research on dosing?
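To make the false-positive question concrete, here’s a rough back-of-the-envelope sketch. The 99% sensitivity and specificity figures are made-up assumptions for illustration, not numbers from the article:

```python
# Back-of-the-envelope: what a positive flag means for a 1-in-100,000 condition.
# The sensitivity/specificity figures are hypothetical, purely for illustration.

prevalence = 1 / 100_000   # 1 in 100,000 condition
sensitivity = 0.99         # assumed: P(flagged | has condition)
specificity = 0.99         # assumed: P(not flagged | healthy)

population = 1_000_000
sick = population * prevalence                  # 10 real cases
healthy = population - sick                     # 999,990 healthy people

true_positives = sick * sensitivity             # ~9.9 caught
false_positives = healthy * (1 - specificity)   # ~10,000 false alarms

# Positive predictive value: chance a flagged patient actually has it.
ppv = true_positives / (true_positives + false_positives)
print(f"PPV: {ppv:.2%}")   # ~0.10%, i.e. roughly 1 in 1,000 flags is real
```

Even a model that’s right 99% of the time would bury every real case under about a thousand false alarms at that prevalence, which is exactly why “it diagnosed it once” tells us nothing.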
While I agree it’s less than the hype, there are already people, whether focused on moving up the ladder quickly or just lazy, who are quietly using GPT and taking credit (with or without checking any of it first). I read about a law firm that was caught using GPT for a case, mainly because the legal citations they submitted were just made up by GPT and couldn’t be found to have ever existed. They then claimed they weren’t aware the AI would provide fake information, since it sounded real enough.
Not to mention all the tech companies that are having to tell workers to stop uploading code or other information for the AI to work on. Given the lack of fucks given by so many docs with pill mills and opioids, I am more than willing to believe there are already docs all over using GPT or any of the others.
I can attest to many docs/nurses not giving any fucks even when all we needed were correct diagnostic codes so the lab company I worked for years ago could simply bill insurance. We had to get a specific code, not a general one (e.g. they couldn’t just use 264, the general code for “Vitamin A deficiency”; they’d need something like 264.1, “With conjunctival xerosis and Bitot’s spot,” to specify which kind of Vitamin A deficiency). I would have to call when codes were missing or not specific enough, and a shockingly high number of the docs who had ordered the freaking tests just told me to use “whatever made sense.” Our already fucked medical services are gonna get much worse.
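For what it’s worth, the check we needed was mechanically trivial; here’s a toy sketch of the idea (the code tables are a tiny made-up excerpt for illustration, not a real ICD-9 dataset):

```python
# Toy check: reject diagnostic codes too general to bill against.
# The tables below are a tiny hypothetical excerpt, not a real ICD-9 dataset.

BILLABLE = {
    "264.0": "Vitamin A deficiency with conjunctival xerosis",
    "264.1": "Vitamin A deficiency with conjunctival xerosis and Bitot's spot",
}
NON_SPECIFIC = {
    "264": "Vitamin A deficiency (general; a specific 264.x subcode is required)",
}

def check_code(code: str) -> str:
    """Classify a submitted diagnostic code as billable, too general, or unknown."""
    if code in BILLABLE:
        return f"OK: {code} ({BILLABLE[code]})"
    if code in NON_SPECIFIC:
        return f"REJECT: {code} is a general category; ask the doc for the specific subcode"
    return f"UNKNOWN: {code}; call the ordering physician"

print(check_code("264"))    # REJECT: needs a 264.x subcode
print(check_code("264.1"))  # OK
```

The lookup was never the hard part; the hard part was getting the doc who ordered the test to care which subcode actually applied.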
I mean, sure. I know people who have used ChatGPT to write their discharges. It’ll definitely be tried as a crutch by the lazy in the short term, but I think it’ll end up being used as an actual tool in the long term (not just in medicine, but in a wide variety of fields). However, I also think that’s an entirely different discussion than the one this article presents. The conversation about how AI can be used as a tool to assist existing and future professionals is entirely different from whether or not AI is going to replace any given profession. It’s also a wildly more productive conversation, because I don’t believe there are many professions that can be completely phased out by AI.
I also think the point you raised about codes is another entirely different discussion that could be had about the pitfalls of modern-day medicine. I’m actually going to argue hard in favor of the doctors who told you to “choose whatever code is most appropriate,” because in my experience and opinion, knowing specific billing codes is wildly outside the scope of knowledge needed and expected of a doctor. Their job should be first and foremost to treat their patients. Navigating the unnecessarily complicated, red-tape-filled maze of billing and insurance codes is not only an unrelated skill set, but also a necessity born of a flawed and predatory system built by those who seek to profit at the cost of healthcare (i.e. insurance companies) rather than those who seek to make a living by providing healthcare.
I mostly agree with you on the first paragraph, though I’m not so sure they won’t try to phase out plenty of professionals. All of these companies are freaking the fuck out and rushing shit out at rates that are beyond problematic. All the VCs and major tech companies are chasing the dragon of money and ignoring the real issues that come with normies thinking this shit is magic and accurate. Though I also blame the for-profit media of all types that are all about hyper-attention-grabbing titles, which feeds normies thinking things are much further along than they are. So even when the devs make a point of saying things aren’t even fully in beta, we’re seeing shit pushed out like it’s a full release. We already saw how fast internet folks were able to turn old chatbots into outright Nazi sympathizers. There are already well-equipped blackhats out there trying to take some of these models and strip out the protections. Though it is very fun to see how the more prankster-minded folks are bypassing some things just by wording shit differently.
I also agree that the healthcare and insurance system needs to be burned down and replaced with something tax-funded, with lives being more important than any money. But the codes I was speaking of weren’t billing codes; they were actual medical diagnostic codes. So it does kind of matter that they be correct, because getting them wrong implies the diagnoses aren’t based on knowing the specifics. One wrong code could change what’s being looked for and waste time. Though I will stress that it likely doesn’t matter for most common tests, so I yield on those situations.
Overall I do want to say again that I think we may agree more than my initial reply might have sounded (or even this one). I just think we can’t keep the constant hype trains running so hard all the time as if things are guaranteed to work. We are being charged more for less finished products, and we are seeing the beginnings of mass workforce reductions as the corpo leaders decide we aren’t needed anymore. That could work in favor of a workers’ revolution if leftists really start going hard at catching those impacted before they fall into reactionary hands and fascism takes over again. But I don’t wish for so many to suffer more than they already have.
Why does it have to be perfect to be useful? It could just throw ideas at the real doctor, who then decides what is actually the most reasonable thing.
I never said it can’t be useful, just that it isn’t very useful right now, and it certainly isn’t going to replace doctors any time soon. As I said in another comment, I think AI will eventually be a tool that could be used to help doctors.