By Clif High - February 23, 2023
Desperately Seeking Diagnosis!
Intense, Deep, Sometimes Painful, Fluttering!
Research shows that most, as in over 90%, of medical diagnoses are WRONG!
Wrong in whole, or in part. A staggering 60+% are completely wrong. Totally wrong disease, such as treating a liver malfunction when it is really colon cancer. That sort of wrong. Treating the wrong organ or system. Completely wrong.
I suffered such a wrong diagnosis with my colon cancer for 40+ years. The cancer was so deep, and so affected my life, that it made me a vegetarian for over 35 years due to the inability to properly digest foods. For four hard years, I was vegan. All due to a wrongly diagnosed colon cancer growing deep within me.
During the last 15 years of the colon cancer's progress toward killing me, I suffered a condition, arising from the cancer, that the medicos term 'intussusception', in which part of the bowel is telescoped into another part due to the effects of a cancer on the outside of the intestinal wall. It produces huge amounts of incredible, body- and mind-twisting pain, followed by a death that is an actual relief. I was happy to die. At that time, this world could offer me nothing.
I had what they called 'smooth wall cancer' that did not show in any of the many hundreds of tests that I had endured over 40 years trying to find out what was wrong with my body. This 'smooth wall' wording is how the docs & techs describe it when they do locate such a cancer by colonoscopy. They see just a 'smooth wall bulge' within the field of view of the scope. No sign of the cancer at all, as it has gone through the intestinal wall and is growing on the other side of it.
During those 40+ years of seeking an understanding that was at least effective, if not accurate, I tried to communicate what was happening to my body, and the sensations my brain received, desperately seeking a diagnosis. I used perhaps thousands of different words and phrases in this attempt, with over 100 different doctors.
It did not work, and I died on July 13, 2018 of a colon cancer undetected until that day.
The colossal irony that I had been working deeply with language since the early 1990s was not lost on me as I left my body. In that instant, I visualized the ‘why’ of my death as a ‘failure to communicate’, and I saw a way to correct it.
Too late for me, though. I was leaving my body as the thought came up that I needed to invent a translation utility that could work between patients and medical personnel. As I lay there dying, listening to the doctors and technicians describe the process to each other, it dawned on me that they did NOT speak my language. Virtually no English was involved. It was a quite shocking revelation that I had been so stupid as to not grok the nature of the issue all these years.
My language model, developed in the 1990s as the base for predictive linguistics modeling, has provided many interesting understandings about how the Large Language Models (LLMs) may be used to augment humans. None of which occurred to me until the day of my death.
Oh, I had had a few noodles, mere fractions of notions, float across my mind over the decades of working the predictive linguistic aspects of the LLM that I had developed, BUT turn it to my own situation? Nope, I was not that smart. I was too focused on chasing predictive linguistics, and the future, to my grave to realize it need not be that way.
My appreciation of the failures of the medical 'system' is accurate, acute, and sharp. Dying does that to you; it crystallizes your thinking at a deep level that nothing else will provide. The medical system is broken from its very inception, because a central component, the cliché that is 'doctor speak', is not, and never has been, acknowledged as THE primary barrier to effective medical care.
We train doctors for years, sometimes for decades with specialties, to speak another language. It makes sense that communication problems arise. We don’t think to back-fill the process with translation utilities.
Yes, nursing staff USED to occupy that role, but as medicine has become more technical, and more test-, result-, and metric-driven over the decades, the nursing staff, through training and osmosis, has lost the ability to speak 'normie', with the creep of their language into 'medical speak'.
My idea now, after having explored some of the capacities of ChatGPT, is to use this AI LLM as the basis of that "normie to doctor" translation utility.
We would connect ChatGPT to medical dictionaries, then to very diverse symptom descriptions from patients, taken from as many sources as we could locate. The idea is a corpus of 'normie speak while ill', such that the effects of the illness on the mind, and on the language choices made by the patient, would be taken into account by the AI in its translation to the doctors.
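To make the corpus idea concrete, here is a minimal sketch, with an entirely hypothetical toy corpus, of how lay symptom descriptions could be matched against patient phrasings attached to medical terms. A real system would use an LLM over a vastly larger corpus; the phrases, terms, and the `candidate_terms` function here are illustrative assumptions only, not medical advice.

```python
# Hypothetical sketch of the "normie speak while ill" corpus idea:
# map a patient's own words to candidate medical terms by simple
# word overlap with lay phrases previously collected for each term.
# All corpus entries below are illustrative assumptions.

CORPUS = {
    "intussusception": [
        "fluttering under the abdomen",
        "deep internal twisting pain",
        "feels like my gut is folding into itself",
    ],
    "gastritis": [
        "burning in my stomach",
        "gnawing ache after eating",
    ],
}

def candidate_terms(patient_words: str, min_overlap: int = 2) -> list[str]:
    """Return medical terms whose lay phrases share words with the input."""
    words = set(patient_words.lower().split())
    hits = []
    for term, phrases in CORPUS.items():
        # Best overlap across all lay phrases recorded for this term.
        overlap = max(len(words & set(p.split())) for p in phrases)
        if overlap >= min_overlap:
            hits.append((overlap, term))
    # Highest-overlap candidates first.
    return [term for _, term in sorted(hits, reverse=True)]

print(candidate_terms("intense, sometimes painful, internal fluttering under the abdomen"))
```

The point of the sketch is only the shape of the data: many patient phrasings per clinical term, gathered from real descriptions, so that a description like mine could at least surface 'intussusception' as a possibility.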
For the last 7 years of the progress of the colon cancer, I had had the 'intussusception' developing. It was painful, and ultimately led to bleeding, but never to a diagnosis until my last day on this planet.
What I want to create is a tool, a translation utility, that a patient could interact with using their own words and expressions, one that would have translated "intense, sometimes painful, internal, just under the abdomen, fluttering" and determined that it was even POSSIBLE that this was 'intussusception'.
I had had the sensations, and pain, and disability, of ‘intussusception’ for 7 years before the word ever arose in ANY discussions with any physician, or technician, and I had spoken with hundreds.
I have interacted with ChatGPT and have decided it can do this work. It can be used as the bridge to connect 'normie patient' speech to doctor/technician brains.
ChatGPT agrees with me that this can be done.
I am writing this essay to encourage others to use ChatGPT for just such a tool. The concepts are easily understood by programmers, and they could also, relatively easily, provide a 'tuned' version explicitly taught to translate descriptions of body sensations into diagnostic possibilities.
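One plausible shape for that 'tuned' translator is simply a carefully framed prompt handed to the model along with the patient's own words. The template below is an assumption of mine, not a deployed system; it only shows how the instruction to translate, and to label outputs as possibilities rather than diagnoses, could be packaged.

```python
# Hypothetical prompt template for the proposed 'normie to doctor'
# translator. The wording is an illustrative assumption; a tuned model
# would bake this behavior in rather than rely on a prompt alone.

PROMPT_TEMPLATE = """You are a patient-to-clinician translation utility.
The patient describes sensations in everyday language. Translate the
description into clinical terminology and list conditions a physician
should consider, clearly labeled as possibilities, not a diagnosis.

Patient description: {description}
"""

def build_prompt(description: str) -> str:
    """Wrap a patient's own words in the translation instruction."""
    return PROMPT_TEMPLATE.format(description=description.strip())

prompt = build_prompt(
    "intense, sometimes painful, internal, just under the abdomen, fluttering"
)
print(prompt)
```

The design choice worth noting is that the patient's words pass through untouched; the translation burden sits entirely on the model's side, which is the whole point of the utility.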
Another benefit of such a system is that it augments the doctor's time by moving a big part of the diagnostic procedure over to interaction with the AI.
I have been, and am, exploring this idea with ChatGPT, but thought to get others clued in to the possibility, so that the work is not limited to my efforts.
ChatGPT also agrees that this is an effective strategy.