It's fascinating to see how AI tech is being applied to transform healthcare in China. It's now used at over 100 hospitals for pathology, radiology, and diagnostics, easing the workload on medical staff. This technology is helping address workforce shortages and keep practices up to date in China, while in the West it's used to make inane chatbots.
I don't know how you can trust such technology when it's been proven to produce false answers, hallucinating many of them, and this is inherent to AI models and not fixable.
And that's without taking into account the natural resources wasted (wasted, not merely used, because they can't be recycled) to produce such answers.
It's like trusting your life to a diagnosis from a doctor who is high all day on ecstasy, cocaine, beer, and whisky.
And who pisses in the only source of water you have available.
@DBG3D @yogthos this is why I don't like the blanket use of "AI" to describe a very wide range of technologies, from LLMs (which do all the things you say) through to relatively robust image classification systems (which seem to be mostly what's being talked about here). Of course, this suits the grifters in the industry just fine...
A "AI" will simply classify such images because they look similar to the ones used to train it. Is still a cut, pasye and compare algorithm.
Miss a type of "tumor", for example, on the training data and it will fail identifying it.
So they are NOT infallible.
Identifying illnesses and their symptoms takes a doctor years of expertise. Not to mention when the patient has more than a single illness.
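(A minimal sketch of the failure mode described above, assuming a toy softmax classifier with a fixed label set; all names here are hypothetical, not any real diagnostic system. The point: a model can only pick among the classes it was trained on, so an unseen tumour type is either mislabelled or, with a confidence threshold, routed to a human.)

```python
import numpy as np

# Hypothetical sketch, not any real diagnostic system: a trained classifier
# can only choose among the labels it saw during training. A tumour type
# missing from the training data is forced into one of the known classes
# unless the system is allowed to abstain and defer to a human.

KNOWN_CLASSES = ["healthy", "tumour_type_A", "tumour_type_B"]

def softmax(logits):
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

def classify(logits, threshold=0.8):
    """Return a known label, or defer to a human when the model is not
    confident enough; it can never output a class it has never seen."""
    probs = softmax(logits)
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return "flag_for_human_review"
    return KNOWN_CLASSES[best]

# An unseen tumour type tends to produce ambiguous logits: with a threshold
# it is routed to a human instead of being silently mislabelled.
print(classify(np.array([1.1, 1.0, 0.9])))  # flag_for_human_review
print(classify(np.array([5.0, 0.1, 0.2])))  # healthy
```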
@DBG3D @yogthos I don't think I ever said that they were infallible, so I will thank you to not put words in my mouth. They *are* useful tools to help doctors spot indicators of illness more rapidly - and even doctors themselves will miss features, misdiagnose illness and so on, so the actual comparison should be the *degrees* of fallibility between the two.
@DBG3D @aoanla the fact of the matter is that this is what a human doctor does as well, and they're not infallible either. Doctors make mistakes all the time.
The years of experience a doctor gains are years of learning to recognize patterns. Except the sample size they work with is much smaller.
However, the key part is that the doctor is still the final decision maker; it's just that they now have a broader perspective on the problem.
@yogthos @DBG3D right, precisely. Plenty of doctors *also* don't recognise rare diseases, because they either never encountered them or have forgotten them since they graduated, not having seen a case in two decades! As you say, the important part is that we have humans in the loop - but that absolutely doesn't mean that we can't use tools to assist them.
@DBG3D it's not the final decision maker, it acts as an assistant to a human making a diagnosis. This is a very good use for the technology, because doctors often miss things, whether because they're tired, they didn't pay attention to a particular symptom, or they lack the training to recognize it. Having a system like this draw their attention to a potential problem is a net positive here.
That would be brilliant if the "artificial intelligence" were infallible and didn't have bugs.
At the end of the day it's simply software written by humans.
@DBG3D again, humans are not infallible and make mistakes all the time. What you should be asking is whether using these tools statistically improves outcomes or not. The video description provides citations illustrating that the outcomes in China have improved as a result.
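(A minimal sketch of the kind of comparison meant here, with purely illustrative numbers that are not taken from the cited studies: a two-proportion z-test on miss rates with and without the assisting tool.)

```python
import math

# Purely illustrative numbers, not from the cited studies: suppose
# unassisted doctors miss 80 of 1000 cases and tool-assisted doctors miss
# 50 of 1000. A two-proportion z-test asks whether that gap is plausibly
# just noise.

def two_proportion_ztest(misses_a, n_a, misses_b, n_b):
    p_a, p_b = misses_a / n_a, misses_b / n_b
    pooled = (misses_a + misses_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = two_proportion_ztest(80, 1000, 50, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # z ~ 2.72, p ~ 0.007: the assisted group misses fewer
```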