I agree that technology can get there over time, even if it is not there today. I am not a doctor, so I could be completely wrong, but doctors essentially do pattern matching: if symptom one and symptom two, then disease one. AI will have far more knowledge than any single doctor in most scenarios. A human doctor will keep an edge where there is not a lot of data, or where AI underweights a symptom that the doctor's experience says matters. However, we still need to address ethical and other questions. I am just listing a few:
How would we handle a medical error? Would we blame the doctor or the AI? And won't most doctors start trusting AI over their own judgment, because they assume the AI knows more than they do?
A great example comes from Kasparov playing against Deep Blue:
“He later said he was again riled by a move the computer made that was so surprising, so un-machine-like, that he was sure the IBM team had cheated. What it may have been, in fact, was a glitch in Deep Blue’s programming: Faced with too many options and no clear preference, the computer chose a move at random. According to Wired, the move that threw Kasparov off his game and changed the momentum of the match was not a feature, but a bug.”
Absolutely agree with you that the intersection of AI and healthcare is riddled with ethical questions and considerations. A relatively harmless first step would be to see whether health outcomes can be improved by supporting doctors with AI, rather than replacing doctors (or any medical expert) with AI. I also fully agree that in scenarios where there is a lot of data, AI is likely to have an edge over humans.
My experience from providing technology to many thousands of people in organizations over the years is that most people build dependency. They then blame the technology for a mistake, even when they have been told to verify its output. Most people look for shortcuts, and quite a few are lazy, which leads to overreliance. I fear that many doctors will do the same, which would be disastrous for patients. So the bigger question is: if we go with an AI-assist mode, how do we ensure this does not happen?
Yes, I can see how overreliance is a real risk. I'm not sure I agree that "most people will look for shortcuts," but some definitely will. In general, education around the capabilities and limitations is crucial.
Here is an example of such a shortcut, from the WSJ:
"Job seekers, frustrated with corporate hiring software, are using artificial intelligence to craft cover letters and résumés in seconds, and deploying new automated bots to robo-apply for hundreds of jobs in just a few clicks. In response, companies are deploying more bots of their own to sort through the oceans of applications.

The result: a bot versus bot war that's leaving both applicants and employers irritated and has made the chances of landing an interview, much less a job, even slimmer than before."