It’s important to remember that a lot of things are labeled as AI that are not.
For example, I worked in health insurance claims processing for 11 years, ending about 10 years ago. We used software to help identify claims that were duplicates. It compared codes, dates, and diagnosis codes.
A human reviewed potential duplicates that weren't exact matches, and the matching and mismatching information was displayed for the human to review. Often it required a phone call to determine whether the claim was a correction, the date was wrong, or any number of other reasons. Sometimes a human could figure it out on their own, though.
Nowadays, such a program would be called "AI." But it was just a program that helped identify possible duplicates and then displayed the information a human would need to make the determination. The program could also auto-deny claims if certain criteria were met that flagged them as "almost certain" duplicates. And it could auto-decide that a claim was not a duplicate.
In that case, the duplicate review prompt didn’t even come up for a human to see. It was fully automated.
Most claims were actually processed fully by software; only a small percentage required human intervention or review.
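The workflow described above amounts to a rule-based classifier with three outcomes: auto-deny, auto-clear, or route to a human. Here is a minimal sketch of that idea; the field names and the specific rules are hypothetical, not the actual system's:

```python
from dataclasses import dataclass, fields


@dataclass
class Claim:
    member_id: str
    service_date: str
    procedure_code: str
    diagnosis_code: str
    billed_amount: float


def duplicate_decision(new: Claim, existing: Claim) -> str:
    """Three-way rule-based decision: 'auto_deny', 'auto_clear',
    or 'human_review'. Rules here are illustrative only."""
    names = [f.name for f in fields(Claim)]
    mismatches = [n for n in names
                  if getattr(new, n) != getattr(existing, n)]
    if not mismatches:
        # Exact match on every field: almost certainly a duplicate.
        return "auto_deny"
    if new.member_id != existing.member_id \
            or new.procedure_code != existing.procedure_code:
        # Different member or different procedure: clearly not a duplicate.
        return "auto_clear"
    # Partial match: show the matching/mismatching fields to a person.
    return "human_review"
```

In a real system the middle bucket is where the phone calls happen: the software surfaces which fields matched and which didn't, and a human decides whether it's a correction, a typo in the date, or a true duplicate.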
But yeah, that was in place before I started there and was still in place when I left 10 years ago. We didn't call it AI back then; we called it automated claims processing.
If we only count AI programs like LLMs and "machine learning algorithms," that's a completely different thing. And it's kind of shitty in comparison to the claims software I used at the insurance company, which was tailored and designed by a human at every step.
The distinction needs to be made when comparing how useful they are, because lumping effective conventional software in with "AI" inflates AI's perceived value.