Specificity and AI
H/T Gideon Rosenblatt
Humans still needed.
Project-based training and retraining for AI.
Originally shared by Gideon Rosenblatt
AI Training Sensitivity
After recently writing a piece on human bias infiltrating AI, I've been thinking about the way that machine learning is so very dependent upon human training. It wasn't until reading this piece by Charlie Schick, however, that I realized just how sensitive these algorithms are to training sets:
Train the AI on one thing and it does it well. But then the AI can’t be generalized or repurposed to do something similar.
I’ve seen this in action: we had a product that was great at mimicking the medical billing coding a human could do. After training the system for a specific institution, using that specific institution’s data, the system would then perform poorly when given data from another institution. We always had to train to the specific conditions to get useful results. And this applied to all our machine learning models: we always had to retrain for the specific (localized) data set. Rarely were results decent on novel, though related, data sets.
Alas, this cuts both ways: training on local data gets us the best results, but it also means we need people and time (and money) every time we shift to another data set.
http://www.molecularist.com/2016/10/come-down-to-earth-some-hidden-truths-about-ai.html
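To make the point concrete, here is a minimal sketch in Python with scikit-learn. The "institutions", their data distributions, and their coding rules are all invented for illustration; this is not the system Schick describes. It shows the pattern he reports: a model trained on one institution's data degrades on another institution's data, and retraining on the local data restores useful accuracy.

```python
# Illustrative sketch only: synthetic "institutions" standing in for the
# dataset-shift problem described above. Names and rules are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_institution_data(n, shift, coding_rule):
    """Simulate billing records: features drawn from an institution-specific
    distribution, labels assigned by an institution-specific coding rule."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = coding_rule(X).astype(int)
    return X, y

# Institutions A and B differ in both their data distribution and how they code.
X_a, y_a = make_institution_data(2000, shift=0.0,
                                 coding_rule=lambda X: X[:, 0] + X[:, 1] > 0)
X_b, y_b = make_institution_data(2000, shift=1.5,
                                 coding_rule=lambda X: X[:, 0] - X[:, 2] > 1.0)

# Train on institution A only. (For brevity we score on the training data
# itself rather than a held-out split.)
model = LogisticRegression().fit(X_a, y_a)
print("A -> A accuracy:", accuracy_score(y_a, model.predict(X_a)))
print("A -> B accuracy:", accuracy_score(y_b, model.predict(X_b)))  # much worse

# "Retrain for the specific (localized) data set": fit on B's own data.
local_model = LogisticRegression().fit(X_b, y_b)
print("B -> B accuracy:", accuracy_score(y_b, local_model.predict(X_b)))
```

Running this, the model trained on institution A scores well on A but poorly on B, while the locally retrained model scores well on B, which is the "people, time, and money per data set" cost the post describes.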
Thanks for sharing, Zara Altair. Yeah, that's pretty much the headline, huh?
Gideon Rosenblatt: Yep. :)