Researchers Reduce Bias in AI Models while Maintaining or Improving Accuracy
Machine-learning models can fail when they try to make predictions for people who were underrepresented in the datasets they were trained on.
For example, a model that predicts the best treatment option for someone with a chronic disease might be trained using a dataset that contains mostly male patients. That model may make incorrect predictions for female patients when deployed in a hospital.
To improve outcomes, engineers can try balancing the training dataset by removing data points until all subgroups are represented equally. While dataset balancing is promising, it often requires removing a large amount of data, hurting the model's overall performance.
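To make that tradeoff concrete, here is a minimal sketch of the simplest form of dataset balancing: downsampling every subgroup to the size of the smallest one. The arrays `X`, `y`, and `group` are hypothetical stand-ins for features, labels, and a subgroup attribute such as patient sex; note how much data this can discard when one subgroup is small.

```python
import numpy as np

def balance_by_downsampling(X, y, group, seed=0):
    """Downsample every subgroup to the size of the smallest one."""
    rng = np.random.default_rng(seed)
    labels, counts = np.unique(group, return_counts=True)
    n_min = counts.min()  # the rarest subgroup sets the per-group budget
    keep = np.concatenate([
        rng.choice(np.flatnonzero(group == g), size=n_min, replace=False)
        for g in labels
    ])
    # Everything outside `keep` is thrown away, even if it was useful
    # for the model's overall accuracy.
    return X[keep], y[keep], group[keep]
```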
MIT researchers developed a new technique that identifies and removes the specific points in a training dataset that contribute most to a model's failures on minority subgroups. By removing far fewer datapoints than other approaches, this technique maintains the overall accuracy of the model while improving its performance on underrepresented groups.
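As a rough illustration of that idea (not the paper's exact algorithm), the sketch below scores each training point with a first-order, TracIn-style approximation: the dot product between the point's loss gradient and the average loss gradient on a held-out minority-group set, under a logistic-regression surrogate. Points whose gradients push the minority-group loss up the most are pruned, leaving the rest of the dataset intact. All array names (`X_tr`, `y_tr`, `X_min`, `y_min`) are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def harmfulness_scores(w, X_tr, y_tr, X_min, y_min):
    """Score training points by how much they hurt a minority-group set."""
    # Per-example gradient of the logistic loss: (p - y) * x
    g_tr = (sigmoid(X_tr @ w) - y_tr)[:, None] * X_tr
    # Average loss gradient on the held-out minority-group examples
    g_min = ((sigmoid(X_min @ w) - y_min)[:, None] * X_min).mean(axis=0)
    # A gradient step on a training point changes the minority-group loss
    # by roughly -lr * <g_tr, g_min>; a negative dot product means the
    # point pushes that loss up, so we flip the sign to get "harmfulness".
    return -(g_tr @ g_min)

def prune_most_harmful(X_tr, y_tr, scores, k):
    """Drop only the k highest-scoring (most harmful) training points."""
    keep = np.argsort(scores)[: len(scores) - k]
    return X_tr[keep], y_tr[keep]
```

In practice the scores would come from the trained model's own parameters `w`, and `k` would stay small, which is what lets this style of pruning preserve overall accuracy.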
In addition, the technique can identify hidden sources of bias in a training dataset that lacks labels. Unlabeled data are far more prevalent than labeled data for many applications.
This method could also be combined with other approaches to improve the fairness of machine-learning models deployed in high-stakes situations. For example, it might someday help ensure that underrepresented patients aren't misdiagnosed due to a biased AI model.
“Many other algorithms that try to address this issue assume each datapoint matters as much as every other datapoint. In this paper, we are showing that assumption is not true. There are specific points in our dataset that are contributing to this bias, and we can find those data points, remove them, and get better performance,” says Kimia Hamidieh, an electrical engineering and computer science (EECS) graduate student at MIT and co-lead author of a paper on this technique.
She wrote the paper with co-lead authors Saachi Jain PhD ’24 and fellow EECS graduate student Kristian Georgiev.