I am using SHAP values to understand how my model works, trying to do some downstream sense-making (it’s a regression task). Should I scale my SHAP values before working with them? I have always thought scaling isn’t needed, since they are literally an additive explanation of the prediction. What do you think?
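The additivity property the question relies on can be checked directly: for any instance, the baseline (expected value) plus the sum of the per-feature Shapley values reconstructs the model's prediction exactly. Here is a minimal sketch in pure Python that computes exact Shapley values for a hypothetical toy model (the function `f`, the background point, and the instance `x` are all made up for illustration) and verifies the additivity identity:

```python
from itertools import combinations
from math import factorial

# Hypothetical toy model with an interaction term (any function works).
def f(x):
    return 3 * x[0] + 2 * x[1] + x[0] * x[1]

# Illustrative baseline point and instance to explain.
background = [0.0, 0.0]
x = [1.0, 2.0]

def value(S):
    """Model output with features in S taken from x, the rest from background."""
    z = [x[i] if i in S else background[i] for i in range(len(x))]
    return f(z)

def shapley(i, n):
    """Exact Shapley value of feature i over n features."""
    others = [j for j in range(n) if j != i]
    phi = 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            # Classic Shapley kernel weight |S|! (n-|S|-1)! / n!
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            phi += w * (value(set(S) | {i}) - value(set(S)))
    return phi

n = len(x)
phi = [shapley(i, n) for i in range(n)]

# Additivity: baseline + sum of Shapley values equals the prediction,
# so the values are already on the scale of the model output.
assert abs(value(set()) + sum(phi) - f(x)) < 1e-9
```

Because the values already sum to the prediction in the model's output units, rescaling them would break that interpretation; this identity is the usual argument against scaling.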
submitted by /u/No_Fox2509 to r/learnmachinelearning