Hi everyone,
I want to know if anyone else has faced a similar situation in their data science projects. At the beginning of a recent project, my team and I settled on a model performance metric that we believed would adequately capture the success of our efforts. However, once we started running experiments, we quickly realized that the models we were developing couldn’t meet the acceptance criteria we had set.
After discussing it with my team, we concluded that we might need to rethink our approach and adjust the performance metric. This felt a bit daunting: I didn’t want it to seem like we were lowering our standards, but I also wanted to be realistic about what we could achieve with the data and algorithms at our disposal.
Has anyone else suggested a change in performance metrics to their stakeholders mid-project? How did you approach the conversation, and what was the outcome? Any tips on how to navigate this kind of situation?