We are well aware of model drift: over time, the input variables (the real world) diverge from the data distribution the model was trained on. Strictly speaking, it’s not so much the model that drifts here but rather the input data.

With LLMs, the models themselves also ‘drift’, in two ways:

🔄 their output is non-deterministic (the temperature setting controls how much randomness is injected at sampling time), and
🆕 when a new version of the model is released, as will inevitably happen with ChatGPT, Claude and all the others, we do not know how it will respond to our existing prompts — see the sketch after this list for two knobs that limit the first kind of variance.
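
Neither knob eliminates drift, but both reduce it. Here is a minimal sketch, assuming the OpenAI Python SDK (`pip install openai`); the prompt text is just an illustrative placeholder:

```python
# Two knobs that reduce (but do not eliminate) LLM output drift:
#  - temperature=0 makes sampling as deterministic as the API allows
#  - a dated model snapshot pins the version, instead of a floating alias
#    that the provider can silently upgrade underneath you
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # dated snapshot, not the floating "gpt-4o" alias
    temperature=0,              # minimize sampling randomness
    messages=[{"role": "user", "content": "Summarize model drift in one sentence."}],
)
print(response.choices[0].message.content)
```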

So be ready to closely monitor, and then revise, all of your carefully crafted prompts. Did you put that part in the business case?
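
One way to make that monitoring concrete is a small regression suite of “golden” prompts with checkable expectations, rerun on a schedule and on every model-version bump. A minimal sketch follows; `call_llm` and `GOLDEN_CASES` are hypothetical placeholders for your own provider wrapper and prompt suite:

```python
# Minimal prompt regression harness (sketch). `call_llm` is a hypothetical
# wrapper around whichever provider SDK you use; GOLDEN_CASES is your own
# curated suite of prompts paired with predicates the answer must satisfy.
from typing import Callable

GOLDEN_CASES = [
    ("Reply with exactly the word OK.", lambda out: out.strip() == "OK"),
    ("What is 2 + 2? Answer with a number only.", lambda out: out.strip() == "4"),
]

def run_regression(call_llm: Callable[[str], str]) -> list[str]:
    """Rerun the golden suite; return a description of each failing case."""
    failures = []
    for prompt, check in GOLDEN_CASES:
        output = call_llm(prompt)
        if not check(output):
            failures.append(f"FAILED: {prompt!r} -> {output!r}")
    return failures

# A non-empty result means your carefully crafted prompts need revisiting.
```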