Adversarial Validation Approach to Concept Drift Problem in User Targeting Automation Systems at Uber - Previous research on concept drift has mostly proposed retraining the model after observing a performance drop. This is suboptimal because the system fixes the problem only after it has already suffered poor performance on new data. The paper introduces an adversarial validation approach to concept drift in user targeting automation systems: the system detects concept drift in new data before making inferences, trains a model, and produces predictions adapted to the new data.
Drift estimator between data sets using a random forest; the formula is in the Medium article above, code at mlBOX.
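The core idea behind both notes above can be sketched in a few lines: label the reference data 0 and the new data 1, train a classifier (here a random forest) to tell them apart, and use its out-of-fold AUC as a drift score. An AUC near 0.5 means the two sets are indistinguishable (no detectable drift); an AUC near 1.0 means the new data is easily separated, i.e. drift. This is an illustrative sketch, not the paper's exact pipeline; the synthetic data and the 0.6 decision threshold are assumptions for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
x_ref = rng.normal(0.0, 1.0, size=(500, 5))  # reference (training-time) data
x_new = rng.normal(0.5, 1.0, size=(500, 5))  # new data with a simulated mean shift

# Adversarial validation: 0 = reference, 1 = new data.
X = np.vstack([x_ref, x_new])
y = np.concatenate([np.zeros(len(x_ref)), np.ones(len(x_new))])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
# Out-of-fold probabilities avoid scoring the classifier on its own training rows.
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
auc = roc_auc_score(y, proba)

drift_detected = auc > 0.6  # threshold is an assumption; tune per system
print(f"drift score (AUC): {auc:.3f}, drift detected: {drift_detected}")
```

A nice side effect of the classifier-based estimator is that feature importances from the fitted forest point at which features drifted most.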
Alibi Detect - an open-source Python library by Seldon, focused on outlier, adversarial, and drift detection.