Machine learning for malware classification shows encouraging results, but real deployments suffer from performance degradation as malware authors adapt their techniques to evade detection. This evolution of malware gives rise to concept drift, as new examples become less and less like the original training examples. One promising method for coping with concept drift is classification with rejection, in which examples that are likely to be misclassified are instead quarantined until they can be expertly analyzed. We revisit Transcend, a recently proposed framework for performing rejection based on conformal prediction theory. In particular, we provide a formal treatment of Transcend, enabling us to refine conformal evaluation theory (its underlying statistical engine) and gain a better understanding of the theoretical reasons for its effectiveness. In the process, we develop two additional conformal evaluators that match or surpass the performance of the original evaluator while significantly decreasing the computational overhead. We evaluate our extension on a large dataset that removes sources of experimental bias present in the original evaluation. Finally, to aid practitioners, we determine the optimal operational settings for a Transcend deployment and show how it can be applied to many popular learning algorithms. These insights support both old and new empirical findings, making Transcend a sound and practical solution, while shedding light on how rejection strategies may be further applied to the related problem of evasive adversarial inputs.
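To give a flavor of the rejection mechanism referred to above, the sketch below illustrates classification with rejection driven by conformal p-values: a test example whose best per-class credibility falls below a threshold is quarantined instead of classified. This is not the Transcend implementation; the centroid-distance nonconformity measure, the 0.1 threshold, the synthetic data, and all function names are illustrative assumptions, and in a real deployment the nonconformity measure would be derived from the underlying classifier.

```python
import numpy as np


def nonconformity(x, class_points):
    """Nonconformity score: distance of x from the centroid of a class.

    Illustrative choice only; Transcend-style systems derive nonconformity
    from the underlying learning algorithm.
    """
    return np.linalg.norm(x - class_points.mean(axis=0))


def credibility(x, calibration, label):
    """Conformal p-value: fraction of calibration examples of `label` that
    are at least as nonconforming as x with respect to that class."""
    same_class = calibration[label]
    alpha_x = nonconformity(x, same_class)
    alphas = np.array([nonconformity(z, same_class) for z in same_class])
    return (np.sum(alphas >= alpha_x) + 1) / (len(alphas) + 1)


def classify_with_rejection(x, calibration, threshold=0.1):
    """Predict the label with the highest credibility; quarantine (reject)
    the example if even the best credibility falls below the threshold."""
    p_values = {label: credibility(x, calibration, label) for label in calibration}
    best = max(p_values, key=p_values.get)
    return best if p_values[best] >= threshold else "REJECT"


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical calibration data: two well-separated Gaussian classes.
    calibration = {
        "benign": rng.normal(0.0, 1.0, size=(50, 4)),
        "malicious": rng.normal(3.0, 1.0, size=(50, 4)),
    }
    print(classify_with_rejection(rng.normal(0.0, 1.0, size=4), calibration))   # in-distribution example
    print(classify_with_rejection(rng.normal(10.0, 1.0, size=4), calibration))  # drifted example: likely rejected
```

In this simplified setup, a drifted example is nonconforming with respect to every class, so all of its p-values are small and it is routed to quarantine rather than being forced into a prediction.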