The best way to pick a final model and estimate its error after performing 10-fold cross-validation is:
(c) average all of the 10 models you got; use the average CV error as its error estimate
The reason is that 10-fold cross-validation exists precisely to obtain a more reliable estimate of the model's performance: the model is trained and evaluated on ten different train/test splits of the data. Averaging over the 10 folds reduces the variance of the individual performance estimates and yields a more stable, representative measure of the model's generalization performance.
Options (a) and (b) amount to picking a single model out of the 10 folds; which model looks best depends on the random data split, so this does not give a robust estimate. Options (d) and (e) involve training on the full dataset, which risks overfitting the training data and may not reflect how well the model generalizes to new, unseen data. Therefore (c) is generally the recommended approach for model selection and error estimation after cross-validation.
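A minimal sketch of option (c), on a hypothetical toy regression problem (the data, the one-parameter model y = w·x, and all names are illustrative assumptions, not part of the question): fit one model per fold, average the 10 fitted models, and report the average held-out error as the estimate.

```python
import random

# Hypothetical toy data: y = 2x + noise (illustration only)
random.seed(0)
data = [(x, 2.0 * x + random.gauss(0, 0.1)) for x in [i / 10 for i in range(100)]]
random.shuffle(data)

k = 10
fold_size = len(data) // k
fold_errors = []   # held-out MSE of each fold's model
fold_slopes = []   # one fitted model (here, a single slope) per fold

for i in range(k):
    test = data[i * fold_size:(i + 1) * fold_size]
    train = data[:i * fold_size] + data[(i + 1) * fold_size:]

    # Fit y = w*x by least squares on the 9 training folds
    w = sum(x * y for x, y in train) / sum(x * x for x, _ in train)
    fold_slopes.append(w)

    # Mean squared error on the held-out fold
    mse = sum((y - w * x) ** 2 for x, y in test) / len(test)
    fold_errors.append(mse)

avg_model_slope = sum(fold_slopes) / k   # option (c): average the 10 models
cv_error = sum(fold_errors) / k          # average CV error as the error estimate

print(f"averaged slope: {avg_model_slope:.3f}, CV error estimate: {cv_error:.4f}")
```

Averaging the parameters is only meaningful for models whose parameters live in the same space across folds (as in this linear example); for models like decision trees, "averaging" would instead mean ensembling the 10 fold models' predictions.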