Cross-validation, separate training and test data sets, or AIC/BIC (AIC is more forgiving than BIC) if you can get a reasonable estimate of your "degrees of freedom". (For many models, however, the d.f. is either undefined or intractable; for bagged or boosted trees, for example, you need CV or a test set.)
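A minimal sketch of the two routes, assuming scikit-learn and statsmodels are available: AIC/BIC come directly from a parametric fit whose degrees of freedom are known, while a model with ill-defined d.f. (a random forest here, as a stand-in) falls back on k-fold CV. The data and parameter choices are illustrative only.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, 0.0, -2.0]) + rng.normal(scale=1.0, size=200)

# Parametric model: d.f. is known, so AIC/BIC apply directly.
ols = sm.OLS(y, sm.add_constant(X)).fit()
print(f"OLS  AIC={ols.aic:.1f}  BIC={ols.bic:.1f}")  # BIC penalizes extra parameters harder

# Ensemble model: d.f. is not well defined, so use cross-validation instead.
rf = RandomForestRegressor(n_estimators=200, random_state=0)
cv_mse = -cross_val_score(rf, X, y, cv=5, scoring="neg_mean_squared_error")
print(f"Random forest 5-fold CV MSE: {cv_mse.mean():.3f} (+/- {cv_mse.std():.3f})")
```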
If you're data rich, you tend not to use CV but to have two or three sets. The reason three is better is that you ideally have (a) a training set for building models with known, fixed "hyperparameters" (e.g. regularization coefficients, tree sizes, neural net topologies), (b) a validation set for evaluating models with varying hyperparameters, in order to select them optimally, and (c) a test set on which you can evaluate the model's accuracy after your hyperparameters are chosen from (b). A sketch of this workflow follows below. Cross-validation is typically what you need when you have a small number of observations (say, 1000).
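A minimal sketch of the three-set workflow, assuming scikit-learn; the dataset, the 60/20/20 split, and the choice of logistic regression with its regularization strength C as the hyperparameter are all illustrative assumptions, not part of any particular recipe.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=50_000, n_features=20, random_state=0)

# (a) training, (b) validation, (c) test -- e.g. a 60/20/20 split.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# (b) choose the hyperparameter (inverse regularization strength C)
# by validation-set performance; the training set is reused for each fit.
best_C, best_score = None, -np.inf
for C in [0.01, 0.1, 1.0, 10.0]:
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    score = model.score(X_val, y_val)
    if score > best_score:
        best_C, best_score = C, score

# (c) evaluate the chosen model exactly once on the untouched test set.
final = LogisticRegression(C=best_C, max_iter=1000).fit(X_train, y_train)
print(f"chosen C={best_C}, test accuracy={final.score(X_test, y_test):.3f}")
```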