Despite their obvious advantages, computer simulations also introduce many challenges. Uncertainties must be identified and quantified in order to guarantee some level of predictive capability for a computational model. Calibration techniques can be applied both to improve the model and to curtail the loss of information caused by using simulations in place of the actual system. Evaluating and calibrating a computer model depends on three main components: experimental data, model parameters, and algorithmic techniques. Data is critical because it defines reality, and inadequate data can render model evaluation procedures useless. Model parameters should be studied to determine which affect the predictive capabilities of the model and which do not; those that do are subject to calibration. Techniques for model evaluation and calibration should both sufficiently sample the design space and limit the computational burden. In this talk, we will discuss the problems inherent in model calibration and validation processes in terms of data, parameters, and algorithms. In particular, we will focus on suitable optimization and statistical techniques for specific numerical models from the areas of electrical engineering, medicine, and groundwater control. We will demonstrate some successful and some not-so-successful approaches.
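As a minimal illustration (not drawn from the talk itself), calibrating a model parameter against data can be sketched as a search over the design space that minimizes the misfit between model output and experimental observations. The decay model, the parameter `k`, and the data below are all hypothetical:

```python
import math

# Hypothetical toy model: exponential decay y(t) = exp(-k * t),
# with a single calibration parameter k.
def model(k, t):
    return math.exp(-k * t)

# Synthetic stand-in for "experimental" data, generated with true k = 0.7.
times = [0.0, 0.5, 1.0, 1.5, 2.0]
data = [math.exp(-0.7 * t) for t in times]

# Sum-of-squares misfit between model predictions and data.
def misfit(k):
    return sum((model(k, t) - y) ** 2 for t, y in zip(times, data))

# Sample the one-dimensional design space on a coarse grid (k in (0, 2])
# and pick the candidate with the smallest misfit.
candidates = [i * 0.01 for i in range(1, 201)]
best_k = min(candidates, key=misfit)
```

A grid search is only practical in low dimensions; for the higher-dimensional models discussed in the talk, more economical sampling and optimization strategies are needed to limit the computational burden.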