Genomic Prediction (GP) is used in plant breeding to identify the best genotypes for selection. One of the most important measures of GP, whose estimation is crucial for Genomic Selection (GS), is predictive accuracy. The models of choice for analyzing field data are regression models; however, models that rely on classical likelihood are known to perform poorly when their underlying assumptions are violated. For example, violation of the normality assumption may lead to biased parameter estimates, distort the coefficient of determination, hinder accurate variable selection and, in the case of the linear mixed model, interfere with the estimation of variance components. In plant genetics, these biases often translate into inaccurate heritability and predictive ability estimates that may, in turn, negatively impact predictive accuracy and thus GS. Since phenotypic data are prone to contamination and therefore almost never conform to the normality assumption, improving GS necessarily involves improving the methods used to estimate these measures. Robust statistical methods provide a natural framework here, since they are known to overcome some of the handicaps of classical methodology, one of which is the departure from the normality premise. Therefore, a robust version of a recently proposed two-stage method from the literature is presented. The robust methodology is compared with its classical counterpart through simulation. Finally, an application to a maize data set is given to validate the adequacy and usefulness of the robust approach.
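
The classical-versus-robust contrast described above can be illustrated with a small, self-contained simulation that is not part of the paper. The sketch below (Python, with hypothetical choices of sample size, number of markers, and contamination rate) contaminates a fraction of simulated phenotypes and compares the predictive ability of an ordinary least-squares fit with that of a Huber-type robust fit from statsmodels; it stands in for the general idea only and does not reproduce the two-stage method studied in the paper.

```python
# Illustrative sketch only (hypothetical settings): a few contaminated
# phenotypes distort a classical least-squares fit, while a Huber-type
# robust fit typically retains more predictive ability on clean test data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

n_train, n_test, p = 200, 200, 10
beta = rng.normal(scale=1.0, size=p)                 # hypothetical "marker" effects

X_train = rng.normal(size=(n_train, p))
X_test = rng.normal(size=(n_test, p))
y_train = X_train @ beta + rng.normal(size=n_train)
y_test = X_test @ beta + rng.normal(size=n_test)     # clean hold-out phenotypes

# Contaminate 10% of the training phenotypes: their genetic signal is
# flipped and inflated, mimicking gross measurement errors / outliers.
idx = rng.choice(n_train, size=n_train // 10, replace=False)
y_train[idx] = -3.0 * (X_train[idx] @ beta) + rng.normal(scale=3.0, size=idx.size)

Xtr = sm.add_constant(X_train)
Xte = sm.add_constant(X_test)

ols_fit = sm.OLS(y_train, Xtr).fit()                              # classical fit
rlm_fit = sm.RLM(y_train, Xtr, M=sm.robust.norms.HuberT()).fit()  # robust fit

# "Predictive ability" proxy: correlation between predicted and observed
# phenotypes on the uncontaminated test set.
for name, fit in [("OLS", ols_fit), ("Huber RLM", rlm_fit)]:
    r = np.corrcoef(fit.predict(Xte), y_test)[0, 1]
    print(f"{name:9s} predictive ability: {r:.3f}")
```

Under these assumed settings, the robust fit downweights the contaminated observations, so its coefficient estimates, and hence its predictive ability on the clean test set, are usually less degraded than those of the classical fit.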