A simulation study is presented to evaluate and compare three methods for estimating the variance of the estimates of the parameters δ and C of signal detection theory (SDT). Several methods have been proposed to calculate the variance of their estimators, d' and c. These methods have mostly been assessed in simulation studies by comparing empirical means and variances with calculations based on the parametric values of the probabilities of giving a yes response on a signal trial (hits) and on a noise trial (false alarms). In practical contexts, however, the variance must be estimated from estimates of those probabilities (the empirical hit and false-alarm rates). The three variance estimation methods compared in the present simulation study are based on the binomial distribution (Miller), the normal approximation of Gourevitch and Galanter, and the maximum likelihood method proposed by Dorfman and Alf. They are compared in terms of relative bias (accuracy) and mean squared error (precision). The results show that the last two methods behave indistinguishably for practical purposes and produce severe overestimation errors in a range of situations that, while not the most common, are perfectly credible in several practical contexts. By contrast, Miller's method yields better (or at least comparable) results in all the conditions studied, and it is therefore the recommended method for obtaining variance estimates of these statistics in practice.
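To illustrate the quantities involved, the point estimates d' = Φ⁻¹(H) − Φ⁻¹(F) and c = −(Φ⁻¹(H) + Φ⁻¹(F))/2 can be computed from the observed rates, together with the Gourevitch–Galanter normal-approximation variance of d'. The sketch below is a minimal implementation assuming a yes/no design with N_s signal and N_n noise trials; the function names are illustrative, not taken from the study itself.

```python
from statistics import NormalDist

_STD = NormalDist()  # standard normal: inv_cdf gives Phi^{-1}, pdf gives phi


def sdt_estimates(hits, n_signal, false_alarms, n_noise):
    """Point estimates d' and c from observed hit and false-alarm counts."""
    h = hits / n_signal          # empirical hit rate H
    f = false_alarms / n_noise   # empirical false-alarm rate F
    z_h, z_f = _STD.inv_cdf(h), _STD.inv_cdf(f)
    d_prime = z_h - z_f
    c = -(z_h + z_f) / 2
    return d_prime, c


def var_dprime_gg(hits, n_signal, false_alarms, n_noise):
    """Gourevitch-Galanter normal-approximation variance of d':

    var(d') ~= H(1-H) / (N_s * phi(z_H)^2) + F(1-F) / (N_n * phi(z_F)^2)
    """
    h = hits / n_signal
    f = false_alarms / n_noise
    z_h, z_f = _STD.inv_cdf(h), _STD.inv_cdf(f)
    return (h * (1 - h) / (n_signal * _STD.pdf(z_h) ** 2)
            + f * (1 - f) / (n_noise * _STD.pdf(z_f) ** 2))
```

For example, with H = 0.8 and F = 0.2 over 100 trials of each kind, this gives d' ≈ 1.683, c = 0, and an estimated var(d') ≈ 0.041. Note that both estimators break down when the observed rates reach 0 or 1, which is why corrections or alternative estimators are needed in extreme conditions.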