The goodness of fit returned for our input data set shown in Figure 2.5 is Q = 0.053. Is this good or bad? If you have a look at Figure 2.8 you'll notice that many data points deviate from the fitted straight line by more than their error bars. Q tells us that our hypothesis of a linear dependence may not be very good in this case.
If we were to double the size of the error bars, as shown in Figure 2.21, the Q returned by our fitting program would become 0.872, which is already quite close to 1. What the goodness-of-fit criterion tells us this time is the following: assuming that the theory of a linear dependence between x and y is correct, the probability that a fit would return a χ² as high as 3.828 (which is the case here) purely through chance fluctuations of the measurement errors in our data is 87%. In other words, the observed scatter of the data, given the existing error bars, can be attributed to random errors and not to a possibly incorrect assumption that y = a + bx.
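The connection between χ² and Q can be made concrete. For a fit with ν degrees of freedom (number of data points minus number of fitted parameters), Q is the probability of obtaining a χ² at least this large by chance: Q = Q(ν/2, χ²/2), the regularized upper incomplete gamma function, which for even ν has a simple closed form. The sketch below is illustrative only; it assumes ν = 8 (e.g. 10 data points and 2 fitted parameters, a hypothetical count not stated in the text), a choice that happens to reproduce both quoted Q values, since doubling the error bars divides χ² by 4:

```python
from math import exp

def goodness_of_fit(chi2, nu):
    """Q = P(chi-square >= chi2 | nu degrees of freedom).

    Uses the closed form Q(a, x) = exp(-x) * sum_{k<a} x^k / k!
    with a = nu/2, x = chi2/2, valid when nu is even.
    """
    assert nu % 2 == 0, "closed form requires an even number of degrees of freedom"
    x = chi2 / 2.0
    term, total = 1.0, 1.0
    for k in range(1, nu // 2):
        term *= x / k          # build x^k / k! incrementally
        total += term
    return exp(-x) * total

# Doubled error bars: chi^2 = 3.828 with nu = 8 (assumed)
print(round(goodness_of_fit(3.828, 8), 3))      # 0.872

# Original error bars: halving them back quadruples chi^2
print(round(goodness_of_fit(4 * 3.828, 8), 3))  # 0.053
```

With a generic library such as SciPy the same quantity would be `scipy.special.gammaincc(nu / 2, chi2 / 2)`, which also covers odd ν.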
It is important that you understand the meaning of Q well. Evaluating Q gives you a numerical measure of the validity of your theory in light of the collected experimental data.