Is radiocarbon dating indeed more reliable for determining Bible chronology than traditional dating methods, which rely on archaeological evidence in its stratigraphic context? This question is sharpened by the fact that the uncertainty in the usual radiocarbon readings (plus or minus 25 years or so) may be as large as the difference between the dates under debate. Measuring the remaining carbon-14 content of a “long-term” organic sample, such as wood, dates the growth of the tree rather than the archaeological stratum in which the sample was found; the material’s period of growth may be many decades removed from the era in which it was used, say, in building construction.
Furthermore, wooden beams were reused in later strata, which can result in even greater differences in date.
Since such samples are subject to this “old wood” effect, any calculation of precise absolute dates based on them is unreliable and may easily lead to errors of several decades or more.
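To make the “old wood” problem concrete, the following minimal sketch (in Python, using the standard conversion from a measured carbon-14 fraction to a conventional radiocarbon age; the 70% figure is an invented example, not a measurement from any actual site) shows that the laboratory result reflects only the time since the tree stopped absorbing carbon, not the date of the stratum in which the beam was found:

    import math

    LIBBY_MEAN_LIFE = 8033  # years; follows from the conventional Libby half-life of 5,568 years

    def conventional_radiocarbon_age(fraction_modern):
        """Convert a measured C-14 fraction (relative to the modern standard)
        into a conventional radiocarbon age in years BP (before AD 1950)."""
        return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

    # A hypothetical sample retaining 70% of the modern C-14 level yields the same
    # "old" age whether the beam is excavated from its original building or from a
    # later stratum in which it was reused.
    print(round(conventional_radiocarbon_age(0.70)))  # about 2865 radiocarbon years BP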
Because of this old-wood problem, researchers prefer to use “short-life” samples, such as seeds, grain or olive pits. In many studies, individual radiocarbon dates are rejected as invalid because they do not match the majority of dated samples from the site in question.
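The screening of inconsistent dates can be illustrated with a deliberately simplified criterion. The sketch below flags any determination lying more than two standard deviations from the error-weighted mean of the remaining dates; the five “olive-pit” values are invented, and the studies in question typically rely on more formal tools (chi-square consistency tests or Bayesian outlier models) rather than this rule of thumb:

    def weighted_mean(ages, errors):
        """Error-weighted mean of radiocarbon determinations (years BP)."""
        weights = [1 / e**2 for e in errors]
        return sum(w * a for w, a in zip(weights, ages)) / sum(weights)

    def flag_outliers(ages, errors, threshold=2.0):
        """Flag determinations more than `threshold` sigma away from the
        weighted mean of the other dates from the same context."""
        flagged = []
        for i, (age, err) in enumerate(zip(ages, errors)):
            rest_ages = ages[:i] + ages[i + 1:]
            rest_errors = errors[:i] + errors[i + 1:]
            if abs(age - weighted_mean(rest_ages, rest_errors)) > threshold * err:
                flagged.append(i)
        return flagged

    # Invented example: five short-life samples from one destruction layer.
    ages = [2805, 2790, 2810, 2700, 2800]   # years BP
    errors = [25, 25, 30, 25, 25]
    print(flag_outliers(ages, errors))       # -> [3], the date that does not match the rest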
A further complication is that the absolute date obtained after calibration depends on which calibration curve is used. This uncertainty ranges from about 20 years (for high-precision dating) through intermediate values of 50–100 years, and in some cases up to 100–150 years. For interpreting the results, moreover, different researchers employ different statistical models.
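For readers unfamiliar with calibration, the sketch below shows the basic idea under heavily simplified assumptions: every value in the miniature “curve” is invented, and real calibrations rely on the internationally agreed IntCal curve and programs such as OxCal or Calib. The point is only that a single laboratory measurement is converted into a probability distribution over calendar years, and that the shape of that distribution depends on the curve and on the statistical model chosen:

    import math

    # Invented fragment of a calibration curve:
    # calendar year BCE -> (curve radiocarbon age BP, curve error).
    CURVE = {
        1000: (2850, 15),
        990: (2840, 15),
        980: (2825, 15),
        970: (2815, 15),
        960: (2800, 15),
        950: (2790, 15),
        940: (2780, 15),
        930: (2770, 15),
        920: (2755, 15),
        910: (2745, 15),
        900: (2735, 15),
    }

    def calibrate(measured_bp, measured_err):
        """Return a normalized probability for each calendar year in the curve fragment."""
        likelihoods = {}
        for year, (curve_bp, curve_err) in CURVE.items():
            var = measured_err**2 + curve_err**2
            likelihoods[year] = math.exp(-(measured_bp - curve_bp)**2 / (2 * var)) / math.sqrt(var)
        total = sum(likelihoods.values())
        return {year: p / total for year, p in likelihoods.items()}

    # An invented determination of 2790 +/- 25 BP spreads its probability over
    # several decades of calendar years rather than pointing to a single date.
    for year, p in sorted(calibrate(2790, 25).items(), reverse=True):
        print(f"{year} BCE: {p:.3f}")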
Radiocarbon dating is therefore much more useful for broader archaeological periods; the differences among the various proposed dates for the transition from Iron I to Iron IIa are simply too small for it to resolve.
Based on the material finds, it is possible to compare sites and regions and to establish a cultural-chronological horizon.