Most methods of radioactive dating used today were developed at the beginning of this century. Atoms of radioactive substances tend to break down into smaller, more stable atoms, releasing various types of rays as energy during the process. Every radioactive element decays at its own particular rate, and within a given element the individual atoms break down at different moments: one atom might break down within a few seconds, while another might take several thousand years. 'Half-life' is the name given to the time taken for the radioactive emission of a radioactive element to drop to half of its original value, and it is this half-life that is used in calculating the age of a sample.
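The arithmetic behind the half-life idea can be sketched as follows - a minimal illustration of the standard exponential-decay formula, not any laboratory's actual procedure (the function names are my own):

```python
import math

def remaining_fraction(elapsed_years, half_life_years):
    """Fraction of a radioactive sample remaining after elapsed_years,
    given its half-life: N(t)/N0 = (1/2)^(t / half-life)."""
    return 0.5 ** (elapsed_years / half_life_years)

def age_from_fraction(fraction_remaining, half_life_years):
    """Invert the decay law: infer elapsed time from the fraction left."""
    return -half_life_years * math.log2(fraction_remaining)

# After one half-life, half the parent atoms remain.
print(remaining_fraction(5730, 5730))   # 0.5
# A sample with 25% of its parent isotope left is two half-lives old.
print(age_from_fraction(0.25, 5730))    # 11460.0
```

Note that the clock runs entirely on the measured fraction remaining; everything therefore hinges on knowing how much was there to begin with, which is the point the assumptions below address.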
Uranium is found in measurable quantities in most rocks. Uranium decays into lead, and the rate of decay, i.e. the half-life, is assumed to be 4500 million years.
First assumption - that the minerals were present in the rocks in full quantity, for millions and millions of years, in a closed or ideal system. In real rocks, however, no such closed system is possible.
In uranium ore the lead is measured.
In volcanic rocks the argon gas is measured.
Having made the first assumption of a closed system, we come to the second.
Second assumption - it is now assumed that none of the lead and none of the argon were present in the rocks when the rocks were formed.
Example : Uranium dating presupposes that 100% uranium and 0% lead were present in the sample at the time of its formation.
Some scientists date using initial quantities of 50% lead and 50% uranium. Others assume 50% lead and 50% uranium initially, together with a loss of 50% of the uranium per half-life (leaching with weak acids, etc., has been known to remove up to 90% of the uranium content). And so, where one scientist dates a rock at 200 million years, another will date the same rock at 100 million years or even less.
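The effect of the assumed starting composition can be sketched in a few lines - a deliberately simplified model in which every lead atom beyond the assumed initial lead counts as a decayed uranium atom (intermediate decay products are ignored, and the atom counts are invented purely for illustration):

```python
import math

U238_HALF_LIFE_MY = 4500.0  # million years, the figure used in the text

def apparent_age_my(uranium_now, lead_now, assumed_initial_lead):
    """Apparent age in million years under the simplifying assumption
    that all lead beyond assumed_initial_lead is radiogenic."""
    radiogenic_lead = lead_now - assumed_initial_lead
    initial_uranium = uranium_now + radiogenic_lead
    return U238_HALF_LIFE_MY * math.log2(initial_uranium / uranium_now)

# The same measurement (97 uranium atoms, 3 lead atoms) under two
# different starting assumptions:
print(apparent_age_my(97, 3, 0))    # assumes 0% initial lead
print(apparent_age_my(97, 3, 1.5))  # assumes half the lead was original
```

With these illustrative numbers, the zero-initial-lead assumption yields roughly 200 million years, while assuming half the lead was original yields roughly 100 million years - the same measurement, two very different dates.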
Third assumption - radioactive decay rates never vary. From a physics perspective this is untenable: decay rates can and do vary!
Cosmic radiation, for instance, almost certainly does vary: supernova explosions in nearby stars would alter the cosmic radiation reaching the earth.
Dr Frederick Junemann commented that the effects of such supernova explosions would knock our radioactive dating measurements into disrepute. It would "throw into doubt the age of the earth and of the universe and of the dating of prehistoric artifacts."
We are made to believe, in the name of science, that the rocks have been precisely dated by radioactive methods. The truth is otherwise. According to one authority, out of the hundreds and hundreds of radioactive measurements of rocks, once those that are unreliable or contradictory are discarded, we are left with three that form the basis of the modern time scale. The others are derived from these three by reasoned guesses and presuppositions. And of these precious three, only one, the Cambrian shales of Sweden, squares with palaeontology. Even this one is open to severe doubt, according to Knoff and Henry Faell.
It has been authoritatively stated that, at present, no coherent picture of the earth could be built on the basis of radioactive dating. There are rocks that were formed in historic times, rocks whose exact ages are known, and it is interesting to see what happened when radioactive dating was applied to them.
Volcanoes make rocks - basalt rocks. In the sea at Hawaii there are basalt rocks formed by volcanoes less than two hundred years ago. These new rocks were dated using potassium-argon dating and ages of twenty million years, one hundred and sixty million years, and even three billion years were calculated.
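The arithmetic behind such potassium-argon ages can be sketched as follows - a deliberately simplified illustration in which every measured argon-40 atom is treated as radiogenic (the decay branch of K40 to Ca40 and real laboratory corrections are ignored, and the atom counts are invented for illustration):

```python
import math

K40_HALF_LIFE_Y = 1.25e9  # years, a commonly cited value for K40

def apparent_kar_age(argon_atoms, potassium_atoms):
    """Simplified potassium-argon age: every Ar-40 atom in the sample
    is counted as a decayed K-40 atom, so any trapped ("excess") argon
    inflates the apparent age."""
    initial_k = potassium_atoms + argon_atoms
    return K40_HALF_LIFE_Y * math.log2(initial_k / potassium_atoms)

# Even a tiny amount of trapped argon in a freshly formed rock reads
# as an enormous age: about 1 argon atom per 90 potassium atoms gives
# an apparent age near twenty million years.
print(apparent_kar_age(argon_atoms=1, potassium_atoms=90))
```

In this simplified model a rock only a few hundred years old needs only a trace of inherited argon to register tens of millions of years.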
When rock from Sweden, which incidentally contained fossils, was analysed using the uranium dating methods, ages ranging from 380 to 800 million years were calculated.
From Norway, France, Germany and Russia reports continually show that the uranium method and the potassium/argon method are giving ages of millions and millions of years for rocks that are known to be only a few hundred years old. Rocks of known age have exposed this delusive aspect, which is inherent in radioactive dating.
Twenty-two volcanic rocks formed over the past 200 years, from different parts of the world, gave ages of 100 million to 10,000 million years!
Scientists assert that our planet is 4.5 billion years old using the dating methods outlined above, and evolution depends on these time spans (indeed, evolution really depends on effectively unlimited time spans).
Surely these ages must be called into question, owing to the unreliability of the methods used. If an engineer measures a bicycle wheel and obtains values ranging from 2 m to 200,000 miles, it is blatantly obvious that his methodology and technique of measurement are wrong. It is scientifically doubtful that the true age can be determined at all, still more so when the methodology rests upon nonsensical assumptions.
Why are we never told of the countless mishaps of radioactive dating? Perhaps we can draw our own conclusions...
Carbon 14 dating is used on organic remains such as bones, wood and coal. It is strictly limited to short-term dating of up to 30,000 years maximum. In fact, it is probably only reliable up to 3000 years. The Geochron laboratory in Massachusetts, USA, refuses to use it beyond three thousand years, claiming that it is unreliable beyond that.
Carbon, that black substance in charred wood, comes in several forms. One less-common form has atoms which are 14 times as heavy as hydrogen atoms. It is called carbon 14, or C14 for short. Unlike common carbon (C12), carbon 14 disintegrates or "falls to pieces" relatively easily. This instability makes it radioactive.

Carbon 14, or radiocarbon as it is often called, is manufactured in the upper atmosphere by the action of cosmic rays, which convert ordinary nitrogen (N14) into C14. Ordinary carbon (C12) is found in the carbon dioxide in the air we breathe, which, of course, is cycled by plants and animals throughout nature, so that your body, or the leaf of a tree, or even a piece of wooden furniture, contains carbon. Once C14 has been formed, it behaves just like ordinary carbon (C12), combining with oxygen to give carbon dioxide (C14O2), and it too is freely cycled through the cells of all plants and animals. The difference is this: once C14 has been formed, it begins to decay radioactively back to N14, at a rate of change which can be measured.
If we take a sample of air, and measure how many C12 atoms there are for every C14 atom, this is called the C14/C12 ratio. Because C14 is so well mixed up with the C12, we expect to find that this ratio is the same if we sample a leaf from a tree, or a part of your body.
Think of it as a teaspoon of cocoa mixed into a cake dough: after a while, the ratio of cocoa to flour particles would be roughly the same, no matter which part of the cake you sampled. The fact that the C14 atoms are changing back to N14 doesn't matter in a living thing. Because it is constantly exchanging carbon with its surroundings, the mixture will be the same as in the atmosphere and in all living things.
How the "Carbon Clock" Works
As soon as a plant or animal dies, however, the C14 atoms which decay are no longer replaced by new ones from outside, so the amount of C14 in that once-living thing gets smaller and smaller as time goes on. Another way of saying it is that the C14/C12 ratio gets smaller. In other words, we have a clock which starts ticking at the moment something dies.
Obviously, this works only for things which once contained carbon - it can't be used to date rocks and minerals, for example. We know how quickly C14 decays, and so it becomes possible to measure how long it has been since the plant or animal died.
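The "carbon clock" described above can be sketched in a few lines - a minimal illustration, not a laboratory procedure, and the ratio values are invented for illustration:

```python
import math

C14_HALF_LIFE_Y = 5730  # a commonly cited half-life for C14

def radiocarbon_age(ratio_in_sample, ratio_assumed_at_death):
    """Elapsed years inferred from how far the sample's C14/C12 ratio
    has fallen below the assumed ratio at the moment of death."""
    return -C14_HALF_LIFE_Y * math.log2(ratio_in_sample / ratio_assumed_at_death)

# A sample whose C14/C12 ratio is half the assumed starting value
# reads as exactly one half-life old:
print(radiocarbon_age(0.5e-12, 1.0e-12))   # 5730.0
```

Notice that the answer depends entirely on ratio_assumed_at_death: the measured ratio alone dates nothing until a starting ratio is assumed.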
The Key Assumption Behind the Method
But wait - how do we know what the C14/C12 ratio was to start with? We obviously need to know this to be able to work out at what point the clock began to tick. We've seen that it would have been the same as in the atmosphere at the time the specimen died, so how do we know what that was? Do scientists assume that it was the same in the past as it is now? Well, not exactly. It is well known that the industrial revolution, with its burning of huge masses of coal, etc., has upset the natural carbon balance by releasing huge quantities of C12 into the air, for example. Tree-ring studies can tell us what the C14/C12 ratio was like before the industrial revolution, and all radiocarbon dating is made with this in mind. However, how do we know what the ratio was before then - let's say thousands of years ago?
It is assumed that the ratio has been constant for a very long time before the industrial revolution. Is this assumption correct (for on it hangs the whole validity of the system)? Why did W. F. Libby, the brilliant discoverer of this method, assume this? We know that C14 is continually entering the atmosphere (and hence the carbon cycle), and that C14 is continually leaving the system by its decay back to N14. The more you have of a radioactive substance, the more there is to decay--that is, as more enters a system, the rate of leaving the system increases.
To understand this, let us use the example of a rainwater tank, representing the system, with evenly spaced holes in the sides. Let's switch on a tap at the top, representing the formation of C14, entering the system at a constant rate (a). At first, the rate of entry will be far greater than the rate of exit, allowing the water (C14) to build up. The more it accumulates, however, the greater the rate of exit, until the amount pouring in equals the amount pouring out (b). That is, from the moment of switching on, the C14 level will build up, rapidly at first, then gradually taper off until it reaches the steady state. Libby, along with almost all the scientists of his day, assumed that this steady state had been reached long ago, and that C14 would now be entering and leaving the system at the same rate.
Why? Because calculations show that it would take only 30,000 years from switch-on (the first time cosmic rays began to bombard the atmosphere) for this to happen, and of course geologists and others had by then long since persuaded most people that the earth was much, much older than that. In other words, C14 would have been in a steady state for many millions of years already, if the earth were that old.
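The build-up described by the tank analogy has a simple closed form, which can be used to check the 30,000-year figure - a sketch assuming constant production and the usual exponential decay law:

```python
import math

C14_HALF_LIFE_Y = 5730
DECAY_CONST = math.log(2) / C14_HALF_LIFE_Y  # per year

def fraction_of_equilibrium(years_since_switch_on):
    """For constant production P and decay lambda*N, the inventory is
    N(t) = (P/lambda) * (1 - e^(-lambda*t)); this returns N(t) as a
    fraction of the steady-state level P/lambda."""
    return 1 - math.exp(-DECAY_CONST * years_since_switch_on)

for t in (5_730, 30_000, 100_000):
    print(t, round(fraction_of_equilibrium(t), 3))
```

After one half-life the system is at 50% of equilibrium, and after 30,000 years it sits at about 97%, which is the basis of the claim that the steady state would be essentially reached within that time.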
What Do Measurements Show?
What about modern, more sophisticated measurements? Unfortunately for the old earth advocates, these continue to support a real difference between the rate of production and the rate of disintegration. For instance, the following figures quoted from nuclear chemists Fairhall and Young suggest that it is as much as 50 per cent out of balance.
"We note in passing that the total natural C14 inventory of 2.16 x 1030 atoms . . . corresponds to a C14 decay rate of 1.63 x 104 disintegrations/m2s of the earth, considerably below the estimated production rate of C14 atoms averaged over the last 10 solar cycles (111 years) of 2.5 x 104 (+ 0.5 x 104) atoms/m2s. ... The source of the discrepancy is ... unknown unless the present day production rate is indeed significantly higher than the average production rate ..." (Fairhall, A. W. and Young, J. A., 1970. "Radionuclides in the Environment", Advances in Chemistry, vol. 93, p. 402.)
However, there are many complexities and inaccuracies in these measurements. Some have used a new, non-uniform model, based on an average imbalance of some 35 per cent, to establish a re-calibration scale which would mean that the older dates have to be more greatly reduced than the later ones. This seems in order at first glance, as does the use of the imbalance data to establish an upper limit to the age of the earth's atmosphere of some 7,000-10,000 years.
In any case, even the incorrect uniform model has given, in many cases, serious embarrassment to the evolutionist by giving ages which are much younger than those he expects in terms of his model of Earth history. Consider this: if a specimen is older than 50,000 years, it has been calculated that it would have such a small amount of C14 that for practical purposes it would show an infinite radiocarbon age. So it was expected that most deposits such as coal, gas, etc., would be undatable by this method. In fact, of thousands of dates in the journals Radiocarbon and Science to 1968, only a handful were classed "undatable" - most were of the sort which should have been in this category. This is especially remarkable with samples of coal and gas supposedly produced in the Carboniferous period 300 million years ago!
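The figure behind the 50,000-year "undatable" threshold is easy to check - a sketch using the same decay law as above:

```python
def c14_fraction_left(years, half_life=5730):
    """Fraction of the original C14 remaining after the given time,
    using the standard (1/2)^(t / half-life) decay law."""
    return 0.5 ** (years / half_life)

# A 50,000-year-old sample retains only a fraction of a per cent of
# its original C14 (roughly 0.2%), which is why such samples were
# expected to read as effectively infinite in radiocarbon age.
print(c14_fraction_left(50_000))
```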
Bones of a sabre-toothed tiger from the La Brea tar pits (near Los Angeles), supposedly 100,000 to 1,000,000 years old, gave a date of 28,000 years. (Radiocarbon, vol. 10, 1968)
Other C14 "Discrepancies"
In addition to the above effects, which are more or less systematic, there are other possible sources of error in C14 dating. In the light of all this, it would be foolhardy indeed to insist that a C14 date represents absolute truth.
Consider these examples of C14 results:
A freshly killed seal dated by C14 showed it had died 1300 years ago. (Antarctic Journal, vol. 6, [September-October 1971], p. 211.)
Living mollusk shells were dated at up to 2,300 years old. (Science, vol. 141, 1963, pp. 634-637.)
Living snails' shells showed they had died 27,000 years ago. (Science, vol. 224, 1984, pp. 58-61.)

A quotation from a respected anthropological journal highlights the nature of the problem:
"The troubles of the
radiocarbon dating method are undeniably deep and serious ... It should be
no surprise, then, that fully half of the dates are rejected. The wonder
is, surely, that the remaining half come to be accepted."
(Lee, R. E., Radiocarbon, "Ages in Error", Anthropological
Journal of Canada, 1981, vol. 19, No. 3, p. 9)
Draw your own conclusions from this...