Researchers at the University of Edinburgh have discovered a signaling circuit of highly flexible nanotubes in eukaryotic cells. From the news release:
Professor Mark Evans, of the University of Edinburgh’s Centre for Discovery Brain Sciences, said: “We found that cell function is coordinated by a network of nanotubes, similar to the carbon nanotubes you find in a computer microprocessor.
“The most striking thing is that this circuit is highly flexible, as this cell-wide web can rapidly reconfigure to deliver different outputs in a manner determined by the information received by and relayed from the nucleus. This is something no man-made microprocessors or circuit boards are yet capable of achieving.”
Scientific progress has not been kind to David Hume’s criticism of the analogy argument for design. At the anatomical level, which Hume observed, any similarity between life and human technology can be hard to discern. But as science progressed and the electron microscope has allowed us to peer into the cell, the analogy has returned with a vengeance. Where would our modern understanding of cell biology be without concepts like regulation, control, signals, receptors, messengers, codes, transcription, translation, editing, proofreading, etc.?
You might even say that science has been well-served by a healthy dose of methodological designism.
Genes are often described by biologists using metaphors derived from computational science: they are thought of as carriers of information, as being the equivalent of “blueprints” for the construction of organisms. Likewise, cells are often characterized as “factories” and organisms themselves become analogous to machines. […] In this article we connect Hume’s original criticism of the living organism = machine analogy with the modern ID movement, and illustrate how the use of misleading and outdated metaphors in science can play into the hands of pseudoscientists. Thus, we argue that dropping the blueprint and similar metaphors will improve both the science of biology and its understanding by the general public.
First of all, it is interesting to note the motivation behind wanting to abolish “machine-information metaphors” from science: It “play[s] into the hands of pseudoscientists” – i.e. people who are friendly towards intelligent design. In other words, this is a political call-to-action to stop giving aid and comfort to the enemy.
Second of all, the authors choose to criticize the “blueprint analogy”, which is indeed not very helpful. The human genome contains nothing that can be likened to a drawing of a human; rather, it contains the instructions for synthesizing the proteins and ribozymes necessary for the development and functioning of the human body.
The “genome as a blueprint” is an example of a faulty analogy, comparing an analog object (a blueprint) with what is really a digital technology (encoding information in a symbolic language). As our technology has improved, more closely approaching the sophistication found in cells, our ability to formulate accurate analogies has improved as well.
Ironically, the blueprint analogy fails not because design analogies in general fail, but because it draws on a technology that is insufficiently sophisticated to measure up to what is found in the cell.
Unlike the blueprint analogy, analogies to computer technology continue to generate heuristic dividends for science. Does anyone believe science understanding would be improved if we had to describe, say, protein synthesis without using concepts like code, sequence, transcription, translation, etc.?
Originally posted on Telic Thoughts, on October 17th, 2005. Reposted with minor edits.
I’ve talked about the field of evo devo, and how it opens for the possibility of front-loading, the view that the first organisms were designed with a future state in mind. Now, I’ll turn to one of the biggest perceived problems of front-loading, namely the problem of how the front-loaded structures are protected from the culling hand of mutation.
A gene that’s used for something of vital importance to the organism (or even just moderately useful) is going to be protected by natural selection. If the gene mutates in a way that makes it inoperable, the unfortunate offspring will die before siring any of its own, taking the mutated gene to its grave. Even if the gene isn’t absolutely vital, the mutant will do worse than its contemporaries, and its lineage will soon disappear. But if the gene serves no purpose, an organism will be neither better nor worse off if a mutation knocks out the gene, and it will soon decay to unrecognizable gibberish. Developmental biologist PZ Myers illustrates the problem, in his characteristically subtle and friendly style:
“Front-loading” is complete and utter bullshit. It doesn’t make sense, it contradicts how we know molecular biology works, and there is no evidence for it anywhere, nor is there any known mechanism for preserving large quantities of unused genetic information intact within a genome for billions of years. Fans of it usually have to resort to a bizarre deterministic view of inheritance that is absurdly fixed–they are a kind of faux-scientific Calvinist.
To be a complex, multicellular organism, you need the ability to differentiate your cells, so that some become nerve cells while others become liver cells. This is the function served by hox genes and other tool kit genes, as described in the first link above. If, as I suggested in the second link, the first eukaryotes were designed with multicellularity in mind, we must face the question of what purpose hox genes could serve in a unicellular organism. Indeed, what use does an organism consisting of a single cell have for differentiating its cells? It is this concern that causes Myers and other critics to worry about “preserving large quantities of unused genetic information intact within a genome for billions of years”. What they are missing is that there is more than one way to differentiate cells.
The differentiation of multicellular organisms is spatial – “this cell is different from that cell over there” – but differentiation can also be temporal – “the cell now is different from the cell later”. In fact, a unicellular organism could use hox genes to adapt to the changing demands of its environment. It would be useful for a cell to tailor its response to changing levels of nutrients, the presence of potential mating partners, the cycle of night and day, etc.
Perhaps the first eukaryotes were equipped with receptors that reacted to things like temperature changes or the presence of certain substances. This would then cause some genes to be expressed while others were silenced, thereby changing the phenotype of the cell. Unicellular organisms that formed large communities (as green algae do today) could then have co-opted this machinery to achieve a division of labor among cells, leading to the evolution of complex, multicellular organisms.
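As a cartoon of this idea, temporal differentiation can be sketched as a lookup from environmental signals to expression profiles. The signal and gene names below are hypothetical placeholders, not real pathways; the point is only that one genome can yield different phenotypes at different times:

```python
# Toy sketch of temporal differentiation: a single cell switches which genes
# it expresses as environmental signals change over time.
# Signal names and gene sets are hypothetical illustrations.
RESPONSES = {
    "nutrient_rich": {"growth_genes"},
    "nutrient_poor": {"sporulation_genes"},
    "mate_present":  {"mating_genes"},
}

def expression_profile(signals):
    """Union of the gene sets switched on by the currently received signals."""
    expressed = set()
    for signal in signals:
        expressed |= RESPONSES.get(signal, set())
    return expressed

# The same genome, two different moments, two different phenotypes:
print(expression_profile({"nutrient_rich"}))
print(expression_profile({"nutrient_poor", "mate_present"}))
```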
Please note that the above is basically a “could’ve happened” story. I’m not trying to get anyone to embrace the belief that this is how multicellularity really evolved, or that eukaryotes were designed. The point of this post is simply to illustrate how teleologists can think about the origin of multicellularity, and to demonstrate that front-loading need not involve the preservation of “large quantities of unused genetic information intact within a genome for billions of years.”
Originally posted on Telic Thoughts, on August 24th, 2005. Reposted with minor edits.
In my previous post, I described how organisms use tool kit genes to construct their bodyplans, as well as the discovery that these genes date back far longer than anyone imagined. In this post, I’ll go a step further, showing how these facts mesh with intelligent design, allowing me to make a prediction as to what future research will uncover.
To revisit what I wrote, organisms use tool kit genes to map out their bodies during development, allowing the construction of structures such as limbs and eyes to be carried out at the right places. Since the tool kit genes themselves don’t build anything, they can be coupled to various processes, and employed in the construction of different structures in different organisms. This explains why it’s possible for the same tool kit genes to be used in the construction of things as different as the limbs of flies and mice, the last common ancestor of which probably didn’t even have limbs.
These unexpected findings, made in the field of evolutionary development, or “evo devo”, show a way towards the synthesis of two seemingly opposed ideas: Intelligent design and evolution. Rather than thinking of evolution as something the designer would avoid, evo devo allows us to see organisms as designs for evolving. Instead of having to re-invent everything, evolution may be a matter of throwing the right switches, employing the existing tool kit in new ways. This is in fact the perspective of front-loading, the conjecture that the first organisms were designed with their future evolution in mind.
If we assume that eukaryotes were designed with the purpose of giving rise to multicellular organisms, we can make certain predictions. For one, we would expect the first eukaryotes to have contained a predecessor to the modern tool kit, and it’s possible that some unicellular eukaryotes still possess it. It will probably not be the full set possessed by modern organisms (or rather, full sets, as several organisms differ in the number of genes they have), as some genes may have been generated through gene duplications, but I definitely expect genes that are clear precursors to modern tool kit genes to be found in unicellular eukaryotes.
There are more aspects to this story (which will be the topic of yet another post), but for now, let me just draw attention to this: Contrary to the claims of many ID critics, ID need not be an amorphous mass, skipping from one gap to the next as scientific knowledge fills them in. As the quote from Mayr shows, neo-darwinism was quite comfortable with the belief that widely different genes were used to construct different bodyplans. But no one has suggested that we abandon neo-darwinism because those genes turned out to be similar. Front-loading, on the other hand, wouldn’t hold much appeal in a world in which structures like the limbs of flies and mice were constructed using entirely different genes. This doesn’t mean that neo-darwinism is wrong, but it should give pause to those critics intent on dismissing ID on the basis of a perceived lack of testability.
In relation to this is the claim that ID is incapable of inspiring research. As Mike Gene writes: “Those who loudly proclaim that ID is useless often share one defining trait that is common among all, namely, they all have no experience in trying to seriously employ ID to better understand the world.” Many will attempt to make up for this lack of experience by using rhetoric, presenting arguments intended to show that the concept of a designer is inherently unscientific. But experience trumps rhetoric, and in this case, ID has helped me make a prediction that points the way to experimental research.
Originally posted on Telic Thoughts, on August 22nd, 2005. Reposted with minor edits.
Do you remember when, in school, your teacher asked you to write an essay about your summer vacation? Actually, I’ve never had a teacher give me such an excruciatingly dull assignment, but everyone else seems to have suffered it, so hopefully there are a couple of you nodding in painful remembrance right now. Anyway, since I never got to write this essay, I thought I’d assign myself the job.
So, what did I do during my summer vacation? I read a lot of good books, one of which was Sean Carroll’s Endless Forms Most Beautiful. Carroll is one of the pioneers of the new field that has developed in the interplay between evolutionary and developmental biology, also known as “evo devo”, and Endless Forms is a good introduction for the interested layman. A prominent theme is the surprising conservation of the genes used to construct animal bodyplans – or, as Carroll repeatedly puts it, how very old genes have been taught new tricks. Consider the traditional view, as expressed by Ernst Mayr in a passage also quoted by Carroll:
“Much that has been learned about gene physiology makes it evident that the search for homologous genes is quite futile except in very close relatives. If there is only one efficient solution for a certain functional demand, very different gene complexes will come up with the same solution, no matter how different the pathway by which it is achieved. The saying “Many roads lead to Rome” is as true in evolution as in daily affairs.”
Ernst Mayr, Animal Species and Evolution (Harvard University Press, 1963), p. 609
As Carroll immediately adds, “This view was entirely incorrect.” One of the findings of evo devo was that structures such as limbs and eyes are constructed using a “genetic tool kit”, which dates way back and which has been conserved across widely divergent lineages. Take the legs of a fly, the paws of a mouse, and the tube “feet” of a sea urchin. These are constructed in entirely different ways, and it was assumed that they had each arisen independently. Imagine the surprise when it was discovered that the same gene, distal-less, or dll for short, plays a role in the development of all of these limbs.
This raises several questions, one of which is: If genes determine body shape, and if these organisms have similar genes, then why are their legs so different? The short answer is that genes by themselves don’t really determine body shape, despite what the writings of Richard Dawkins may have led you to believe. The slightly longer answer is that body shape (part of the phenotype) is the result of an interplay between the genes (the genotype) and the cell, as well as the environment the organism finds itself in. And the really long and detailed answer is the one that Carroll gives in Endless Forms Most Beautiful, which I’ll try to sketch out in the following.
On the right are the embryos of a fly and a mouse, colored to illustrate where various tool kit genes (represented by the blocks in the middle) are expressed. These and many other genes map the embryo, not just from back to front, but also from top to bottom and from inside to outside, meaning that any point on the embryo can be pinpointed by the combination of tool kit genes expressed in that particular place.
You’ve probably heard of “junk DNA”, and how it has been shown to play important roles in the development of animals. This is related to the way researchers look for the functions of DNA sequences. DNA is transcribed into RNA, which is in turn translated into protein, so by looking for when and where a particular sequence of DNA resulted in protein, one could investigate its function. But a lot of the genome consists of sequences that are never transcribed into anything, yet still play a vital role in development. These sequences, which Carroll refers to as “genetic switches”, function as handles for the proteins produced from the tool kit genes. Depending on the tool kit protein, a switch can turn a gene either “on” or “off”, allowing the organism to specify exactly where certain genes will be expressed. Imagine a gene in the fly, X, whose genetic switch allows it to be turned on by the proteins produced by the genes lab, Dfd, and AbdB. This gene will be expressed in the segments marked with blue, red, and green, whereas another gene, Y, which is only activated by lab and Dfd, will be expressed only in the blue and red zones.
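The switch logic can be made concrete in a few lines. The segment names and protein-to-segment assignments below are simplified placeholders standing in for the colored zones of the embryo figure; only the combinatorial logic is the point:

```python
# Hypothetical sketch of gene regulation via "genetic switches": a gene is
# expressed in every segment where at least one of the tool kit proteins its
# switch binds is present. Segment/protein assignments are illustrative only.
SEGMENT_PROTEINS = {
    "head":    {"lab"},
    "thorax":  {"Dfd"},
    "abdomen": {"AbdB"},
}

# Each gene's switch lists the tool kit proteins that can turn it on.
SWITCHES = {
    "X": {"lab", "Dfd", "AbdB"},
    "Y": {"lab", "Dfd"},
}

def expressed_in(gene):
    """Segments in which at least one of the gene's activators is present."""
    activators = SWITCHES[gene]
    return {segment for segment, proteins in SEGMENT_PROTEINS.items()
            if activators & proteins}

print(expressed_in("X"))  # all three segments
print(expressed_in("Y"))  # head and thorax only
```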
This description should help us see how the behavior of cells isn’t determined just by the genes they contain. A cell in my eye and one in my finger contain the same genotype, yet what they do with it depends on the substances they’ve been exposed to during development, and the molecules bound to their genome. This allows us to answer the question we started with: How come the legs of flies and mice are so different, when they use similar genes to construct them? The answer is that the tool kit can be used to activate different genes in different animals, and thus be recruited in the construction of something as different as a fly and a mouse.
So far, this has been standard evo devo, and besides some teleologically loaded terms like “tool kit” and “switch” (which I’m sure Rudy Raff will propose some suitable alternatives to), nothing that’ll get you branded as a heretic. But there are some interesting implications for ID lurking down there, which I hope to draw out in a later posting.
Originally posted on Telic Thoughts, on June 24th, 2005. Reposted with minor edits.
This week there have been two good articles about how far back much of the machinery of animals dates. The first is by Carl Zimmer in The New York Times, titled “Plain, Simple, Primitive? Not the Jellyfish”. The jellyfish is part of the cnidarians, the family relationship of which is depicted here:
Cnidarians are radially symmetrical, meaning that they’re symmetrical around several axes, like the spokes in a bicycle wheel, whereas all the organisms depicted to the right of the cnidarian are bilaterally symmetrical, meaning they’re only symmetrical around the head-to-tail axis (except for the echinoderms, which evolved radial symmetry independently). Bilateral animals use a special genetic toolkit for constructing their bodies, and cnidarians were thought to be an evolutionary relic from before this toolkit evolved. But recent findings have overturned this belief:
“Much to their surprise, the scientists found that some genes switched on in embryos were nearly identical to the genes that determined the head-to-tail axis of bilaterians, including humans. More surprisingly, the genes switched on in the same head-to-tail pattern as in bilaterians.
Further studies showed that cnidarians used other genes from the bilaterian tool kit. The same genes that patterned the front and back of the bilaterian embryo, for example, were produced on opposite sides of the anemone embryo.
The findings have these scientists wondering why cnidarians use such a complex set of body-building genes when their bodies end up looking so simple. They have concluded that cnidarians may be more complicated than they appear, particularly in their nervous systems.”
The second article is from this week’s Nature, “Back to our roots” (registration required) by Helen Pilcher. It shows how the genetic toolkit has been pushed even further back in time, to before the evolution of sponges, indicating that it was present in the urmetazoan, the common ancestor of all animals:
“There are signs that many other molecules associated with development in animals also occur in sponges. The Wnt family of proteins, for example, influences how cells become specialized and also helps to lay down the key spatial coordinates of the body plan in complex animals. Sponge cells make the Frizzled protein, a receptor that is activated by Wnt proteins. And they also make a variety of metazoan-like transcription factors – proteins involved in controlling gene expression – that are key players in development.
The fact that these genes occur during development in all existing animal lineages hints that they were playing a regulatory role in the embryos of the first metazoan. “The urmetazoan was probably quite sophisticated in a developmental and genomic sense,” says [Bernard] Degnan [who is a geneticist at the University of Queensland]. This suggests that it already had the genetic toolkit to direct a body plan containing multiple cell types.
To find out where this toolkit came from, biologists are looking even further back in time, at the single-celled ancestors of the urmetazoan. Their modern-day descendants are choanoflagellates, unicellular creatures that look uncannily like sponge collar cells. Surprisingly, choanoflagellates harbour many of the tools needed for multicellular living.”
This sheds light on a possibility off-handedly proposed by Michael Behe and later developed by Mike Gene: What if the first lifeforms were designed, containing the structures needed for the evolution of more complex organisms? If this is the case, I’d expect this “genetic toolkit” to trace back to unicellular organisms.
Acid-sensing ion channels (ASICs) play a key role in the vertebrate nervous system, where they enable neurons to convert chemical information into electrical current. In a recent paper in PNAS, Lynagh et al. write that ASICs emerged over 600 million years ago and thus predate the origin of vertebrates.
“The conversion of extracellular chemical signals into electrical current across the cell membrane is a defining characteristic of the nervous system. This is mediated by proteins, such as acid-sensing ion channels (ASICs), membrane-bound receptors whose activation by decreased extracellular pH opens an intrinsic membrane-spanning sodium channel. Curiously, ASICs had only been reported in vertebrates, despite the homology of many other ion channels in vertebrates and invertebrates. Using molecular phylogenetics and electrophysiological recordings, we discover ASICs from tunicates, lancelets, sea urchins, starfish, and acorn worms. This shows that ASICs evolved much earlier than previously thought and suggests that their role in the nervous system is conserved across numerous animal phyla.”
The discovery of genetic circuitry previously thought to be vertebrate-specific in organisms like tunicates fits very well with front-loaded evolution, the notion that the first life was designed with its future evolution in mind.
Lynagh T., Mikhaleva Y., Colding J.M., Glover J.C., Pless S.A. (2018). “Acid-sensing ion channels emerged over 600 Mya and are conserved throughout the deuterostomes”. PNAS 115(33): 8430-8435.
Despite its breathtaking diversity at the morphological level, life on Earth displays a remarkable unity at the biochemical level. With few exceptions, all lifeforms employ DNA as their hereditary material, proteins constructed from the same 20 types of amino acids as their building blocks, and RNA to bridge the two worlds through the genetic code.
As Morse code describes the relationship between dots/dashes and the Latin alphabet, so does the genetic code describe the relationship between the codons of DNA and the amino acids of proteins. Unlike French, the language of the cell is easy: The codon “GUU” specifies valine, and it’s like that all the way through. If in doubt, consult the figure below.
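The code’s dictionary-like nature can be made literal. The sketch below builds the standard table (NCBI translation table 1, with codons ordered over the bases U, C, A, G) and translates a short, made-up mRNA:

```python
# Build the standard genetic code as a dict from RNA codon to amino acid.
# The 64-letter string lists amino acids (one-letter codes, "*" = stop) for
# codons ordered with bases in U, C, A, G order (NCBI translation table 1).
BASES = "UCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

GENETIC_CODE = {
    b1 + b2 + b3: AA[16 * i + 4 * j + k]
    for i, b1 in enumerate(BASES)
    for j, b2 in enumerate(BASES)
    for k, b3 in enumerate(BASES)
}

def translate(mrna):
    """Translate an mRNA string codon by codon, stopping at a stop codon."""
    protein = []
    for pos in range(0, len(mrna) - 2, 3):
        aa = GENETIC_CODE[mrna[pos:pos + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

print(GENETIC_CODE["GUU"])        # -> V (valine, as stated in the text)
print(translate("AUGGUUUUAUAA"))  # -> MVL (Met-Val-Leu, then a stop codon)
```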
The genetic code is almost universal; a number of variants have been found, all of which are derived from the standard genetic code (Osawa et al., 1992; Knight, Freeland, & Landweber, 2001). In other words, there are no precursors to the genetic code.
The genetic code as evidence for common descent?
The near-universality of the genetic code has been cited as evidence for universal common ancestry (Crick, 1968; Hinegardner & Engelberg, 1963). According to geneticist Theodosius Dobzhansky, these biochemical universals are “the most impressive” evidence for the interrelationship of all life (1973, p. 128).
And indeed, from the non-teleological perspective this makes sense: If undirected abiogenesis had occurred several times, it would be an amazing coincidence if in every case the resulting organisms had struck upon the same genetic code. Therefore, universal common ancestry is the best explanation.
This changes the moment we throw teleology into the mix. Rather than having to choose between common descent and convergence, the investigator must now also consider the possibility of common design.
The genetic code as example of common design
Suppose that the first life on Earth consisted of a diverse population of engineered cells. Why would engineers employ the same genetic code instead of giving each cell its own code?
Before answering this question, let us ask a counter-question: Why not? What would the point be, from an engineering perspective, to reinvent the wheel? Making multiple codes is extra work and increases the risk of mistakes when genes have to be designed in different languages.
Not only is there no reason for engineers to adopt multiple codes, there is good reason to use the same code. If different cell types used different codes, they would be unable to tap into the power of horizontal gene transfer (HGT).
HGT plays an essential role in bacterial evolution, where genetic models indicate that substantial HGT is required for the survival of bacterial populations (Takeuchi, Kaneko, & Koonin, 2014). Though less common in eukaryotes, HGT is not restricted to bacteria. For example, a study found that ferns adapted to shade through the horizontal transfer of a gene from the moss-like hornworts, from which they diverged 400 million years ago (Li et al., 2014). HGT may even have played a role in the evolution of humans, with seaweed-digesting genes from ocean bacteria having found their way into the gut microbes of Japanese individuals (Hehemann et al., 2010).
In other words, categorizing the standard genetic code as an example of common design is not an ad hoc rationalization; rather, there is a good engineering reason for reusing the code.
A code facilitated by molecular machines
Microbiologist Franklin M. Harold describes the genetic code as “one symbolic language translated into another, with the aid of a universal apparatus of quite phenomenal sophistication” (2014, p. 222).
And indeed, the molecular machinery for translating DNA into proteins is quite impressive. In bacteria, the process requires RNA polymerase, which unwinds the DNA double helix and transcribes its sequence into messenger RNA; sigma factors, which regulate the activity of RNA polymerase; transfer RNAs, at least one for each of the 20 amino acids; and the ribosome, the protein synthesis factory of the cell, where the messenger RNA and the matching transfer RNAs are lined up and the amino acids linked together, assembly-line style.
In eukaryotes, the process is even more complicated.
However, a purely physical description of the complexity involved in protein synthesis ignores the conceptual exceptionalism of the genetic code, as pointed out by Harold: One symbolic language translated into another.
This makes the genetic code a prime candidate for design. As Mike Gene points out, “experience has shown us that codes typically are the products of mind, and non-teleological forces do not generate codes. In fact, if the genetic code is taken off the table, there is no evidence that a conventional code employing a linear array of symbols has ever been spawned by a non-teleological force.” (2007, p. 281)
An exceptionally good code
Is there a reason why, say, “GUU” should specify valine? Or is the standard genetic code little more than a “frozen accident”?
Even a casual look at the genetic code indicates that there is a method in the madness. One amino acid is often specified by several similar codons, as in the case of leucine, which is specified by CUU, CUC, CUA, and CUG. Thus, a substitution mutation in the last letter of the codon will have no effect on which amino acid is specified.
This logic extends to a deeper level. For example, a mutation in one of the other letters may result in phenylalanine (UUU, UUC), an amino acid with chemical properties similar to those of leucine.
In other words, the standard genetic code seems to be constructed in such a way as to make the organism robust to the effects of mutations. But is there a way to quantify the level of this optimization and compare the standard code to other possible codes?
In 2000, a team of scientists led by Stephen J. Freeland of Princeton University published such an analysis. They concluded that with respect to substitution mutations, the standard genetic code “appears at or very close to a global optimum for error minimization: the best of all possible codes” (Freeland et al., 2000, p. 515).
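A simplified version of this kind of analysis is easy to sketch. Freeland et al. weighted substitutions by the chemical similarity of the resulting amino acids; the toy version below counts only exact synonymy, comparing the standard code against randomly shuffled codon assignments:

```python
import random

# Compare how often a single-base substitution is synonymous (leaves the
# amino acid unchanged) under the standard genetic code versus randomly
# shuffled codon assignments. A crude stand-in for Freeland-style analyses.
BASES = "UCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = [a + b + c for a in BASES for b in BASES for c in BASES]

def synonymous_fraction(code):
    """Fraction of all single-base substitutions that preserve the amino acid."""
    same = total = 0
    for codon in CODONS:
        for pos in range(3):
            for base in BASES:
                if base == codon[pos]:
                    continue
                mutant = codon[:pos] + base + codon[pos + 1:]
                total += 1
                same += code[codon] == code[mutant]
    return same / total

standard = dict(zip(CODONS, AA))
std_frac = synonymous_fraction(standard)

random.seed(0)
shuffled_fracs = []
for _ in range(200):
    letters = list(AA)
    random.shuffle(letters)  # random reassignment, same amino acid multiplicities
    shuffled_fracs.append(synonymous_fraction(dict(zip(CODONS, letters))))

print(f"standard code: {std_frac:.3f}")
print(f"shuffled mean: {sum(shuffled_fracs) / len(shuffled_fracs):.3f}")
```

Under this crude measure the standard code already buffers substitutions far better than a typical random code; the published analyses sharpen the comparison by scoring how chemically different the substituted amino acids are.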
Substitution mutations are not the only type of mutation, though. In a frameshift mutation, a letter is added or deleted, disrupting the reading frame downstream of the mutation site. The result is a random string of amino acids that can gunk up the cell. Especially harmful are frameshift mutations that eliminate the “stop” codon, resulting in a string of random gunk that can be quite long.
The standard genetic code has as many as three “stop” codons, which seems excessive, considering that there is only one “start” codon. But having three “stop” codons instead of one increases the chances that a new “stop” codon will be encountered downstream in the case of a frameshift mutation.
The standard genetic code is made even more robust to frameshift mutations by having the sequence of the “stop” codons overlap with those of the codons specifying the most abundant amino acids. This feature, as Itzkovitz and Alon conclude, makes the standard genetic code “nearly optimal” at minimizing the harmful effects of frameshift mutations:
“We tested all alternative codes for the mean probability of encountering a stop in a frame-shifted protein-coding message. We find that the real genetic code encounters a stop more rapidly on average than 99.3% of the alternative codes.” (Itzkovitz & Alon, 2007, p. 409)
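The intuition behind this measurement can be illustrated with a back-of-the-envelope sketch (not the authors’ actual methodology, which tested alternative codes against real coding sequences with realistic codon usage): with three stops among 64 codons, a frame-shifted stream of effectively random codons terminates after roughly 20 junk codons on average, versus 63 for a hypothetical one-stop code.

```python
import random

BASES = "UCAG"
STOPS = {"UAA", "UAG", "UGA"}  # the three stop codons of the standard code

def expected_codons_before_stop(num_stops, num_codons=64):
    """Mean number of sense codons read before a random stream hits a stop."""
    p = num_stops / num_codons
    return (1 - p) / p  # mean of a geometric distribution (failures before success)

def simulate_run_length(trials=20000):
    """Estimate the same quantity by drawing random codons until a stop appears."""
    random.seed(1)
    total = 0
    for _ in range(trials):
        while "".join(random.choice(BASES) for _ in range(3)) not in STOPS:
            total += 1
    return total / trials

print(expected_codons_before_stop(3))  # standard code: ~20 junk codons
print(expected_codons_before_stop(1))  # hypothetical one-stop code: 63
avg = simulate_run_length()
print(avg)
```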
An interesting perspective is provided by a recent study by Geyer and Mamlouk (2018). Comparing the standard genetic code with one million random codes, they found that when measuring for robustness against the effects of either point mutations or frameshift mutations, the standard genetic code is “competitively robust” but “better candidates can be found”. However, “it becomes significantly more difficult to find candidates that optimize all of these features – just like the SGC [standard genetic code] does.” The authors conclude that when considering the robustness against the effects of both point mutations and frameshift mutations, the standard genetic code is “much more than just ‘one in a million’.”
The genetic code is likely the result of a compromise between providing robustness against several types of mutations. If the standard genetic code is the product of a teleological process, I expect future analyses which incorporate and compare different types of robustness – like Geyer and Mamlouk (2018) – will further support the optimality of the standard code.
In the meantime, we can conclude that the standard genetic code is exceptionally good – one in a million and possibly better.
Did the genetic code evolve? As we have just seen, the genetic code displays a remarkable level of optimization when it comes to protecting the organism from the effects of mutations. This fact is hard to reconcile with a view of the genetic code as nothing but a frozen accident. If the genetic code were not engineered, it must have been optimized by natural selection, passing through countless codes before hitting upon the one employed by life today.
But we find no evidence of this long trek through the fitness landscape. All life today employs the same code, and the few variants that exist are all derived from the standard code, not precursors to it.
It is possible that the lineage in which the standard genetic code arose drove all those other lineages with precursor codes extinct. But that is hard to square with the fact that variants of the code exist today, with no evidence of being driven to extinction by their superior-coded competitors. Changing an organism’s genetic code may be hard (as evidenced by the limited number of variants), but once changed, the variant does not seem to significantly decrease the organism’s fitness. At least not to the extent where one genetic code drives all competitors around the globe to extinction.
Other explanations can no doubt be formulated. But such explanations of the absence of evidence only establish the possibility that the code is the product of an evolutionary process (a possibility I already accept). They do not establish that such a process actually took place.
Conclusion and perspectives
The genetic code is a prime candidate for design. It is a symbolic language, translated into another by molecular machines. The universality of the code can easily be explained as a case of common design, as there is a good engineering reason for reusing the code. Furthermore, the code appears to be exceptionally good at protecting organisms from the effects of mutations – one in a million or better.
The evidence is thus consistent with a scenario in which the first life on Earth consisted of a diverse population of engineered cells, all of which used the standard genetic code. The variants of the code which we observe today are secondarily derived from this original code.
This scenario generates predictions, potentials for falsification, and avenues for further research.
For example, Freeland et al. (2000) conclude that the standard genetic code can only be considered “the best of all possible codes” if the set under consideration is restricted to codes in which amino acids from the same biosynthetic pathway are assigned to codons sharing the first base – a restriction for which the researchers give the historical explanation that the current code expanded from a primordial code. If this pattern persists (i.e. if it is not an artifact of considering only robustness against substitution mutations), the teleological scenario would expect there to be good engineering reasons for grouping amino acids from the same biosynthetic pathway in this way. On the other hand, if more sophisticated models underscore the need for historical explanations and/or show the standard genetic code to be mediocre, the teleological scenario will be in trouble.
The teleological scenario also predicts that all organisms have the standard genetic code or derivatives thereof. Scientists estimate that Earth hosts about one trillion microbial species, the overwhelming majority of which have yet to be discovered. If, as we start finding and studying those, we find variants that are precursors to the standard genetic code, the teleological scenario will once again be in trouble.
Thus, we see that teleological explanations, rather than being vacuous “the designer did it” proclamations, can generate testable insights about nature.
Crick F.H.C., 1968, “The Origin of the Genetic Code”, Journal of Molecular Biology 38(3):367-379
Dobzhansky T., 1973, “Nothing in Biology Makes Sense except in the Light of Evolution”, The American Biology Teacher 35(3):125-129
Freeland S.J., Knight R.D., Landweber L.F., & Hurst L.D., 2000, “Early Fixation of an Optimal Genetic Code”, Molecular Biology and Evolution 17(4):511-518
Geyer R. & Mamlouk A.M., 2018, “On the Efficiency of the Genetic Code after Frameshift Mutations”, PeerJ 6:e4825
Gene M., 2007, The Design Matrix: A Consilience of Clues, Arbor Vitae Press
Harold F.M., 2014, In Search of Cell History: The Evolution of Life’s Building Blocks, University of Chicago Press
Hehemann J.-H., Correc G., Barbeyron T., Helbert W., Czjzek M., & Michel G., 2010, “Transfer of Carbohydrate-Active Enzymes from Marine Bacteria to Japanese Gut Microbiota”, Nature 464:908-912
Hinegardner R.T. & Engelberg J., 1963, “Rationale for a Universal Genetic Code”, Science 142(3595):1083-1085
Itzkovitz S. & Alon U., 2007, “The Genetic Code is Nearly Optimal for Allowing Additional Information within Protein-Coding Sequences”, Genome Research 17(4):405-412
Knight R.D., Freeland S.J., & Landweber L.F., 2001, “Rewiring the Keyboard: Evolvability of the Genetic Code”, Nature Reviews Genetics 2(1):49-58
Li F., Villarreal J.C., Kelly S., Rothfels C.J., Melkonian M., Frangedakis E., Ruhsam M., Sigel E.M., Der J.P., Pittermann J., Burge D.O., Pokorny L., Larsson A., Chen T., Weststrand S., Thomas P., Carpenter E., Zhang Y., Tian Z., Chen L., Yan Z., Ying Z., Sun X., Wang J., Stevenson D.W., Crandall-Stotler B.J., Shaw A.J., Deyholos M.K., Soltis D.E., Graham S.W., Windham M.D., Langdale J.A., Wong G.K.-S., Mathews S., & Pryer K.M., 2014, “Horizontal Transfer of an Adaptive Chimeric Photoreceptor from Bryophytes to Ferns”, Proceedings of the National Academy of Sciences 111(18):6672-6677
Osawa S., Jukes T.H., Watanabe K., & Muto A., 1992, “Recent Evidence for Evolution of the Genetic Code”, Microbiological Reviews 56(1):229-264
Takeuchi N., Kaneko K., & Koonin E.V., 2014, “Horizontal Gene Transfer Can Rescue Prokaryotes from Muller’s Ratchet: Benefit of DNA from Dead Cells and Population Subdivision”, G3 (Bethesda) 4(2):325-339
The claim is often made that teleological explanations are inherently incapable of leading to predictions, as whatever is observed can be explained with reference to the designer “working in inscrutable ways”.
I’m not terribly impressed by this argument, for one simple reason: I have successfully used teleological reasoning to derive predictions.
Years ago, I started looking at homeobox genes from a teleological perspective. Homeobox genes play a key role in development, initiating the developmental cascades that lead to structures such as eyes, hearts, and legs. They have been referred to as the “genetic tool kit” and show surprising homology across Metazoa. For example, the homeobox gene distal-less regulates the legs of a fly, the paws of a mouse, and the tube “feet” of a sea urchin, even though all of these structures are thought to have evolved independently of each other.
In 2005, homeobox genes were thought to have been present in the last common ancestor of Bilateria (animals which are bilaterally symmetrical). But using the teleological notion of front-loaded evolution, I hypothesized that the genetic tool kit of homeobox genes was even older than that, predating the origin of multicellularity. As I wrote in August 2005:
If we assume that eukaryotes were designed with the purpose of giving rise to multicellular organisms, we can make certain predictions. For one, we would expect the first eukaryotes to have contained a predecessor to the modern tool kit, and it’s possible that some unicellular eukaryotes still possess it. It will probably not be the full set possessed by modern organisms (or rather, full sets, as several organisms differ in the number of genes they have), as some genes may have been generated through gene duplications, but I definately [sic] expect genes that are clear precursors to modern tool kit genes to be found in unicellular eukaryotes.
I have not been keeping up with the literature for the last several years. However, in checking up on things, I have made the pleasant discovery that my prediction has been fulfilled.
In a paper titled “Homeodomain proteins belong to the ancestral molecular toolkit of Eukaryotes”, the French researchers Romain Derelle, Philippe Lopez, Hervé Le Guyader, and Michaël Manuel present evidence that the last common ancestor of eukaryotes, the “Ur-eukaryote”, contained genes for homeodomain proteins.
The researchers found that homeodomain proteins “are present in all eukaryotic lineages containing multicellular organisms, and absent in exclusively unicellular lineages.” In other words, homeodomain proteins do not appear to play an important role in unicellular organisms.
But the researchers carried out a phylogenetic analysis and concluded that the multicellular organisms with homeodomain proteins (animals, plants, algae, and fungi) had all inherited them from the unicellular Ur-eukaryote, while the lineages that remained unicellular lost their copies through reductive evolution. These findings lead the researchers to suggest that “eukaryotes as a whole are preadapted for multicellularity”:
As a corollary of ancestral molecular complexity, Ur-eukaryota probably possessed many of the good building blocks, which were subsequently recruited, by convergence in several lineages, to perform the functions required for development of multicellular organisms. In other terms, we suggest that the eukaryotes as a whole are preadapted for multicellularity, which only means that the ancestral complexity of the eukaryote genome and cell biology facilitated multiple acquisitions of multicellularity.
As these findings were published two years after I made my prediction, I feel pretty good about the heuristic value of a teleological perspective.
Derelle R., Lopez P., Le Guyader H., & Manuel M., 2007, “Homeodomain proteins belong to the ancestral molecular toolkit of Eukaryotes”, Evolution & Development 9(3):212-219
In his seminal On the Origin of Species, Charles Darwin used the evolution of the lung from the swim bladder in fish as an example of how organs could change their function in the course of evolution:
“The illustration of the swimbladder in fishes is a good one, because it shows us clearly the highly important fact that an organ originally constructed for one purpose, namely flotation, may be converted into one for a wholly different purpose, namely respiration. […] All physiologists admit that the swimbladder is homologous, or ‘ideally similar,’ in position and structure with the lungs of the higher vertebrate animals: hence there seems to be no great difficulty in believing that natural selection has actually converted a swimbladder into a lung, or organ used exclusively for respiration.” (Darwin, 1859, pp. 220-1)
“There seems to be no great difficulty in believing that natural selection has actually converted a swimbladder into a lung,” Darwin insists. And indeed, it is easy to believe that the transition from swim bladder to lung took place – I certainly know of no law of nature which prevents it.
But Darwin was wrong. Lungs did not evolve from swim bladders – swim bladders evolved from lungs. As the paleontologist Stephen Jay Gould writes:
“A reconstruction of vertebrate branching order gives a clear answer to this question: Darwin was wrong; ancestral vertebrates had lungs. […] The first vertebrates maintained a dual system for respiration: gills for extracting gases from seawater and lungs for gulping air at the surface. A few modern fishes, including the coelacanth, the African bichir Polypterus, and three genera of lungfishes, retain lungs. One major group, the sharks and their allies, lost the organ entirely. In two major lineages of derived bony fishes – the chondrosteans and the teleosteans – lungs evolved to swim bladders by atrophy of vascular tissue to create a more or less empty sac and, in some cases, by loss of the connecting tube to the esophagus (called the trachea in humans and other creatures with lungs).” (Gould, 1993, p. 114; my emphasis)
In pointing to this discrepancy it is not my intent to criticize Darwin’s powers of reasoning. Indeed, as the next example will show, it is common for the human mind to imagine complex transitions that never took place.
In his 2011 critique of Michael Behe’s book, Darwin’s Black Box, professor of biology John McDonald takes Behe to task for claiming that structures exhibiting what Behe calls “irreducible complexity” cannot evolve. In his book, Behe used a mousetrap to illustrate the concept of irreducible complexity. “If any one of the components of the mousetrap (the base, hammer, spring, catch, or holding bar) is removed, then the trap does not function. In other words, the simple little mousetrap has no ability to trap a mouse until several separate parts are all assembled. Because the mousetrap is necessarily composed of several parts, it is irreducibly complex.” (Behe, 1996)
Contra Behe, John McDonald lays out an account of how a mousetrap could arise from a single piece of spring wire through a series of modifications:
“Here I show how one could start with a single piece of spring wire, make an inefficient mousetrap, then through a series of modifications and additions of parts make better and better mousetraps, until the end result is the modern snap mousetrap.” (McDonald, 2011)
McDonald’s point may be to criticize the concept of irreducible complexity, but he inadvertently also makes another point: It is possible to imagine a gradual evolutionary transition leading to a structure, even when that transition never took place.
So what’s the point of all this?
In discussions about evolution, a teleologist will frequently claim that this or that feature “couldn’t possibly have evolved”, challenging critics to come up with a scenario for how it evolved.
I believe this is not a fruitful way to resolve the issue. As the examples above show, an evolutionary scenario can be imagined even for transitions that never took place.
For the record, I accept that conventional evolutionary mechanisms could have produced all of the features of life we observe – and also many we do not observe. What I am interested in figuring out is what happened.
Behe M.J., 1996, Darwin’s Black Box: The Biochemical Challenge to Evolution, Simon & Schuster
Darwin C.R., 1859, The Origin of Species, a facsimile of the first edition, Gramercy Books
Gould S.J., 1993, “Full of Hot Air”, in Eight Little Piggies: Reflections in Natural History, W.W. Norton
For many, the phrase “intelligent design” conjures up images of scientists, politicians, or pastors making dubious anti-evolution arguments with the goal of getting creationism taught in public schools.
Therefore, let me emphasize that this is not an anti-evolution blog. I consider Charles Darwin’s contribution to our understanding of life to be among the biggest – if not the biggest. I see my own views as expanding our understanding of evolution, not toppling it.
This is also not a religious blog. I am an agnostic with no stakes riding on life being engineered. In fact, my worldview would be a lot simpler if life was solely the product of geochemical, non-teleological processes. I see evidence of engineering in life, but I know of no way in which science could get reliable information about the engineer.
And finally, this is not a policy blog. I do not consider my views as science, and I do not think they should be taught in schools.
For these reasons, I do not consider myself part of any “Intelligent Design Movement”.