Jay Johnson
Jay Johnson spent 15 years as a journalist and publishing executive before embarking on a second career teaching English in the juvenile justice system. Jay’s love of kids and education took him to BioLogos in 2016 to research the connection between evolution, Young Earth Creationism, and the alarming loss of faith among the younger generation. You can find him on the web at www.becomingadam.com or on Twitter @BecomingAdamCom.

Certainly nothing offends us more rudely than this doctrine (of original sin); and yet, without this mystery, the most incomprehensible of all, we are incomprehensible to ourselves.

Blaise Pascal

We live in remarkable times. Mysteries that baffled almost every previous generation — the origins of our planet, the stars, and most of all ourselves — are slowly being unraveled, yet the mystery of original sin seems as intractable now as it did to Pascal 350 years ago. 

Our current understanding of evolution only deepens the mystery. It’s not hard to understand how an individual could “fall” into sinfulness, but humanity appeared as a population. How could a population all “fall” at the same time? This conundrum causes Christians who acknowledge our evolutionary origins to fall into two camps. (Forgive the pun.) Some, like Pete Enns, reject the concepts of the “fall” and original sin. Lord knows, the Orthodox tradition has managed fine without them, but Protestant and Catholic theologians are quite attached to the ideas. 

The other end of the spectrum insists upon a literal Adam and Eve, usually based on Romans 5, so they posit that God revealed himself and his “law” to a representative couple at some point in human evolution. William Lane Craig’s recent book, In Quest of the Historical Adam, places the first human couple about 750,000 years ago, while Joshua Swamidass’s The Genealogical Adam and Eve attempts to locate them within the 6,000-year timeframe acceptable to Young Earth Creationists (YEC). Craig is within range of the origins of speech and symbolism, but otherwise his first pair is so removed from what we would recognize as “modern” humans that he resorts to special pleading and miraculous intervention: God gave Adam and Eve a supernatural leap forward in brain and language development to make them “human” with a soul. As noted in a Science magazine review, “biologists are likely to be highly skeptical of the idea that humanness is a binary condition that can be induced by a change in a single pair of ancestors.”

Swamidass’s solution is no better. By the time he locates the origin of sin, humanity has already invented agriculture, built temples, and established city-states that traded with one another and struggled for supremacy. Saying that humanity just prior to 4000 B.C. was somehow free from sin creates more problems than it solves.

Other evangelical scholars, such as John Walton and Derek Kidner, favor a representative Adam and Eve chosen from a larger population sometime between the extremes of Craig and Swamidass, but all these scenarios suffer from the same problems – special pleading and wishful thinking about human history.

Faced with these alternatives, it’s tempting to throw up one’s hands, but as Reinhold Niebuhr observed, original sin is “the only empirically verifiable truth of Christian faith.” Everyone is a sinner. The question is why. I’d like to propose a different way out of our present dilemma. What if it is possible for the human population to “fall” together at roughly the same time? And what if it’s not only possible, but an idea supported by scripture, science, and everyone’s personal experience?

The overall story is fairly straightforward. Parsimony is a good thing. Begin with the concept that God desired to make a creature capable of fellowship and love both for God and others. “Let us make…” is a statement of purpose, of telos. Likewise, the imago Dei is a vocation that all of humanity is called to fulfill. But we could not achieve that end without mature moral judgment – the knowledge of good and evil.

The animal kingdom exhibits behaviors humans would label “good” or “evil,” yet neither we nor God hold them morally responsible for those choices. Animals, like infants, are “innocent,” which Kierkegaard rightly observed simply means “ignorant.” Human evolution gradually moved toward greater and greater levels of sociality, communication, and “love,” but we learned those behaviors over millennia, and for most of that time, “good” and “evil” were simply behaviors, not abstract concepts.

Humanity from erectus to early sapiens was in a transitional period that resembled childhood. Like children, their brains were still developing, and they were learning language and morality, but none of those capacities had reached the point that either God or modern adult humans would consider them “guilty” of moral evil. There’s a reason why societies don’t put 6-year-olds in jail. The final phase in moral guilt is represented by the woman’s mature reasoning and selfish choice in Genesis 3, followed by shame and consequences. Innocent animal, immature child, guilty adult. It really is that simple.

What follows is a shortened version of my proposed solution to the mystery of a historical “fall” and original sin in an evolutionary context. For the sake of readability, I’ll forego footnotes. My fuller argument appears in the Canadian-American Theological Review. 

In attempting to wrap our brains around complex concepts, people resort to metaphorical thought. Typically, we take what is complex and compare it to something from everyday experience. The mind requires a familiar peg on which to hang its hat. Following George Lakoff and Mark Johnson’s book, Metaphors We Live By (1980), this thought process acquired the name “conceptual metaphor.” By definition, “conceptual metaphor is understanding one domain of experience (that is typically abstract) in terms of another (that is typically concrete).” An image metaphor, such as hanging a hat on a peg, simply describes, but a conceptual metaphor forms multiple mental connections from one domain to the other. Say “love is a journey,” and a whole host of ideas related to journeys come to mind and are associated with love. 

Scientists often use conceptual metaphors to explain complex subjects. DNA, for example, frequently is compared to written language, which immediately calls to mind words, sentences, punctuation, information, transmission, and change (mutation). On the complex subject of human evolution, the conceptual metaphor of childhood development/maturity frequently appears as a framework for understanding. 

Scripture likewise employs conceptual metaphor, whether it is Paul exhorting his readers to put on the “armor of God” (Eph 6:11), the psalmist complaining that “human life is grass” (Ps 103:15), or declaring “God is my shepherd” (Ps 23). Many thoughts are associated with such statements. Often, a single passage will contain more than one conceptual metaphor. Within Genesis 1, for instance, the metaphors of creation as a temple and creation as work both figure into the interpretation of the text. Rather than a scientific treatise, Scripture provides us with multilayered metaphors to help wrap our minds around the meaning of God’s creation and humanity’s role in it as imago Dei.

Similarly, Genesis 2–3 employs a conceptual metaphor to explain humanity’s “fall” and alienation from God. Since the action in the garden narrative begins with “the man” naming the animals (language) and climaxes with the human couple’s acquisition of the “knowledge of good and evil” (morality), it should come as no surprise that the conceptual metaphor in Genesis 2–3 is moral maturity.

Biblical scholars have long recognized that the man (ha’adam) and the woman (ha’ishah) in Genesis 2–3 are literary archetypes, representing both the universal “original pattern” and every individual’s recapitulation of that experience. Both humanity as a whole and every human child have walked the same path of brain, language, and moral development to reach the point of mature moral decision-making. Sinfulness is both communal (systemic) and individual, and all of us are guilty.

Human language involves two kinds of sharing. First, everyone must agree what words mean and how to use them, and second, we must agree that the information we share is truthful. Without meeting both conditions, human languages could not function. Human languages are thus “socially shared symbolic systems” that rely upon cooperation for their use. 

Besides language, two other unique features of human social lives rely on cooperation. The first is “intersubjectivity,” which is an umbrella term for a suite of capacities that require joint action, joint frame of reference, and empathy. To work together in joint action, people must agree on a shared goal, which involves a bit of “mind reading” that other primates can’t duplicate. 

Furthermore, chimps do not hold up objects for other chimps to consider, but people will say things like, “Look at that beautiful sunset.” When we use joint frames of reference such as this to share our experiences or emotions with another person, it goes by the name of “empathy.” 

Morality is the second feature of human sociality that relies on cooperation. For morality to exist, people must agree what constitutes “right” or “wrong” behavior, establishing a joint frame of reference, and they must agree what to do when those standards are violated, which requires joint action. 

In an evolutionary context, it’s important to remember that language evolved along with the body, vocal tract, and brain. Language didn’t spring into existence in the modern form that we recognize. Just as children begin with pointing/informative gestures, which animals that lack language don’t understand, the earliest form of “language” was most likely a combination of gestures and a few simple words, i.e. names. It’s best to call this a protolanguage to avoid confusion. From there, language evolution followed a path similar to childhood language acquisition: 1) One-word stage; 2) Two-word stage; 3) Hierarchical structure but lacking subordinate clauses and embedding; 4) Flexibility/Recursivity; and 5) Fully modern grammar. The first three stages should be considered protolanguage. (Children typically acquire the full grammar of their native language by the age of 5.)

The man’s first act upon his creation was linguistic, naming the animals, so by analogy one could reasonably conclude that what distinguished the first “humans” from previous hominids was the use of spoken words.

The first requirement for speech is walking upright. Bipedalism not only allowed the larynx to descend, it relieved the thorax of its support function while running and allowed our early ancestors to control their breathing to vocalize words. The second requirement is a modern spine with enough space to house the nerves that control those respiratory muscles. H. erectus from Dmanisi (1.75 million years ago) had such a spinal column. The third requirement is a hyoid bone similar to ours. The hyoid is necessary for vowel sounds in human speech. Australopithecus possessed a hyoid similar to that of chimpanzees. Neanderthal and heidelbergensis had a hyoid identical to that of sapiens, while that of late erectus is intermediate. The implication is that late erectus around a million years ago was capable of speech, but not yet capable of the full range of sounds modern humans can produce. They used a combination of gestures and word-sounds that probably resembled a rhythmic “hmmmm” more than anything else.

Science calls all members of the genus Homo “human,” but by analogy to Genesis 2, I suggest the first member of the human family was late H. erectus. All of our hominin relatives from that point would be considered human, although, like children, they were immature and still developing.

Converging evidence

Since words don’t fossilize, placing dates on the evolution of language is notoriously difficult. Luckily, some converging lines of evidence point in the same direction. Prehistoric trade networks provide some of the best indirect evidence of language evolution. Prior to 1 million years ago, natural resources for food and tools came from within the “home-range” of hominins, which was a radius of about 13 kilometers. The same still holds true for chimps and other living primates. 

Around 1 million years ago, trade networks suddenly appeared, meaning the stone used to produce a tool found in one place could have come from as far away as 100 km. The existence of trade implies lessened aggression and an improved form of communication. Since the physical requirements for speech were present by this time, it’s reasonable to conclude that protolanguage came into existence around 1 million years ago. Trade networks expanded to 300 km around 100,000 years ago in Middle Stone Age Africa, and by 35,000 years ago transfer distances had increased to as much as 800 km. These dates indicate some sort of language breakthrough around 100 ka and again at 35 ka. Interestingly, Neanderthal trade networks never extended beyond 75 km. They likely spoke a protolanguage throughout their existence.

The evidence for symbolic behaviors roughly mirrors the dates outlined for trade networks. A few very early, disputed examples of symbolic behavior (eagle talons as jewelry, possible graphic marks) appear prior to H. sapiens, but clearly symbolic elements (shell beads, ochre “body paint,” graphic symbols, grave goods) don’t show up until about 100 ka in South Africa, and by 40 ka, the archaeological record shows an explosion of representative art, “Venus” fertility figurines, musical instruments, and other unquestionably symbolic artifacts.

Finally, the phenomenon of “globularity” appears in the sapiens lineage around 100,000 years ago. One distinguishing feature of Neanderthal is that it could be described as a large-brained/large-faced species. H. sapiens, by comparison, has a relatively small face, a feature found in a fossilized skull from Jebel Irhoud, Morocco. Dated around 300 ka, the skull initially puzzled scientists, who were unsure how to classify it. The complicating fact was that the skull was elongated, like Neanderthal and every previous hominin, while that of modern sapiens is shaped like a globe. Both Neanderthal and sapiens infants are born with nearly identical elongated braincases, but in the first year of life, the rapid growth of the modern human infant’s cerebellum, parietal lobe, and frontal pole reshapes the skull into our distinctive globular pattern. 

Although brain volume of the Jebel Irhoud fossil fell within the range of present-day humans, “brain shape evolved gradually within the H. sapiens lineage, reaching present-day human variation between about 100,000 and 35,000 years ago. This process . . . paralleled the emergence of behavioral modernity as seen from the archeological record” (Simon Neubauer, Jean-Jacques Hublin, and Philipp Gunz, “The Evolution of Modern Human Brain Shape,” Science Advances 4, no. 1 [2018]). As one of the early researchers into globularity put it, this is the “language-ready brain.” A computational analysis of the brains of modern H. sapiens and Neanderthal found that Neanderthals had smaller cerebellar hemispheres than we do. “Although both species have similar total brain volumes, a globular brain confers distinct advantages: Larger cerebellar hemispheres were related to higher cognitive and social functions including executive functions, language processing and episodic and working memory capacity” (Takanori Kochiyama et al., “Reconstructing the Neanderthal Brain Using Computational Anatomy,” Scientific Reports 8, no. 1 [2018]: 1–9).

In the evidence above, note the recurring dates of 100 ka and 35 ka. Trade networks, symbolic behaviors, and the globular brain are all associated with those dates. From these converging lines of evidence, it seems protolanguage began around 1 million years ago and continued to evolve until a breakthrough to “modern language” and symbolism around 100 ka. Language and symbolicity continued to co-evolve with the globular brain to bring us to “full modernity” about 60,000 years later.

On top of everything else, the same process granted humans the ability to fully share our thoughts and emotions with another person — a type of communication we learned to call “love.” Intention-reading, which Michael Tomasello credits with providing the evolutionary motivation to speak, involves not just a shared frame of reference (“Look at that beautiful sunset . . . .”), but an inborn instinct to share our psychological state with others.

What is the significance of these developments? Simply, for the “fall” to occur, humanity had to reach a certain level of sophistication in language and symbolic thought. Mature human morality is rooted in our capacities to symbolize actions and generalize them to an abstract category. Cognitive neuroscientist Peter Tse explains, “The birth of symbolic thought gave rise to the possibility of true morality and immorality, of good and evil. Once acts became symbolized, they could now stand for, and be instances of, abstract classes of action such as good, evil, right, or wrong.”

My rough estimate of the timeframe for the “fall” is 65 ka, give or take 10,000 years. This proposal lines up with certain key dates. It falls just prior to the “Out of Africa” migration; it hits the midpoint of the process of globularity, the expansion of trade networks, and the use of symbols; and it precedes the proliferation of representative art, “religious” symbolism, and novel technologies (behavioral modernity) after 45 ka.

The nature of the “fall” 

Traits tend to become fixed in a small population. Prior to the “Out of Africa” migration, early humans lived in groups of 50–100, and the total population of 20,000 or so was geographically dispersed. Nevertheless, we know they traded with neighboring groups, who traded with neighboring groups, and so on. It’s not hard to envision sapiens in Africa and the Levant connected in a loose exchange of goods, technology, language, and culture. To state the obvious, they also likely exchanged brides as signs of kinship and good will. Just as globularity came to characterize the sapiens population, other behavioral traits related to language and morality (crucial aspects of culture) would spread through the small population and become universal. Thus, a sub-group within a population could achieve a language breakthrough that allowed a new way of thinking metaphorically using abstract language, which quickly spread to their neighbors, who spread it to their neighbors, etc. The transmission of sinfulness only makes sense in such a cultural model.

Logically, one could point to a “first” morally culpable sinner among a population, but that’s not much different from saying that an individual within a population had the first mutation that eventually became fixed within the population. The mutation still must spread to the rest to have an impact and become meaningful. The “fall” wasn’t immediate and simultaneous, just as the man wasn’t rendered guilty as soon as the woman ate of the fruit. There was a period of time, who knows how long, when humans were just beginning to think metaphorically and abstractly about the codes of behavior they’d inherited from previous generations. One can call this a “probationary” period, or just the fuzzy demarcation between maturity and immaturity. It’s not hard to understand. We’re intimately familiar with the process. Children follow the same pattern. Their brain and language development reaches a point where they’re capable of metaphoric thought, usually between the ages of 8 and 10, but that breakthrough still takes a few years to impact their moral reasoning.

Moral maturity is the capacity to freely choose between evil and good, based on past experience of making choices and experiencing consequences, but moral guilt (the point of the Garden story) appears when a mature person is faced with a hard decision and makes a selfish choice. Despite the fact that all have walked the same path, every morally responsible bad decision, from the first to the last, was a unique decision, depending on the person and the circumstance. Choice always existed, both individually and corporately, because the creation of morally mature individuals requires them to make morally significant choices. It’s a form of training, one could say. Cooperation and selfishness grew side-by-side in human evolution, just like the wheat and the tares. God brought humanity, and every individual, to his desired goal of mature decision-making, and outside of revealing himself and coercing everyone to choose the good, he did everything possible to prime us to reject evil. (See Isaiah’s parable of the vineyard.) The freedom to choose evil is built into our constitution, culture, and personal experience, and human history has overwhelmingly demonstrated that sooner or later, we choose our own way. God can judge us because his good creation granted us the experience and ultimately the freedom to choose. Even if the choice seemed predictable (or even inevitable) from the outside, it was nevertheless real and consequential for each and every individual, as Kierkegaard pointed out.

If human biological evolution (globularity) and cultural development (linguistic, moral) brought the small population of sapiens in Africa to the point of moral maturity, then a universal fall that quickly engulfed all of them isn’t a logical leap. It’s the next logical step. Once the first generation(s) of humans became capable of morally mature reasoning, what follows is a truly culpable choice between good and evil. That natural progression is as true of humanity as it is of every modern child. Likewise, if God intended to make a creature capable of freely choosing to love both himself and others, and capable of freely choosing to do good rather than evil, then the evolutionary pathway that we observe seems the obvious cost of creating such a being. 

The woman’s fall. Why are we all sinners?

The introduction of the serpent — the craftiest of YHWH God’s creatures — abruptly changes the direction of the narrative in Genesis 3. The temptation the snake represents is threefold: First, it questions the “rightness” of the command; second, it denies the consequences of disobedience; third, it questions the motives of the lawgiver. As the man and the woman are archetypes, so is their temptation and fall. 

In his 1932 classic, The Moral Judgment of the Child, Jean Piaget studied children of various ages playing games and concluded that the younger ones regarded rules “as sacred and untouchable, emanating from adults and lasting forever. Every suggested alteration strikes the child as a transgression.” This matches quite well the attitude of many interpreters toward the command not to eat from the Tree of Knowledge. The first humans should have accepted it without question, obeyed it and, presumably, lived forever in paradise. But is unquestioned acceptance of the rule truly a mature moral choice? I’d suggest that condition belongs to the state of childhood.

Updating Piaget’s work, developmental psychologist William Kay observed, “A young child is clearly controlled by authoritarian considerations, while an adolescent is capable of applying personal moral principles. The two moralities are not only clearly distinct but can be located one at the beginning and the other at the end of a process of moral maturation.” In what could be called the first instance of peer pressure, the serpent introduced doubt from the outside, and the woman determined her personal moral principles vis-à-vis the command. She applied her own moral judgment, a phenomenon that begins in adolescence and continues throughout the rest of life, and weighed whether the rule was really binding and whether it was contrary to her own self-interest (the fruit was “good for food and pleasing to the eye, and also desirable for gaining wisdom”). The universal nature of temptation and sin appears at the end of a process of moral maturation that all children undergo. In the end, the adolescent applies her own moral principles, considers her self-interest, and declares her independence, albeit prematurely. In the second instance of peer pressure, the man takes the fruit from the woman and eats it without apparent thought. If everyone else is doing it, me too! It’s an image of the cultural transmission of sin.

Although the Western church traditionally has viewed the first humans as adults at their creation, the nature of their disobedience better fits Irenaeus’ conception of them as children. The “fall” as presented in Genesis 2–3 perfectly replicates the moral transition from childhood to adolescence. Another creation text, Proverbs 8, says “the fear of the Lord” is to hate evil. The next lines of the poem provide examples of what that means: “I hate arrogant pride and the evil way and perverse speech.” In his commentary on the verse, Bruce Waltke notes that pride is “a self-confident attitude that throws off God’s rule to pursue selfish interests.” What happens when a child begins to question the rules, even if they are only culturally learned, as well as the motives of the rule-givers? Such is the thought process behind every first “morally responsible” sin—and the archetypal “original sin” of the first humans.

The important thing about the woman’s example is not simply that she broke a command. Disobedience wasn’t the original sin. The “sin before the sin” was the thought process that led her astray, and since she is an archetype, the woman represents all of us, both collectively and individually. We are all “Eve” in the passage.

The command not to eat from the Tree of Knowledge was a metaphor for the law. What specific commandment did Israel as a whole and every individual Israelite violate? All of them. Individually, Eve and the rest of us violate what we understand of the law at a young age, and that becomes a habitual behavior. Collectively, everyone is guilty of neglecting what knowledge we have of God and propagating unjust traditions and cultural values.

Conceptual metaphors are built into the fabric of human thought as tools to elucidate complex concepts. Scientists routinely employ the metaphor of childhood development to explain the co-evolution of the brain, language, and morality. The comparison is apt; the collective human journey virtually parallels the individual journey of every human. Genesis 2–3 employs the conceptual metaphor of moral knowledge as a “coming of age” and applies it to the man and the woman as literary archetypes in a figurative text. Their symbolic journey from childhood innocence to moral maturity matches the trajectory of both human evolution and every typical child’s moral development.

The conceptual metaphor of maturity resurfaces throughout Scripture, but it becomes especially prominent in the New Testament, where teleios “describes both the consummated reality (the ‘perfect’ or ‘complete’) and lives lived into that eschatological hope and energized by its partial realization (the ‘mature’). . . . The new creation is the advent of the ‘complete’ (to teleion) and . . . lives oriented to this coming reality are ‘mature’” (Miroslav Volf and Matthew Croasmun, For the Life of the World: Theology That Makes a Difference [Grand Rapids: Brazos Press, 2019], 153–54). It’s also worth noting that Jesus routinely called his disciples “children.” Has the choice of metaphor in Genesis 2–3 primed us for an evolutionary understanding of human origins?

Regarding the “fall,” when we realize that the state of “innocence” of the immature human race, just like the immature human being, was one of ignorance instead of perfection, it’s easy to understand how early humanity, like children, could commit sins of ignorance. It’s also understandable how God could overlook such offenses without violating his own justice. Even human societies — imperfect as they are — don’t hold toddlers accountable for breaking the law. The man and the woman were never perfect, and neither were we.
