Professional scientists, "lay people," and the truth

I recently heard an unusual, horrified outburst: "You'd let lay people administer a scientific experiment?" It got me thinking about ivory towers and the supposedly unassailable authority that the media often assigns to science. PhDs and MDs are a smart group of people. They peer into incredibly complicated mechanisms, try to explain the nearly invisible, and hunt down vital defenses against devastating illnesses. Science is increasingly specialized, and scientists certainly deserve to be respected for their intelligence, dedication, and insight. Scientists have the right to be proud, but not to be too proud to fail--a large and necessary part of scientific progress is failure. Without failure, many important discoveries might never be made. A failure in science is just as valuable as a successful end product (though some frustrated researchers might disagree).

It doesn't matter who makes a discovery if the method is sound and the results can be reproduced, again and again, under peer review. The same applies to the distinction between professional scientists and amateurs: science is agnostic, and a result is a result no matter who discovers it. It only matters how. Gregor Mendel was educated (and incredibly patient), but he lived in a time when there was no professional "accreditation" for scientists. He was just a curious friar who wanted to figure things out. The teenagers who win the annual Siemens Foundation Competition--seventeen-year-old Angela Zhang with her cancer stem cell-destroying nanoparticle, or Joshua Kubiak with his molecular scaffold that could make mounting chemicals used in medicine more efficient--do not have PhDs, though they were mentored by PhDs. Certainly, someone trained in precise lab techniques and unbiased research design is more likely to produce a remarkable discovery or result (or failure), but a bright high school student can still strive to do the same.

Science is always striving towards the truth, but we'll never know if we've reached it. Science assumes there is an absolute truth, and each rigorous, empirical, unbiased (or as unbiased as possible) result gets us a little closer to it. If we're wrong, we revise our operating principles, and it's perfectly fine to change our minds because the empirical evidence has shown otherwise. Science has authority not because there are cartoon characters in lab coats titrating green, bubbling fluid between Erlenmeyer flasks, and not <i>merely</i> because the discoverer has a PhD.

When the media overemphasizes the authority of science behind a discovery, it creates assumptions that anti-science groups rely upon to try to discredit the importance of science (and scientific results) in education and debate. That's what I find dangerous about media reports that emphasize that scientists were the ones who made the discovery, as if science makes the finding (often misinterpreted) super-true. When this authority is turned against the discipline itself--for example, when people argue that science shouldn't be trusted because scientists get things wrong--it's because the media has misused "science" to mean truth and authority, when in fact science is often wrong, and sometimes needs to be wrong. (I'd rather not link to these kinds of sites and give them more traffic, but I am referring to, though not exclusively, the kinds of creationist arguments used against teaching evolution in schools.)

By privileging science as an amorphous, unassailable authority, the media creates a mystery around the discipline that discourages people from entering or trying it. Anyone can do science, and that's what's amazing about peer review: everyone learns together (ideally, instead of sabotaging a competing lab or making up your own data). This manufactured mystery intimidates curious thinkers who lack scientific backgrounds, when they should be encouraged to pursue their passions. And sadly, when science is criticized for being "fallible" and less than absolute, these thinkers will be even more discouraged from asking the questions that science could have tried to answer.

Science isn't perfect, because scientists are human: egos get in the way, and stubbornness about a beloved hypothesis can lead to interdepartmental fights or tenure denial. That's precisely why science strives to isolate human bias from experiments, and to compensate for our failings by recognizing our own fallibility.

A result is a result. The degree of truth it contains can really only be measured by rigorous peer review using empirically obtained evidence.

Separation of Science and Religion

Science is agnostic. That's one of the things I love about it. Regardless of your creed or personal faith, science's one true tenet is the truth, and the pursuit of that truth. No matter how painful that truth, you can be assured that, if you were rigorous, thorough, logical, and demanding, it is still the best and truest understanding you could hope to achieve to the best of your abilities. It's been through the fire of variable elimination, of blinding against subjective bias, of attempting to prove the opposite of the intended result. It has no opinion, it has no sympathy, and it has no permanency.

Truth is only truth for now, as we know it. If classical gravity were 'disproved' tomorrow, we would still accept the revision--as we did when quantum mechanics upended classical physics. That's scientific truth. It's not always easy or sensible to accept, but it is the cold, hard, gleaming truth. I talked about politics at a social mixer today. Nothing alarming happened, but the increasing politicization of knowledge, and the recontextualizing of science as a matter of faith, came up. How do we convince climate change deniers and intelligent designers of the truth?

I argue that science and faith can't meet in a meaningful way on their own grounds. They operate on fundamentally different principles, with different semantic meanings for the same things. For a scientist, truth is merely the best version of what we can tell based on thousands of reproducible experiments and tests, of hard self-questioning and denial of personal bias. In religion, truth is fundamentally faith-based. For many believers, truth is what their religious text tells them, or their religious authority, or what they feel to be true deeply in their heart of hearts. Trying to reconcile these two truths ignores that they can never meet: one is a religious truth and the other is a scientific truth. In the face of scientific evidence, religious truth is unassailable unless someone's faith changes in some way. This is confirmation bias at its most resilient.

This is no new thing under the sun: Stephen Jay Gould's non-overlapping magisteria is another way of putting it. In opposition, Richard Dawkins has been very vocal about John William Draper's conflict thesis, which proposes that religion will always challenge new scientific ideas and produce social conflict. But wouldn't it be better to let religion and science each go their own way, as fundamentally irreconcilable and isolated fields? If we can't get along, we can at least recognize in each other a shared wonder and love of the universe.

A Mosaic of Human(?) Evolution: Australopithecus sediba's Challenges

The anthropology community has been buzzing recently about the discovery of a new species, Australopithecus sediba. Is it really an ancestor of modern-day humans? Does it have a human-like brain or an ape-like brain? What do its humanoid hands but ape-like feet mean for the evolution of walking? We may be arguing about these issues for a while, but the completeness of the skeletons and their distinctive blend of early and more modern humanoid features set the find on par with Lucy (Australopithecus afarensis) in importance. In a field where even many critical discoveries revolve around no more than a piece of jaw or a corner of a hip bone, this is a prize opportunity to learn more about how we developed the features that set humans apart from chimps and gorillas.

For the longest time, the world has known Lucy, the star of the paleoanthropological world, as our ancestor from about 3 million years ago. Despite many interesting findings since her fateful discovery, whether due to gaps in the fossil record from incomplete skeletons or theoretical arguments about our family tree, we haven’t been able to draw a clear timeline of what led from Lucy to the first Homo habilis, the “handyman” that led to our own (Homo) sapiens. This all changed when a dig in South Africa produced four fossil skeletons of stunning completeness. They are now known as Australopithecus sediba, dated to a little less than 2 million years ago. The star of the show so far has been MH1, a juvenile male with a skull so complete that scientists have constructed a virtual model of its brain.

How can we even know what kind of brain it had if all we get is bone? Scientists used a CT scan of the male skull to create a model of the interior of the cranium. This endocast is constructed from many X-ray scans, rotated 360 degrees around a central point, so that each scan is like a cross-sectional slice of the skull. By digitally modeling the combination of those slices, scientists can deduce what type of brain, and therefore what mental capacity, it had. Based on that, we can then try to predict whether it used tools, and even speculate on its social organization and its capacity for planning and self-awareness.
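In spirit, the reconstruction is simple: stack the cross-sectional slices into a volume and measure the space they enclose. As a toy illustration only (this is not the software used for MH1; the function name, grid sizes, and voxel size are all made up), here is how an endocranial volume could be estimated by counting "cavity" voxels across stacked slices:

```python
# Toy sketch: estimate endocranial volume from stacked CT slices.
# Each slice is a 2D grid of booleans marking voxels inside the cranial cavity.
# Real endocast reconstruction involves segmentation, smoothing, and far finer
# resolution; this only illustrates the stack-the-slices-and-sum idea.

def endocast_volume_cc(slices, voxel_mm3=1.0):
    """Sum marked voxels across all slices and convert mm^3 to cc (1 cc = 1000 mm^3)."""
    voxels = sum(cell for grid in slices for row in grid for cell in row)
    return voxels * voxel_mm3 / 1000.0

# A fake "cavity": 40 slices, each with a 50x50 block of cavity voxels,
# at 5 mm^3 per voxel -> 40 * 2500 * 5 = 500,000 mm^3 = 500 cc.
slice_grid = [[True] * 50 for _ in range(50)]
scan = [slice_grid] * 40
print(endocast_volume_cc(scan, voxel_mm3=5.0))  # 500.0
```

Once the cavity is modeled, the surface of that virtual cast preserves impressions of the brain's lobes, which is what lets researchers talk about frontal-lobe shape and not just raw volume.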

A. sediba’s brain challenges what we thought we understood about the evolution of childbirth, bipedalism, and tool use. Some scientists are even claiming that the four fossils aren’t a separate species at all. Anthropologists are arguing not just about where to place A. sediba in our family tree, but about old and established theories about human evolution that have dominated the field for decades.

A Challenge to Childbirth

We originally thought that it was our big brains that caused our pelvises to evolve the way they did. After all, one major constraint on brain size, and therefore head size, is childbirth. How could our enormous brains fit through our tiny pelvises? We compensated for that with severely delayed development: compared to other mammals, human babies are basically born premature. When sheep are born, they can stand up within minutes. For a human baby, that process can sometimes take twelve months. We grow our brains, and the rest of ourselves, outside the womb, whereas other mammals emerge nearly ready-made. As a result, the easiest explanation would be that our pelvis has also rotated and reshaped to accommodate the wider birth canal.

A. sediba has thrown a wrench into this theory: it has a small head, but its pelvis is already rotated the way a Homo pelvis would be, even though its narrowness still echoes Lucy’s. If we look closely at A. sediba’s pelvis and skull, we find that while the pelvis is a blend of the rotated hominin pelvis and the narrower australopithecine pelvis, its brain measures only 420 cubic centimeters (cc). To put that into context, a modern human brain averages around 1,350 cc, Lucy (A. afarensis) had a brain of around 400 cc, and chimpanzee brains average less than 500 cc. This means that even though A. sediba’s brain volume is in the same range as Lucy’s and a chimpanzee’s, its pelvis (and the rest of it, as we shall later see) was already beginning to change. So if A. sediba didn’t have a big brain to reshape its pelvis, what did it have instead?

A Challenge to Bipedalism

The answer to that question lies outside of the cranium, and requires us to think about how bipedal A. sediba was compared to Lucy or to a modern human. One feature that sets us “above” our ancestors is our ability to rise up and move about on two legs alone. Humans are completely bipedal; for the most part, we do not suddenly decide to switch to all fours in the middle of a meeting, or swing from the pipes on a train platform because it’s easier than walking to the train. Chimpanzees primarily travel on all fours, and although they can occasionally walk around on their hind legs, knuckle walking is much easier for them than for us. Human and ape skeletal features have evolved to suit their respective locomotor lifestyles, but we see something different when we look at fossils from the transition between the ape-like australopithecines and modern humans.

Bipedalism requires changes in the shoulder blade, the pelvis, the legs, and the feet. Even the neck and spine are involved in upright mobility. Although chimpanzees and humans only share a common ancestor and are not related in any direct line of descent, it’s still useful to compare the chimpanzee’s ape features with our own. Our arms are relatively short compared to our legs, but apes and australopithecines have long upper limbs, with large joints to handle the weight they share with the rear limbs. The thickness and strength of the arm, leg, and wrist bones adjust in humans and in chimps based on how much weight they habitually need to support. A chimpanzee’s shoulder blade is rotated so it can swing between branches, and humans still retain some of that flexible shoulder joint. The thickness of the spinal vertebrae and the orientation of the pelvis both shift to accommodate the suddenly vertical load that humans endure in order to lift our heads above the crowd of ape-like relatives.

When we take all these comparisons and apply them to Australopithecus sediba, it’s as if we had tripped along the way and tossed all these features together. The sides of A. sediba’s pelvis are more vertical, as you would expect of Lucy, and the size is more like Lucy’s, but the shape and angle of the pelvis where it sits in the body are more like a human’s. It also has the strong, long arms of a chimpanzee or an australopithecine, and the large joints of someone used to supporting their weight on their arms. Parts of the hip, knee, and ankle look like they would be best for bipedalism, but the foot looks much more like an ape’s. Overall, there is a mix of tree-swinging, knuckle-walking australopithecine and large, bipedal hominin, with each individually distinct feature contributing to a confusing bigger picture.

There is no change in brain size or head size that could explain the change in pelvis, but there are changes in the rest of the body that are related to a newfound reliance on bipedalism, rather than swinging from branch to branch or knuckle walking over the ground. These differences happen in an otherwise australopithecine body carrying an australopithecine-sized brain. The old theory of childbirth changing our pelvises may just be untrue, and A. sediba might be the perfect exception that disproves the rule. It might just be possible that bipedalism, and not babies with bigger brains, is the cause for the signature changes in our pelvis that mark the evolution from Australopithecus to Homo. Then again, as many scientists have pointed out, why can’t it be both? The jury’s still out and the papers are still being written.

A Challenge to Tool Use

If walking came before bigger brains, does that also mean it came before smarter brains? The precise origins of stone tools are murky, and even if we see evidence of tool use 3 million years ago, that still doesn’t tell us how we came up with the idea of creating knives or axes out of bits of boulders. Whether a stone broke into a chopping blade by accident, or a few australopithecines started pounding rocks together out of sheer boredom at night (a wonderful image from an old professor of mine), the invention of tools had profound effects on human ancestral physiology.

The stunning endocast created for A. sediba, combined with skeletal evidence from its hands, can tell us a great deal about the changes in brain capacity, diet, and maybe even social complexity as it developed towards human society.

We associate the brain’s frontal lobe with planning, thinking, emotions, and other higher functions. Your frontal lobe stops you from saying something rude, helps you decide not to steal, and recognizes that surprise from a practical joke isn’t a signal for your body to go into survival mode. Many scientists think that planning ahead is a very human thing, and specifically, planning several steps ahead with many other humans. Chimpanzees are known to get a bunch of friends together for precise attacks against other chimpanzees. Baby baboons will fake an injury to get more food. Other primates can be just as devious as we are, but no chimpanzee has ever led a concerted and sustained effort to conduct siege warfare or to coordinate a commodities trading market in bananas. The simplified answer is that they do not have the same frontal lobe organization that we do.

Compared to other australopiths, A. sediba’s brain isn’t remarkable except for its frontal lobes. Like the blend of australopith and hominin features we see in its skeleton, its brain is overall australopithecine, but its frontal lobes carry the shadows of humans to come. Why the change? Its australopithecine cousins were also bipedal, but their brains don’t harbor these glimmerings of the future. They’re also known for tool use, as early as 3 million years ago, but their brains don’t show this kind of neural reorganization.

We might be lost at this point, if not for A. sediba’s hands and teeth. Its hands don’t look completely like ours, and they probably weren’t as good at precision grip as ours are. But remember that A. sediba’s hands were occasionally freed to do other things while it walked around bipedally. Its hands could grasp more than tree branches, at least, and we see that in its human-like thumb-to-finger proportions. It’s as if an almost-human hand were grafted onto an australopithecine arm.

Another hint comes to us in the form of the juvenile male’s molars. Inside its vertical, human-like face, the second molars had already developed. Their arrangement is australopithecine, but their size is closer to Homo’s. The simple supposition is that what A. sediba was eating had changed how large its teeth needed to be. If its diet changed, then the way it gathered or reached those foods had changed too. Bipedalism meant it could see farther in non-wooded areas, and its improved finger dexterity means it might belong to the same tool-making tradition we share with the early makers of stone “shovels.”

Whatever its brain, teeth, and hands can tell us about its life, we know that, evolutionarily speaking, A. sediba’s brain organization was moving towards Homo before its brain size made that shift.

A Mosaic of Evolution

The arguments surrounding A. sediba are enormous, complicated, and critical for our understanding of human evolution. Some scientists argue that it shouldn’t be classified as Australopithecus, or that it isn’t a new species at all. The latter would mean no new species and no changes in existing theory; just an expansion of the range of features we assign to known species. Even if it is a new species, Australopithecus sediba might not be related to us; instead, it could be an example of convergent evolution, another organism responding to similar environmental pressures by evolving in a similar way. It’s hard to say; there aren’t enough skeletons to let us know for certain. As with any new discovery, there are bound to be hundreds of new theories, new ideas, and new papers arguing new sides to be taken.

Fossil hominins are an elite and lonely crowd, and their rarity makes every new discovery the next potential Lucy. As exciting and puzzling as A. sediba’s skeleton is, each individual piece of bone feeds a hodgepodge of theories, ideas, and histories. However it ends up being classified, the fact remains that paleoanthropologists carefully rescued four skeletons from the darkness of history. In the future, there will hopefully be more like A. sediba, of any species, to transform, challenge, and energize our understanding of our origins and of what it means to be Homo sapiens.

References

  • Berger et al. Australopithecus sediba: a new species of Homo-like australopith from South Africa. Science 328, 195 (2010).
  • Carlson et al. The endocast of MH1, Australopithecus sediba. Science 333, 1402 (2011).
  • Cartwright, J. Evolution and Human Behavior. Palgrave, Great Britain (2000).
  • Gibbons, A. Skeletons present an exquisite paleo-puzzle. Science 333, 1370 (2011).
  • Kivell et al. Australopithecus sediba hand demonstrates mosaic evolution of locomotor and manipulative abilities. Science 333, 1411 (2011).
  • Zipfel et al. The foot and ankle of Australopithecus sediba. Science 333, 1417 (2011).

Patrilineal and matrilineal tribes illustrate that the gender gap in spatial abilities depends on females' role in society.

This post responds to "Gender gap in spatial abilities depends on females' role in society" by John Timmer, reporting on "Nurture affects gender differences in spatial abilities" by M. Hoffman, U. Gneezy, and J. A. List. Two closely related tribes in Northeast India share many qualities but differ in one respect: one is patrilineal and the other is matrilineal. Three American researchers decided to see if they could use these research conditions to answer some questions about the innate biological differences between men and women. There couldn't be a more perfect situation of isolated, biologically similar but culturally disparate polities. The first thing I thought when I saw the article was, "They've found the perfect research situation."

1. Single salient characteristic, all other physical points being equal. To have two isolated groups of people who share lifestyle (stresses, labor patterns, etc.), diet, proximity, and genetics, yet differ primarily in culture, is a research designer's dream. Often you have to account for external factors that may cause you to mistake one effect for another. In trying to answer a nature-vs.-nurture question like this, you want to get all your variables down to as discrete a level as possible. Unlike other studies that deliberately take into account multiple lines of evidence, from culture to economic status to physical behavior to diet, here we have a question that requires us to make sure culture is the only variable (alternatively, biology could be the only variable, but that's not the case here).

2. Ability to analyze or account for variation within a population. To have multiple villages within these two (culturally) isolated groups, so as to get a decent sample size and intra-polity variation, is fantastic. It means the researchers can better understand how one group functions absent any expectations generated by their knowledge of the other group, because they can look at a group of substantial size as a whole.

3. Decent sample size. With 1,300 participants, the sample is pretty good for statistical purposes and for contributing to the body of scientific work focusing on gender and reasoning. John Timmer is absolutely right when he says the "abilities of the tribes in Northeast India are only ever going to provide a small snapshot of the full range of human diversity," because that's what science often is: the accretion of studies depositing their findings year after year until (ideally) a decent and more holistic understanding is born.

4. Objective, quantifiable test that does not rely on self-reporting. One of the things I always liked about archaeology was that all my research subjects were dead; much of any error came from us, because a 2.7 C/N ratio was a 2.7 C/N ratio, but only the researcher could decide whether a pelvic bone was remodeled enough to make the subject 45 years old rather than 25. One of the things I disliked about psychology (and some diet studies) was the reliance on human subjects' self-reporting. The ideal way to cut down on self-reporting or result-expectation errors is a double- or even triple-blind design, where the research assistants don't even know what they're testing for. The test here doesn't have to worry about that; the only confound I can even think of is that the incentive offered might influence what kinds of people you end up getting for the test, but I'm willing to gloss over that because of the sample size and the isolation of variables.
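Point 3's "decent sample size" can be made concrete: the standard error of a group's mean shrinks with the square root of the sample size. A minimal sketch (the standard deviation and group sizes below are made up for illustration; the study's actual measurements differ):

```python
import math

# The uncertainty in an estimated group mean falls off as 1/sqrt(n), so
# hundreds of participants per tribe pin down each group's average far
# more tightly than a few dozen would.
def standard_error(sd, n):
    """Standard error of the mean for a sample of size n with standard deviation sd."""
    return sd / math.sqrt(n)

# Hypothetical puzzle-solving times with a standard deviation of 60 seconds:
print(standard_error(60, 25))   # 12.0 -> ~12 s of uncertainty with 25 people
print(standard_error(60, 650))  # ~2.35 s with ~650 people per tribe
```

This is why a gap of even a few tens of seconds between two groups of this size can be statistically meaningful, where the same gap in a 25-person pilot study could easily be noise.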

The researchers accounted for other cultural factors as well, such as education and property ownership, but my first reaction to hearing the research conditions was, "Jackpot!"