What is “Science” Anyway?

Ever wonder why the things people claim “science says” can be completely contradictory? Science is a tool that, when carefully wielded, improves our understanding of the world; however, “science” is often personified as an authority figure, and scientific evidence is often misused and misinterpreted. The confusion seems to stem from a false expectation regarding the purpose of the scientific method and the power of its data. Follow along as we walk through the scientific method, different types of scientific evidence, and strategies for identifying credible sources of scientific information. This article serves as a primer to improve your scientific literacy and equip you to navigate a complicated world full of conflicting scientific claims.

What is “science”?

Science is often spoken of as though it were an irrefutable authority, but in reality, it is a systematic process by which we gather evidence to better understand the world. An entire body of scientific literature, let alone a single study, does not provide a comprehensive understanding of the universe; rather, it is a means by which we reduce our uncertainty. This process is accomplished using the scientific method, which has several steps. You may see these steps vary in number or wording, but the premise remains unchanged. The steps are:

  1. Make an observation about the world or previous scientific studies.

  2. Ask a question about the observation.

  3. Formulate a hypothesis. A good hypothesis is specific, consistent with current data, and falsifiable (i.e. it can be proven wrong).

  4. Conduct an experiment designed to test the hypothesis, controlling for as many other variables as possible.

  5. Analyze the data.

  6. Draw a conclusion.

  7. Communicate the results.

A single iteration of this process does not provide definitive knowledge; a single study can’t truly claim to “prove” anything. It provides evidence. To become more certain of a hypothesis, researchers test it in multiple ways and repeat those experiments. Due to the incremental, iterative nature of this process, knowledge is hard-won, and credible reporting of research and scientific data often carries a qualified, uncertain tone. Data from a single study suggest; they don’t prove.

Types of scientific studies

In step four of the scientific method as outlined above, experiments are conducted to test hypotheses. The design of a study and the manner in which it is executed significantly affect the credibility of the findings. Not all study designs are created equal. There are several major categories of scientific studies, each with its own strengths and weaknesses. We will walk through a handful of common study designs to learn the role that each plays in improving scientific understanding, specifically in the realm of human nutrition.

In vitro studies

In vitro (Latin for “within the glass”) studies are those conducted outside of a living organism. They are also often referred to as test tube studies, as they are frequently conducted in a test tube or perhaps a Petri dish. They are beneficial because they allow us to observe and understand mechanisms in a way other study designs do not. However, the applicability of the findings is limited (after all, our bodies are far more complex than a test tube, and nothing occurs in isolation) unless corroborated by other types of evidence. Such studies also aid in the development and refinement of future hypotheses; they help us ask more informed questions for future studies.

To help us understand this better, let’s look at a real-life topic with which many are familiar: seed oils. A full discussion of seed oils is beyond the scope of this article; however, we will use the topic to provide some tangible examples of study design.

One in vitro study examined the toxicity of linoleic acid (the primary fatty acid of concern in seed oils) compared with oleic acid (the primary fatty acid in olive oil). Specifically, it looked at the impacts of these fatty acids on human lymphocytes, a type of immune cell. The study found that linoleic acid exhibited greater toxicity than oleic acid. Although this may prompt further investigation into the health and safety of linoleic acid, we cannot make accurate statements about the impacts of these oils on human health from results like this alone.

Animal model studies

Animal model studies are a form of in vivo (“in the living”) study. They allow researchers to test a hypothesis in an entire living organism, rather than just one portion of an organism (like the immune cell from the in vitro study) in isolation. This is an improvement; however, rats, mice, and other laboratory animals are not humans. We eat differently, and our physiologies are different, even if we share some biological traits. In vivo animal studies are a great way to gather additional data in a way that is easier to control, more affordable, and has fewer ethical implications than an equivalent human trial. However, data from these studies are still insufficient to provide informed recommendations on human nutrition.

Back to our seed oil example. One animal study examined the impact of linoleic acid on a specific cellular signaling pathway in rats with a specific subtype of breast cancer. The researchers found an association between linoleic acid intake and the activation of this cancer-promoting cellular process in the rats. The purpose of this study was to gain an understanding of a cellular mechanism that could inform dietary interventions for patients with this aggressive subtype of breast cancer. This study alone cannot say anything definitive, but it could inform further research. Additionally, it would be a mistake (not one made by the researchers, but one typically made by media coverage of a study like this) to say that linoleic acid, omega-6 fats, or, more broadly, seed oils cause cancer in humans; however, those are exactly the kinds of headlines generated from animal studies in popular media. In order to corroborate a claim like “seed oils cause cancer in humans”, we need some human data! We’ll look at some classes of human research data next.

Observational studies

A very common human-subjects study design in nutrition research is the observational study. It is just what it sounds like: researchers observe a free-living population of individuals and collect data. Typically, in nutrition studies, information is collected regarding dietary intake, and health outcomes are then observed. There are different subtypes of research under the umbrella of observational studies. Two common types are cross-sectional and prospective. With a cross-sectional study, researchers look at a group of individuals, referred to as a “cohort”, at one snapshot in time. In contrast, a prospective observational study follows a cohort over time. Prospective studies typically give us higher-quality data. In nutrition research, we usually care about how dietary patterns affect health outcomes in the long term, and this type of study allows us to observe such changes in a population.

Many observational studies rely on self-reported dietary intake alone; however, biomarker data (actual measurements of a compound’s levels in the body) can sometimes be leveraged to produce even more compelling evidence. One such study looked at the linoleic acid content of stored fat tissue in study participants. Because the body cannot make linoleic acid (it all comes from the diet), this is a good proxy measurement of linoleic acid intake. Over 4600 participants were followed for a median of 21 years, and researchers analyzed the relationship between stored linoleic acid and all-cause mortality (i.e. death from any cause). They found that individuals with higher linoleic acid stores had a lower risk of dying during the study follow-up period.

Randomized Controlled Trials

A significant limitation of observational studies is that researchers cannot dictate an intervention or randomly assign individuals to groups. This hands-off approach makes observational studies easier and less expensive to conduct in large groups over long periods of time, but it does create some limitations in interpreting the data. The gold-standard study design addresses both of these shortcomings. Randomized controlled trials (RCTs) are interventional; researchers can truly test how changing one aspect of behavior affects a specific outcome variable. In this type of study, participants are recruited and then randomized to either an intervention or a control group. An outcome variable is then assessed over a specific period of time. When we compare the outcome variable between the two groups over the study duration, we get a more accurate picture of the effect of the intervention.

Another example: in a study of 67 adults with abdominal obesity, participants were randomly assigned to eat either a diet high in saturated fat (mainly from butter) or a diet high in omega-6 polyunsaturated fat (like those found in seed oils). The diets had the same caloric content and macronutrient profile (see how they controlled other variables?). Participants followed those diets for 10 weeks. Before and after the intervention, researchers measured participants’ liver fat content and biomarkers of metabolic health and inflammation. The study found that the group eating the omega-6-rich diet had a reduction in liver fat and modestly improved metabolic health, with no increase in inflammation, when compared to the group eating the diet higher in saturated fat.

Meta-Analyses

Although RCTs are the gold standard, there is something even better. Perhaps you noticed that the number of participants in the observational study example (>4600) was much larger than in the RCT example (67). These are just two examples, but the pattern is common: it is more difficult, more expensive, and less feasible to enroll a large number of participants in an RCT, and this often limits our ability to observe meaningful differences among groups. Additionally, one study alone cannot “prove” anything; evidence needs to be corroborated over multiple trials. Meta-analyses address both of these challenges by pooling data from multiple trials and performing a combined analysis. A meta-analysis can combine observational data or RCT data. These studies answer the question, “What does all the data on this subject say as a whole?” Therefore, they provide the best data from which to draw conclusions and make informed recommendations regarding human health.

One meta-analysis of RCTs examined the effect of replacing saturated fat with polyunsaturated fat (i.e. less butter and more seed oil) on coronary heart disease events. Researchers looked for all available studies that had examined this relationship with similar types of data. Pooling all the data into one combined analysis, they found that eating polyunsaturated fat in place of saturated fat reduced the risk of a coronary heart disease event by 20%.

How do you interpret contradicting data?

Looking back just at the example studies provided, you can see how news article headlines and Instagram reels could convey vastly different messages regarding what “science says” about something like seed oils. It could be considered “true” to say that high-linoleic-acid seed oils promote cancer growth; however, such a statement is vastly misleading and a gross overgeneralization. Most people want to know whether seed oil consumption at normal dietary levels is likely to increase their risk of developing cancer, and that is too large of a leap to make from the animal study cited above. So how are we to navigate a complicated social and popular media landscape of information?

One way to evaluate a scientific claim is to look at the type of study cited and whether other data align with the findings. The types of studies presented above were laid out in order of increasing quality. This concept is referred to as the hierarchy of evidence; although all types of evidence provide value, not all types of evidence are equal. Higher-quality evidence is better suited for making causal claims (i.e., if I do this, this happens as a result) and public health recommendations. So, if I come across an in vitro study and a randomized controlled trial that make contradictory claims, it would be more prudent to base decisions on the RCT findings.

A second way to determine what you should believe and act upon when you encounter a “science says” claim is to evaluate the individual conveying the information. When you’ve identified a trustworthy source, you can rely more on trust and less on digging into the scientific literature yourself. The next section lays out some “red flags” and “green flags” to watch for when evaluating a mouthpiece of “science.”

How do you know who you can trust?

First, we will look at how to identify to whom you should not listen.

The RED flags:

  • Logical inconsistency. When someone disparages a scientific study in one video because it was observational, but then cites observational studies when the findings support their ideology, they are being logically inconsistent. Observational studies always bring strengths and weaknesses to the table, and these should always be addressed, regardless of whether the findings support one’s beliefs.

  • Talks in absolutes. The more black and white someone makes a topic seem, the less likely it is that they are addressing the nuance required to accurately represent scientific data. Remember, science rarely “proves” things; more typically, scientific studies “demonstrate” or “support” a hypothesis. The elusive “proof” is hard-earned over many studies and many years. Don’t confuse certainty with credibility. They rarely coexist!

  • Flashes a study on the screen without explanation. Social media is riddled with this: a person makes a claim, and a study headline pops up on the screen as if to say, “I speak for science”. This technique can be, and often is, used while grossly misrepresenting the science. Notably, this is often done without malintent. Fairly evaluating scientific data is a learned skill.

  • Never changes their position. The scientific community is always discovering new things, and there are many gaps in our knowledge, especially surrounding nutrition. The natural result is that as new data emerge, true scientists adjust their stance. If someone never budges on their beliefs or recommendations in the face of new evidence, be skeptical of their scientific prowess.

  • Relies on narrative data. If, when confronted with scientific data that counters their belief, an individual cites their own experience or the experience of someone else as their only refutation, they are not relying on the hierarchy of scientific evidence. Narrative data is real, can be helpful, and is often uniquely compelling or inspiring, but it is very low on the hierarchy and is never sufficient to make generalized recommendations.

You’ve probably seen that full bag of tricks highlighted on your Instagram feed, among your friends, or even in the news media. So is there anyone you can trust? Yes, there is. Here are some tips for spotting them.

The GREEN flags:

  • Explains research studies, including their limitations. Someone who explains the type of study they are citing, how the study was conducted, what the findings were, and, most critically, what the limitations were is often a credible source. You can tell when someone is taking the time to explain the science and how to interpret it. Every study has limitations, and when someone addresses a study’s limitations (even when the findings support their currently held beliefs), you can usually trust that they are fairly evaluating the data.

  • Changes their mind. This is probably the hardest one to wrap one’s mind around. Often, changing one’s mind is interpreted as an admission of error. But engaging in science requires a level of humility. Recommendations are always based on the best available evidence, and we are always getting new evidence. It’s how the scientific process works! If someone is unwilling to change their opinion, they are likely just cherry-picking evidence to support their preexisting ideology.

  • Talks with nuance. When you hear two individuals debating a topic (this is quite common in the world of popular nutrition), you may hear one person speaking very definitively and another person sounding less certain. Human nature often compels us to trust the person who sounds certain; however, this will likely lead you to trust the wrong person in a scientific debate. The all-powerful “science” is not so certain. An individual using statements like “data suggest…”, “evidence supports…”, or “under x conditions, y occurred” is more likely to be fairly representing the data than someone saying “new study proves…”. What sounds like uncertainty actually reflects an understanding of the scientific process.


It is a confusing world out there, chock-full of contradictory scientific claims. A little scientific literacy goes a long way toward making you a more skeptical, discerning consumer of science. Using the information and tools provided above, go forth and see what the “science says” for yourself with less frustration and more awe!
