What you will learn

#1 What early modern human skeletons actually tell us about Paleo diets

#2 Which foods were crucial in supporting the growth of our big, energy-intensive brains, sparking the Plants vs. Animals and Carbs vs. Fat debates

#3 Some of the myths about the extent to which humans rely on dietary carbs to fuel high-intensity exercise (glycolytic activity)

Macro wars: backing up to Paleo times

How many carbs did humans eat? What sources did they choose?

Beyond mere curiosity, a good reason for asking such questions about our dietary past is that the answers provide us with the proper context for addressing present-day medical challenges, like what diabetics should eat, how many carbs they can handle, and which carbs they handle best.

The question of how many carbs ancestral humans ate and the particular foods that were chosen has been absorbed by a larger mystery, namely why human brains are so large compared to their body mass and what made this possible in the first place.

One reason the brain questions have engulfed the sub-questions of food is that the human brain is extremely energy-expensive (metabolically active), while our gut (digestive system) is comparatively small. This might sound like a design flaw at first, but the Expensive Tissue Hypothesis resolves this seeming contradiction [1].

Across many species, it has been found that the cost of a larger brain can be paid for by reducing the size of another expensive organ, such as the gut [2]. However, the “why” behind our big brains hasn’t only been explored via the lens of food.

One hypothesis proposes that we grew large brains so that we could execute the complex throwing motion imperative for the ‘predation at-distance’ style of hunting characteristic of early humans [3]. Another hypothesis suggests it was the highly coordinated communication necessary for hunting large prey in small bands of hunter-gatherers that selected for our big brains [4].

Whichever hypothesis is favored, food remains a big slice of the answer. That’s what we’re talking about today. We’ll be doing so by exposing and breaking down the many myths conveniently collected in a paper from 2015 by Brand-Miller et al. called The importance of dietary carbohydrate in human evolution [5].

The myths broadly fall into 3 categories:

  • paleoanthropology, the study of anatomically modern humans dating back about 300,000 years [6]
  • genetics
  • metabolism

The Brand-Miller paper argues for the importance of dietary carbohydrate in human evolution, mainly in the form of cooked starches, and in particular with regard to our rapid brain growth (encephalization).

Their argument hinges on 2 key factors overlapping on the timeline such that conditions are met for spurring the rapid growth of a large, calorie-hungry brain: the advent of controlled fire use and the multiplication of AMY1 gene copies, the genes known for supplying the starch-digesting enzyme amylase.

As the idea goes, tubers cooked by fire provide more calories than when uncooked. Fire thus acts as a pre-digestion technology, turning indigestible carbohydrate into digestible carbohydrate (glucose). Then, the increase in AMY1 gene copies provides a higher concentration of amylase enzymes that extract even more calories from the “pre-digested” starch, all the while ensuring a healthy metabolic response to an increase in dietary carbohydrate intake.



Myth #1

The Brand-Miller paper starts by referencing a study to make the point that ancestral humans must have eaten quite a bit of starch:

“a wider range of isotopic values have been observed in contemporary Middle Pleistocene H. sapiens (Richards and Trinkaus 2009), indicating that considerable differences in the levels of starch consumption existed between these two species”


After comparing the skeletons of Neanderthals and early modern humans, researchers inferred that early modern humans ate quite a bit more starch than the Neanderthals did.

What’s wrong with it?

The study quoted by the paper says no such thing about starch consumption; the word starch doesn’t even appear in it. The “wider range” discussed in the paper is about the sources of protein for both of these top-level carnivores, as in how many terrestrial herbivores versus marine animals they ate.

Early modern humans seem to have gotten most of their nutrition, in the form of fat and protein, from sea creatures, while Neanderthals got most if not all of theirs from large terrestrial herbivores. Both species were “top-level carnivores” in terms of their trophic standing, which is at, or just above, the level of hyenas and wolves!

Myth #2 

The Brand-Miller paper rightly cites a 2008 paper by Alperson-Afil to argue that humans started using fire less than 800,000 years ago [7]:

“Gesher Benot Ya’aqov, in Israel, which dates to around 780,000 bp, has charcoal, plant remains, and burned microartifacts in concentrations that the excavators believe suggests evidence for hearths (Alperson-Afil 2008)”


This one site in Israel seems to have housed humans using hearths for cooking and staying warm.

What’s wrong with it?

It’s not wrong to use that citation to support the timeline claim – in all fairness, it does not constitute a myth – but the citation is used disingenuously. Brand-Miller and colleagues fail to acknowledge stronger evidence for later dates, between 300,000 and 400,000 years ago.

First, that site in Israel was most likely washed over by lava following the cataclysmic Matuyama-Brunhes chron boundary event, which took place 781,000 years ago [8].

Second, a more recent paper from 2014 places this evidence for an earlier date of human use of fire in its larger context spanning multiple other sites, concluding that a best estimate for the onset of regular fire use in this geographic area is between 357,000 and 324,000 years ago [9].



Myth #3

AMY1 is a gene coding for an enzyme (amylase) that allows us to digest starch, and its copy number increased a lot around 1 million years ago

“it [multiplication of the AMY1 genes] is thought to be less than 1 million years ago (Samuelson et al. 1996; Lazaridis et al. 2014)”


Here the paper makes the case that more copies of the AMY1 gene appeared right around when the authors believe evidence of controlled fire use emerged. The implication is that eating more starch helped get us big brains, since more AMY1 preceded the rapid growth of our brains.

What’s wrong with it?

Neither the Samuelson study from 1996 [10] nor the Lazaridis one from 2014 [11] gives a specific date for when the number of AMY1 gene copies increased. Again, these references simply don’t support the point they are being used for.

Furthermore, a study from 2017 [12] contradicts the 1 million years ago figure, saying the increase in gene copy number is likely much more recent:

“a relatively recent origin that may be within the timeframe of modern human origins (i.e., within the last ∼200,000 years)”

Timeline aside, is it even right to assume that having more gene copies (of AMY1) allows one to extract more energy from starch and have a healthier metabolic response to it?

Many studying this question have answered yes [13, 14, 15]. However, a higher-quality paper from 2015 in Nature Genetics fails to support the idea that having more copies of AMY1 enables a healthier metabolic response: people with more copies are just as likely to develop obesity or diabetes [16]. The earlier, contradictory results seemingly came from experiments using lower-quality molecular methods than those employed in the Nature Genetics study.



Myth #4


The Brand-Miller paper makes the following, frankly ludicrous, statement:

“There is debate on whether dietary carbohydrates are actually essential for human nutrition”


There is evidence suggesting that if one does not eat a certain amount of dietary carbs, there’s a significant risk of suffering some sort of carbohydrate deficiency syndrome – not unlike what may be seen with a life-threatening B12 or vitamin C deficiency.

What’s wrong with it?

It’s entirely false. There is no debate around the question; there is zero evidence to suggest one will suffer from a dietary carbohydrate deficiency.

In 1968, Owen and Cahill showed that the human brain always requires at least ~35% of its energy from glucose, but this does not have to come from dietary carbs [17].

Humans are perfectly capable of making as much glucose as they need by using protein and fat (see our post on gluconeogenesis for more). Indeed, a 1999 report by the IDECG Working Group recognizes this, saying that “the theoretical minimal level of carbohydrate (CHO) intake is zero” [18].

This has been verified experimentally as far back as 1972 [19]. Dr. Eric Westman summarizes his findings about carbohydrates being non-essential, saying that “although there is certainly no evidence from which to conclude that extreme restriction of dietary carbohydrate is harmless, I was surprised to find that there is similarly little evidence to conclude that extreme restriction of carbohydrate is harmful” [20].

When dietary carbs are avoided entirely, the human brain will use ketone bodies as its primary energy substrate. In such conditions, we observe “an increase in the metabolic efficiency in human brains using ketoacids as their principal energy source in place of glucose” [21].

This state of ketosis shouldn’t be confused with DKA (diabetic ketoacidosis) that an uncontrolled diabetic may experience, where both blood glucose and blood ketones (beta-hydroxybutyrate) are life-threateningly high.

Myth #5

The Brand-Miller paper argues for ⅓ of calories coming from dietary carbohydrates, saying:

“A daily carbohydrate intake of about 50–100 g is considered essential to prevent ketosis in adults (Institute of Medicine 2006), and is consistent with a more realistic recommendation for the practical minimal requirement of 150 g/day of glycemic carbohydrate intake beyond the ages of 3 to 4 years (Bier et al. 1999)”


Roughly ⅓ of a person’s daily calories should come from carbohydrates – at an absolute minimum about 50–100 g per day – to avoid the metabolic state of ketosis. The implication is that this state is to be avoided because it’s unhealthy.

What’s wrong with it?

The metabolic state of being ‘in ketosis’ is as normal as being ‘out of ketosis’. This is true whether ketosis is achieved through fasting (not starvation) or by eating a ketogenic diet (very low in carbohydrates, but with protein and fat eaten to satiety).

Not only is ketosis not an inherently harmful state, but many of its benefits are being revealed by a resurgence of scientific research and by people’s self-reports after adopting a ketogenic diet themselves [22]. For example, ketone bodies have potent anti-inflammatory effects. Studies use a variety of techniques to probe the pros and cons of ketosis, whether that be dietary manipulation, fasting protocols or pharmacological means of raising the concentration of ketones in the blood.

Myth #6

The Brand-Miller paper then makes a claim about exercise capacity to support the need for including significant amounts of dietary carbs, saying,

“Glucose is the only energy source for sustaining running speeds above 70% of maximal oxygen consumption (Romijn et al. 1993)”


If you don’t eat a high(ish) carbohydrate diet, you will not be able to run or exercise intensely.

What’s wrong with it?

Glucose is not the only energy source at high exercise intensities; it’s the main one. First, the Brand-Miller paper conflates the need for carbs (glucose) at the cellular level to sustain high-intensity efforts with carbs coming from the diet. Second, the Brand-Miller paper presents fuel use at varying intensities as a binary change, when in fact the amount of energy each fuel source provides (glucose and fats) lies on a continuum [23].

This non-binary use of multiple fuels is exemplified in the lab measurements done on elite ultra-runner and FASTER study participant Zach Bitter [24]:

  • at 75% VO2 max he used 98% fat and 2% carbohydrate
  • at 84% VO2 max he used 76% fat and 24% carbohydrate
  • finally at 96% VO2 max he was still using 23% fat and 77% carbohydrate
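The continuum becomes obvious if we put numbers on those measurements. Here is a minimal sketch in Python; note that the 20 kcal/min total energy expenditure is a hypothetical figure chosen purely for illustration, not a value from the study:

```python
# Zach Bitter's measured fuel mix at three exercise intensities (figures above)
fuel_mix = [
    (75, 0.98),  # (% VO2 max, fraction of energy supplied by fat)
    (84, 0.76),
    (96, 0.23),
]

# Hypothetical total energy expenditure, for illustration only
TOTAL_KCAL_PER_MIN = 20

for intensity, fat_frac in fuel_mix:
    fat_kcal = TOTAL_KCAL_PER_MIN * fat_frac
    carb_kcal = TOTAL_KCAL_PER_MIN * (1 - fat_frac)
    print(f"{intensity}% VO2 max: {fat_kcal:.1f} kcal/min fat, {carb_kcal:.1f} kcal/min carbs")

# Fat's share falls gradually with intensity -- a continuum, not an on/off switch
fractions = [frac for _, frac in fuel_mix]
assert fractions == sorted(fractions, reverse=True)
assert all(frac > 0 for frac in fractions)  # fat is never switched off entirely
```

Even at 96% of VO2 max, nearly a quarter of the energy is still coming from fat – the opposite of a binary switch-over to glucose.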

A more objective point about dietary carbs and exercise intensities would recognize that it’s still an open question how many dietary carbs one needs (or doesn’t need) to sustain medium-term intense (glycolytic) efforts. We should not conflate having to use glucose at the cellular level with a need to eat glucose.

Furthermore, the conventional high-carb ‘best fueling’ strategies for very intense efforts are being rethought given new data. For instance, a study from 2015 found that well-trained runners performing high intensity efforts at 85% VO2 max used fat to fuel nearly ⅓ of their energy output [25].

‘Low-carb’ researchers Noakes, Volek and Phinney bring home a similar point, alluding to the fact that [26]

“some highly adapted runners consuming less than 10% of energy from carbohydrate are able to oxidise fat at greater than 1.5 g/min during progressive intensity exercise and consistently sustain rates of fat oxidation exceeding 1.2 g/min during exercise at ∼65% VO2max”

These figures overturn now-outdated exercise physiology textbooks claiming that the fat oxidation ceiling lies at 1 g/min. In Volek’s 2015 FASTER study, low-carb athletes were found to be oxidizing 2.3 times more fat on average (1.54 ± 0.18 g/min) than the high-carb group (0.67 ± 0.14 g/min), and they were doing so at a higher percentage of VO2max (70.3 ± 6.3% vs 54.9 ± 7.8%) [27].
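The FASTER arithmetic is easy to verify from the quoted figures alone:

```python
# Mean peak fat oxidation rates reported in the FASTER study (g/min, as quoted above)
low_carb_fat_ox = 1.54
high_carb_fat_ox = 0.67
OLD_TEXTBOOK_CEILING = 1.0  # g/min, the now-outdated textbook limit

ratio = low_carb_fat_ox / high_carb_fat_ox
print(f"low-carb athletes oxidized {ratio:.1f}x more fat")  # 2.3x

assert round(ratio, 1) == 2.3
assert low_carb_fat_ox > OLD_TEXTBOOK_CEILING  # well past the old 1 g/min ceiling
```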

These newly emerging studies squarely contradict the Brand-Miller paper’s confused claim regarding exercise intensity and how to fuel for it.

Myth #7

The Brand-Miller paper then argues that glycogen stores are crucial for “sustained fasting or hardship”, saying,

“In an evolutionary context, large stores of glycogen must be generated in order to provide sources of glucose for periods of sustained fasting or hardship. To build these reserves, the diet must consistently provide energy surplus to basal metabolic requirements”


Without cooked tubers as a dietary staple, we’d never have survived multi-day fasts or “hardships”.

What’s wrong with it?

The incredible ability humans have to fast for weeks at a time does not hinge on our tiny glycogen stores, which hold about 15 g of glycogen per kg of body weight – a paltry ~3,000 kcal if we take the average 65 kg man as an example [28].

Fasting actually hinges on our ability to make our own glucose through gluconeogenesis and on our unusually large stores of body fat, mostly used to feed our power-hungry brain.

Gluconeogenesis ensures that glucose-dependent cells keep working even when calories or carbs aren’t available. Such cells are found in the nervous system (including the brain), among immune cells, in the Sertoli cells of the testes, and elsewhere.

Our stored body fat (adipose tissue) can supply the calories and nutrients for when food or carbs aren’t available. A 65 kg man at 13% body fat stores about 75,000 kcal of fat on his body, while probably eating around 2,300 kcal a day.



What we learned

#1 We are omnivorous, obligate carnivores. This means that we need to eat meat, but we often eat much more than just meat

#2 There is no such thing as a dietary glucose deficiency because carbs are not an essential macronutrient thanks to gluconeogenesis

#3 It’s still an open question whether high-intensity (glycolytic) efforts are best fuelled by our own internally generated glucose or by dietary glucose

#4 Ketosis is a normal metabolic state to be in and shouldn’t be confused with the pathological state some uncontrolled diabetics experience called DKA (diabetic ketoacidosis)

