There’s been a lot of buzz in the food and nutrition industry this month over a newly released food scoring system developed by researchers at Tufts University, called the Food Compass.
We’ve seen scoring systems like this come and go in the past. Perhaps you remember seeing star-ratings or stop-light colors attached to shelves in grocery stores. The goal of these systems is to give consumers a quick and easy way to assess the healthfulness of foods.
Of course, simplicity always comes with a cost. Food is complex stuff and the ways in which it affects our bodies are even more complex. Trying to reduce these impacts to a simple score is always going to end up being, well, reductionist. In any event, I’m not sure that these scoring systems have ever made a lasting impact on buying behavior.
How is the Food Compass Score calculated?
According to the developers, the Food Compass Score improves on these past systems in several important ways: it incorporates a broader range of food attributes, applies consistent criteria across multiple food categories, and, unlike some other systems, uses a completely transparent methodology to produce its ratings.
The algorithm assesses foods in 9 different areas—including the nutrient content, nutrient ratios (such as the ratio of fiber to carbohydrate or potassium to sodium), the degree of processing, quality of ingredients, and additives—and produces a score between 1 (least healthy) and 100 (most healthy).
How to use the Food Compass
Foods that score 70 and higher are encouraged. Foods with a score between 30 and 70 are OK if eaten in moderation. Foods scoring lower than 30 are supposed to be minimized.
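Those published cutoffs are simple enough to express directly. Here's a minimal sketch in Python; the function name is my own, and this only maps a finished score to its recommendation tier. It says nothing about how the Tufts algorithm actually computes the score itself.

```python
def food_compass_tier(score: float) -> str:
    """Map a Food Compass Score (1-100) to its recommendation tier.

    Thresholds are the ones described above: 70 and higher is
    encouraged, 30 to 70 is fine in moderation, below 30 should
    be minimized.
    """
    if not 1 <= score <= 100:
        raise ValueError("Food Compass Scores range from 1 to 100")
    if score >= 70:
        return "encouraged"
    if score >= 30:
        return "in moderation"
    return "minimize"
```

Run it against the ratings discussed below and you get, for instance, `food_compass_tier(73)` (the nonfat cappuccino) returning `"encouraged"`, `food_compass_tier(61)` (the grilled chicken breast) returning `"in moderation"`, and `food_compass_tier(28)` (cheddar cheese) returning `"minimize"`.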
The researchers rated some 8,000 foods, everything from fresh eggs to ready-to-eat breakfast cereals to snacks to fully prepared frozen meals.
Critics have been quick to point out ratings that would seem to throw the validity of this system into question. For example, a nonfat cappuccino rates a 73 (encouraged!) while a grilled skinless chicken breast rates only a 61 (don’t eat too often!).
Watermelon scores a perfect 100 while cheddar cheese is a to-be-minimized 28. Cheerios are a 95 but corn flakes rate only 19. Despite the fact that the degree of processing is heavily weighted in the system, sweet potato chips and cooked whole-grain bulgur both have the same rating (69).
Looking a little deeper into the algorithm, there is a lot to quibble about.
For example, points are deducted if a food contains cholesterol, despite the fact that dietary cholesterol has been largely cleared of suspicion in terms of its effects on heart health and is no longer considered a “nutrient of concern.”
There’s also no distinction between nutrients that occur naturally in foods and those that are added through fortification. This often skews ratings in favor of processed foods, which are more likely to be fortified. This is why, for example, Cheerios ranks a near-perfect 95 whereas plain rolled oats only scores a 78.
But I think the biggest problem is in the category of “food-based ingredients.”
What makes a food good?
A food gets a higher score if it contains fruits, non-starchy vegetables, legumes, whole grains, nuts, yogurt, plant oils, or seafood. It is penalized if it contains refined grains or red or processed meat.
That list, in and of itself, is not a major problem—with the possible exception of plant (vegetable) oils. These are presumably favored because they contain polyunsaturated fats. Although the healthfulness of PUFAs has been debated, replacing saturated fats with PUFAs has been linked to better health outcomes. But a higher ratio of PUFAs to saturated fats is rewarded separately in the nutrient ratio category. Furthermore, rewarding vegetable oils (twice) will tend to raise the score of foods like baked goods, chips, crackers, and bottled salad dressings, which are among the primary sources of PUFAs in the American diet.
But my bigger beef (if you will) with this category is that it is almost entirely redundant.
Fruits and non-starchy vegetables are healthful because they are high in vitamins, minerals, fiber, and phytonutrients. They have a high potassium to sodium ratio. But all of these factors—along with the degree of processing—are accounted for elsewhere in the algorithm. If you want to take a food-based approach, then give points to fruits and vegetables. If you want to take a nutrient-based approach, give points for the nutrients that fruits and vegetables provide. Doing both seems duplicative.
Seafood is another positive food-based ingredient. However, the nutrients that make seafood healthful (protein, omega-3 fats, zinc, selenium) are already accounted for in other categories.
Similarly, in the category of additives, foods lose points for containing added sugars and then lose points again for containing high fructose corn syrup—as if to suggest that HFCS is twice as harmful as other added sugars. Despite a lot of sound and fury and rat studies, I remain unconvinced by arguments that HFCS is the villain it’s made out to be. If all the HFCS in the food system were magically replaced by honey, and we continued to consume it in the same amounts, I think we’d have exactly the same issues.
It’s also hard to figure why certain foods but not others were included in this very short list of food-based ingredients. Yogurt (but not milk) is considered a plus in the food-based ingredient category. But in a separate category, foods are also given points for being fermented. That seems not only redundant but sort of arbitrary. (Although funders had no role in study design, data collection, data analysis or interpretation, or drafting of the manuscript, global yogurt giant Danone is listed as one of two sponsors.)
Pardon me, your bias is showing
I wasn’t in the room when these decisions were being debated and I know these researchers are well-intentioned and highly regarded. But the choices about which foods to reward and penalize seem to reflect current beliefs and biases about “good” and “bad” foods. And these largely subjective (and redundant) judgments are given quite a bit of weight in the formula.
The high profile of some of these researchers presents another dilemma. Many of them are well-known and vocal advocates for a plant-based, Mediterranean-style diet. And, in the spirit of full disclosure, that is my personal preference as well. But to my eye, the algorithm that they designed appears to conform to that bias. There are a couple of thumbs on the scale here, and the results end up being more ideological than I think a nutrient profiling system should be.
But once the initial hubbub dies down, how much will anyone really care about the Food Compass?
If it catches on with consumers (a big “if”), manufacturers will inevitably figure out how to tweak their products to improve their scores—without meaningfully improving the nutrition.
I was comparing notes on the Food Compass with my colleague, registered dietitian Linn Steward, and we agreed that nutrient profiling systems seem to be of limited value in helping educate and inspire consumers. They might be a slight improvement over current front-of-package labeling. But their usefulness is limited to comparing different items in the same category (which canned soup or loaf of bread should I buy?) rather than serving as a way to decide whether to have eggs or oatmeal for breakfast.
For one thing, foods don’t all play the same role in a healthful diet. A food-by-food scoring system also can’t account for foods eaten in combination. Three foods with low individual ratings could make a very healthful meal when combined.
And whenever you’re trying to crunch a lot of disparate dimensions (nutrient content, degree of processing, additives, and so on) into a single unified equation, a lot is going to get lost in translation.