Burstable billing is a method of measuring bandwidth based on peak use. It allows usage to exceed a specified threshold for brief periods without the financial penalty of purchasing a higher committed information rate (CIR, or commitment) from an Internet service provider (ISP).
Most ISPs use five-minute sampling and bill at the 95th percentile of usage.
95th percentile
95th percentile measurement on a regular bandwidth pattern
The 95th percentile is a widely used calculation for evaluating the regular and sustained use of a network connection. It reflects the needed capacity of the link in question more closely than other methods such as the mean or maximum rate. The bytes that make up the packets do not themselves cost money; the link and the infrastructure on either end of it cost money to set up and support. This method of billing is commonly used in peering arrangements between corporate networks; it is not often used by ISPs, because ISPs need committed information rates (CIRs) for planning purposes.
Since most networks are overprovisioned, there is often room for some bursting without advanced planning (hence burstable billing). Ignoring the top 5% of the samples is a reasonable compromise in most cases (hence 95th percentile).
Many sites see most of their traffic on a particular day of the week (Mondays, say), so that day's traffic determines the rate for the whole month. Some providers offer billing on the 90th percentile as an incentive to attract customers with irregular bandwidth patterns.
The 95th percentile allows a customer a short burst in traffic (less than 36 hours, given a monthly billing period) without overage charges. The 95th percentile says that 95% of the time, usage is at or below this amount; conversely, 5% of the samples may burst above this rate.
The sampling interval, or how often samples (or data points) are taken, is an important factor in percentile calculation. A percentile is calculated over a set of data points. Each data point represents the average bandwidth used during the sampling interval (e.g., five minutes) and is calculated as the number of bits transferred during the interval divided by its duration (e.g., 300 seconds). The resulting value is the average use rate for a single sampling interval, expressed in bits per second (see data transfer rate).
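As a sketch, a single data point for one five-minute interval might be computed like this (the transfer volume is a made-up example):

```python
# One data point = bits transferred in the interval / interval duration.
# Hypothetical numbers: 9 GB transferred during a 5-minute interval.
bytes_transferred = 9 * 10**9          # 9 GB moved in the interval
interval_seconds = 300                 # five-minute sampling interval

rate_bps = bytes_transferred * 8 / interval_seconds   # bits per second
print(rate_bps / 10**6)                # 240.0 Mbit/s average for this interval
```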
Burstable rate calculation
Bandwidth is measured (or sampled) at the switch or router and recorded in a log file, in most cases every 5 minutes. At the end of the month, the samples are sorted from highest to lowest, and the top 5% (equal to approximately 36 hours of a 30-day billing cycle) are discarded. The next-highest measurement becomes the billable use for the entire month.
Under this model, the top 36 hours (5% of 720 hours) of peak traffic are not taken into account when billing for the month. Bandwidth could be used at a higher rate for up to 72 minutes a day with no financial penalty. Conversely, if peak traffic appears only for a brief instant and no additional traffic is generated, the billing amount can be substantially higher than with average-usage billing.
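The procedure above can be sketched in a few lines (the sample values are hypothetical):

```python
def billable_rate_95th(samples_bps):
    """Sort a month's samples from highest to lowest, discard the top 5%,
    and return the next-highest sample as the billable rate."""
    ordered = sorted(samples_bps, reverse=True)
    discard = int(len(ordered) * 0.05)   # the top 5% is thrown away
    return ordered[discard]

# Hypothetical 30-day month of 5-minute samples (8640 of them): a steady
# 10 Mbit/s with 400 bursts to 100 Mbit/s (about 4.6% of the samples).
samples = [10_000_000] * 8240 + [100_000_000] * 400
print(billable_rate_95th(samples))   # 10000000 -- the bursts fall inside the free 5%
```

Note that if the bursts had covered more than 5% of the samples, the burst rate itself would have become the billable rate.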
Special consideration
Inbound and outbound traffic is usually counted separately, as connections are full duplex allowing traffic in-bound and out-bound simultaneously.
Some common algorithms are:
- Take max(in, out) for each interval and compute the percentile over that series. This method is more complex to implement, as it requires processing each sample, but its results come closer to the total volume of data sent and received.
- Calculate the 95% value separately for in-bound and out-bound data, then take the maximum of the two. This method is simpler to implement but does not correctly estimate symmetric traffic patterns.
- Take sum(in, out) for each interval. This method is simple to implement and does account for symmetric traffic patterns; some ISPs use it to approximate the total volume of data sent and received.
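The three algorithms can be sketched as follows (the `p95` helper and the traffic series are illustrative, not any particular ISP's implementation):

```python
def p95(values):
    """95th percentile: discard the top 5% of samples, return the next highest."""
    ordered = sorted(values, reverse=True)
    return ordered[int(len(ordered) * 0.05)]

def p95_of_max(in_bps, out_bps):
    # Method 1: take max(in, out) for each interval, then the percentile.
    return p95([max(i, o) for i, o in zip(in_bps, out_bps)])

def max_of_p95s(in_bps, out_bps):
    # Method 2: percentile of each direction separately, then the larger value.
    return max(p95(in_bps), p95(out_bps))

def p95_of_sum(in_bps, out_bps):
    # Method 3: sum the two directions for each interval, then the percentile.
    return p95([i + o for i, o in zip(in_bps, out_bps)])

# Hypothetical month of 100 intervals: mostly 10 Mbit/s in, 5 Mbit/s out.
in_bps = [10_000_000] * 95 + [40_000_000] * 5
out_bps = [5_000_000] * 100
print(p95_of_max(in_bps, out_bps))    # 10000000
print(max_of_p95s(in_bps, out_bps))   # 10000000
print(p95_of_sum(in_bps, out_bps))    # 15000000 -- counts both directions
```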
Critics of the 95th percentile billing method usually advocate a flat rate system or billing on average throughput rather than the 95th percentile. Both methods favour heavy users (who have an interest in advocating for changes to the billing method). Other critics call for billing per byte of data transferred, which its proponents consider the most accurate and fair.
See also
- MRTG – used to review bandwidth usage and, with patches, determine 95th percentile values.
- Cacti – another tool for 95th percentile values, also based on RRDtool.
- LibreNMS – open-source (GPLv3) network monitoring system that provides 95th-percentile-based billing.
- Observium – open-source (QPL) software providing both per-port 95th percentile calculation and a multi-port billing tool supporting 95th percentile calculation.
External links
- PRTG Network Monitor – sensors for monitoring many aspects of networks, servers (network, disks, memory, processes, services…), and applications, including reports with 95th percentile calculations.
- MRTG Help Site – Helpful page with example MRTG graphs and explanations.
- Torrus reporting setup guide – implementation details and installation guide for monthly reports of traffic usage and 95th percentile in Torrus.
- Real Traffic Grabber – RTG collects SNMP statistics for monitoring. It is open source and includes graphing and a report generator.
- Ocoloc – Free / open source basic SNMP collection and reporting tool for 95th percentile calculations
Customers ask us for p99 (99th percentile) of metrics pretty frequently.
We plan to add such a feature to VividCortex (more on that later). But a lot of the time, when customers make this request, they actually have something very specific, and different, in mind. They’re not asking for the 99th percentile of a metric, they’re asking for a metric of 99th percentile. This is very common in systems like Graphite, and it doesn’t achieve what people sometimes think it does. This blog post explains how percentiles might trick you, the degree of the mistake or problem (it depends), and what you can do if percentile metrics aren’t right for you.
Away From Averages
Over the last few years a lot of people have started talking about the problems with averages in monitoring. It’s good that this topic is in wider discussion now, because for a long time averages were accepted without much deeper inspection.
Averages can be unhelpful when it comes to monitoring. If you’re merely looking at averages, you’re potentially missing the outliers, which might matter a lot more. There are two issues with averages in the presence of outliers:
1. Averages hide the outliers, so you can’t see them.
2. Outliers skew averages, so in a system with outliers, the average doesn’t represent typical behavior.
So when you average the metrics from a system with erratic behavior, you get the worst of both worlds: you see neither the typical behavior, nor the unusual behavior. Most systems have tons of outlying data points, by the way.
Looking at the extremes that lie in the “long tail” is important because it shows you how bad things can sometimes get, and you’ll miss this if you rely on averages. As Amazon’s Werner Vogels said in an AWS re:Invent keynote, the only thing an average tells you is that half of your customers are having a worse experience. (While this comment is totally correct in spirit, it isn’t exactly right in practice: specifically, the median, or 50th percentile, is the metric that provides this property.)
Optimizely did a write-up in this blog post from a couple years ago. It illustrates beautifully why averages can backfire:
“While the average might be easy to understand it’s also extremely misleading. Why? Because looking at your average response time is like measuring the average temperature of a hospital. What you really care about is a patient’s temperature, and in particular, the patients who need the most help.”
Brendan Gregg also puts it well:
“As a statistic, averages (including the arithmetic mean) have many practical uses. Properly understanding a distribution isn’t one of them.”
And Towards Percentiles
Percentiles (more broadly, quantiles) are often praised as a potential way to bypass this fundamental issue with averages. The idea of the 99th percentile is to take a population of data (say, a collection of measurements from a system) and sort them, then discard the worst 1% and look at the largest value that remains. The resulting value has two important properties:
- It’s the value that 99% of the measurements do not exceed. If it’s a web page load time, for example, it represents the worst experience that 99% of your visitors have.
- It is robust in the face of truly extreme outliers, which come from all sorts of causes including measurement errors.
Of course, you don’t have to choose exactly 99%. Common alternate choices are 90th, 95th, and 99.9th (or even more nines) percentiles.
At this point, people assume: “averages are bad, and percentiles are great” — let’s calculate percentile metrics and put them into our time series databases, right? Not so fast.
How Time Series Databases Store and Transform Metrics
There’s a big problem with most time series data and percentiles. The issue is that time series databases are almost always storing aggregate metrics over time ranges, not the full population of events that were originally measured. Time series databases then average these metrics over time in a number of ways. Most importantly:
- They average the data whenever you request it at a time resolution that differs from the stored resolution. If you want to render a chart of a metric over a day at 600px wide, each pixel will represent 144 seconds of data. This averaging is implicit and isn’t disclosed to the user. They ought to put a warning on that!
- They average the data when they archive it for long term storage at a lower resolution, which almost all time series databases do.
And therein lies the issue. You’re still dealing with averages in some form. And averaging percentiles doesn’t work, because to compute a percentile you need the original population of events. The math is just broken. An average of a percentile is meaningless. (The consequences vary. I’ll return to that point later.)
A lot of monitoring software encourages the use of percentile metrics that are stored and resampled. StatsD, for example, lets you calculate metrics about a desired percentile, and will then generate metrics with names such as foo.upper_99 and emit those at intervals to be stored in Graphite. All well and good, if the time resolution you want to look at is never resampled, but we know that doesn’t happen.
The confusion over how these calculations work is widespread. Reading through the related comments on this StatsD GitHub issue should illustrate that nicely. Some of these folks are saying things that just ain’t so.
Perhaps the most succinct way to state the problem is this: Percentiles are computed from a population of data, and have to be recalculated every time the population (time interval) changes. Time series databases with traditional metrics don’t have the original population.
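A quick simulation shows the breakage. Below, two hours of synthetic latencies are drawn from different distributions; the p99 of the combined population is not the average of the two per-hour p99s (the distributions and sizes are invented for illustration):

```python
import random

random.seed(42)

def p99(population):
    # Exact p99 from the full population: sort and index at the 99% mark.
    return sorted(population)[int(len(population) * 0.99)]

# Two hours of per-request latencies (ms): a quiet hour and a slow hour.
hour1 = [random.expovariate(1 / 100) for _ in range(10_000)]   # mean ~100 ms
hour2 = [random.expovariate(1 / 400) for _ in range(10_000)]   # mean ~400 ms

# What a time series database does when it resamples a stored p99 metric:
avg_of_p99s = (p99(hour1) + p99(hour2)) / 2

# What the p99 over the two-hour window actually is:
true_p99 = p99(hour1 + hour2)

print(avg_of_p99s, true_p99)   # the averaged value understates the true p99
```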
Alternative Ways To Compute Percentiles
If a percentile requires the population of original events—such as measurements of every web page load—we have a big problem. A Big Data problem, to be exact. Percentiles are notoriously expensive to compute because of this.
There are lots of ways to compute approximate percentiles that are almost as good as keeping the entire population and querying and sorting it. You can find tons of academic research on a variety of techniques, including:
- Histograms, which partition the population into ranges or bins, and then count how many fall into various ranges.
- Approximate streaming data structures and algorithms (sketches).
- Databases that sample from populations to give fast approximate answers.
- Solutions that are bounded in time, bounded in space, or both.
The gist of most of these solutions is to approximate the distribution of the population in some way. From the distribution, you can compute at least the approximate percentiles, as well as other interesting things. From the Optimizely blog post, again, there’s a nice example of a distribution of response times and the average and 99th percentile:
Source: Catchpoint.com, data from Oct. 15, 2013 to Nov. 25, 2013 for 30KB Optimizely snippet.
There are tons of ways to compute and store approximate distributions, but histograms are popular because of their relative simplicity. Some monitoring solutions actually support histograms. Circonus is one, for example. Circonus’s CEO Theo Schlossnagle often writes about the benefits of histograms.
Ultimately, having the distribution of the original population isn’t just useful for computing a percentile, it’s very revealing in ways that the percentile isn’t. After all, a percentile is a single number that tries to represent a lot of information. I wouldn’t go as far as Theo did when he tweeted that “99th percentile is as bad as an average,” because I agree with percentile fans that it’s more representative of some important characteristics of the underlying population than an average is. But it’s not as representative as histograms, which are much more granular. The chart above from Optimizely contains way more information than any single number could ever show.
Percentiles Done Better in Time Series Databases
A better way to compute percentiles with a time series database, assuming the database can store only simple numeric metrics, is to collect banded metrics. I mention the assumption because lots of time series databases are just ordered, timestamped collections of named values, without the capability of storing histograms.
Banded metrics provide a way to get the same effect as a series of histograms over time. What you’d do is select limits that divide the space of values up into ranges or bands, and then compute and store metrics about each band over time. The metric will be just as it is in histograms: the count of observations that fall into the range.
Choosing the ranges well is a hard problem, in general. Common solutions include logarithmic ranges and ranges that provide a given number of significant digits but may be faster to calculate at the cost of not growing uniformly. Even divisions are rarely a good choice. For more on these topics, please read Brendan Gregg’s excellent writeup.
The fundamental tension is between the amount of data retained and the fineness of the resolution. However, even coarse banding can be effective for showing more than simple averages. For example, Phusion Passenger Union Station shows banded metrics of request latencies using 11 bands. (I don’t think the visualization is the most effective; the y-axis’s meaning is confusing and it’s essentially a 3d chart mapped into 2d in a nonlinear way. Nevertheless, it still shows more detail than an average would reveal.)
How would you do this with popular open source time series tools? You’d have to define ranges and create stacked charts of the band counts.
To compute a percentile from this would be much more difficult. You’d have to range over the bands in reverse order, from biggest to smallest, summing up as you go. When the running sum first exceeds 1% of the total, the band you’ve reached contains the 99th percentile. There are lots of nuances in this: strict inequalities, how to handle edge cases, and what value to use for the percentile (the upper or lower bin limit? the middle? weighted?).
And the math can be confusing. You might think, for example, that you need at least 100 bands to compute the 99th percentile, but it depends. If you have 2 bands and the uppermost band’s value contains 1% of the values, you’ve got your 99th percentile. (If that sounds counterintuitive, take a moment to ponder quantiles; I think a deep understanding of quantiles is worthwhile.)
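With those caveats, the reverse walk might be sketched like this (the bands and counts are hypothetical; the function returns the containing band's limits rather than guessing a value inside it):

```python
def percentile_band(bands, counts, q=0.99):
    """Walk bands from largest to smallest, accumulating the tail count.
    The band where the tail first exceeds (1 - q) of the total contains
    the q-th quantile; return that band's (lower, upper) limits."""
    total = sum(counts)
    tail = 0
    for (low, high), count in sorted(zip(bands, counts), reverse=True):
        tail += count
        if tail > total * (1 - q):
            return low, high
    raise ValueError("histogram is empty")

# Hypothetical latency bands (ms) with counts for one billing window.
bands = [(0, 10), (10, 100), (100, 1000), (1000, 10000)]
counts = [9000, 850, 140, 10]       # 10,000 observations in total
print(percentile_band(bands, counts))          # (100, 1000): p99 is in here
print(percentile_band(bands, counts, q=0.5))   # (0, 10): the median falls in the lowest band
```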
So this is complicated. It’s possible in the abstract, but it largely depends on whether a database’s query language supports the calculations you’d need to get an approximate percentile. If you know of systems in which this is definitely possible, please comment and let me know.
The nice thing about banded metrics in a system like Graphite, which naively assumes all of its metrics can be averaged and resampled at will, is that banded metrics are robust to this type of transformation. You’ll get correct answers because the calculations are commutative over all time ranges.
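A tiny example of why this works: downsampling banded metrics is just adding counts band by band, so the rolled-up histogram is exact rather than a lossy average (the band labels and counts here are invented):

```python
from collections import Counter

# Per-minute band counts (band label -> observation count) for three minutes.
minute_histograms = [
    Counter({"0-10ms": 50, "10-100ms": 8, "100ms+": 2}),
    Counter({"0-10ms": 40, "10-100ms": 15, "100ms+": 5}),
    Counter({"0-10ms": 55, "10-100ms": 5}),
]

# Rolling up to 3-minute resolution is plain addition, band by band;
# no information about the distribution's shape is lost.
rollup = sum(minute_histograms, Counter())
print(rollup)   # Counter({'0-10ms': 145, '10-100ms': 28, '100ms+': 7})
```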
Beyond Percentiles: Heatmaps
A percentile is still a single number, just like an average. An average shows the center of gravity of a population, if you will; a percentile shows a high-water mark for a given portion of the population. Think of percentiles as wave marks on a beach. But although this reveals the boundaries of the population and not just its central tendency as an average does, it’s still not as revealing or descriptive as a distribution, which shows the shape of the entire population.
Enter heatmaps, which are essentially 3d charts where histograms are turned sideways and stacked together, collected over time, and visualized with the darkness of a color. Again, Circonus provides an excellent example of heatmap visualizations.
On the other hand, as far as I know, Graphite does not have the ability to produce heatmaps with banded metrics. If I’m wrong and it can be done with a clever trick, please let me know.
Heatmaps are great for visualizing the shape and density of latencies in particular. Another example of a latency heatmap is Fastly’s streaming dashboard.
Even some old-fashioned tools that you might think of as primitive can produce heatmaps. Smokeping, for example, uses shading to show the range of values, with the average in bright green.
How Bad Is It To Store Metrics Of Percentiles?
OK, so after all this complexity and nuance, perhaps the good old StatsD upper_99 metrics of percentiles aren’t sounding so bad anymore. After all, they’re pretty simple and efficient, and it’s a turnkey solution. How bad are they, really?
It depends. For lots of purposes they’re absolutely fine. I mean, you still have the limitations that percentiles aren’t very descriptive by themselves, and all that. But if you’re okay with that, then the biggest issue remaining is how they get mangled by resampling, which basically means you’re looking at wrong data.
But all measurements are wrong anyway, and besides, lots of wrong things are useful regardless. For example, I’d say that half of the metrics people use in monitoring systems are already deliberately transformed in ways that mangle them. Load average, for example. It’s very useful, but when you realize how the sausage is made, you might be a little shocked at first. Similarly, lots and lots of widely available systems report partially digested metrics about their performance. A bunch of Cassandra’s metrics are outputs from Coda Hale’s Metrics library, and are time-decayed averages (exponentially weighted moving averages), which some people have a huge aversion to.
But back to metrics of percentiles. If you store a p99 metric and then zoom out and view an averaged version over a long time range, although it won’t be “right,” and may be quite different from the actual 99th percentile, the ways in which it is wrong won’t necessarily render it unusable for the desired purpose, i.e. understanding the worst experience most of your users are having with your application. Regardless of their exact values and how wrong they are, percentile metrics tend to a) show outlying behavior and b) get bigger when outlying behavior gets badder. Super useful.
So it depends. If you know how percentiles work and that averaging a percentile is wrong, and you’re okay with it, it might still be useful to store metrics of percentiles. But you are introducing a sort of moral hazard: you might deeply confuse people (perhaps your colleagues) who don’t understand what you’ve done. Just look at the comments on that StatsD issue again; the confusion is palpable.
If you’ll permit me to make a bad analogy, I’ll sometimes eat and drink things in my fridge that I’d never give to someone else. (Just ask my wife.) If you give people a bottle labeled “alcohol” and it contains methanol, some of them will drink it and go blind. Others will ask “what kind of alcohol is in this bottle?” You need to bear that responsibility.
What Does VividCortex Do?
At the moment, our time series database doesn’t support histograms, and we don’t compute and store metrics of percentiles (although you can easily send us custom metrics if you want).
In the future, we plan to store banded metrics at high resolution, i.e. lots of bands. We can do this because most bands will probably be empty, and our time series database handles sparse data efficiently. This will essentially give us histograms once per second (all of our time series data is 1-second granularity). We downsample our data to 1-minute granularity after a configurable retention period, which is 3 days by default. Banded metrics will downsample into 1-minute-granularity histograms without any mathematical curiosities.
And finally, from these banded metrics, we’ll be able to compute any desired percentile, indicate the estimated error of that value, show heat maps, and show distribution shapes.
This won’t be a quick project and will require lots of engineering in many systems, but the foundation is there and we designed the system to eventually support this. No promises on when we’ll get it, but I thought it’d be useful to know where our long-term thinking is.
This was a longer post than I thought it’d be, and I covered a lot of ground.
- If you want to compute percentiles at intervals and then store the results in a time series database — as some extant databases currently do — you might not be getting what you think you are.
- Real percentiles require massive amounts of data processing.
- Approximate percentiles can be computed from histograms, banded metrics, and other useful techniques.
- These datasets also enable distributions and heatmaps, which are much more information-rich than percentiles.
- If this is out of your reach at the moment, go ahead and use metrics of percentiles, but know the consequences.
- Metrics of percentiles tend to a) show outlying behavior and b) get bigger when outlying behavior gets badder, which is useful for lots of purposes.
Hopefully this has been helpful. Also, if you’d like to see some of VividCortex’s approaches and solutions to these problems, it’s easy to start your free trial of VividCortex. Don’t hesitate to get started today.
- Someone commented on Twitter to the effect of, “oh interesting, I’m doing it wrong. I’ll switch to calculating the percent of requests that are over/under a desired latency and store that metric instead.” This doesn’t work either. Averages of fractions (a percent is a fraction) don’t work. Instead, store a metric of the number of requests that didn’t meet your desired latency. That’ll work ok.
- I vaguely remembered but didn’t find Theo’s excellent post on a related topic. Here it is: http://www.circonus.com/problem-math/
About Child & Teen BMI
What is BMI?
Body Mass Index (BMI) is a person’s weight in kilograms divided by the square of height in meters. For children and teens, BMI is age- and sex-specific and is often referred to as BMI-for-age. In children, a high amount of body fat can lead to weight-related diseases and other health issues, and being underweight can also put one at risk for health issues.
A high BMI can be an indicator of high body fatness. BMI does not measure body fat directly, but research has shown that BMI is correlated with more direct measures of body fat, such as skinfold thickness measurements, bioelectrical impedance, densitometry (underwater weighing), dual energy x-ray absorptiometry (DXA) and other methods1,2,3. BMI can be considered an alternative to direct measures of body fat. In general, BMI is an inexpensive and easy-to-perform method of screening for weight categories that may lead to health problems.
Child & Teen BMI Calculator
How is BMI calculated for children and teens?
Calculating BMI using the BMI Percentile Calculator involves the following steps:
- Measure height and weight. Refer to Measuring Children’s Height and Weight Accurately At Home for guidance.
- Use the Child and Teen BMI Calculator to calculate BMI. The BMI number is calculated using standard formulas.
What is a BMI percentile and how is it interpreted?
After BMI is calculated for children and teens, it is expressed as a percentile which can be obtained from either a graph or a percentile calculator (see links below). These percentiles express a child’s BMI relative to children in the U.S. who participated in national surveys that were conducted from 1963-65 to 1988-944. Because weight and height change during growth and development, as does their relation to body fatness, a child’s BMI must be interpreted relative to other children of the same sex and age.
BMI-for-age – Boys Growth Chart [PDF-63 KB]
BMI-for-age – Girls Growth Chart [PDF-49 KB]
BMI calculator for children and teens
The BMI-for-age percentile growth charts are the most commonly used indicator to measure the size and growth patterns of children and teens in the United States. BMI-for-age weight status categories and the corresponding percentiles were based on expert committee recommendations and are shown in the following table.
| Weight Status Category | Percentile Range |
| --- | --- |
| Underweight | Less than the 5th percentile |
| Normal or Healthy Weight | 5th percentile to less than the 85th percentile |
| Overweight | 85th to less than the 95th percentile |
| Obese | Equal to or greater than the 95th percentile |
The following is an example of how sample BMI numbers would be interpreted for a 10-year-old boy.
The CDC BMI-for-age growth charts are available at: CDC Growth Charts: United States .
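The weight status table above amounts to a simple threshold lookup. As a sketch:

```python
def weight_status(bmi_percentile):
    """Map a BMI-for-age percentile to the CDC weight status category
    from the table above."""
    if bmi_percentile < 5:
        return "Underweight"
    if bmi_percentile < 85:
        return "Normal or Healthy Weight"
    if bmi_percentile < 95:
        return "Overweight"
    return "Obese"   # equal to or greater than the 95th percentile

print(weight_status(50))   # Normal or Healthy Weight
print(weight_status(96))   # Obese
```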
How is BMI used with children and teens?
For children and teens, BMI is not a diagnostic tool and is used to screen for potential weight and health-related issues. For example, a child may have a high BMI for their age and sex, but to determine if excess fat is a problem, a health care provider would need to perform further assessments. These assessments might include skinfold thickness measurements, evaluations of diet, physical activity, family history, and other appropriate health screenings. The American Academy of Pediatrics recommends the use of BMI to screen for overweight and obesity in children beginning at 2 years old. For children under the age of 2 years old, consult the WHO standards .
Is BMI interpreted the same way for children and teens as it is for adults?
BMI is interpreted differently for children and teens even though it is calculated as weight ÷ height². Because there are changes in weight and height with age, as well as their relation to body fatness, BMI levels among children and teens need to be expressed relative to other children of the same sex and age. These percentiles are calculated from the CDC growth charts, which were based on national survey data collected from 1963-65 to 1988-944.
Obesity is defined as a BMI at or above the 95th percentile for children and teens of the same age and sex. For example, a 10-year-old boy of average height (56 inches) who weighs 102 pounds would have a BMI of 22.9 kg/m². This would place the boy in the 95th percentile for BMI, and he would be considered to have obesity. This means that the child’s BMI is greater than the BMI of 95% of 10-year-old boys in the reference population.
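The arithmetic behind the example above, converting from US-customary units (a sketch; the percentile itself still has to come from the growth charts):

```python
def bmi(weight_lb, height_in):
    """BMI = weight (kg) / height (m) squared, from US-customary inputs."""
    kg = weight_lb * 0.453592   # pounds to kilograms
    m = height_in * 0.0254      # inches to meters
    return kg / (m * m)

# The 10-year-old boy from the example: 56 inches tall, 102 pounds.
print(round(bmi(102, 56), 1))   # 22.9
```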
Access the CDC Growth Charts here: http://www.cdc.gov/growthcharts/clinical_charts.htm
For adults, BMI is interpreted as weight status categories that are not dependent on sex or age. Read more: How to interpret BMI for adult BMI
Why can’t healthy weight ranges be provided for children and teens?
Normal or healthy weight status is based on a BMI between the 5th and 85th percentiles on the CDC growth chart. It is difficult to provide healthy weight ranges for children and teens because the interpretation of BMI depends on weight, height, age, and sex.
What are the BMI trends for children and teens in the United States?
The prevalence of children and teens who measure in the 95th percentile or greater on the CDC growth charts has greatly increased over the past 40 years. Recently, however, this trend has leveled off and has even declined in certain age groups.
To learn more about child and teen obesity trends, visit Childhood Obesity Facts .
How can I tell if my child is overweight or obese?
CDC and the American Academy of Pediatrics (AAP) recommend the use of BMI to screen for overweight and obesity in children and teens age 2 through 19 years. For children under the age of 2 years old, consult the WHO standards . Although BMI is used to screen for overweight and obesity in children and teens, BMI is not a diagnostic tool. To determine whether the child has excess fat, further assessment by a trained health professional would be needed.
For information about the consequences of childhood obesity, its contributing factors and more, see Tips for Parents – Ideas and Tips to Help Prevent Childhood Obesity .
Can I determine if my child or teen is obese by using an adult BMI calculator?
In general, it’s not possible to do this.
The adult calculator provides only the BMI value (weight/height²) and not the BMI percentile that is needed to interpret BMI among children and teens. It is not appropriate to use the BMI categories for adults to interpret the BMI of children and teens.
However, if a child or teen has a BMI of ≥ 30 kg/m², the child is almost certainly obese. A BMI of 30 kg/m² is approximately the 95th percentile among 17-year-old girls and 18-year-old boys.
My two children have the same BMI values, but one is considered obese and the other is not. Why is that?
The interpretation of BMI varies by age and sex, so if the children are not the same age and the same sex, the same BMI value can have different meanings. For children of different ages or sexes, the same BMI can correspond to different BMI percentiles and possibly different weight status categories.
See the following graphic for an example for a 10-year-old boy and a 15-year-old boy who both have a BMI-for-age of 23. (Note that two children of different ages are plotted on the same growth chart to illustrate a point. Normally the measurement for only one child is plotted on a growth chart.)
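The conversion from a BMI value to an age- and sex-specific percentile uses the LMS method behind the CDC growth charts (Kuczmarski et al., reference 4): a z-score is computed from three smoothed parameters (L, M, S) tabulated by age and sex, then mapped to a percentile via the standard normal distribution. The sketch below uses hypothetical LMS parameters chosen only to illustrate how one BMI of 23 can land at different percentiles for a 10-year-old and a 15-year-old boy; they are not actual CDC table values:

```python
import math

def bmi_z_lms(bmi: float, L: float, M: float, S: float) -> float:
    """BMI-for-age z-score via the LMS method used for the CDC growth charts."""
    if L == 0:
        return math.log(bmi / M) / S
    return ((bmi / M) ** L - 1) / (L * S)

def percentile(z: float) -> float:
    """Convert a z-score to a percentile via the standard normal CDF."""
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical LMS parameters (illustrative only, not CDC table values):
z10 = bmi_z_lms(23.0, L=-2.2, M=16.6, S=0.12)   # 10-year-old boy, BMI 23
z15 = bmi_z_lms(23.0, L=-1.9, M=19.9, S=0.12)   # 15-year-old boy, BMI 23
print(round(percentile(z10), 1), round(percentile(z15), 1))
```

With these illustrative parameters, the younger boy's BMI of 23 falls above the 95th percentile (obese) while the older boy's falls below it, matching the point the graphic makes.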
What are the health consequences of obesity during childhood?
Health risks now
Childhood obesity can have a harmful effect on the body in a variety of ways:
- High blood pressure and high cholesterol, which are risk factors for cardiovascular disease (CVD). In one study, 70% of obese children had at least one CVD risk factor, and 39% had two or more.6
- Increased risk of impaired glucose tolerance, insulin resistance and type 2 diabetes.7
- Breathing problems, such as sleep apnea, and asthma.8,10
- Joint problems and musculoskeletal discomfort.8,11
- Fatty liver disease, gallstones, and gastroesophageal reflux (i.e., heartburn).7,8,9
- Psychological stress such as depression, behavioral problems, and issues in school.12,13,14
- Low self-esteem and low self-reported quality of life.12,14,15,16
- Impaired social, physical, and emotional functioning.12
Health risks later
- Obese children are more likely to become obese adults.17 Adult obesity is associated with a number of serious health conditions including heart disease, diabetes, and some cancers.18
- If children are overweight, obesity in adulthood is likely to be more severe.17
1. Garrow JS, Webster J. Quetelet's index (W/H²) as a measure of fatness. Int J Obes. 1985;9(2):147–153.
2. Freedman DS, Horlick M, Berenson GS. A comparison of the Slaughter skinfold-thickness equations and BMI in predicting body fatness and cardiovascular disease risk factor levels in children. Am J Clin Nutr. 2013;98(6):1417–1424.
3. Wohlfahrt-Veje C, et al. Body fat throughout childhood in 2647 healthy Danish children: agreement of BMI, waist circumference, skinfolds with dual X-ray absorptiometry. Eur J Clin Nutr. 2014;68(6):664–670.
4. Kuczmarski RJ, et al. 2000 CDC Growth Charts for the United States: methods and development. Vital Health Stat 11. 2002;11(246):1–190.
5. Ogden CL, Flegal KM, Carroll MD, Johnson CL. Prevalence and trends in overweight among US children and adolescents, 1999–2000. JAMA. 2002;288:1728–1732.
6. Freedman DS, Mei Z, Srinivasan SR, Berenson GS, Dietz WH. Cardiovascular risk factors and excess adiposity among overweight children and adolescents: the Bogalusa Heart Study. J Pediatr. 2007;150(1):12–17.e2.
7. Whitlock EP, Williams SB, Gold R, Smith PR, Shipman SA. Screening and interventions for childhood overweight: a summary of evidence for the US Preventive Services Task Force. Pediatrics. 2005;116(1):e125–144.
8. Han JC, Lawlor DA, Kimm SY. Childhood obesity. Lancet. 2010;375(9727):1737–1748.
9. Vos MB, McClain CJ. Nutrition and nonalcoholic fatty liver disease in children. Curr Gastroenterol Rep. 2008;10(3):308–315.
10. Sutherland ER. Obesity and asthma. Immunol Allergy Clin North Am. 2008;28(3):589–602, ix.
11. Taylor ED, Theim KR, Mirch MC, et al. Orthopedic complications of overweight in children and adolescents. Pediatrics. 2006;117(6):2167–2174.
12. Morrison KM, et al. Association of depression and health related quality of life with body composition in children and youth with obesity. J Affect Disord. 2015;172:18–23.
13. Mustillo S, et al. Obesity and psychiatric disorder: developmental trajectories. Pediatrics. 2003;111(4):851–859.
14. Halfon N, Larson K, Slusser W. Associations between obesity and comorbid mental health, developmental, and physical health conditions in a nationally representative sample of US children aged 10 to 17. Acad Pediatr. 2013;13(1):6–13.
15. Schwimmer JB, Burwinkle TM, Varni JW. Health-related quality of life of severely obese children and adolescents. JAMA. 2003;289(14):1813–1819.
16. Taylor VH, et al. The impact of obesity on quality of life. Best Pract Res Clin Endocrinol Metab. 2013;27(2):139–146.
17. Cunningham SA, Kramer MR, Venkat Narayan KM. Incidence of childhood obesity in the United States. N Engl J Med. 2014;370:403–411.
18. Kelsey MM, Zaepfel A, Bjornstad P, Nadeau KJ. Age-related consequences of childhood obesity. Gerontology. 2014;60(3):222–228.
- Page last reviewed: July 3, 2018
- Page last updated: October 24, 2018
- Content source:
- Division of Nutrition, Physical Activity, and Obesity, National Center for Chronic Disease Prevention and Health Promotion