I’ve just returned to the UK after an excellent stay at the University of British Columbia. More about that will follow in some posts which are being queued. Anyway, I flew back in time to attend the last day of the camp held at Oundle School to select the UK team for this year’s International Mathematical Olympiad, to be held in Cape Town in early July. I chose to give a short session on inequalities, which is a topic I did not enjoy as a student and do not enjoy now, but perhaps that makes it a particularly suitable choice?
We began with a discussion of convexity. Very occasionally in olympiads, and slightly more often in real life, an inequality arises which can be proved by showing that a given function is convex in each of its arguments, hence its maximum must be attained at a boundary value of each variable.
In general though, our main experience of convexity will be through the medium of Jensen’s inequality. A worthwhile check is to consider one form of the statement of Jensen’s inequality, with two arguments. We are always given a convex function f defined on an interval $I=[a,b]$, points $x,y\in I$, and weights $p,q\ge 0$ which sum to 1. Then

$f(px+qy)\le p\,f(x)+q\,f(y).$
How do we prove this? Well, in fact this is the natural definition of convexity for a function. There had initially been vague murmurings that convexity should be defined as a property of the second derivative of the function. But this is somewhat unsatisfactory, as the function $f(x)=|x|$ is certainly convex, but its second derivative does not exist at $x=0$. One could argue that the second derivative may not be finite at $x=0$, but is nonetheless positive, by defining it as a limit which happens to be infinite in this case. However, I feel it is uncontroversial to take the two-point case of Jensen given above as the definition of convexity. It is after all a geometric property, so why raise objections to a geometric definition?
The general statement of Jensen’s inequality, with the natural definitions, is

$f\left(\sum_{i=1}^n p_i x_i\right)\le \sum_{i=1}^n p_i f(x_i),$

for $x_1,\ldots,x_n\in I$ and weights $p_1,\ldots,p_n\ge 0$ summing to 1.
This is sometimes called Weighted Jensen in the olympiad community, with ‘ordinary’ Jensen following when the weights are all 1/n. In a probabilistic context, we write

$f(\mathbb{E}X)\le \mathbb{E}f(X),$

for X any random variable supported on the domain of f. Naturally, X can be continuous as well as discrete, giving an integral version of the discretely weighted statement.
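As a quick sanity check (my own, not part of the post), one can verify the weighted statement numerically for a particular convex function, say $f(x)=e^x$, on randomly sampled points and weights:

```python
import math
import random

random.seed(0)
f = math.exp  # a convex function to test with

for _ in range(1000):
    n = random.randint(2, 6)
    xs = [random.uniform(-3, 3) for _ in range(n)]
    ws = [random.random() for _ in range(n)]
    total = sum(ws)
    ps = [w / total for w in ws]  # normalise so the weights sum to 1

    # Weighted Jensen: f(sum p_i x_i) <= sum p_i f(x_i)
    lhs = f(sum(p * x for p, x in zip(ps, xs)))
    rhs = sum(p * f(x) for p, x in zip(ps, xs))
    assert lhs <= rhs + 1e-12
```

Of course this proves nothing, but it is a useful way to catch a mis-stated inequality before trying to prove it.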
Comparing ‘ordinary’ Jensen and ‘weighted’ Jensen, we see an example of the situation where the more general result is easier to prove. As is often the case in these situations, this arises because the more general conditions allow more ‘elbow room’ to perform an inductive argument. A stronger statement means that assuming the induction hypothesis is more useful! Anyway, I won’t digress too far onto the proof of discrete ‘weighted’ Jensen as it is a worthwhile exercise for olympiad students.
What I wanted to discuss principally was an inequality due to Tiberiu Popoviciu: for f convex on an interval I, and $x,y,z\in I$,

$f(x)+f(y)+f(z)+3f\left(\frac{x+y+z}{3}\right)\ \ge\ 2\left[f\left(\frac{x+y}{2}\right)+f\left(\frac{y+z}{2}\right)+f\left(\frac{z+x}{2}\right)\right].$
We might offer the following highly vague intuition. Jensen asserts that for sums of the form $\sum_i f(x_i)$, you get larger sums if the points are more spread out. The effect of taking the mean is immediately to bring all the points as close together as possible. But Popoviciu says that this effect is so pronounced that even with only half the weight on the outer points (and the rest as close together as possible), it still dominates a system with the points twice as close together.
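The inequality is easy to probe numerically before attempting a proof. Here is a sketch of my own (not from the post) checking it for the convex function $f(x)=e^x$ on random triples:

```python
import math
import random

random.seed(1)

def popoviciu_gap(f, x, y, z):
    """LHS minus RHS of Popoviciu's inequality; nonnegative for convex f."""
    lhs = f(x) + f(y) + f(z) + 3 * f((x + y + z) / 3)
    rhs = 2 * (f((x + y) / 2) + f((y + z) / 2) + f((z + x) / 2))
    return lhs - rhs

for _ in range(10000):
    x, y, z = (random.uniform(-3, 3) for _ in range(3))
    assert popoviciu_gap(math.exp, x, y, z) >= -1e-12
```

Note that for a linear function the gap is identically zero, which is consistent with the strategy below of reducing to piecewise linear functions.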
So how to prove it? I mentioned that there is, unsurprisingly, a weighted version of this result, which was supposed to act as a hint to avoid getting too hung up about midpoints. One can draw nice diagrams with a triangle of points (x,f(x)), (y,f(y)), (z,f(z)) and draw midpoints, medians and centroids, but the consensus seemed to be that this didn’t help much.
I had tried breaking up the LHS into three symmetric portions, and using weighted Jensen to obtain terms on the RHS, but this also didn’t yield much, so I warned the students against this approach unless they had a specific reason to suppose it might succeed.
Fortunately, several of the students decided to ignore this advice, and though most fell into a similar problem I had experienced, Joe found that by actively avoiding symmetry, a decomposition into two applications of Jensen could be made. First we assume WLOG that

$x\le y\le z \quad\text{and}\quad y\le \frac{x+y+z}{3}$

(the other case follows by the symmetric argument with the roles of x and z exchanged), and so by standard Jensen, we have

$2f\left(\frac{x+y}{2}\right)\le f(x)+f(y).$

It remains to show

$3f\left(\frac{x+y+z}{3}\right)+f(z)\ \ge\ 2f\left(\frac{x+z}{2}\right)+2f\left(\frac{y+z}{2}\right).$
If we divide by 4, then we have on each side a weighted average of values of f, that is, an expression that looks like the RHS of Weighted Jensen; moreover the two weighted means of the arguments agree, since

$\frac34\cdot\frac{x+y+z}{3}+\frac14\cdot z\ =\ \frac12\cdot\frac{x+z}{2}+\frac12\cdot\frac{y+z}{2}.$

At this point, it is worth getting geometric again. One way to visualise Jensen is that for a convex function, a chord between two points on the graph lies above the graph. (For standard Jensen with two variables, in particular the midpoint of the chord lies above the function.) But indeed, suppose we have values $a\le b\le c\le d$; then the chord between

$(a,f(a))\ \text{and}\ (d,f(d))$

lies above the chord between

$(b,f(b))\ \text{and}\ (c,f(c))$

on the interval $[b,c]$ where both are defined. Making precisely such a comparison, with $a=\frac{x+y+z}{3}$, $b=\frac{x+z}{2}$, $c=\frac{y+z}{2}$ and $d=z$ (our case assumption ensures both midpoints do lie in this interval), gives the result required above. If you want to be more formal about it, you could consider replacing the values of f between $b$ and $c$ with a straight line, then applying Jensen to this modified (still convex) function. Linearity allows us to move the weighting in and out of the brackets on the right hand side, whenever the weighted mean lies in this straight line interval.
Hopefully the diagram above helps. Note that we can compare the heights of the blue points (with the same abscissa), but obviously not the red points!
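Both steps of this decomposition are easy to check numerically. The following sketch (my own, with $f=e^x$ as the test function) verifies the easy Jensen step, the remaining chord-comparison inequality, and the geometric facts the comparison relies on:

```python
import math
import random

random.seed(2)
f = math.exp  # a convex function to test with

for _ in range(10000):
    x, y, z = sorted(random.uniform(-3, 3) for _ in range(3))
    m = (x + y + z) / 3
    if y > m:
        continue  # the other case is symmetric: swap the roles of x and z
    # Step 1: standard two-point Jensen
    assert 2 * f((x + y) / 2) <= f(x) + f(y) + 1e-12
    # Step 2: the remaining inequality, proved by the chord comparison
    assert 3 * f(m) + f(z) + 1e-12 >= 2 * f((x + z) / 2) + 2 * f((y + z) / 2)
    # Both inner points lie in [m, z] ...
    assert m - 1e-12 <= (x + z) / 2 <= z + 1e-12
    assert m - 1e-12 <= (y + z) / 2 <= z + 1e-12
    # ... and the two weighted means of the arguments coincide
    assert abs((3 * m + z) / 4 - ((x + z) / 2 + (y + z) / 2) / 2) < 1e-9
```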
In any case, I was sceptical about whether this method would work for the weighted version of Popoviciu’s inequality: for weights $p,q,r>0$ summing to 1,

$pf(x)+qf(y)+rf(z)+f(px+qy+rz)\ \ge\ (p+q)f\left(\frac{px+qy}{p+q}\right)+(q+r)f\left(\frac{qy+rz}{q+r}\right)+(r+p)f\left(\frac{rz+px}{r+p}\right).$
It turns out though, that it works absolutely fine. I would be interested to see a solution to the original statement making use of the medians and centroid, as then by considering general Cevians the more general inequality might follow.
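The weighted statement is equally easy to probe numerically; this sketch of mine checks the weighted inequality as stated above, again with $f=e^x$ and random positive weights normalised to sum to 1:

```python
import math
import random

random.seed(4)
f = math.exp  # a convex function to test with

for _ in range(10000):
    x, y, z = (random.uniform(-3, 3) for _ in range(3))
    p, q, r = (random.random() + 0.01 for _ in range(3))
    s = p + q + r
    p, q, r = p / s, q / s, r / s  # normalise the weights

    lhs = p * f(x) + q * f(y) + r * f(z) + f(p * x + q * y + r * z)
    rhs = ((p + q) * f((p * x + q * y) / (p + q))
           + (q + r) * f((q * y + r * z) / (q + r))
           + (r + p) * f((r * z + p * x) / (r + p)))
    assert lhs >= rhs - 1e-9
```

Taking $p=q=r=\frac13$ and multiplying through by 3 recovers the unweighted statement.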
That’s all great, but my main aim had been to introduce one trick which somewhat trivialises the problem. Note that in the original statement of Popoviciu, we have a convex function, but we only evaluate it at seven points: x, y, z, the three pairwise midpoints, and the mean. So for given x, y, z, it makes no difference if we replace the function f with a piecewise linear function going through the correct seven points. This means that if we can prove the inequality for every convex piecewise linear function with at most eight linear parts, then we are done.
(There’s a subtlety here. Note that we will prove the inequality for all such functions and all x,y,z, but we will only use this result when x,y,z and their means are the points where the function changes gradient.)
So we consider the function

$g(x) = a\,(x-x_0)_+ := a\max(x-x_0,\,0),$

for some positive value of a and any real $x_0$. It is not much effort to check that this satisfies Popoviciu. It is also easy to check that any constant function, and any linear function $g(x)=bx$, satisfy the inequality (with equality, in the linear case). We now prove that we can write the piecewise linear function as a sum of functions which each satisfy the inequality, and hence, since the inequality is preserved under addition of such functions, the piecewise linear function satisfies the inequality.
Suppose we have a convex piecewise linear function h(x), where $a_1<a_2<\cdots<a_n$ are the points where the derivative changes. We write

$c_i = h'(a_i^+)-h'(a_i^-),$

for the change in gradient of h around the point $a_i$. Crucially, because h is convex, we have $c_i>0$. Then we can write h as

$h(x) = C + bx + \sum_{i=1}^n c_i\,(x-a_i)_+,$

where b is the gradient of h to the left of $a_1$, for a suitable choice of the constant C. According to [1], this representation appears as an intermediate step in a short paper of Hardy, Littlewood and Pólya, which I can’t currently find online. Note that inequalities are preserved under addition (but not under subtraction), so it follows that h satisfies Popoviciu, and so the original function f satisfies it too for the values of x, y, z chosen. These were arbitrary (though they were used to construct h), and hence f satisfies the inequality for all x, y, z.
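The decomposition is straightforward to implement. Here is a sketch (function names and the example parameters are my own, purely illustrative) that builds a convex piecewise linear function from its gradient changes and confirms Popoviciu for it on random triples:

```python
import random

def hinge(x, a):
    """The building block (x - a)_+ = max(x - a, 0)."""
    return max(x - a, 0.0)

def make_h(C, b, pieces):
    """Convex piecewise linear h(x) = C + b*x + sum_i c_i*(x - a_i)_+,
    where pieces is a list of (a_i, c_i) with every gradient change c_i > 0."""
    assert all(c > 0 for _, c in pieces)
    return lambda x: C + b * x + sum(c * hinge(x, a) for a, c in pieces)

# Hypothetical example: leftmost gradient -1, gradient jumps of +1 at 0 and +2 at 1
h = make_h(C=0.5, b=-1.0, pieces=[(0.0, 1.0), (1.0, 2.0)])

def popoviciu_gap(f, x, y, z):
    """LHS minus RHS of Popoviciu's inequality; nonnegative for convex f."""
    lhs = f(x) + f(y) + f(z) + 3 * f((x + y + z) / 3)
    rhs = 2 * (f((x + y) / 2) + f((y + z) / 2) + f((z + x) / 2))
    return lhs - rhs

random.seed(3)
for _ in range(10000):
    x, y, z = (random.uniform(-5.0, 5.0) for _ in range(3))
    assert popoviciu_gap(h, x, y, z) >= -1e-9
```

Since the constant, the linear part, and each positively weighted hinge satisfy the inequality separately, the check on their sum is exactly what the addition argument above predicts.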
Some further generalisations can be found in [1]. With more variables, there are more interesting combinatorial aspects that must be checked, regarding the order of the various weighted means.
[1] – D. Grinberg – Generalizations of Popoviciu’s Inequality. arXiv