For a convex function f, any x, y in its domain, and λ ∈ [0, 1],

$$f(\lambda x + (1 - \lambda) y) \le \lambda f(x) + (1 - \lambda) f(y).$$

For a strictly convex function, the inequality is strict whenever x ≠ y and λ ∈ (0, 1):

$$f(\lambda x + (1 - \lambda) y) < \lambda f(x) + (1 - \lambda) f(y).$$
Geometrically, convexity of a set means that every point on the straight line segment connecting any two points of the set also lies in the set; for a function, it means the chord between any two points on the graph lies on or above the graph. Strict convexity excludes linear and affine functions, as well as functions with linear/affine pieces in their graphs. (How would this extend to non-Euclidean geometries?)
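As a quick numeric sanity check of the inequality above, here is a short Python sketch (the function names are illustrative): the "chord gap" λf(x) + (1−λ)f(y) − f(λx + (1−λ)y) is non-negative for the convex x², and identically zero for an affine function, which is exactly why strict convexity excludes the affine case.

```python
import numpy as np

def convexity_gap(f, x, y, lam):
    """Chord value minus function value at the interpolated point.

    Non-negative when f is convex; strictly positive (for x != y and
    0 < lam < 1) when f is strictly convex; exactly zero when f is affine.
    """
    return lam * f(x) + (1 - lam) * f(y) - f(lam * x + (1 - lam) * y)

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=2)
    lam = rng.uniform(0.01, 0.99)
    # x^2 is strictly convex: the gap is never negative.
    assert convexity_gap(np.square, x, y, lam) >= 0
    # An affine function is convex but not strictly convex: the gap is 0.
    assert abs(convexity_gap(lambda t: 3 * t + 1, x, y, lam)) < 1e-12
```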
A function f is α-strongly convex with respect to a norm ||.|| if, for α > 0,

$$f(y) \ge f(x) + \nabla f(x)^\top (y - x) + \frac{\alpha}{2} \|y - x\|^2 \quad \text{for all } x, y.$$
Strong convexity extends strict convexity. For twice-differentiable functions, it implies that $\nabla^2 f(x) \succeq \alpha I$ (with the Euclidean norm). As Bubeck explains [1], strongly convex functions speed up convergence of first-order methods: larger values of α imply a larger gradient, and hence a larger step, further away from the optimum.
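A minimal sketch of this effect, assuming a quadratic f(x) = ½ xᵀAx whose Hessian eigenvalues lie in [α, β], so that f is α-strongly convex and β-smooth in the Euclidean norm: gradient descent with constant step 1/β contracts the distance to the optimum by a factor of at most (1 − α/β) per step, so a larger α gives a faster geometric rate.

```python
import numpy as np

# f(x) = 0.5 * x^T A x with Hessian eigenvalues alpha and beta:
# f is alpha-strongly convex and beta-smooth (Euclidean norm).
alpha, beta = 1.0, 10.0
A = np.diag([alpha, beta])

def grad(x):
    return A @ x

x = np.array([1.0, 1.0])
for _ in range(100):
    x = x - (1.0 / beta) * grad(x)   # constant step size 1/beta

# Strong convexity guarantees a geometric (linear) rate:
# ||x_t - x*|| <= (1 - alpha/beta)^t * ||x_0 - x*||, with x* = 0 here.
print(np.linalg.norm(x))             # tiny after 100 steps
```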
Strong smoothness is another property of certain convex functions:
A function f is β-smoothly convex with respect to a norm ||.|| if, for β > 0, its gradient is β-Lipschitz:

$$\|\nabla f(x) - \nabla f(y)\|_* \le \beta \|x - y\| \quad \text{for all } x, y.$$
This definition gives a lower bound on the improvement in one step of gradient descent with step size 1/β [1]:

$$f\left(x - \tfrac{1}{\beta} \nabla f(x)\right) - f(x) \le -\frac{1}{2\beta} \|\nabla f(x)\|^2.$$
Alternatively, a β-smoothly convex function satisfies [lemma 3.3, 1]:

$$\left| f(x) - f(y) - \nabla f(y)^\top (x - y) \right| \le \frac{\beta}{2} \|x - y\|^2.$$
Under certain conditions, α-strong convexity and β-smoothness are dual notions. For now, we’ll state the result without discussion.
If f is a closed and convex function, then f is α-strongly convex with respect to a norm ||.|| if and only if its Fenchel conjugate f* is 1/α-strongly smooth with respect to the dual norm ||.||* (corollary 7 in [4]).
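A small one-dimensional illustration of this duality, with the conjugate approximated by maximizing over a grid (the constants are chosen for the example): for f(x) = (α/2)x², which is α-strongly convex, f*(y) = supₓ [yx − f(x)] works out to y²/(2α), a 1/α-smooth quadratic.

```python
import numpy as np

alpha = 2.0
xs = np.linspace(-50, 50, 200001)     # grid over which to take the sup

def conjugate(f_vals, y):
    # f*(y) = sup_x [ y*x - f(x) ], approximated on the grid
    return np.max(y * xs - f_vals)

f_vals = 0.5 * alpha * xs ** 2        # f is alpha-strongly convex
for y in [-3.0, -1.0, 0.5, 2.0]:
    approx = conjugate(f_vals, y)
    exact = y ** 2 / (2 * alpha)      # f* is a (1/alpha)-smooth quadratic
    assert abs(approx - exact) < 1e-3
```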
[1] S. Bubeck, Theory of Convex Optimization for Machine Learning, section 3.4
[2] Wikipedia, Convex function
[3] S. Boyd and L. Vandenberghe, Convex Optimization, section 9.1.2
[4] S. M. Kakade, S. Shalev-Shwartz, A. Tewari. On the duality of strong convexity and strong smoothness: Learning applications and matrix regularization. Technical Report, 2009