# l0-Norm, l1-Norm, l2-Norm, … , l-infinity Norm

I’ve been working with norms a lot lately, so it’s time to talk about them. In this post we are going to discuss a whole family of norms.

## What is a norm?

Mathematically, a norm is a measure of the total size or length of a vector in a vector space (or of a matrix). For simplicity, we can say that the higher the norm, the bigger the (values in the) vector or matrix. Norms come in many forms and under many names, including some popular ones: Euclidean distance, Mean-Squared Error, etc.

Most of the time you will see a norm appear in an equation like this:

$\left \| x \right \|$ where $x$ can be a vector or a matrix.

For example, the Euclidean norm of the vector $a = \begin{bmatrix} 3 \\ -2 \\ 1 \end{bmatrix}$ is $\left \| a \right \|_2=\sqrt{3^2+(-2)^2+1^2}=3.742$, which is the size of vector $a$.

The above example shows how to compute a Euclidean norm, formally called the $l_2$-norm. There are many other types of norm beyond the scope of this post; in fact, for every real number $p \geq 1$ there is a corresponding $l_p$-norm. (Notice the emphasised phrase real number: $p$ is not limited to integers. The same formula is also used informally for some values outside this range, as we will see below.)

Formally the $l_p$-norm of $x$ is defined as:

$\left \| x \right \|_p = \sqrt[p]{\sum_{i}\left | x_i \right |^p}$  where $p \in \mathbb{R}$

That’s it! The p-th root of the sum of all elements’ magnitudes raised to the p-th power is what we call a norm.
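
If you want to play with this definition, here is a minimal Python sketch (illustrative only; the helper name and example values are my own) that evaluates the $l_p$-norm directly from the formula:

```python
# A minimal sketch of the l_p-norm formula above (helper name is my own).
import numpy as np

def lp_norm(x, p):
    """p-th root of the sum of |x_i|^p, valid for any real p > 0."""
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

a = np.array([3.0, -2.0, 1.0])
print(lp_norm(a, 2))         # 3.7416... -- the Euclidean norm from the example above
print(np.linalg.norm(a, 2))  # NumPy's built-in norm agrees
```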

The interesting point is that even though all the $l_p$-norms look very similar to one another, their mathematical properties are very different, and thus their applications differ dramatically too. We are now going to look into some of these norms in detail.

## l0-norm

The first norm we are going to discuss is the $l_0$-norm. By definition, the $l_0$-norm of $x$ is

$\left \| x \right \|_0 = \sqrt[0]{\sum_{i}\left | x_i \right |^0}$

Strictly speaking, the $l_0$-norm is not actually a norm. It is a cardinality function whose definition merely takes the form of an $l_p$-norm, though many people call it a norm anyway. It is a bit tricky to work with because of the zeroth power and zeroth root that appear in it. Obviously any non-zero $x_i$ becomes one when raised to the zeroth power, but the ill-defined zeroth power of zero, and especially the zeroth root, mess things up here. So in practice, most mathematicians and engineers use this definition of the $l_0$-norm instead:

$\left \| x \right \|_0 = \#\{i \mid x_i \neq 0\}$

that is, the total number of non-zero elements in a vector.
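
In code, this counting definition is a one-liner. A quick NumPy sketch (example values assumed):

```python
# Counting non-zero entries (example values assumed).
import numpy as np

x = np.array([0.0, 3.0, 0.0, -2.0, 0.0])
print(np.count_nonzero(x))   # 2 -- the "l0-norm" of x
print(np.linalg.norm(x, 0))  # 2.0 -- NumPy accepts ord=0 for vectors, same count
```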

Because it counts the non-zero elements, there are many applications that use the $l_0$-norm. Lately it has come into even sharper focus because of the rise of Compressive Sensing, which tries to find the sparsest solution of an under-determined linear system. The sparsest solution means the solution with the fewest non-zero entries, i.e. the lowest $l_0$-norm. This problem is usually regarded as an optimisation problem over the $l_0$-norm, or $l_0$-optimisation.

## l0-optimisation

Many applications, including Compressive Sensing, try to minimise the $l_0$-norm of a vector subject to some constraints, hence the name “$l_0$-minimisation”. A standard minimisation problem is formulated as:

$\min \left \| x \right \|_0$ subject to $Ax = b$

However, doing so is not an easy task. Because the $l_0$-norm lacks a convenient mathematical representation, $l_0$-minimisation is regarded by computer scientists as an NP-hard problem; simply put, no efficient algorithm is known that solves it exactly in general.

In many cases, the $l_0$-minimisation problem is therefore relaxed into a higher-order norm problem such as $l_1$-minimisation or $l_2$-minimisation.

## l1-norm

Following the definition of norms, the $l_1$-norm of $x$ is defined as

$\left \| x \right \|_1 = \sum_{i} \left | x_i \right |$

This norm is quite common in the norm family. It has many names and many forms across various fields; “Manhattan norm” is its best-known nickname. If the $l_1$-norm is computed for the difference between two vectors or matrices, that is

$SAD(x_1,x_2) = \left \| x_1-x_2 \right \|_1 = \sum_i \left | x_{1_i}-x_{2_i} \right |$

it is called Sum of Absolute Difference (SAD) among computer vision scientists.

In the more general case of signal difference measurement, it may be normalised by the number of elements:

$MAE(x_1,x_2) = \frac{1}{n} \left \| x_1-x_2 \right \|_1 = \frac {1} {n} \sum_i \left | x_{1_i} - x_{2_i} \right |$ where $n$ is the size (number of elements) of $x$.

which is known as Mean-Absolute Error (MAE).
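
As a small sketch (with made-up example vectors), SAD and MAE are straightforward in NumPy:

```python
# SAD and MAE between two example signals (values assumed).
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.5, 1.0, 3.0, 2.0])

sad = np.sum(np.abs(x1 - x2))  # Sum of Absolute Difference: ||x1 - x2||_1
mae = sad / x1.size            # Mean-Absolute Error: SAD divided by n
print(sad, mae)                # 3.5 0.875
```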

## l2-norm

The most popular norm of all is the $l_2$-norm. It is used in almost every field of engineering and science. Following the basic definition, the $l_2$-norm is defined as

$\left \| x \right \|_2 = \sqrt{\sum_{i}x_i^2}$

The $l_2$-norm is well known as the Euclidean norm, and it is used as a standard quantity for measuring vector differences. As with the $l_1$-norm, if the Euclidean norm is computed for a vector difference, it is known as the Euclidean distance:

$\left \| x_1-x_2 \right \|_2 = \sqrt{\sum_i (x_{1_i}-x_{2_i})^2}$

or, in its squared form, as the Sum of Squared Difference (SSD) among Computer Vision scientists:

$SSD(x_1,x_2) = \left \| x_1-x_2 \right \|_2^2 = \sum_i (x_{1_i}-x_{2_i})^2$

Its most well-known application in the signal processing field is the Mean-Squared Error (MSE) measurement, which is used to assess the similarity, quality, or correlation between two signals. The MSE is

$MSE(x_1,x_2) = \frac{1}{n} \left \| x_1-x_2 \right \|_2^2 = \frac{1}{n} \sum_i (x_{1_i}-x_{2_i})^2$
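
Again as a small sketch (same made-up vectors as before), SSD and MSE in NumPy:

```python
# SSD and MSE between the same example signals (values assumed).
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.5, 1.0, 3.0, 2.0])

ssd = np.sum((x1 - x2) ** 2)  # Sum of Squared Difference: ||x1 - x2||_2^2
mse = ssd / x1.size           # Mean-Squared Error: SSD divided by n
print(ssd, mse)               # 5.25 1.3125
```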

As previously discussed in the $l_0$-optimisation section, because of issues from both the computational and the mathematical points of view, many $l_0$-optimisation problems are relaxed into $l_1$- and $l_2$-optimisation instead. Because of this, we will now discuss the optimisation of $l_2$.

## l2-optimisation

As in the $l_0$-optimisation case, the problem of minimising the $l_2$-norm is formulated as

$\min \left \| x \right \|_2$ subject to $Ax = b$

Assuming that the constraint matrix $A$ has full rank and fewer rows than columns, this is an underdetermined system with infinitely many solutions. The goal in this case is to draw out the best solution, i.e. the one with the lowest $l_2$-norm, from these infinitely many candidates. This could be very tedious work if it had to be computed directly, but luckily there is a mathematical trick that helps a lot here.

Using the trick of Lagrange multipliers, we can define a Lagrangian

$\mathfrak{L}(\boldsymbol{x}) = \left \| \boldsymbol{x} \right \|_2^2+\lambda^{T}(\boldsymbol{Ax}-\boldsymbol{b})$

where $\lambda$ is the vector of introduced Lagrange multipliers. Taking the derivative of this equation with respect to $\boldsymbol{x}$ and setting it equal to zero gives the optimal solution

$\hat{\boldsymbol{x}}_{opt} = -\frac{1}{2} \boldsymbol{A}^{T} \lambda$

plug this solution into the constraint to get

$\boldsymbol{A}\hat{\boldsymbol{x}}_{opt} = -\frac{1}{2}\boldsymbol{AA}^{T}\lambda=\boldsymbol{b}$

$\lambda=-2(\boldsymbol{AA}^{T})^{-1}\boldsymbol{b}$

and finally

$\hat{\boldsymbol{x}}_{opt}=\boldsymbol{A}^{T} (\boldsymbol{AA}^{T})^{-1} \boldsymbol{b}=\boldsymbol{A}^{+} \boldsymbol{b}$

Using this equation, we can now compute an optimal solution of the $l_2$-optimisation problem instantly. The matrix $\boldsymbol{A}^{+}=\boldsymbol{A}^{T}(\boldsymbol{AA}^{T})^{-1}$ is well known as the Moore-Penrose pseudoinverse, and the problem itself is usually known as the Least Squares problem, Least Squares regression, or Least Squares optimisation.
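
Here is a minimal NumPy sketch (random data assumed) of this closed-form solution, checking both the constraint and agreement with NumPy’s built-in pseudoinverse:

```python
# Minimum-l2-norm solution of an underdetermined system Ax = b (random data assumed).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))  # 3 equations, 6 unknowns: infinitely many solutions
b = rng.standard_normal(3)

# x_opt = A^T (A A^T)^{-1} b, the closed-form solution derived above
x_opt = A.T @ np.linalg.solve(A @ A.T, b)

print(np.allclose(A @ x_opt, b))                  # True: the constraint holds
print(np.allclose(x_opt, np.linalg.pinv(A) @ b))  # True: matches the pseudoinverse
```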

However, even though the solution of the Least Squares method is easy to compute, it is not necessarily the best solution for every purpose. Because of the smooth nature of the $l_2$-norm itself, the minimiser tends to spread its energy across many entries, so the solution is rarely sparse.

On the contrary, $l_1$-optimisation can provide a much better result in this respect.

## l1-optimisation

As usual, the $l_1$-minimisation problem is formulated as

$\min \left \| x \right \|_1$ subject to $Ax = b$

Because the $l_1$-norm, unlike the $l_2$-norm, is not smooth, its minimiser tends to land on sparse points, so the solution of this problem is much better suited to sparse recovery, and in many settings it is unique.

However, even though the $l_1$-minimisation problem has almost the same form as the $l_2$-minimisation, it is much harder to solve. Because the objective is not a smooth function, the closed-form trick we used to solve the $l_2$ problem is no longer valid, and the only way left is to search for the solution numerically among the pool of infinitely many feasible candidates.

Since there is no easy closed-form solution, the usefulness of $l_1$-optimisation was very limited for decades. Only recently has the advancement of computers with high computational power allowed us to “sweep” through the solution space efficiently, using many helpful algorithms, namely convex optimisation algorithms such as linear programming and non-linear programming. Many applications that rely on $l_1$-optimisation, including Compressive Sensing, are now possible.
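
For example, the problem $\min \|x\|_1$ subject to $Ax=b$ can be rewritten as a linear program by bounding each $|x_i|$ with an auxiliary variable $t_i$. The sketch below (my own illustrative setup, not one of the toolboxes mentioned next) does this with SciPy:

```python
# l1-minimisation (basis pursuit) as a linear program (illustrative setup, my own).
# Trick: introduce t with -t <= x <= t and minimise sum(t), so sum(t) = ||x||_1 at optimum.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 15, 30                    # underdetermined: 15 equations, 30 unknowns
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[3, 17, 25]] = [1.5, -2.0, 0.7]  # sparse ground truth
b = A @ x_true

# Decision variables z = [x, t]
c = np.concatenate([np.zeros(n), np.ones(n)])  # minimise sum(t)
A_eq = np.hstack([A, np.zeros((m, n))])        # Ax = b
I = np.eye(n)
A_ub = np.vstack([np.hstack([I, -I]),          #  x - t <= 0
                  np.hstack([-I, -I])])        # -x - t <= 0
b_ub = np.zeros(2 * n)
bounds = [(None, None)] * n + [(0, None)] * n  # x free, t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b, bounds=bounds)
x_l1 = res.x[:n]
print(np.round(x_l1, 3))  # typically recovers the sparse x_true up to numerical noise
```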

There are many toolboxes for $l_1$-optimisation available nowadays. These toolboxes usually use different approaches and/or algorithms to solve the same question. Examples of these toolboxes are l1-magic, SparseLab, and ISAL1.

We have now discussed several members of the norm family: the $l_0$-norm, the $l_1$-norm, and the $l_2$-norm. It’s time to move on to the next one. As discussed at the very beginning, there can be an l-anything norm following the same basic definition, so it would take a lot of time to talk about all of them. Fortunately, apart from the $l_0$-, $l_1$-, and $l_2$-norms, the rest of them are usually uncommon and don’t have as many interesting properties to look at. So we are going to look at the extreme case of the family, the $l_{\infty}$-norm (l-infinity norm).

## l-infinity norm

As always, the definition of the $l_{\infty}$-norm is

$\left \| x \right \|_{\infty} = \sqrt[\infty]{\sum_i \left | x_i \right |^{\infty}}$

Now this definition looks tricky again, but actually it is quite straightforward. Consider the vector $\boldsymbol{x}$ and let $x_j$ be the entry of $\boldsymbol{x}$ with the largest magnitude. By the property of infinity itself, we can say that

$\left | x_j \right |^{\infty}\gg \left | x_i \right |^{\infty}$ $\forall i \neq j$

then

$\sum_i \left | x_i \right |^{\infty} = \left | x_j \right |^{\infty}$

then

$\left \| x \right \|_{\infty} = \sqrt[\infty]{\sum_i \left | x_i \right |^{\infty}} = \sqrt[\infty]{\left | x_j \right |^{\infty}} = \left | x_j \right |$

Now we can simply say that the $l_{\infty}$-norm is

$\left \| x \right \|_{\infty} = \max_i \left | x_i \right |$

that is, the maximum magnitude among the entries of the vector. That surely demystifies the meaning of the $l_{\infty}$-norm.
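
To wrap up, a short sketch (example vector assumed) that puts the whole family side by side using NumPy’s built-in norms:

```python
# The whole family side by side (example vector assumed).
import numpy as np

x = np.array([3.0, -2.0, 0.0, 1.0])
print(np.linalg.norm(x, 0))       # 3.0 -- l0: number of non-zero entries
print(np.linalg.norm(x, 1))       # 6.0 -- l1: sum of magnitudes
print(np.linalg.norm(x, 2))       # 3.7416... -- l2: Euclidean length
print(np.linalg.norm(x, np.inf))  # 3.0 -- l-infinity: largest magnitude
```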

Now that we have discussed the whole family of norms from $l_0$ to $l_{\infty}$, I hope this discussion helps in understanding the meaning of a norm, its mathematical properties, and its real-world implications.

Mathematical Norm – Wikipedia

Mathematical Norm – MathWorld

Michael Elad – “Sparse and Redundant Representations : From Theory to Applications in Signal and Image Processing” , Springer, 2010.

Linear Programming – MathWorld

Compressive Sensing – Rice University

Edit (15/02/15): Corrected inaccuracies in the content.

## 98 thoughts on “l0-Norm, l1-Norm, l2-Norm, … , l-infinity Norm”

1. Aras says:

That was useful

2. Fang says:

There are many proper nouns that appear in papers frequently and now I finally understand the meaning of these nouns.

3. Brian says:

Thank you for explaining this so simply!

4. Juan Liu says:

Very clear explanations, which is so helpful. Thanks a lot~~~~

6. ratnesh says:

Fabulous. It cleared so many doubts I had in L_inf and L_0 ..

7. Martin says:

Great article! If it just would have been that clear during my pattern recognition lectures…….

8. MK says:

Brilliant explanation

9. Lee Ying says:

Thanks a lot, it is very clear explanation.

10. RobbieJ says:

Great explanation. I have also seen the use of L2/3-NORM in some Compressed Sensing work I just read and wondered if you wanted to expand on why this might be used.

11. humblesoul says:

excellent writeup….

12. Lorin Ahmed says:

Thank you very much for this crystal clear explanation. You made my life easier.

13. seema says:

Thank you very much, very helpful !!!!

14. Sam says:

Clarifing and useful! Thank you.

15. gigi says:

thanks, clear and neat article

16. jayesh Ruikar says:

Very nice article.

17. aaaaaa says:

many thanks for that , it helps me surely

18. faroq says:

Thank you,

19. Kamal says:

Thank you, it is a very well written article. It helped me a lot.

20. kurakar says:

Thanks. It was really helpful and cleared my doubts

21. leo wang says:

very nice article! good writing, hope to see more!!!

22. amulya says:

this is the bestttt way to explain them…THANK you

23. praful says:

really nice..:)

24. larryy says:

Great overview. Thanks!

Thanks a lot. it was very helpful and interesting.

26. Qi says:

Quite clear for me.Thanks~

27. Very informative and nicely explained article..

Thank you for posting

28. Praveen says:

Could anyone please tell me how L1 norm gives sparse solutions or L1 norm is best suitable for sparse solutions?

I also read somewhere that, more is the norm value (such as, L1, L2,L3….) more it tries to fit the outliers. What quantity in the mathematical expression of the norms makes it to behave like that?

Thank you.

Praveen

1. Netra Lokhande says:

Thanks

29. kalai says:

perfect understanding that is why clear explanation is given… thank you for this nice interpretation

30. Stumbled upon this blog when I came across matrix norm in papers I read, and, since I am now in the early phase of my PhD, I’m in a way happy if I find other PhD students in other part of the world having blogs :D. I’m even thinking to write more meaningful blog posts. Anyway, good luck! 😀

31. Fajri says:

Thanks! great article with clear & easily understood explanation

35. Monika says:

A complicated thing made simple… Did not understand the concept till now , but now its too clear.

36. Rolf says:

great summary about the norms. Thanks a lot.

37. Netra Lokhande says:

Very informative.It has helped me a lot.

Thanks

38. ram das says:

thanks alot

39. Thank You very much for this detail and simple introductory explanation…..

40. Ahmed says:

thank you a lot it is very helpfull

41. SP says:

Greatly appreciated! Thank you very very much. You may have just saved me on a qualification exam for my degree 🙂

42. Suhail says:

Good Work All norms with their all possible inferences at one place. Thank you

43. sunu says:

can u explain norms with respect to 2D matrix?? ie L^0,L^1,L^2 norm of a matrix..and to how to find out these norms…

44. Nikolas says:

Thank you!

45. jonas says:

Great, thanks!

46. Fabiano B. M. Silva says:

Thank you for this clear explanation!

47. The l0 norm in compressed sensing is not actually a norm. Please add that in your description. Thank you! 🙂

48. Renjith says:

As a compressive sensing enthusiast, it was really useful for me. Thanks a lot 🙂

49. Nitesh Jain says:

Thank you for these awesome article

50. Deepesh says:

Great Article !

51. rajib says:

thanks a lot!

52. Kareth says:

Thank u very much, 😀

53. Michael says:

Great article, thanks a lot!!! Question: what represents axis (x and y) in graph which shows l1 and l2 solutions?

1. rorasa says:

Axis x and y represent 2 elements (x1,x2) of a tuple (2-dimensional vector) while the blue line is the set of possible solution of a system of equation on a plane.

54. Please make it clear to your readers that the l0 norm *is not a norm*. It satisfies only two of the three necessary properties of a norm: that it is zero only for the all-zero vector, and the triangle inequality. It is not, however, positively homogeneous. And unlike all true norms, it is not convex. Calling it a “norm” sews confusion. It is the *cardinality function*.

55. gunjan says:

How to normalize columm of matrix to have unit l2 norm?

56. helalfy says:

Thank you very much. Great introductory post for newbies like myself. Now I can read papers!

57. Reblogged this on kezpitt and commented:
A comprehensive descriptions on Norm

58. lavank says:

Thank you very much.. its really useful

59. asv says:

Great article…Thank you very much

60. César Castellanos says:

excellent

61. Mary Diana Sebastian says:

Excellent explanation

62. C-Wizzle says:

thx bb

63. Thanks for the article. I was just confused by the different norms in the literature. Now they all make sense to me!

64. Awesome! So the norm’s optimisation condition is only subject to linear equation like Ax+b=0? How about linear inequality or non-linear equality?

65. Anusha says:

Thank you for such a nice explanation, helped certainly!

66. Margarita says:

Thank you very much for the article! It gives a gentle introduction to the subject – very helpful after all those unfamiliar painful mathematical expressions I ran into.

67. Abdelghany says:

Thanks

68. Lale says:

This is so very useful. Thank you very much. By the way, what is the exact application of L1-norm in optimization problems? You said applying L0-norm induce sparsity to the solution. What about L1 norms? Thank you so very much.

1. rorasa says:

Hi, I’m glad that you find this useful.
The most obvious application for the L1-norm is to replace the L0-norm problem. While minimising the L0-norm is literally maximising the sparsity, the problem itself is very hard to solve using any algorithms. L1-norm problem on the other hand has many efficient solvers available.

69. Noah Ryan says:

This article cleared up the L infinity norm for me, so thank you for that!
I do want to mention that NP-hard doesn’t mean that its necessarily difficult to solve. It may be difficult to solve, may be easy to solve but difficult to solve efficiently, or not even be solvable (not decidable for example). I’m not sure the details of the particular problem you mentioned, I’m just pointing out that NP-hard is more subtle than saying its problems are complex or hard to solve.

70. Yashwant Kurmi says:

wow that’s great way to make the things simple.

71. aram says:

wowww this could’nt be more useful.thanks a million 🙂

72. Hello,
I’m curious about what the “size” means in the MAE in l1 norm explanation, how do i get the size?

1. rorasa says:

The size of x means the length or the number of elements of the vector x.

73. process control illiterate says:

THANK YOU!

74. Abhishek Aich says:

Nice article for beginners.

75. jayesh says:

Nice explanation. one point I did not understand is the point that you mention about the matrix system being underdetermined in case of the L2 optimization. Will be a great help if you could clarify.

Thanks.

76. Raja Azmat Abbas says:

Crystal clear explanation. Thank you

77. Marina M says:

Great article! Really appreciated, thank you!!

78. ajay khetan says:

It is really a wonderful article. i am working on L0 gradient minimisation. I want to know what does weight of a L0 norm signifies?? How one chooses its value?

79. excellent explanation of the norm family. I totally understand the meaning of norm.

80. Alejandro says: