What are matrices? (and why matrix multiplication is defined the way it is)

Let \mathbb{R}^4 denote the set of all column vectors \begin{bmatrix} a \\ b \\ c \\d  \end{bmatrix} as a, b, c, d range over all real numbers (e.g. \begin{bmatrix} 1 \\ 2 \\ 3 \\ 4  \end{bmatrix},\begin{bmatrix} \pi \\ \sqrt{2} \\ 3000 \\ -5  \end{bmatrix},\begin{bmatrix} 10^{10} \\ 1/1000 \\ e \\ 0  \end{bmatrix} all belong to \mathbb{R}^4). Of course this concept is easily generalized to define \mathbb{R}^n for any whole number n.


On \mathbb{R}^n there are two simple things we can always do: add two vectors, and multiply a vector by a real number (scalar multiplication). Both operations work term by term. For instance, in \mathbb{R}^4, \begin{bmatrix} 1 \\ 2 \\ 3 \\ 4  \end{bmatrix} +\begin{bmatrix} -3 \\ 2 \\ 5 \\ 1  \end{bmatrix} = \begin{bmatrix} 1 - 3 \\ 2+2 \\ 3+5 \\ 4+1  \end{bmatrix} = \begin{bmatrix} -2 \\ 4 \\ 8 \\ 5  \end{bmatrix} and \pi \begin{bmatrix}-3 \\ 2 \\ 5 \\ 1 \end{bmatrix} = \begin{bmatrix} -3\pi \\ 2\pi \\ 5\pi \\ \pi  \end{bmatrix}.
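As a quick sketch (plain Python lists standing in for column vectors; an illustration, not part of the text's math), the two operations look like this:

```python
# Term-by-term operations on vectors in R^n, modeled as Python lists.
def vec_add(u, v):
    return [a + b for a, b in zip(u, v)]

def scalar_mul(c, u):
    return [c * x for x in u]

vec_add([1, 2, 3, 4], [-3, 2, 5, 1])   # [-2, 4, 8, 5], as in the text
```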


Suppose we have a function f from \mathbb{R}^2 to \mathbb{R}^3. A natural question to ask is: if I have two vectors, say \begin{bmatrix} 1 \\ 2 \end{bmatrix} and \begin{bmatrix} 3 \\ 4 \end{bmatrix}, and I have computed f \left(\begin{bmatrix} 1 \\ 2 \end{bmatrix}\right) and f \left(\begin{bmatrix} 3 \\ 4 \end{bmatrix}\right), can I compute f \left(\begin{bmatrix} \pi \\ 2\pi \end{bmatrix}\right) simply by multiplying f \left(\begin{bmatrix} 1 \\ 2 \end{bmatrix}\right) by \pi? Similarly, can I compute f \left(\begin{bmatrix} 4 \\ 6 \end{bmatrix}\right) just by adding f \left(\begin{bmatrix} 1 \\ 2 \end{bmatrix}\right) and f \left(\begin{bmatrix} 3 \\ 4 \end{bmatrix}\right)? There is no reason to believe that this is always true. For example, it can be checked that it is not the case for the function f\left(\begin{bmatrix} x \\ y \end{bmatrix}\right) =  \begin{bmatrix} x^2 \\ y^2 \\ xy \end{bmatrix}. That is, for such f, adding two vectors and then applying the function is not the same as applying the function to each vector and then adding the results. Order matters. The same can be said about the order of scalar multiplication and function application.


The class of functions from \mathbb{R}^n to \mathbb{R}^m for which the order of scalar multiplication, addition, and function application doesn't matter is thus special. We call such functions linear. And although they are special, they are everywhere. Any differentiable function from \mathbb{R}^n to \mathbb{R}^m can be approximated (locally) by linear functions. In fact, before computers could help solve non-linear problems, the main way of dealing with non-linearity was to work with its linear approximation instead.


Now suppose we have a linear function f, say from \mathbb{R}^2 to \mathbb{R}^3. Also suppose that we know the value of f\left(\begin{bmatrix} 1 \\ 0 \end{bmatrix}\right) to be \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} and f\left(\begin{bmatrix} 0 \\ 1 \end{bmatrix}\right) to be \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}. Where will f send \begin{bmatrix} 3 \\ 5 \end{bmatrix}? Easy: we know that \begin{bmatrix} 3 \\ 5 \end{bmatrix} =3\begin{bmatrix} 1 \\ 0 \end{bmatrix} +5\begin{bmatrix} 0 \\ 1 \end{bmatrix}. Since the order of scalar multiplication, addition, and function application doesn't matter, we can apply the function to \begin{bmatrix} 1 \\ 0 \end{bmatrix} and \begin{bmatrix} 0 \\ 1 \end{bmatrix}, multiply them by 3 and 5 respectively, and add them together (i.e. f\left(\begin{bmatrix} 3 \\ 5 \end{bmatrix}\right) =3f\left(\begin{bmatrix} 1 \\ 0 \end{bmatrix}\right) +5f\left(\begin{bmatrix} 0 \\ 1 \end{bmatrix}\right) = \begin{bmatrix} 3 \\ 6 \\ 14 \end{bmatrix}). Since every vector in \mathbb{R}^2 can be written as some number times \begin{bmatrix} 1 \\ 0 \end{bmatrix} plus some number times \begin{bmatrix} 0 \\ 1 \end{bmatrix}, knowing where a linear function sends \begin{bmatrix} 1 \\ 0 \end{bmatrix} and \begin{bmatrix} 0 \\ 1 \end{bmatrix} tells us where f sends every vector in \mathbb{R}^2.
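The computation above can be sketched in a few lines of Python: f is pinned down entirely by its given values on the two basis vectors.

```python
# Values of f on the standard basis, taken from the text.
f_e1 = [1, 2, 3]   # f([1, 0])
f_e2 = [0, 0, 1]   # f([0, 1])

def f(v):
    # Linearity: f([x, y]) = x * f(e1) + y * f(e2), combined term by term.
    x, y = v
    return [x * a + y * b for a, b in zip(f_e1, f_e2)]

f([3, 5])   # [3, 6, 14], as computed in the text
```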


It is easy to generalize: if f is a linear function from \mathbb{R}^3 to \mathbb{R}^m and we know where f sends \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, and \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, then we know everything there is to know about f. Why stop at 3? The same can be said if the domain is \mathbb{R}^4, \mathbb{R}^5, and so on.


So let us agree on a better notation. If I want to tell you about a linear function from \mathbb{R}^2 to \mathbb{R}^3, I do not want to write f\left(\begin{bmatrix} 1 \\ 0 \end{bmatrix}\right) = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} and f\left(\begin{bmatrix} 0 \\ 1 \end{bmatrix}\right)= \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} each time. Instead, I will just write a table whose first column is the image of \begin{bmatrix} 1 \\ 0 \end{bmatrix} under f and whose second column is the image of \begin{bmatrix} 0 \\ 1 \end{bmatrix} under f. The table shall look like

\begin{bmatrix} 1 & 0  \\ 2 & 0 \\ 3 & 1 \end{bmatrix}

and we call this table (unsurprisingly) the matrix representing f. We will use this matrix again, so let us name it [f]. As always, there is no reason to stop at \mathbb{R}^2; I will leave the generalization to the reader. From this discussion, we see that each linear function is represented by a matrix, and it is easy to argue the other way: every matrix represents some linear function. So now we know what a matrix is: a representation of a linear function from \mathbb{R}^n to \mathbb{R}^m as an array listing where the function sends \begin{bmatrix} 1 \\ 0 \\ \vdots \\0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ \vdots \\0 \end{bmatrix}, \cdots, \begin{bmatrix} 0 \\ 0 \\ \vdots \\1 \end{bmatrix}.
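To make the correspondence concrete, here is a small sketch that reads off the matrix of a linear function by applying it to the standard basis vectors (`matrix_of` is my own illustrative helper, not standard notation):

```python
# A linear function from R^2 to R^3, chosen to match [f] from the text.
def f(v):
    x, y = v
    return [x, 2 * x, 3 * x + y]

def matrix_of(f, n):
    # Column j is the image of the j-th standard basis vector under f.
    cols = [f([1 if i == j else 0 for i in range(n)]) for j in range(n)]
    # Transpose the list of columns into a list of rows.
    return [list(row) for row in zip(*cols)]

matrix_of(f, 2)   # [[1, 0], [2, 0], [3, 1]], i.e. [f]
```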


Now that we know what matrices are, let us talk about how they are multiplied. Say we have another linear function g from \mathbb{R}^3 to \mathbb{R}^2 represented by

[g] = \begin{bmatrix} 1 & 0 & 1  \\ 2 & 1 & 0 \end{bmatrix}.

We can compose the functions and get g \circ f from \mathbb{R}^2 to \mathbb{R}^2. It is easy to argue that g \circ f is also a linear function. So a natural question would be “what is the matrix representing g \circ f?”. Well, we only need to know where g \circ f sends \begin{bmatrix} 1 \\ 0 \end{bmatrix} and \begin{bmatrix} 0 \\ 1 \end{bmatrix} and put the results in a table. f sends \begin{bmatrix} 1 \\ 0 \end{bmatrix} to \begin{bmatrix} 1 \\ 2 \\3 \end{bmatrix}, and g sends \begin{bmatrix} 1 \\ 2 \\3 \end{bmatrix} to 1\begin{bmatrix} 1 \\ 2 \end{bmatrix} +2\begin{bmatrix} 0 \\ 1 \end{bmatrix} +3\begin{bmatrix} 1 \\ 0 \end{bmatrix}, by the same linearity argument as before. Similarly, g \circ f sends \begin{bmatrix} 0 \\ 1 \end{bmatrix} to 0\begin{bmatrix} 1 \\ 2 \end{bmatrix} +0\begin{bmatrix} 0 \\ 1 \end{bmatrix} +1\begin{bmatrix} 1 \\ 0 \end{bmatrix}. The end result is the matrix we define as [g] \times [f], recovering the formula for matrix multiplication we all know and love:

\begin{bmatrix} 1 & 0 & 1  \\ 2 & 1 & 0 \end{bmatrix} \times\begin{bmatrix} 1 & 0  \\ 2 & 0 \\ 3 & 1 \end{bmatrix} = \left[ \begin{array}{c|c} 1\begin{pmatrix} 1 \\ 2 \end{pmatrix} +2\begin{pmatrix} 0 \\ 1 \end{pmatrix} +3\begin{pmatrix} 1 \\ 0 \end{pmatrix} & 0\begin{pmatrix} 1 \\ 2 \end{pmatrix} +0\begin{pmatrix} 0 \\ 1 \end{pmatrix} +1\begin{pmatrix} 1 \\ 0 \end{pmatrix} \end{array} \right] = \begin{bmatrix} 4 & 1  \\ 4 & 0 \end{bmatrix}.

This also explains why an m \times n matrix multiplied by an n \times k matrix is an m \times k matrix. The n \times k matrix represents a linear function from \mathbb{R}^k to \mathbb{R}^n and the m \times n matrix represents a linear function from \mathbb{R}^n to \mathbb{R}^m. Their product represents the composition, which is a linear function from \mathbb{R}^k to \mathbb{R}^m and hence must be an m \times k matrix.
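As a sanity check, here is a sketch verifying that multiplying the two matrices agrees with composing the two functions (rows as inner lists; `matmul` and `apply` are illustrative helpers):

```python
def matmul(A, B):
    # Entry (i, j) of A*B: dot product of row i of A with column j of B.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def apply(M, v):
    # A matrix acting on a column vector.
    return [sum(row[i] * v[i] for i in range(len(v))) for row in M]

G = [[1, 0, 1],
     [2, 1, 0]]       # [g]: R^3 -> R^2
F = [[1, 0],
     [2, 0],
     [3, 1]]          # [f]: R^2 -> R^3

matmul(G, F)                  # [[4, 1], [4, 0]]
apply(matmul(G, F), [3, 5])   # same as apply(G, apply(F, [3, 5]))
```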

Kuhn-Tucker Demystified

For some, the Kuhn-Tucker method (also known as the Karush-Kuhn-Tucker method, or KKT for short) seems like a random optimization method that comes out of nowhere. The truth is, the KKT multiplier is just a glorified Lagrange multiplier. I do not want to downplay the importance and relevance of the KKT multiplier; it has been successfully used in a variety of optimization problems across many areas. However, its mathematics is far from hard to understand (given that one understands Lagrange multipliers), and I will make my case by looking at an example:

maximize f(x,y)= \sqrt{xy}

subject to

  1. x \geq 0
  2. y \geq 0
  3. 5x+3y \leq 10

The region defined by the constraints can be seen in the following graph:


Let us suppose that we do not know anything about KKT but are familiar with unconstrained optimization and Lagrange multipliers. How would we solve this optimization problem?

One may approach this optimization problem with the following steps (I know a sensible person would not do all of these, but bear with me for a second. You can skim step 2 onwards and revisit it later):

  1. Differentiate f(x,y) with respect to x and y and set these derivatives equal to 0. If a solution (say (x_1,y_1)) lies in the interior of the triangle, compute the value of the function at this point and save it (i.e. save (x_1,y_1,f(x_1,y_1))) so that we can compare it with the other candidate maxima later to determine the global maximum. Repeat this for all solutions. This gives us all possible local maxima in the interior. For this particular example, there is no point in the interior of the diagram making (f_x,f_y) = (0,0), so there is nothing to save.
  2. Use a Lagrange multiplier to get all possible maxima on line segment b (but not necessarily at points A and C, due to how Lagrange multipliers work). That is, solve: maximize f(x,y)= \sqrt{xy} subject to 5x+3y = 10, and discard any result with x<0 or y<0. A typical way is to define the Lagrange function Lb (x,y,\lambda) =\sqrt{xy} + \lambda(10-5x-3y) and solve Lb_x = Lb_y = Lb_{\lambda} = 0. Then save all solutions as well as the value of the function at these points. For this particular example:
    1. Lb_x = \frac{1}{2} \sqrt{y/x}-5\lambda = 0
    2. Lb_y = \frac{1}{2} \sqrt{x/y}-3\lambda = 0
    3. Lb_{\lambda}=10-5x-3y = 0
    4. The solution is (1,5/3) and the value of the function at this point is \sqrt{5/3}
  3. Similarly, use a Lagrange multiplier to get all possible maxima on line c (but not necessarily at points A and B):
    1. Lc=\sqrt{xy} + \mu x
    2. Lc_x = \frac{1}{2} \sqrt{y/x}+\mu = 0
    3. Lc_y = \frac{1}{2} \sqrt{x/y} = 0
    4. Lc_{\mu}=x = 0
    5. There is no solution.
  4. Again, do this for line a (but not point B and C):
    1. La=\sqrt{xy} + \nu y
    2. La_x = \frac{1}{2} \sqrt{y/x} = 0
    3. La_y = \frac{1}{2} \sqrt{x/y}+\nu = 0
    4. La_{\nu}=y =0
    5. There is no solution.
  5. On point A:
    1. LA=\sqrt{xy} + \mu x +\lambda(10-5x-3y)
    2. LA_x = \frac{1}{2} \sqrt{y/x} + \mu - 5\lambda
    3. LA_y = \frac{1}{2} \sqrt{x/y}- 3\lambda
    4. LA_{\mu}=x
    5. LA_{\lambda} =10-5x-3y
    6. The solution is (0,10/3) and the value of the function here is 0.
  6. On point B: …
  7. On poi.. OK, I think you get my point: in general, we need to do a lot of Lagrange multiplier calculations, and all these calculations are very similar. Sure, using Lagrange multipliers seemed like a waste of time in my example above, as the function and constraints are pretty simple, but usually it would not be this simple and you may have no choice.

This is where KKT makes your life (just a little bit) easier. Notice how all the Lagrange functions and their derivatives are so similar that we basically repeat many calculations over and over? The idea of KKT is that you only need to define one Lagrange function and differentiate it once. The Lagrange function you need to define is

L =\sqrt{xy} + \mu x + \nu y +\lambda(10-5x-3y);

that is, the Lagrange function you get when you treat all constraints as equalities. You will get:

i) L_x = \frac{1}{2} \sqrt{y/x} + \mu - 5\lambda

ii) L_y = \frac{1}{2} \sqrt{x/y}+ \nu - 3\lambda

iii) L_{\mu} = x

iv) L_{\nu} = y

v) L_{\lambda} =10-5x-3y

Why define and differentiate this function? Because from here it is really easy to get to all the derivatives in Steps 1-7. The idea is that, since we have included multipliers for all constraints, we can then turn any equality on and off. For instance, if we want to find maxima in the interior, just set L_{\mu},L_{\nu},L_{\lambda} >0 (that is, x>0, y>0 and 10>5x+3y, which means we only consider the pink region inside the triangle) and \mu = \nu = \lambda =0 in equations i-v, and you will get exactly the same equations we had in Step 1.

On line segment c, we know that the equality in constraint 1 has to hold (it is binding) while the equalities in constraints 2 and 3 need not hold (they are non-binding). Therefore, to get all possible maxima on line c, just set the following values in equations i-v: L_{\mu} =0, L_{\nu},L_{\lambda} >0 (which just means x=0, y>0 and 10>5x+3y) and take \nu = \lambda = 0, and we end up with the same equations as in Step 3. And on point A, just set L_{\mu} =0, L_{\lambda}=0, L_{\nu}>0 and take \nu = 0, and we recover the equations of Step 5.

Note that in all our examples above, whenever \mu,\nu or \lambda equals 0, the corresponding L_{\mu},L_{\nu},L_{\lambda} is non-zero, and vice versa. This condition you get when you turn constraints on and off is known as complementary slackness. Do this for all possible combinations of \mu,\nu and \lambda, where each can be zero or non-zero, and then compare the values of the function at the points obtained by solving all these systems of equations. In this example there are 8 possibilities, and in general there are 2^n cases, where n is the number of constraints. In practice, you can quickly reduce the number by arguing that some cases are impossible. Take the case where \mu,\nu and \lambda are all non-zero: their complements L_{\mu},L_{\nu},L_{\lambda} must all be zero, but this means we are considering points in the intersection of lines a, b and c, and we know there is no such point.
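None of this machinery is needed for such a small problem, of course. As a sanity check on the candidate found in Step 2, a brute-force grid search over the feasible triangle (a sketch, not the KKT method itself) lands on the same point:

```python
import math

# Maximize sqrt(x*y) subject to x >= 0, y >= 0, 5x + 3y <= 10,
# by brute force over a grid; the KKT candidate is (1, 5/3), value sqrt(5/3).
best = (0.0, 0.0, 0.0)   # (value, x, y)
for i in range(2001):
    x = 2.0 * i / 2000                 # x ranges over [0, 2]
    y_max = (10 - 5 * x) / 3           # boundary of 5x + 3y <= 10
    for j in range(101):
        y = y_max * j / 100
        v = math.sqrt(x * y)
        if v > best[0]:
            best = (v, x, y)
```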

You may have realized by now that the KKT multiplier is the optimization method that lets you differentiate once and then branch off into multiple systems of equations (using the complementary slackness condition), instead of branching off into multiple Lagrange derivations just to arrive at the same systems of equations. You may even suspect that you can formulate the KKT conditions for a more general optimization problem; I really encourage you to do this and then compare your version with the ones available in lecture notes and textbooks.

Link: A worked example of KKT multiplier

Geometry, (Foot)balls, and Spherical Geometry

What is Mathematics, anyway? Many would probably answer that Mathematics is the science of calculation. That's not entirely wrong, but not entirely right either. Let's chat a bit about the side of Mathematics that isn't about calculating. Don't worry, I promise this chat won't make your head spin.

To me, Mathematics is a lot like a sport. Sports have basic rules and derived rules. In football, for example, there is a basic rule: "Inside the field, no player other than the keeper may touch the ball with their hands." This basic rule surely has logical consequences (let's call them derived rules), for example: you may not catch the ball, you may not punch the ball, you may not elbow the ball, and so on. Now, if we want to invite a friend who has never played football, we obviously need to tell them the rules of the game. But we would certainly not want to spell out every basic rule along with all its derivatives; there could be hundreds or thousands if we went into full detail. To keep it short, it's enough to state just the basic rules.

Mathematics is like that too: it has many rules, but it is often unclear which ones are basic and which are derived. Or could it be that in Mathematics every rule is basic? That there is no cause and no effect among them? Thousands of years ago, a cool philosopher named Euclid answered this question. And his answer is stunning: Geometry has only 5 basic rules

  1. “To draw a straight line from any point to any point.”
  2. “To produce [extend] a finite straight line continuously in a straight line.”
  3. “To describe a circle with any centre and distance [radius].”
  4. “That all right angles are equal to one another.”
  5. The parallel postulate: “That, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which are the angles less than the two right angles.”
From storyofmathematics.com

So why is this so cool? At the very least because of these reasons and questions:

  1. Geometry today is of course far more complicated than geometry in Euclid's time (nowadays its popular name is Euclidean Geometry). But that does not mean Euclidean Geometry lacks variety. And all of that richness can be traced back to just 5 rules. Pretty amazing, right? Even football would find it hard to get by with only 5 relatively short basic rules.
  2. Many rules in geometry feel utterly obvious, for instance this one: "Two distinct lines that intersect can only intersect in one point." You probably want to say "well, duh." So why isn't this "well, duh" rule one of the basic rules? How did Euclid know (or guess) that this rule is derived from the 5 rules above? Why are those 5 the basic ones? Why are 5 rules enough?
  3. How did Euclid know that all 5 of these are truly basic rules? Could one of them (somehow) be a consequence of the others? Why couldn't 4 rules suffice? How could he be sure that no consequences of these 5 rules would ever contradict each other?

This analogy also shows the difference between Mathematics and the Natural Sciences. Although the sciences also look for the simplest basic rules (say, Newton's laws, which are just 3 sentences, or Maxwell's equations, which are only 4 equations), the rules of Mathematics, like the rules of a sport, are not something true but only assumed to be true. The basic rules of Mathematics act more like rules of play: not necessarily true in general, but treated as truth within the game. This is why the laws of Physics need to be tested by experiment, while the axioms of Mathematics need no proof of their truth. So when someone says an axiom is a truth that needs no further proof, it is not because axioms are necessarily true, just as "the ball may not touch the hands" is not always true. In basketball "the ball may not touch the hands" is certainly false, but while playing football we never question that rule.

After Euclid, playing with basic rules (axioms) and trying to obtain derived rules (propositions, theorems, lemmas, corollaries) became the main occupation of those idle cool people called mathematicians, myself included 😀 You may be used to hearing that mathematicians like to hunt for theorems and formulas, but perhaps not many know that we also like to invent new games with new balls and fields (objects) and new rules. This is quite a challenge. The challenge is to create new objects and rules that are interesting: not too loose (general) and not too restrictive. It has to be just right to be interesting and useful. Not too many rules, not too few. Defining a mathematical operation without conditions like associativity or commutativity is like football without rules about goals and scoring. Surely it's no fun watching 22 people running around chasing a ball with no purpose.

Fun Fact: For hundreds of years, many experts believed and tried to prove that Euclid's 5th axiom is a consequence of the other 4. But how would you prove the opposite? Take this example: Football1 is a game defined by 10 articles of rules. Suppose Football2 is the game that obeys articles 1-9 of Football1. Can Football2 be played differently from Football1? If it can, then article 10 is certainly not a consequence of articles 1-9; if it cannot, then article 10 is a consequence of 1-9. If that's confusing, suppose article 9 is "The ball may not touch the hands" and article 10 is "The ball may not be thrown." In this example, Football2 is necessarily the same game as Football1, because even though article 10 is absent from Football2, players still may not throw the ball, thanks to the prohibition in article 9.

Much like Football1 and Football2 above, it turns out there are systems of Geometry that satisfy rules 1-4 but violate rule 5. This means Mr. Euclid's article 5 is indeed not a consequence of articles 1-4. One such system is the geometry of the surface of a sphere (Spherical Geometry). It turns out that the shortest path between two points on the surface of a sphere is a segment of the great circle through them (a circle that splits the sphere into 2 parts of equal area, such as the equator or a meridian). Since we generally want the line segment between 2 points to be the shortest path connecting them, the great circles are the natural choice of "lines" in Spherical Geometry. Neat, right? It means the shortest path between 2 points on the same latitude (other than the equator) is never along the latitude circle. And one can check that this geometry satisfies rules 1-4 but violates rule 5, because every 2 great circles meet in exactly 2 points. For example, every meridian meets every other meridian, and the equator, in exactly 2 points. One application of this concept is shortest-path navigation for aircraft; if you are curious, go ahead and google "Great Circle Navigation" 😉
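The same-latitude claim is easy to check numerically. Below is a sketch on a unit sphere comparing two hypothetical points at latitude 60°N, longitudes 0° and 180°: the great-circle route (over the pole) beats the route along the latitude circle.

```python
import math

def central_angle(lat1, lon1, lat2, lon2):
    # Angular distance via the spherical law of cosines (angles in radians).
    return math.acos(
        math.sin(lat1) * math.sin(lat2)
        + math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1)
    )

lat = math.radians(60)
great_circle = central_angle(lat, 0.0, lat, math.pi)   # pi/3, over the pole
along_latitude = math.pi * math.cos(lat)               # pi/2, along 60°N
```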

From timeandnavigation.si.edu

For my other simple explanations of Mathematics, please click here

Homological Algebra

There is a subject where Grothendieck’s name is always in the citation
It is a simple and beautiful topic with perplexing notation

Once upon a time there is a colimit named direct limit
Feeling isolated until she met a limit named inverse limit
They met at a category of modules at the edge of a ring

Baffled on why she is a limit of a sequence when she is an initial object
Questioning their uniqueness to her dual whom an inverse limit whilst a terminal object

He convinced her of their uniqueness up to isomorphism
By reciting their universal property of morphism

We are unique
He say,
Yet we are not alone
He assured,

He told her the story about cohomology of a complex
whom definitively is a homology albeit at sixes and sevens as homology is for cocomplex
Yes you are right, they are not even a dual
At the category derived from modules at the edge of a ring

As puzzled as she is
She is happy not being the only one
Also not the most bewildering
As past the morphism
Inside the dark inner part of modules
There are always terminals of the initials
There are always limits of colimits
For the image is the kernel of the cokernel

How To Read And Understand Quick Count Result

This is a corrected version of an article published in Projecting Indonesia.

First applied on a large scale in Indonesia by LP3ES in 1997, the Quick Count method has been an integral part of Indonesian elections. Today, a considerable number of survey institutions conduct and publish Quick Counts. Generally, a Quick Count result is published together with some statistical lingo. Keywords such as confidence level, margin of error and sample size, as well as the names of various sampling methods, come side by side with the percentages achieved by parties or candidates. Given the different jargon in different survey publications, how should we read, understand, and perhaps compare these results?

A complete treatment would require rewriting part of a basic statistics textbook into this article; therefore this article does not aim for a perfect understanding, only an intuitive idea of the interplay of these key concepts. This piece seeks to be your grain of salt the next time you take in a Quick Count result, and analyzing this mock example may fit that purpose:

Projecting Indonesia Survey carried out a Quick Count for the Indonesian Presidential Election 2014. Candidate A achieved 39.2% and Candidate B achieved 60.8% of all legitimate votes. The sample size is greater than 500,000 people in 1,000 polling stations, with a 95% level of confidence and a 1% margin of error.

One simple way of reading this is as follows: under the simplest assumptions on the survey and data (Simple Random Sampling etc.), Projecting Indonesia Survey simply calculated the percentage of votes of all the people in all the polling stations they sampled. The number of these people is greater than 500,000. Projecting Indonesia Survey is 95% confident that the actual result lies within 1% of their result; that is, the exact result is somewhere between (A vs. B: 38.2% vs. 61.8%) and (A vs. B: 40.2% vs. 59.8%). Of course they acknowledge that there is a 5% chance that the actual result is outside that span (A got less than 38.2% or more than 40.2%).

What does 95% confident mean in the above interpretation? At least for poll surveys, it can be understood as saying: if the survey were replicated many times with exactly the same method and conditions, then 95% of the time the sampling result would lie within the margin of error of the actual result. This can also be interpreted as: it is unlikely (5% in this case) for two survey institutions employing the same method under the same conditions to have results outside each other's doubled margin of error. Of course 5% is still 1 out of 20, so it can still happen; nonetheless, results from different survey institutions that are far off from each other may indicate that something is dubious.

Confidence level is not a measure of accuracy. If any of these concepts were close to accuracy, it would be the margin of error. However, nothing can be concluded from the information above about the true accuracy and bias. While it is tempting to comment on the accuracy of a survey, accuracy should be assessed by comparison with the other Quick Counts and the actual result, unless the result in question is far too absurd (it makes no sense, or differs wildly from opinion polls, etc.).

In a world where a coin has 3 sides, sample size, level of confidence, and margin of error are sides of the same coin. Intuitively, the larger our sample, the more confident we are, and vice versa. Similarly, as the sample size grows, the error shrinks, and the other way around. It is also straightforward to argue that a wide confidence interval (a high margin of error) makes us more confident, and conversely. For instance, it is easier to feel confident that candidate A's true percentage lies somewhere in the 20-80% stretch than that it lies inside the 40-60% interval. That is why there is a formula relating sample size, level of confidence and margin of error. Before the survey is conducted, the survey institution usually starts with a level of confidence and a margin of error and calculates the sample size from that formula. However, since the three really determine one another, one can also start with sample size and confidence level and compute the margin of error. Determine any two, plug them into the formula, and get the third.
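For the curious, here is a sketch of the textbook simple-random-sampling relation between the three quantities (real Quick Counts use cluster sampling, so their working formulas differ; z ≈ 1.96 corresponds to a 95% confidence level):

```python
import math

def sample_size(z, e, p=0.5):
    # n = z^2 * p(1-p) / e^2; p = 0.5 is the conservative worst case.
    # round() guards against floating-point noise before taking the ceiling.
    return math.ceil(round(z * z * p * (1 - p) / (e * e), 9))

def margin_of_error(z, n, p=0.5):
    return z * math.sqrt(p * (1 - p) / n)

n = sample_size(1.96, 0.01)   # 95% confidence, 1% margin of error -> 9604
```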

Another piece of jargon regularly appearing in publications is the sampling technique. Different sampling techniques will not be discussed here, but generally a particular sampling technique is applied in order to do one of the following: increase accuracy, decrease bias, simplify, systemize, or ease the sampling process, or obtain a better data breakdown by area, gender, age, etc.

One other piece of information that is crucial for understanding a Quick Count result is the percentage of incoming data, which is simply the ratio of the data that has been reported by the institution's field officers to all the data from all the polling stations they surveyed. It is important to read a Quick Count result with this ratio in mind, as early reports may come from areas where a particular candidate is stronger. Trusting a Quick Count result while the incoming data is still far from 100%, especially when the election is still going, is risky.

Despite all the risk of being misinformed, history has shown how popular Quick Count results are among Indonesians. It is therefore vital to be an informed reader. A quote widely attributed to Andrew Lang says, “An unsophisticated forecaster uses statistics as a drunken man uses lampposts; for support rather than for illumination”. A fitting addition to this quote may be: an unsophisticated reader reads statistics literally rather than with a rational inclination to skepticism.

How many times?

Last night, I read this post from John Baez's blog and found a problem stated there. The reason I am reposting this problem is that it is another one showing that a mathematical problem is sometimes just a matter of viewpoint. From a certain viewpoint, there is a very elegant, almost trivial solution to this problem:

For starters, consider an ordinary ball rolling on another ordinary ball that’s the same size. How many times does the rolling ball turn as it makes a round trip around the stationary one? If you watch this you can see the answer:

Follow the line drawn on the little ball. It turns around not once, but twice!

Next, consider one ball rolling on another whose radius is 2 times as big. How many times does the rolling ball turn as it makes a round trip?

It turns around 3 times.

And this pattern continues! I don’t have animations proving it, so either take my word for it, read our paper, or show it yourself.

So, the assertion is that if a circle rolls around another circle whose radius is n times that of the first circle, then the first circle turns around (n+1) times in making a round trip. Try to prove it yourself! 🙂
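One way to see it (a sketch of one viewpoint, not the only proof): rolling without slipping means the rolling circle's total rotation, measured in the ground frame, equals the distance traveled by its center divided by its own circumference, and the center traces a circle of radius R + r.

```python
import math

def turns(R, r):
    # Center path length / circumference of the rolling circle = (R + r) / r.
    return (2 * math.pi * (R + r)) / (2 * math.pi * r)

turns(1, 1)   # 2 turns: equal circles
turns(2, 1)   # 3 turns: fixed circle twice as big
```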

Fractal Dimension

English: Sierpinski triangle, a fractal having a Hausdorff dimension of ln3 / ln2 (which is approximately 1.58). (Photo credit: Wikipedia)

Today I sat in a lecture about Hausdorff dimension, which was pretty technical and very nice. However, what I am going to write is my own intuitive view of Hausdorff dimension. First, let us assume that we have an “n-dimensional volume”: length for n=1, area for n=2 and, well, volume for n=3, and something you can imagine for yourself for higher n. Now, one characteristic of an n-dimensional object is that when you dilate/magnify it by a factor, say K, its new volume is K^n times the old volume. For example, take the “n-dimensional cube” of volume 1: a line segment of length one for n=1, a square of side one for n=2 and a cube of side one for n=3. Note that when you dilate these “n-dim cubes” by 2, their volumes become 2 for n=1, 4 = 2 x 2 for n=2 and 8 = 2 x 2 x 2 for n=3. With this simple characterization, we can try to make sense of objects of non-integer dimension (regardless of whether they exist or not). For instance, if I want to say an object is half-dimensional, then when I dilate the object by a factor of 2, I want the new volume to be 2^{1/2} times the old volume. One example of an object with non-integer dimension is the Sierpinski triangle (the one in the picture). If you magnify the triangle by a factor of 2, then the “volume” of the triangle is 3 times the old volume. This suggests that the Hausdorff dimension of this triangle is A, where 2^A=3, that is A=ln3/ln2. This is of course not a proof; the actual proof is much more complicated. But my point is that this characterization is very intuitive.
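The scaling argument above boils down to one formula: if magnifying by K multiplies the volume by N, the dimension A solves K^A = N, so A = ln N / ln K. A tiny sketch:

```python
import math

def similarity_dimension(copies, scale):
    # Solve scale^A = copies for A: the object's volume grows by a factor
    # of `copies` when the object is magnified by `scale`.
    return math.log(copies) / math.log(scale)

# Sierpinski triangle: magnifying by 2 yields 3 copies' worth of "volume".
d = similarity_dimension(3, 2)   # ln 3 / ln 2 ≈ 1.585
```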

The problem is, I wrote the above argument as if I actually had the “volume” of this triangle. But as you can probably discover by trying to compute it yourself, defining the “volume” of this triangle is not a trivial matter. Hausdorff did this by introducing the so-called Hausdorff measure, which is parametrized by dimension. And although the definition of this measure is not trivial at all, using it to show that a certain object has a certain dimension is really intuitive. You say the dimension of an object is D when its “A-dimensional volume” is infinite for A<D and its “A-dim volume” is 0 for A>D. Why do I say this is very intuitive? Because, comparing with integer dimensions again, a 2-dim object has infinite 1-dim volume (length) and zero 3-dim volume (volume). And my point is that with this measure and dimension being so intuitive, it is easy to remember the definition; and although it may be hard to prove that a certain object has a certain dimension, computing that dimension is a much easier task.