A Tiny Exercise: Limit of a Simple Function

Recently, somebody asked me whether it is possible to compute the limit of the following function as its argument tends towards infinity:

As it turns out, it is possible and actually not that difficult. You can try it yourself and then check the solution afterwards in this post.

Read more

Don't Drink and Derive: A Simple Proof that 1 = 2

Let \(a, b, c \in \mathbb{R} \setminus \{0\}\).

Now let us define a simple equation

and play around with it a little bit.
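
The equation itself is not reproduced in this excerpt, but "proofs" that 1 = 2 typically hinge on a disguised division by zero. A classic instance (not necessarily the variant used in the post) starts from \(a = b\):

\begin{align}
a &= b \\
a^2 &= ab \\
a^2 - b^2 &= ab - b^2 \\
(a + b)(a - b) &= b(a - b) \\
a + b &= b && \text{(dividing by } a - b = 0\text{)} \\
2b &= b \\
2 &= 1
\end{align}

The fallacy is, of course, the division by \(a - b\), which is zero when \(a = b\).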

Read more

Computing Large Powers in 6502 Assembler

Recently, I had to compute large powers of integers in 65(C)02 assembler. This is actually not that difficult. However, the code can run rather slowly if it is not implemented efficiently. For example, one can trivially implement the power as a sequence of multiplications as:

This approach requires a number of multiplications that grows linearly with the exponent and can take quite some time to compute for large exponents, especially since the 65(C)02 processor does not have a multiplication unit and multiplications have to be implemented using Booth's algorithm or something similar.
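
The formula is not shown in this excerpt, but the trivial scheme is simply repeated multiplication. As a minimal sketch (in Python rather than 65(C)02 assembler, purely to illustrate the arithmetic):

```python
def power_naive(a: int, n: int) -> int:
    """Compute a**n (n >= 1) with n - 1 repeated multiplications."""
    result = a
    for _ in range(n - 1):
        result *= a
    return result
```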

We can do much better than this if we utilize the following observation:
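
The observation itself is not reproduced in this excerpt; presumably it is the square-and-multiply identity \(a^{2k} = (a^k)^2\), i.e. exponentiation by squaring, which needs only on the order of \(\log_2 n\) multiplications. A minimal Python sketch of that idea:

```python
def power_fast(a: int, n: int) -> int:
    """Compute a**n (n >= 0) by repeated squaring."""
    result = 1
    base = a
    while n > 0:
        if n & 1:        # lowest exponent bit set: multiply the result in
            result *= base
        base *= base     # square the base for the next exponent bit
        n >>= 1
    return result
```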

Read more

A Few Notes on Principal Component Analysis


The main idea of principal component analysis (PCA) is to perform an orthogonal transformation of multi- or high-dimensional numeric data so that the resulting data consists of linearly uncorrelated variables, which are called principal components. The first principal component (first variable) accounts for the most variability in the data (hence, the variance along this axis is the largest). The second principal component accounts for the highest possible variance under the constraint that it is orthogonal to the first principal component, and so on. Hence, the last principal component (which is orthogonal to all previous principal components) accounts for the least variability in the data. A common application of PCA is dimensionality reduction, where one omits the last principal components of the PCA-transformed data, since these variables usually explain only very little of the variance in the data and do not have significant predictive power. But PCA also has many other applications in statistics, exploratory data analysis (EDA), machine learning and neural networks. We will take a brief look at one way to derive PCA in this post.
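
As a quick illustration of the transformation described above (this is only a sketch of standard covariance-based PCA, not the derivation from the post, and the variable names are my own): center the data, compute the sample covariance matrix, take its eigendecomposition, and project onto the eigenvectors sorted by decreasing eigenvalue.

```python
import numpy as np

def pca(X: np.ndarray, n_components: int) -> np.ndarray:
    """Project the rows of X (samples x features) onto the leading principal components."""
    X_centered = X - X.mean(axis=0)            # remove the mean of each variable
    cov = np.cov(X_centered, rowvar=False)     # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigh: the covariance matrix is symmetric
    order = np.argsort(eigvals)[::-1]          # sort components by decreasing variance
    components = eigvecs[:, order[:n_components]]
    return X_centered @ components             # scores along the leading components

# Example: reduce 3-dimensional data to its 2 leading principal components.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
scores = pca(X, n_components=2)
```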

Read more

The Monkey and Coconut Problem

Three sailors, who are shipwrecked together with a monkey on a deserted island, collect a heap of coconuts one day, which is to be divided among the sailors early the next morning. Sometime during the night, one of the sailors gets up and divides the pile into 3 equal parts. One coconut remains, which he gives to the monkey. Afterwards, he hides his share and puts the remaining coconuts back into a single heap. Later that night, the other two sailors also get up, one after the other, and repeat the work of the first sailor. The following morning, the three sailors get up and divide the remaining pile into three parts. Again a coconut remains, which they give to the monkey. How many coconuts were originally in the heap?

In addition, a general solution is to be found for n coconuts, k sailors, and m monkeys.
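
For the three-sailor case, a brute-force search is a quick sanity check. Below is a minimal Python sketch; it encodes the division rule described above (every division of the pile into three leaves exactly one coconut for the monkey), and the search bound is an arbitrary choice of mine:

```python
def valid(n: int, sailors: int = 3, monkey_share: int = 1) -> bool:
    """Check whether a pile of n coconuts survives the divisions described above."""
    pile = n
    # Each sailor in turn divides the pile, gives the remainder to the monkey
    # and hides his own share; the rest goes back onto the heap.
    for _ in range(sailors):
        if pile % sailors != monkey_share:
            return False
        pile -= monkey_share
        pile -= pile // sailors
    # In the morning the remaining pile is divided once more,
    # again leaving one coconut for the monkey.
    return pile % sailors == monkey_share

smallest = next(n for n in range(1, 10_000) if valid(n))
print(smallest)  # prints 79 for the three-sailor version described above
```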

Read more