07 Apr 2017
Suppose we have a directed graph. We will define a “hole” as a node that doesn’t point at any other node and is pointed at by every node of the graph (except itself).
The task is to find all the holes in a graph, given an ordinary adjacency matrix or adjacency list. An ordinary one: no changes to the data structure are allowed. And we want to find them in linear time, O(n). Ambitious, isn’t it?
By the way, you may prefer to think about how to solve it before seeing the solution. It’s neither trivial nor impossible.
First of all, we should analyze the problem to see whether we can make it simpler. We are searching for holes, but how many holes can a graph have? Actually, at most one. Why?
Remember the definition of a hole: it is pointed at by every node and it doesn’t point at any node. Suppose we had two holes, A and B. Because they are holes, both must be pointed at by every other node, which means that A must be pointed at by B and vice versa. But if they point at each other, each of them is pointing at a node, so neither can be a hole.
First lemma: a graph has at most one hole.
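The lemma suggests a linear-time strategy (this is one possible approach, not necessarily the exact one the post goes on to derive): scan the matrix once to eliminate all but one candidate, then verify it. The key observation is that a single entry adj[c][i] disqualifies one of the two nodes: if c points at i, c is not a hole; if it doesn’t, i is not pointed at by everyone. A sketch, assuming a 0/1 adjacency matrix:

```python
def find_hole(adj):
    """Return the unique 'hole' of a directed graph, or None.

    adj is an adjacency matrix: adj[i][j] == 1 iff i points at j.
    Elimination plus verification does O(n) matrix lookups in total.
    """
    n = len(adj)
    if n == 0:
        return None
    # Elimination pass: each lookup discards either the candidate c
    # (it points at someone) or node i (not pointed at by c).
    c = 0
    for i in range(1, n):
        if adj[c][i]:
            c = i
    # Verification pass: c must point at nobody and be pointed at by all.
    for i in range(n):
        if i == c:
            continue
        if adj[c][i] or not adj[i][c]:
            return None
    return c
```

For example, in a 3-node graph where node 1 has an empty row and every other row has a 1 in column 1, `find_hole` returns 1.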
30 Mar 2017
For some reason, the sum of the inverses of the squares of all positive natural numbers is π²/6. This sum is also known as Riemann’s ζ(2), and the search for its value is called the Basel problem.
In this article, I cover the proof of this equality step by step. No advanced mathematical knowledge is required to follow it; the advanced topics needed for the proof will be explained along the way.
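Before the proof, a quick numerical sanity check can be reassuring: the partial sums of the series do creep toward π²/6 ≈ 1.6449. A small sketch:

```python
import math

def basel_partial_sum(n):
    """Partial sum of 1/k^2 for k = 1..n, which approaches pi^2/6."""
    return sum(1.0 / (k * k) for k in range(1, n + 1))

# The tail of the series beyond n is smaller than 1/n, so a million
# terms pins down roughly six correct digits of pi^2/6.
```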
19 Mar 2017
The world is full of interesting data structures. Today I want to cover the Bloom filter, which is actually quite simple, although it took me a while to get the idea.
Basically, it is a probabilistic data structure that uses a remarkably small amount of space to tell us whether an element is in a set. Well, not exactly that. It can tell us two things:
- The element may be in the set.
- The element is definitely not in the set.
And that’s all it can do well.
Hmm, it definitely looks curious. “May be in the set”? Usually we work with data structures that answer for sure, but that uncertainty is the price of using much less space. Let’s have a look.
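To make those two answers concrete, here is a minimal sketch. The sizes (m = 1024 bits, k = 3 hash functions) and the salted-hash trick are my own illustrative choices, not anything prescribed by the structure itself:

```python
class BloomFilter:
    """Minimal Bloom filter: an m-bit array and k salted hash functions.

    add() sets k bits; membership tests return True ("may be in the set")
    or False ("definitely not in the set"). No false negatives, but
    false positives are possible.
    """

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, item):
        # Salting Python's hash stands in for k independent hash functions.
        return [hash((salt, item)) % self.m for salt in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item):
        # True only if every one of the item's k bits is set.
        return all(self.bits[p] for p in self._positions(item))
```

Anything we added is guaranteed to test positive; an absent element tests negative unless all of its k bits happen to collide with set ones, which is what makes the positive answer only a “maybe”.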
16 Mar 2017
For some reason, everybody is obsessed with primes. Well, not everybody; rather, mathematicians and a few others. An interesting problem with primes is how to generate them fast. Really fast. One fairly recent proposal to reach this goal is the Sieve of Atkin.
In my opinion, this sieve involves too many mathematical concepts for someone without a strong mathematical background, like me. That’s why I’m going to try to explain it in “easy” terms.
If you are interested in this algorithm or the concepts it covers, I urge you to take a look at the paper published by Atkin and Bernstein, which is much more complete (although also much more complex).
07 Mar 2017
If you have worked with graphs, you have probably used an adjacency matrix. If your graph is undirected, you may notice that the element [i][j] is equal to [j][i]. Something similar happens when doing DFA minimization. Unfortunately, this means we are “wasting” half of the memory, and that doesn’t sound good.
So what we would like is a data structure that works exactly the same way but uses half of the memory. We can build it easily with a bit of arithmetic. By the way, in a graph we don’t usually need the diagonal, but in my example I’m going to keep it to make things simpler.
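The idea can be sketched as follows: store only the lower triangle (diagonal included) in a flat array of n(n+1)/2 cells, and map any (i, j) pair into it by swapping the indices so the row is always ≥ the column. The class and method names here are just illustrative:

```python
class SymmetricMatrix:
    """Symmetric n*n matrix stored as its lower triangle (with diagonal)
    in a flat list of n*(n+1)//2 cells: half the memory of a full matrix.
    """

    def __init__(self, n):
        self.n = n
        self.data = [0] * (n * (n + 1) // 2)

    def _index(self, i, j):
        # Symmetry lets us always address the lower triangle (row >= col).
        if i < j:
            i, j = j, i
        # Row i starts after the 1 + 2 + ... + i cells of the rows above it.
        return i * (i + 1) // 2 + j

    def get(self, i, j):
        return self.data[self._index(i, j)]

    def set(self, i, j, value):
        self.data[self._index(i, j)] = value
```

Setting [1][3] and reading [3][1] hit the same cell, so the structure behaves exactly like a full symmetric matrix from the outside.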