Combinatorial lines and fMRI

February 20, 2016

(version 1, will try to expand)

Imagine that you have an alphabet consisting of some letters \lbrace a,b,c, \dots \rbrace . Now imagine another symbol; call it *. Suppose I know how to construct words, i.e., sequences of letters drawn only from the alphabet. A friend of mine can interrupt me at any point of the sequence by shouting *. Suppose now that I only know the letters \lbrace d,o,r,s\rbrace and that each time I try to spell a 5-letter word using these letters my friend interrupts me by shouting *. If she is really aggressive, the outcome could be something like ***** ; if not, I could end up with something like door*.
A sequence containing at least one interruption * is called a root.
Now imagine that someone else listens to us. If I start saying the word ‘door’ and my friend interrupts me, the listener can guess the following:

  1. I said dooro
  2. I said doors
  3. I said doorr
  4. I said doord

This set is called the combinatorial line of the root door*.
To put the above a little more strictly, let A be a finite alphabet and let * be a new symbol not in A. Words are sequences of letters of the alphabet that do not contain * . Sequences containing at least one * are called roots. A combinatorial line is the set of words obtained from a root by simultaneously replacing every * with the same letter of the alphabet, one word for each letter; we say these words stem from, or are rooted by, that root.

Exercise: Suppose that the alphabet consists of 0 and 1. Calculate the number of combinatorial lines for sequences of length n .
(Solution here)
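One way to sanity-check the count is brute force. The following sketch (Python is my choice here, not the post's) enumerates every root over {0, 1, *} and collects the distinct lines they generate:

```python
from itertools import product

def combinatorial_lines(n):
    """Enumerate all combinatorial lines over the alphabet {0, 1}
    for words of length n. A root is a string over {0, 1, *} with
    at least one *; its line is the set of words obtained by
    replacing every * simultaneously with the same letter."""
    lines = set()
    for root in product("01*", repeat=n):
        if "*" not in root:
            continue  # star-free strings are words, not roots
        line = frozenset(
            "".join(letter if c == "*" else c for c in root)
            for letter in "01"
        )
        lines.add(line)
    return lines

for n in range(1, 6):
    # Every root turns out to give a distinct line, so the count is
    # 3^n - 2^n: all strings over {0, 1, *} minus the star-free ones.
    assert len(combinatorial_lines(n)) == 3**n - 2**n
```

The stars of a root sit exactly at the positions where its two words differ, which is why no two roots produce the same line.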

Now our alphabet A is \lbrace 0,1 \rbrace . What I will do here is play a little bit with combinatorial lines and AAL coordinates. AAL labels are neuroanatomical labels of the brain (in a brain-coordinate system) commonly used in fMRI. Here I used the 116 predefined labels and coordinates as found in the BrainNet Viewer tool. I will encode the areas using a specific binary encoding. Obviously, the way each area is assigned a code is completely arbitrary.

For example I can have something like this:

      000 ; Hippocampus
      001 ; Posterior Cingulate
      010 ; …
The pair 010, 111 can be considered a combinatorial line (with root *1*), whereas the pair 010, 101 cannot. What we are going to do is, for every entry in the encoding, search all other entries and check whether the two together form a combinatorial line.
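The pair test can be made mechanical: two distinct binary codes form a combinatorial line exactly when, at every position where they differ, the same code always contributes the 0 and the other the 1. A small Python sketch (the language choice is mine):

```python
def is_line(u, v):
    """Return True if the equal-length binary strings u, v form a
    combinatorial line, i.e. there is a root whose *-positions are
    exactly the positions where u and v differ."""
    diffs = {(a, b) for a, b in zip(u, v) if a != b}
    if not diffs:
        return False  # identical words: no root with a * fits
    # All differing positions must be oriented the same way:
    # one word supplies all the 0s, the other all the 1s.
    return len(diffs) == 1

assert is_line("010", "111")      # root *1*
assert not is_line("010", "101")  # differences point both ways
```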

What I did for this post is encode the areas based on their Euclidean distance from a reference point. For each area I calculated the Euclidean distance between its xyz coordinates and the point (0,0,0). Then I sorted the areas in ascending order and used an 8-bit binary encoding. Therefore 00000000 is the area with the smallest distance from (0,0,0), 00000001 the area with the next smallest distance, and so on.
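As a sketch of this encoding step (in Python, with made-up xyz triples standing in for the BrainNet Viewer node file; the area names and coordinates below are illustrative, not the real AAL values):

```python
import math

# Hypothetical coordinates standing in for the 116 AAL areas
# shipped with BrainNet Viewer.
areas = {
    "Precuneus_L": (-7.2, -56.1, 48.0),
    "Hippocampus_L": (-25.0, -20.8, -10.1),
    "Cingulum_Post_L": (-4.9, -42.9, 24.9),
}

# Sort by Euclidean distance from the origin and assign 8-bit codes
# in ascending order: 00000000 is the closest area, 00000001 the next.
by_distance = sorted(areas, key=lambda a: math.dist(areas[a], (0.0, 0.0, 0.0)))
encoding = {area: format(rank, "08b") for rank, area in enumerate(by_distance)}
```

With 116 areas the ranks 0–115 all fit comfortably in 8 bits.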
What I did afterwards was to find all the combinatorial lines in this encoding for each area. Remember that since we are using a binary encoding, a combinatorial line is a pair of entries; e.g., 00000000 and 00000001 form the combinatorial line with root 0000000* (in a binary alphabet the root of a pair is in fact unique: the *s sit exactly at the positions where the two words differ). Therefore for each area I found all the combinatorial pairs that belong to the encoding.
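Putting the steps together, here is a self-contained Python sketch of the pair search, with a toy four-code codebook in place of the 116 real 8-bit codes:

```python
def combinatorial_pairs(code, codebook):
    """Return the codes in `codebook` that form a combinatorial line
    with `code`: at every position where the two codes differ, one
    code must contribute all the 0s and the other all the 1s (the
    root is then the pair with those positions replaced by *)."""
    pairs = []
    for other in codebook:
        if other == code:
            continue
        diffs = {(a, b) for a, b in zip(code, other) if a != b}
        if len(diffs) == 1:  # all differences oriented the same way
            pairs.append(other)
    return pairs

# Toy codebook; the post uses the 8-bit codes of all 116 AAL areas.
codebook = ["00000000", "00000001", "00000010", "00000011"]
```

For instance, `combinatorial_pairs("00000001", codebook)` keeps 00000000 (root 0000000*) and 00000011 (root 000000*1) but rejects 00000010, whose differences with 00000001 point in opposite directions.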

Having all the combinatorial lines for each area, I tried to plot some!
In the next pictures (you may need to click and zoom for better resolution) the big yellow node is the area under consideration. The other nodes stand for the areas that its combinatorial lines consist of.

Here is the left precuneus.

Here is the left hippocampus.

I was hoping for a default-mode network but didn’t see a clear picture of it :). For the precuneus I saw some medial frontal and angular areas, as well as some areas of the cerebellum. For the hippocampus I identified the caudate and some angular and orbital areas.

Question: Is there any true connection between the topology of brain functional networks and multi-dimensional combinatorial theorems (such as Hales–Jewett)?

FCRC talks

July 10, 2015

Mental Wilderness

I was at FCRC (the CS conference conglomerate that happens once every 4 years), June 13-19. Here are some of the talks I found particularly memorable.

Personal notes on FCRC talks are at , and (very rough) notes on STOC/CCC/EC talks are at . Note that neither has been edited.

FCRC award/plenary talks

  • Turing award lecture (The Land sharks are on the squawk box), Michael Stonebraker: Stonebraker drew parallels between his work in building Postgres, a relational database system, and his cross-country bike trip. He described numerous challenges and how they were overcome in both situations, concluding that they were both about “making it happen”.
  • Interdisciplinary research from the view of theory, Andrew Yao: Yao comes from the viewpoint of theoretical computer science but has since worked on connections to physics (quantum computation), economics (auction theory), and cryptography (certifiable randomness). Theoretical computer science started with computability…

View original post 1,006 more words

Paper of the day 03/31/15

April 1, 2015

Paper of the day:

Representations of real numbers as sums and products of Liouville numbers

by P. Erdős

Monte Carlo Methods

March 25, 2015

Paul Erdös’s 102-ennial

February 27, 2015

in theory

Paul Erdős would be 102 years old this year, and in celebration of this the Notices of the AMS have published a two-part series of essays on his life and his work: [part 1] and [part 2].

Of particular interest to me is the story of the problem of finding large gaps between primes; recently Maynard, Ford, Green, Konyagin, and Tao solved an Erdős $10,000 question in this direction. It is probably the Erdős open question with the highest associated reward ever solved (I don’t know where to look up this information — for comparison, Szemerédi’s theorem was a $1,000 question), and it is certainly the question whose statement involves the most occurrences of “log”.

View original post

Deep learning for chess

January 15, 2015

Very interesting post on how to construct a simple neural network for chess AI.

SUIF publications

January 15, 2015

Papers from the Stanford Compiler Group

The Krein-Milman Theorem

January 14, 2015

Nirakar Neo's Blog

1. The Krein-Milman theorem in Locally Convex Spaces

My project work this semester focuses on understanding the paper The Krein-Milman Theorem in Operator Convexity by Corran Webster and Soren Winkler, which appeared in the Transactions of the AMS [Vol. 351, No. 1, Jan 1999, 307–322]. But before reading the paper, it is imperative to understand the (usual) Krein-Milman theorem, which is proved in the context of locally convex spaces. My understanding of this part follows the book A Course in Functional Analysis by J. B. Conway. To begin with, we shall collect the preliminaries that we shall need to understand the Krein-Milman theorem.

1.1. Convexity

Let $\mathbb{K}$ denote the real ($\mathbb{R}$) or the complex ($\mathbb{C}$) number field. Let $X$ be a vector space over $\mathbb{K}$. A subset of a vector space is called convex if for any two points in the subset, the line segment joining them…

View original post 3,073 more words

Actions do change the world.

January 11, 2015

Quantum Frontiers

I heard it in a college lecture about Haskell.

Haskell is a programming language akin to Latin: Learning either language expands your vocabulary and technical skills. But programmers use Haskell as often as slam poets compose dactylic hexameter.*

My professor could have understudied for the archetypal wise man: He had snowy hair, a beard, and glasses that begged to be called “spectacles.” Pointing at the code he’d projected onto a screen, he was lecturing about input/output, or I/O. The user inputs a request, and the program outputs a response.

That autumn was consuming me. Computer-science and physics courses had filled my plate. Atop the plate, I had thunked the soup tureen known as “XKCD Comes to Dartmouth”: I was coordinating a visit by Randall Munroe, creator of the science webcomic xkcd, to my college. The visit was to include a cake shaped like the Internet, a robotic velociraptor, and

View original post 249 more words

Dual spaces

January 11, 2015

lim Practice= Perfect

Suppose $(A, \|\cdot\|_A)$ and $(B, \|\cdot\|_B)$ are Banach spaces and $A^*$ and $B^*$ are their dual spaces. If $A \subset B$ with $\|\cdot\|_B \leq C\|\cdot\|_A$, then

$$i: A \to B, \quad x \mapsto x$$

is an embedding. Let us consider the relation between the two dual spaces. For any $f \in B^*$,

$$|\langle f, x\rangle| = |f(x)| \leq \|f\|_{B^*}\|x\|_B \leq C\|f\|_{B^*}\|x\|_A \quad \forall\, x \in A.$$

Then $f|_A$ is a bounded linear functional on $A$, and

$$i^*: B^* \to A^*, \quad f \mapsto f|_A$$

is a bounded linear operator.

In the very special case that $A$ is a closed subspace of $B$ under the norm $\|\cdot\|_B$, one can prove that $i^*$ is surjective. In fact, every $g \in A^*$ can be extended to a $\bar{g}$ on $B$ by the Hahn–Banach theorem such that $i^*\bar{g} = g$. Then

$$A^* = B^*/\ker i^*.$$

Let us take $A = H^1_0(\Omega)$ and $B = H^1(\Omega)$…

View original post 76 more words
