FCRC talks

July 10, 2015

Originally posted on Mental Wilderness:

I was at FCRC (the CS conference conglomerate that happens once every 4 years), June 13-19. Here are some of the talks I found particularly memorable.

My personal notes on the FCRC talks are at https://workflowy.com/s/wkI79JfN0N, and my (very rough) notes on the STOC/CCC/EC talks are at https://dl.dropboxusercontent.com/u/27883775/wiki/math/pdfs/stoc.pdf. Note that neither set has been edited.

FCRC award/plenary talks

  • Turing award lecture (The Land sharks are on the squawk box), Michael Stonebraker (https://www.youtube.com/watch?v=BbGeKi6T6QI): Stonebraker drew parallels between his work building Postgres, a relational database system, and a cross-country bike trip he took. He described the many challenges that arose in each and how they were overcome, concluding that both were about “making it happen”.
  • Interdisciplinary research from the view of theory, Andrew Yao: Yao comes from theoretical computer science but has since worked on its connections to physics (quantum computation), economics (auction theory), and cryptography (certifiable randomness). Theoretical computer science started with computability…


Paper of the day 03/31/15

April 1, 2015

Paper of the day:

Representations of real numbers as sums and products of Liouville numbers

by P. Erdős

Monte Carlo Methods

March 25, 2015
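
A minimal illustrative sketch of the idea (my own, not from any particular source): estimate π by sampling uniform points in the unit square and counting how many land inside the quarter circle.

```python
# Illustrative only: estimate pi by Monte Carlo. Points (x, y) are
# drawn uniformly from the unit square; the fraction landing inside
# the quarter circle x^2 + y^2 <= 1 converges to pi/4.
import random

def estimate_pi(n_samples: int = 1_000_000) -> float:
    inside = sum(
        1
        for _ in range(n_samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

if __name__ == "__main__":
    print(estimate_pi())  # error shrinks like O(1/sqrt(n))
```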


Paul Erdős’s 102-ennial

February 27, 2015

Originally posted on in theory:

Paul Erdős would be 102 years old this year, and in celebration of this the Notices of the AMS have published a two-part series of essays on his life and his work: [part 1] and [part 2].

Of particular interest to me is the story of the problem of finding large gaps between primes; recently Maynard, Ford, Green, Konyagin, and Tao solved an Erdős $10,000 question in this direction. It is probably the Erdős open question with the highest associated reward ever to be solved (I don’t know where to look up this information; for comparison, Szemerédi’s theorem was a $1,000 question), and it is certainly the question whose statement involves the most occurrences of “$\log$”.


Deep learning for chess

January 15, 2015

A very interesting post on how to construct a simple neural network for a chess AI.
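
The gist, as a rough sketch (my own toy illustration; the post’s actual board encoding, architecture, and training data differ): encode each position as a feature vector and train a small network to output a score for the side to move.

```python
# Toy sketch only: a tiny two-layer network trained to score chess
# positions. The 768-dim encoding (12 piece types x 64 squares) is a
# common choice, but the data below is random; a real setup would use
# positions and outcomes from a game database.
import numpy as np

rng = np.random.default_rng(0)

n_features = 12 * 64                                      # piece/square one-hots
X = rng.integers(0, 2, (4096, n_features)).astype(float)  # fake positions
y = rng.choice([-1.0, 1.0], (4096, 1))                    # fake game results

# f(x) = W2 . tanh(W1 x + b1) + b2, trained with squared loss.
W1 = rng.normal(0.0, 0.01, (n_features, 64)); b1 = np.zeros(64)
W2 = rng.normal(0.0, 0.01, (64, 1));          b2 = np.zeros(1)

lr = 1e-2
for epoch in range(20):
    h = np.tanh(X @ W1 + b1)      # hidden activations
    pred = h @ W2 + b2            # position scores
    loss = np.mean((pred - y) ** 2)

    # Backpropagation, with gradients of the squared loss derived by hand.
    g_pred = 2.0 * (pred - y) / len(X)
    g_W2 = h.T @ g_pred;  g_b2 = g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * (1.0 - h ** 2)
    g_W1 = X.T @ g_h;     g_b1 = g_h.sum(axis=0)

    for p, g in ((W1, g_W1), (b1, g_b1), (W2, g_W2), (b2, g_b2)):
        p -= lr * g

print(f"final training loss: {loss:.4f}")
```

Such a learned evaluation function is then typically plugged into game-tree search in place of a hand-written heuristic.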


SUIF publications

January 15, 2015

Papers from the Stanford Compiler Group


The Krein-Milman Theorem

January 14, 2015

Originally posted on Nirakar Neo's Blog:

1. The Krein-Milman theorem in Locally Convex Spaces

My project work this semester focuses on understanding the paper The Krein-Milman Theorem in Operator Convexity by Corran Webster and Soren Winkler, which appeared in the Transactions of the AMS [Vol. 351, No. 1, Jan. 1999, pp. 307-322]. But before reading the paper, it is imperative to understand the (usual) Krein-Milman theorem, which is proved in the context of locally convex spaces. My understanding of this part follows the book A Course in Functional Analysis by J. B. Conway. To begin with, we shall collect the preliminaries needed to understand the Krein-Milman theorem.

1.1. Convexity

Let $\mathbb{K}$ denote the real ($\mathbb{R}$) or the complex ($\mathbb{C}$) number field. Let $X$ be a vector space over $\mathbb{K}$. A subset of a vector space is called convex if for any two points in the subset, the line segment joining them…
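
For reference, the truncated sentence is heading toward the standard definition (standard textbook phrasing, not quoted from the original post):

$\displaystyle S \subseteq X \text{ is convex} \iff (1-t)x + ty \in S \quad \text{for all } x, y \in S \text{ and } t \in [0,1].$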


Actions do change the world.

January 11, 2015

Originally posted on Quantum Frontiers:

I heard it in a college lecture about Haskell.

Haskell is a programming language akin to Latin: Learning either language expands your vocabulary and technical skills. But programmers use Haskell as often as slam poets compose dactylic hexameter.*

My professor could have understudied for the archetypal wise man: He had snowy hair, a beard, and glasses that begged to be called “spectacles.” Pointing at the code he’d projected onto a screen, he was lecturing about input/output, or I/O. The user inputs a request, and the program outputs a response.

That autumn was consuming me. Computer-science and physics courses had filled my plate. Atop the plate, I had thunked the soup tureen known as “XKCD Comes to Dartmouth”: I was coordinating a visit by Randall Munroe, creator of the science webcomic xkcd, to my college. The visit was to include a cake shaped like the Internet, a robotic velociraptor, and


Dual spaces

January 11, 2015

Originally posted on lim Practice= Perfect:

Suppose $(A, \|\cdot\|_A)$ and $(B, \|\cdot\|_B)$ are Banach spaces, and let $A^*$ and $B^*$ be their dual spaces. If $A \subset B$ with $\|\cdot\|_B \leq C\,\|\cdot\|_A$, then

$\displaystyle i: A \to B, \qquad x \mapsto x$

is an embedding. Let us consider the relation between the two dual spaces. For any $f \in B^*$,

$\displaystyle |\langle f, x\rangle| = |f(x)| \leq \|f\|_{B^*}\,\|x\|_B \leq C\,\|f\|_{B^*}\,\|x\|_A \qquad \forall\, x \in A.$

Then $f|_A$ will be a bounded linear functional on $A$, and

$\displaystyle i^*: B^* \to A^*, \qquad f \mapsto f|_A$

is a bounded linear operator.

In the very special case that $A$ is a closed subspace of $B$ under the norm $\|\cdot\|_B$, one can prove that $i^*$ is surjective. In fact, every $g \in A^*$ can be extended by the Hahn-Banach theorem to a $\bar{g}$ on $B$ such that $i^*\bar{g} = g$. Then

$\displaystyle A^* = B^*/\ker i^*.$

Let us take $A = H^1_0(\Omega)$ and $B = H^1(\Omega)$…


Machine Learning School, Cambridge 2009

January 6, 2015

Old, but still very instructive:


Topic titles

1) Introduction to Bayesian Inference

2) Graphical Models

3) Markov Chains and Monte Carlo (see the sampler sketch after this list)

4) Information Theory

5) Kernel Methods

6) Approximate Inference

7) Topic Models

8) Gaussian Processes

9) Convex Optimization

10) Learning Theory

11) Computer Vision

12) Nonparametric Bayesian Models

13) Machine Learning and Cognitive Science

14) Reinforcement Learning

15) Foundations of Nonparametric Bayesian Methods

16) Deep Belief Networks

17) Particle Filters

18) Causality

19) Information Retrieval

20) Bayesian or Frequentist? Which Are You?
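
As a small taste of topic 3 above (my own illustrative sketch, not course material): a random-walk Metropolis sampler targeting a standard normal density.

```python
# Illustrative sketch: random-walk Metropolis sampling from the
# unnormalized density pi(x) = exp(-x^2 / 2), i.e. a standard normal.
import math
import random

def metropolis(n_steps: int = 50_000, step: float = 1.0) -> list:
    x = 0.0
    samples = []
    for _ in range(n_steps):
        proposal = x + random.uniform(-step, step)
        # Accept with probability min(1, pi(proposal) / pi(x)),
        # computed in log space for numerical safety.
        log_ratio = (x * x - proposal * proposal) / 2.0
        if math.log(random.random()) < log_ratio:
            x = proposal
        samples.append(x)
    return samples

if __name__ == "__main__":
    xs = metropolis()
    mean = sum(xs) / len(xs)
    var = sum((v - mean) ** 2 for v in xs) / len(xs)
    print(f"mean = {mean:.3f}, var = {var:.3f}")  # expect about 0 and 1
```

The chain’s empirical mean and variance converge to those of the target; this is about the simplest instance of the Markov chain Monte Carlo methods the lectures cover.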

