Neurons--growth control, image processing, robust features.


Control of the growth of neurons

There might be two factors which control the growth of dendrites and neurons. One of these factors is recognized, and it is positive: the more a neuron is stimulated, the more it grows. The other, negative, factor is only my (naive?) conjecture: as part of their maintenance, neurons expel some substance into the surrounding neighborhood (tissue). This expelled substance stifles the growth of dendrites and neurons--the higher its concentration, the lower the chance of growing a new dendrite.

A combination of a negative and a positive factor is a very efficient method of control; it makes the process stable.

Each controlling force can be modeled and studied mathematically on its own; the two can also be modeled together. All three models should be useful, and the combined model should approximate reality best.

The positive factor can be modeled by, so to speak, ordinary mathematics: combinatorial and algebraic manipulation of elementary functions, plus a bit of elementary mathematical analysis. The negative factor--the concentration (percentage) of the expelled substance--can be described by a linear partial differential equation of first degree (like the heat equation), or rather a flow (like heat propagation), and can be modeled on a computer (preferably a parallel image processor) by the corresponding covariant finite-window linear transformation of first degree.
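As a minimal sketch of the negative factor, the following Python/NumPy snippet applies a finite-window linear update to a 2-D concentration grid: each cell moves a fraction of the way toward the mean of its four neighbors, a discrete analogue of heat-like flow. The function name and the rate parameter are illustrative assumptions, not part of the original text.

```python
import numpy as np

def diffuse_inhibitor(c, rate=0.2, steps=1):
    """Spread the expelled substance over a 2-D grid.

    Each step replaces every cell by a blend of itself and the mean of
    its 4 neighbors (a 5-point finite-window linear transformation).
    Edge padding keeps the total amount of substance constant.
    """
    for _ in range(steps):
        p = np.pad(c, 1, mode="edge")  # replicate border values
        neighbour_mean = (p[:-2, 1:-1] + p[2:, 1:-1]
                          + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        c = c + rate * (neighbour_mean - c)
    return c

# A point source of inhibitor flattens out over time:
grid = np.zeros((5, 5))
grid[2, 2] = 1.0
spread = diffuse_inhibitor(grid, rate=0.2, steps=10)
```

Because the update is linear and the window is the same at every cell, a parallel image processor can apply it to all cells simultaneously, which is the point of the "finite window" formulation.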

Combinatorial tree (graph)

The challenge of recovering a combinatorial structure from a fuzzy picture occurs in several applications. In the case of VLSI design it seems an absolutely necessary task (at least for a given approach), and there are strong methods (tools) to decide in each case whether or not the recovery has succeeded.

In other applications, like fingerprint identification and neuron studies, the importance of the task is perhaps exaggerated. First of all, there is no certainty that the recovery program has succeeded. Thus there is a danger that relying on the obtained combinatorial structure invites errors, even gross errors, in different directions. Fortunately, the combinatorial structure is not necessary for obtaining (computing) certain characteristics of the studied objects (say, of a neuron). Direct methods of image processing, without the detour via the tree or other combinatorial structure, can be more efficient and less prone to errors, especially large errors. Furthermore--and possibly this is even more important--one should creatively define and study features which are based directly on the original data (on the images, which we do have) and not on the recovered combinatorial structure.

Feature robustness. Example.

The recovered combinatorial structure is not robust--small procedural errors may cause huge discrepancies. Features should be robust. Here is an example:

EXAMPLE (Neuron density)

Let's say there is a 300×300-pixel image of a neuron, which amounts to 90K pixels. Consider the following three-step procedure, which a parallel image processor performs extremely fast (even for a much larger image); only the first step is prone to a minor error, of no dramatic consequence:

  1. An image processing program identifies the pixels--call them n-pixels--which belong to the neuron; say there are  N := 8K  of them. This number is robust in the following sense: if the program is wrong about 200 times, then the number is off by at most 2.5% (in general less, because the errors partly cancel out). For a non-robust feature this would be a disaster; in this example it is only a minor nuisance.
  2. Select a certain  real r > 0. Identify the r-neighborhood of the domain of n-pixels, i.e. all pixels which are no farther than distance  r  from the nearest n-pixel--this is the union of the r-balls (disks) around the n-pixels. Let's say the r-neighborhood consists of  A := 56K  pixels.
  3. Now we compute the r-density of the neuron:

    dr  :=  N / A

    This is precisely a parameter which fits the growth-stifling issue, because it is the average number of neuron pixels per neighborhood pixel, counted around the neuron pixels only (not around the other pixels).
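The three steps above can be sketched in Python/NumPy as follows. The brute-force nearest-distance computation stands in for what a parallel image processor would do by dilating the n-pixel set with an r-disk; the function name is an illustrative assumption.

```python
import numpy as np

def r_density(mask, r):
    """Compute the r-density d_r := N / A of a binary neuron image.

    mask : 2-D boolean array, True at n-pixels (neuron pixels).
    r    : neighborhood radius in pixels.

    N is the number of n-pixels; A is the number of pixels lying within
    Euclidean distance r of the nearest n-pixel (the r-neighborhood,
    i.e. the union of r-disks around the n-pixels, which contains the
    n-pixels themselves).
    """
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    N = len(ys)
    # Squared distance from every pixel to every n-pixel, then the
    # minimum over n-pixels; fine for small images, while a parallel
    # image processor would use a dilation by an r-disk instead.
    yy, xx = np.mgrid[0:h, 0:w]
    d2 = (yy[..., None] - ys) ** 2 + (xx[..., None] - xs) ** 2
    within = d2.min(axis=-1) <= r * r
    A = int(within.sum())
    return N / A

# A single n-pixel with r = 1: the neighborhood is 5 pixels
# (the pixel itself plus its 4 axis neighbors), so d_r = 1/5.
mask = np.zeros((7, 7), dtype=bool)
mask[3, 3] = True
d = r_density(mask, 1)  # 0.2
```

Note that an error of a few hundred misclassified pixels perturbs both N and A only slightly, so d_r inherits the robustness discussed in step 1.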