Artificial Intelligence


Eyewire's 3D neuron images are based on serial electron microscope (serial EM) images of neurons. What we are doing is taking a series of 2D images and stacking them one on top of another to create the 3D reconstruction.
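As a rough sketch of that stacking step (the arrays below are stand-ins, not real EM data), a series of 2D slices can be stacked along a new axis to form a 3D volume:

```python
import numpy as np

# Stand-ins for 100 serial EM sections; in practice each slice would be
# loaded from an image file rather than generated randomly.
slices = [np.random.rand(256, 256) for _ in range(100)]

# Stacking the 2D slices along a new axis gives a 3D volume whose z-axis
# follows the order of the serial sections.
volume = np.stack(slices, axis=0)
print(volume.shape)  # (100, 256, 256)
```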

The purpose of the AI we use is to separate the neurites (the branches of the cell) from one another. Early tracing of EM images was done without computer assistance, and it would have taken more than a decade to find the connections among just a few hundred cells. New intelligent technologies allow the computer to help us trace cells more efficiently.

Labeling Individual Objects

In the first stage we begin to label individual objects. Usually we begin with a human tracer who identifies and colors all the objects in a given volume. The results become training data that help develop the algorithms the AI will use in future volumes. If we have only a small sample of training data, the AI may "overlearn" (overfit) it and fail to generalize properly to new data sets. So the more training data we can feed into the AI, the better. Eyewire is a good source of user data that we can use to improve our algorithms.
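As an illustration of why more training data matters (the names and shapes here are made up, not Eyewire's actual data format), labeled volumes can be treated as (image, labels) pairs, with some held out so we can tell whether the AI has merely memorized its training volumes:

```python
import random
import numpy as np

rng = np.random.default_rng(0)

# Each example pairs a small EM volume with the object IDs a human tracer
# painted over it. Both arrays are random placeholders here.
examples = [
    (rng.random((32, 32, 32), dtype=np.float32),   # raw EM intensities
     rng.integers(0, 4, size=(32, 32, 32)))        # tracer's object labels
    for _ in range(10)
]

# Hold a few volumes out of training. A model that "overlearns" will score
# well on the training volumes but poorly on the held-out ones.
random.shuffle(examples)
train_set, validation_set = examples[:8], examples[8:]
print(len(train_set), "training volumes,", len(validation_set), "held out")
```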

Affinity Graph Labeling

The next stage gives us a more sophisticated way of determining where boundaries lie between cell branches. This is known as "affinity graph labeling." The affinity graph considers all the material in a given segment and makes decisions about where the boundaries lie between objects. The AI looks at each voxel, or "volume pixel" (imagine a 3D pixel), and determines the probability that it belongs to the same object as its neighbors. By determining the "affinity" any given voxel has with all adjacent voxels, the AI can begin to group objects and separate them from one another.
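The sketch below is only an illustrative stand-in for the convolutional networks described in the references: it derives nearest-neighbor affinities from a hypothetical per-voxel "inside an object" probability map, so each edge of the graph carries an estimate of how likely its two voxels are to belong to the same neurite:

```python
import numpy as np

def nearest_neighbor_affinities(prob_same):
    # prob_same is a hypothetical (Z, Y, X) map of how likely each voxel is
    # to lie inside an object rather than on a boundary. The affinity of an
    # edge is approximated by the smaller of its two endpoint probabilities:
    # if either voxel looks like boundary material, the edge gets a low affinity.
    affinities = []
    for axis in range(prob_same.ndim):
        a = np.take(prob_same, range(0, prob_same.shape[axis] - 1), axis=axis)
        b = np.take(prob_same, range(1, prob_same.shape[axis]), axis=axis)
        affinities.append(np.minimum(a, b))
    return affinities  # one array of edge weights per axis

# Toy usage: random probabilities standing in for a classifier's output.
prob = np.random.rand(4, 4, 4)
aff_z, aff_y, aff_x = nearest_neighbor_affinities(prob)
print(aff_z.shape, aff_y.shape, aff_x.shape)  # (3, 4, 4) (4, 3, 4) (4, 4, 3)
```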

The AI determines boundaries between objects by the "probability" that a voxel belongs to the same object as its neighbors. The segmentation threshold is set conservatively: the machine won't connect two groups of voxels unless it is fairly certain they belong to the same segment and are bounded by the same border. This helps to minimize mergers, but it means the AI is more hesitant to connect large pieces of a cell branch.
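Continuing the sketch, one simple way to turn such a conservative threshold into segments is union-find: only edges whose affinity clears the threshold are allowed to merge their endpoints, so uncertain connections are left split rather than risking a merger (the real, learned procedure is the one described in the references):

```python
import numpy as np

def segment_by_threshold(edge_affinity, threshold=0.9):
    # Deliberately tiny example: edge_affinity[i] is the affinity between
    # voxel i and voxel i + 1 along a single row of voxels. With a high
    # (conservative) threshold, voxels are joined only when the AI is quite
    # sure they belong together, which minimizes mergers at the cost of
    # leaving branches in more pieces.
    n = len(edge_affinity) + 1
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, a in enumerate(edge_affinity):
        if a >= threshold:              # merge only the confident edges
            parent[find(i)] = find(i + 1)

    return [find(i) for i in range(n)]  # segment ID for each voxel

row = np.array([0.95, 0.40, 0.97, 0.99, 0.60])
print(segment_by_threshold(row))  # [1, 1, 4, 4, 4, 5]: the weak edges stay split
```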

Semiautomated Segmentation

One tool we do have to help speed up the construction process is called semiautomated segmentation. This feature allows the computer to piece together branch fragments it believes are correct, leaving less work for the human user who manually connects the remaining segments. The more the computer learns how to piece together connected segments, the less effort is required of the human user.
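As a hedged sketch of the idea (the fragment names and scores below are invented, not Eyewire's actual feature), the computer can pre-merge pairs of fragments it is very confident about and queue the uncertain pairs for the player to decide:

```python
# Each entry scores how confident a hypothetical classifier is that two
# branch fragments belong to the same cell.
candidate_joins = [
    ("frag_a", "frag_b", 0.98),
    ("frag_b", "frag_c", 0.55),
    ("frag_c", "frag_d", 0.97),
    ("frag_d", "frag_e", 0.30),
]

AUTO_MERGE_THRESHOLD = 0.95  # joins above this are made automatically

auto_merged = [(a, b) for a, b, s in candidate_joins if s >= AUTO_MERGE_THRESHOLD]
needs_human_review = [(a, b, s) for a, b, s in candidate_joins if s < AUTO_MERGE_THRESHOLD]

print("merged automatically:", auto_merged)
print("left for the player:", needs_human_review)
```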

References

Turaga, S., Briggman, K., Helmstaedter, M., Denk, W., & Seung, S. "Maximin Affinity Learning of Image Segmentation." arXiv preprint, 28 Nov 2009. http://arxiv.org/pdf/0911.5372.pdf

Turaga, S. C., et al. "Convolutional Networks Can Learn to Generate Affinity Graphs for Image Segmentation." Neural Computation 22.2 (2010): 511-538. http://dspace.mit.edu/handle/1721.1/60924