Artificial Intelligence

From Eyewire
Revision as of 15:56, 17 November 2015 by Nkem test (Marked this version for translation)


The 3D neural images in Eyewire are based on serial electron microscope (serial EM) images of neurons. We take a series of 2D images and stack them on top of each other to build the 3D reconstruction.

The purpose of the AI we use is to separate neurites (the cell’s branches) from each other. Early EM image tracing was done without computer aid, and it would take over a decade to find the connections between just a few hundred cells. Newer computational techniques allow the computer to aid us in tracing cells far more efficiently.

Labeling Individual Objects

In the first stage we begin to label individual objects. Usually we begin with a human tracer who identifies and colors all the objects in a given volume. The results become training data that help develop the algorithms the AI will use on future volumes. If we only have a small sample of training data, the AI may “overlearn” (overfit) it and fail to generalize properly to new data sets. So the more training data we can feed the AI, the better. Eyewire is a good source of user data that we can use to improve our algorithms.
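A minimal sketch of what "labeling individual objects" means in practice, using a toy binary volume and SciPy's connected-component labeling; the volume and its contents are invented for illustration, and a real tracer's labels would of course come from hand annotation, not from this automatic pass:

```python
import numpy as np
from scipy import ndimage

# Toy 4x4x4 binary volume: 1 = cell material, 0 = background.
volume = np.zeros((4, 4, 4), dtype=int)
volume[0:2, 0:2, 0:2] = 1   # one blob in a corner
volume[3, 3, 0:4] = 1       # a separate thin "branch"

# Assign each connected object its own integer ID, so every voxel
# is labeled with the object it belongs to.
labels, num_objects = ndimage.label(volume)
print(num_objects)  # 2 separate objects
```

A volume labeled this way, one integer ID per object, is exactly the form the training data takes: the AI learns to reproduce such labelings on volumes it has never seen.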

Affinity Graph Labeling

The next stage gives us a more sophisticated way of determining where boundaries lie between cell branches. This is known as “affinity graph labeling.” The affinity graph considers all the material in a given segment and makes decisions about where the boundaries lie between objects. The AI looks at each voxel, or “volume pixel” (imagine a 3D pixel), and determines the probability that it belongs to the same object as its neighbors. By determining the “affinity” any given voxel has with each adjacent voxel, the AI can begin to group objects and separate them from one another.
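The structure of an affinity graph can be sketched as follows. This toy example hand-crafts affinities from intensity differences between neighboring voxels rather than learning them with a network (as the real AI does), and the random volume is invented for illustration:

```python
import numpy as np

# Toy 3x3x3 intensity volume with values in [0, 1].
rng = np.random.default_rng(0)
volume = rng.random((3, 3, 3))

def affinity(v, axis):
    """Affinity between each voxel and its next neighbor along one axis:
    high when intensities are similar, low across a sharp boundary."""
    a = np.take(v, range(v.shape[axis] - 1), axis=axis)
    b = np.take(v, range(1, v.shape[axis]), axis=axis)
    return 1.0 - np.abs(a - b)  # hand-crafted stand-in for a learned affinity

# One affinity map per axis (z, y, x): together these are the edge
# weights of the affinity graph over the voxel grid.
affinities = [affinity(volume, ax) for ax in range(3)]
```

Each map holds one number per pair of adjacent voxels; grouping voxels then reduces to deciding which of these edges to keep.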

The AI determines boundaries between objects by the probability that a voxel belongs to the same object as its neighbors. Our images are segmented with a conservatively high probability threshold: the machine won’t connect two groups of voxels unless it is fairly certain they belong to the same segment and are bounded by the same border. This helps to minimize mergers, but means the AI is more hesitant to connect large pieces of a cell branch.
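How the threshold trades mergers against splits can be shown on a made-up 1D row of voxels with invented affinity values between neighbors; a stricter threshold cuts more edges, so true boundaries are rarely missed, at the cost of breaking branches into more fragments:

```python
# Affinity between voxel i and voxel i+1 in a row of 6 voxels:
# high inside objects, low (0.2) at the one true boundary.
affinities = [0.9, 0.8, 0.2, 0.85, 0.9]

def segment(affinities, threshold):
    """Connect neighbors only when their affinity clears the threshold;
    every cut starts a new segment label."""
    labels = [0] * (len(affinities) + 1)
    current = 0
    for i, a in enumerate(affinities):
        if a < threshold:
            current += 1          # cut here: begin a new segment
        labels[i + 1] = current
    return labels

print(segment(affinities, 0.5))   # [0, 0, 0, 1, 1, 1]: one clean boundary
print(segment(affinities, 0.88))  # [0, 0, 1, 2, 3, 3]: stricter, so extra splits
```

With the strict threshold no two different objects are merged, but one object is split into pieces; that over-segmentation is exactly the work left to the players.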

Semiautomated Segmentation

One tool we have to help speed up the reconstruction process is called semiautomated segmentation. This feature lets the computer piece together branch fragments it believes are correct, so the human user has less work to do when manually connecting segments. The more the computer learns to piece together connected segments, the less effort is left for the human user.
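A minimal sketch of that division of labor, with hypothetical fragment names and merge-confidence scores: the machine applies only its most confident merges automatically and leaves the ambiguous ones for a human to decide.

```python
# Candidate fragment pairs with the AI's merge confidence (invented values).
candidate_merges = {
    ("frag_a", "frag_b"): 0.97,
    ("frag_b", "frag_c"): 0.55,   # ambiguous: could be a merger
    ("frag_c", "frag_d"): 0.93,
}

AUTO_MERGE = 0.9  # only very confident merges are applied without review

auto_applied = [pair for pair, p in candidate_merges.items() if p >= AUTO_MERGE]
for_human = [pair for pair, p in candidate_merges.items() if p < AUTO_MERGE]

print(len(auto_applied), len(for_human))  # 2 automatic merges, 1 left for review
```

As the AI improves, more pairs clear the confidence bar and the human queue shrinks, which is the payoff described above.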
