The 3D neuron images used in Eyewire are based on serial electron microscope images of neural tissue. What we do is take a series of 2D images and stack them up to build a 3D volume.
The purpose of the AI we use is to separate neurites (the branches of a cell) from one another. Early electron microscope tracing was done without computer assistance, and with those methods it would take more than a decade just to find the connections among a few hundred cells. Now, with smart technology, computers help us trace cells far more efficiently.
Labeling Individual Objects
In the first stage we begin to label individual objects. Usually we begin with a human tracer who identifies and colors all the objects in a given volume. The results become training data that help develop the algorithms the AI will use on future volumes. If we only have a small sample of training data, the AI may “overlearn” it (overfit) and fail to generalize properly to new data sets. So the more training data we can feed into the AI, the better. Eyewire is a good source of user data that we can utilize to improve our algorithms.
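The guard against overlearning described above is to hold some labeled data out of training and check generalization on it. Here is a minimal sketch; the volume names and the 80/20 split are illustrative assumptions, not details from the Eyewire pipeline.

```python
import random

# Hypothetical labeled volumes produced by human tracers.
labeled_volumes = [f"volume_{i}" for i in range(10)]

# Hold part of the data out: training on everything invites
# "overlearning" (overfitting), where the AI memorizes the sample
# instead of generalizing to new volumes.
random.seed(0)
random.shuffle(labeled_volumes)
split = int(0.8 * len(labeled_volumes))
train_set = labeled_volumes[:split]
validation_set = labeled_volumes[split:]
```

Performance measured on `validation_set` is the signal that the algorithm will hold up on volumes it has never seen.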
Affinity Graph Labeling
The next stage gives us a more sophisticated way of determining where boundaries lie between cell branches. This is known as “affinity graph labeling.” The affinity graph considers all the material in a given segment and makes decisions about where the boundaries lie between objects. The AI looks at each voxel, or “volume pixel” (imagine a 3D pixel), and determines the probability that it belongs to the same object as its neighbors. By determining the “affinity” any given voxel has with each adjacent voxel, the AI can begin to group objects and separate them from one another.
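The structure of an affinity graph can be sketched as one affinity value per voxel per axis. In real pipelines these values are predicted by a convolutional network (Turaga et al.); the intensity-similarity stand-in below is only a toy assumption to make the data layout concrete.

```python
import numpy as np

def toy_affinities(volume):
    """Build a toy affinity graph: for each voxel, one affinity per axis
    describing how likely it belongs to the same object as the next
    voxel along that axis. Real systems predict these with a trained
    convolutional network; here we use raw intensity similarity."""
    affs = []
    for axis in range(3):
        neighbor = np.roll(volume, -1, axis=axis)
        # Affinity in [0, 1]: 1 means "very likely the same object".
        aff = 1.0 - np.abs(volume - neighbor)
        # The last slice along this axis has no neighbor; zero it out.
        idx = [slice(None)] * 3
        idx[axis] = -1
        aff[tuple(idx)] = 0.0
        affs.append(aff)
    return np.stack(affs)  # shape: (3, X, Y, Z)

volume = np.random.rand(4, 4, 4)
affinities = toy_affinities(volume)
```

The result is a graph whose nodes are voxels and whose edge weights are the affinities, which the next stage can threshold to group voxels into objects.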
The AI determines boundaries between objects by the probability that a voxel belongs to the same object as its neighbors. Our segmentations use a high (conservative) probability threshold for connecting segments. This means that the machine won’t connect two groups of voxels unless it is fairly certain they belong to the same segment and are bounded by the same border. This helps to minimize mergers, but it also makes the AI more hesitant to connect large pieces of a cell branch.
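Thresholded grouping can be sketched as connected components over the affinity graph: join two neighboring voxels only when their affinity clears the threshold, then take the resulting components as segments. This is a simplified union-find sketch, assuming an affinity array of shape `(3, X, Y, Z)` where channel `i` holds the affinity to the next voxel along axis `i`; the threshold value is illustrative.

```python
import numpy as np

def segment(affinities, threshold=0.9):
    """Group voxels into segments by joining neighbors whose affinity
    exceeds a conservative (high) threshold. A high threshold avoids
    mergers at the cost of splitting branches into more fragments."""
    _, X, Y, Z = affinities.shape
    n = X * Y * Z
    parent = list(range(n))

    def find(i):  # union-find root lookup with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    def flat(x, y, z):  # map 3D coordinates to a flat index
        return (x * Y + y) * Z + z

    for x in range(X):
        for y in range(Y):
            for z in range(Z):
                if x + 1 < X and affinities[0, x, y, z] > threshold:
                    union(flat(x, y, z), flat(x + 1, y, z))
                if y + 1 < Y and affinities[1, x, y, z] > threshold:
                    union(flat(x, y, z), flat(x, y + 1, z))
                if z + 1 < Z and affinities[2, x, y, z] > threshold:
                    union(flat(x, y, z), flat(x, y, z + 1))

    # Every voxel gets its component's root as a segment label.
    return np.array([find(i) for i in range(n)]).reshape(X, Y, Z)

labels = segment(np.ones((3, 2, 2, 2)))  # all affinities 1: one segment
```

Raising the threshold produces more, smaller fragments (fewer mergers); lowering it produces fewer, larger ones (more risk of mergers).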
One tool we do have to help speed up the construction process is called semiautomated segmentation. This feature allows the computer to piece together branch fragments it believes are correct. This means that the human user has less work to do when he or she manually connects segments. The more the computer learns how to piece together connected segments, the less effort there is for the human user.
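The division of labor in semiautomated segmentation can be sketched as a confidence cutoff: the computer pre-joins only the fragment pairs it is very sure about and leaves the rest for the human tracer. The fragment names, scores, and cutoff below are hypothetical.

```python
# Hypothetical fragment pairs scored by the AI: (fragment_a, fragment_b, confidence)
scored_pairs = [
    ("frag_1", "frag_2", 0.98),
    ("frag_2", "frag_3", 0.65),
    ("frag_4", "frag_5", 0.92),
]

AUTO_MERGE_CONFIDENCE = 0.9  # only very confident joins are automated

# The computer merges the high-confidence pairs itself...
auto_merged = [(a, b) for a, b, c in scored_pairs if c >= AUTO_MERGE_CONFIDENCE]
# ...and queues the uncertain ones for the human tracer to decide.
needs_review = [(a, b) for a, b, c in scored_pairs if c < AUTO_MERGE_CONFIDENCE]
```

As the AI's confidence scores improve, more pairs land in `auto_merged` and the human's queue shrinks, which is exactly the effort reduction described above.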