%!TEX root = paper.tex
%

\section{Image analysis}
\label{sec:image}

% provide rationale for why the language looks the way it does
% e.g. why do we fundamentally need derivatives
% talk about general structure of these algorithms

An overall goal in image-analysis methods is to correlate the
physical, biological, and anatomical structure of the objects being
scanned with the mathematical structure that is accessible in the
digital image.
Quantitative analysis of the scanned objects is computed
from their measured image representations.
We propose to simplify the
work of developing, implementing, and applying image-analysis
algorithms by including fundamental image operations directly in a
domain-specific language. One elementary operation in a variety of
analysis algorithms is convolution-based reconstruction of a
continuous image domain from the discretely sampled values, which
usefully mimics the underlying continuity of the measured space. A
related operation is the point-wise measurement of derivatives and
other locally-determined image attributes, which can support a richer
mathematical vocabulary for detecting and delineating image features
of interest. Matching the increasing flexibility of newer image
acquisitions, the image values involved may not be grayscale scalar
values, but multivariate vector or tensor values.
As we illustrate
below, with the abstraction of the continuous image and its derived
attributes in place, it is straightforward to describe a variety of
analysis algorithms in terms of potentially parallel threads of
computation. Based on our previous experience, our proposed initial
work focuses on direct volume rendering, fiber tractography, and
particle-based feature sampling.

\subsection{Continuous reconstruction and derived attributes}

At the scale of current imaging, the physical world is continuous, so
it is appropriate to build analysis methods upon abstractions that
represent the image as a continuous domain. The boundaries and
structures in the scanned object are generally not aligned in
any consistent way with the image sampling grid, so visualization and
analysis methods tend to work in the continuous \emph{world space}
associated with the scanned object, rather than the discrete {\em
index space} associated with the raster ordering of the image
samples. A rich literature on signal processing provides the
machinery for reconstructing continuous domains from discrete data via
convolution~\cite{GLK:unser00,GLK:Meijering2002}.
Rather than inventing
information where it is not known, established methods for
convolution-based reconstruction start with the recognition that some
amount of blurring is intrinsic to any image measurement process: each
sampled value records an integration over space as governed by a
\emph{point-spread function}~\cite{GLK:Gonzalez2002}. Subsequent convolution-based
reconstruction can use the same point-spread function or some
approximation of it to recover the continuous field from which the
discrete samples were drawn.
%
%The definition of convolution-based reconstruction starts with
%representing the sequence of sampled image values $v[i]$,
%defined for some range of integral values $i$, in a
%one-dimensional continuous domain
%as $v(x) = \sum_i v[i] \delta(x-i)$, where $\delta(x)$
%is the Dirac delta function.
%
The convolution of one-dimensional discrete data $v[i]$ with
the continuous reconstruction kernel $h(x)$ is defined as~\cite{GLK:Gonzalez2002}
\begin{equation*}
(v * h)(x) = \int v(\xi) h(x - \xi) d\xi % _{-\infty}^{+\infty}
= \int \sum_i v[i] \delta(\xi-i) h(x - \xi) d\xi % _{-\infty}^{+\infty}
= \sum_i v[i] h(x - i) ,
\end{equation*}
% -------------------
\begin{figure}[t]
\vspace*{-4em}
\begin{center}
\begin{tabular}{ccccc}
\includegraphics[width=0.28\hackwidth]{pictures/convo/data}
&
\raisebox{0.072\hackwidth}{\scalebox{1.2}{*}}
&
\includegraphics[width=0.28\hackwidth]{pictures/convo/kern0-bspl}
&
\raisebox{0.08\hackwidth}{\scalebox{1.2}{=}}
&
\includegraphics[width=0.28\hackwidth]{pictures/convo/data0-bspl} \\
$v[i]$
&&
$h(x)$
&&
$f(x) = (v * h)(x)$
\end{tabular}
\end{center}%
\caption{Reconstructing a continuous $f(x)$ by
convolving discrete data $v[i]$ with kernel $h(x)$}
\label{fig:convo-demo}
\end{figure}%
% -------------------
which is illustrated in Figure~\ref{fig:convo-demo}.
In practice, the bounds of the summation will be limited by the
\emph{support} of $h(x)$: the range of positions for which $h(x)$
is nonzero.
% The result is the same as adding together copies
%of $h(-x)$ located at the integers $i$, and scaled by $v[i]$.
Using {\em separable} convolution, a one-dimensional reconstruction kernel $h(x)$ generates a
three-dimensional reconstruction kernel $h(x,y,z) = h(x)h(y)h(z)$.
The three-dimensional separable convolution sums over three image indices
of the sampled data $v[i,j,k]$:
\begin{eqnarray*}
(v * h)(x,y,z) &=& \sum_{i,j,k} v[i,j,k] h(x - i) h(y - j) h(z - k) \; .
\end{eqnarray*}%
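As a concrete (if unoptimized) illustration, a minimal sketch of this
separable reconstruction is given below in Python, used here only as informal
sketch notation rather than the proposed language; the helper names are
hypothetical, positions are assumed to be in index space, the indices are
assumed to stay in bounds, and \texttt{h} is any kernel with $h(t) = 0$ for
$|t| \geq \mathtt{support}$.
\begin{verbatim}
import numpy as np

def tent(t):
    # linear "tent" kernel: h(t) = 0 for |t| >= 1, so support = 1
    return max(0.0, 1.0 - abs(t))

def reconstruct(v, h, support, x, y, z):
    # v: 3-D array of samples v[i,j,k]; h: 1-D kernel function;
    # support: integer s with h(t) = 0 for |t| >= s;
    # (x, y, z): sample position in index space
    i0, j0, k0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    f = 0.0
    for i in range(i0 - support + 1, i0 + support + 1):
        for j in range(j0 - support + 1, j0 + support + 1):
            for k in range(k0 - support + 1, k0 + support + 1):
                f += v[i, j, k] * h(x - i) * h(y - j) * h(z - k)
    return f
\end{verbatim}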

Many traditional image-analysis methods rely on measurements of the
local rates of change in the image, commonly quantified with the
first derivative or \emph{gradient} and the second derivative or {\em
Hessian}, which are vector and second-order tensor quantities,
respectively~\cite{GLK:Marsden1996}. Our work focuses on
three-dimensional image domains, in which the gradient is represented as a
3-vector, and the Hessian as a $3 \times 3$ symmetric matrix.
\begin{equation*}
\nabla f = \begin{bmatrix}
\frac{\partial f}{\partial x} \\
\frac{\partial f}{\partial y} \\
\frac{\partial f}{\partial z}
\end{bmatrix} ; \,
\mathbf{H} f = \begin{bmatrix}
\frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} & \frac{\partial^2 f}{\partial x \partial z} \\
& \frac{\partial^2 f}{\partial y^2} & \frac{\partial^2 f}{\partial y \partial z} \\
(sym) & & \frac{\partial^2 f}{\partial z^2}
\end{bmatrix}
\end{equation*}%
The gradient plays a basic role in many edge detection algorithms, and the
Hessian figures in ridge and valley detection. The constituent
partial derivatives are measured by convolving with the derivatives of a reconstruction kernel, due to the
linearity of differentiation and convolution:
\begin{eqnarray*}
\frac{\partial(v * h)(x,y,z)}{\partial x} &=& \sum_{i,j,k} v[i,j,k] h'(x - i) h(y - j) h(z - k) \\
\frac{\partial^2(v * h)(x,y,z)}{\partial x \partial y} &=& \sum_{i,j,k} v[i,j,k] h'(x - i) h'(y - j) h(z - k) .
\end{eqnarray*}%
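In the sketch above, these partial derivatives amount to swapping in the
kernel derivative on the differentiated axis; for example (again with
hypothetical names, and assuming a differentiable kernel \texttt{h} with
derivative \texttt{dh}):
\begin{verbatim}
def reconstruct_dx(v, h, dh, support, x, y, z):
    # same summation as reconstruct(), but with the kernel derivative dh
    # on the x axis, giving the partial derivative of (v*h) w.r.t. x
    i0, j0, k0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    fx = 0.0
    for i in range(i0 - support + 1, i0 + support + 1):
        for j in range(j0 - support + 1, j0 + support + 1):
            for k in range(k0 - support + 1, k0 + support + 1):
                fx += v[i, j, k] * dh(x - i) * h(y - j) * h(z - k)
    return fx
\end{verbatim}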

%\begin{equation*}
%\left. \frac{d(v * h)(x)}{dx} \right|_{x = x_0}
%= \left. \frac{d \sum_i v[i] h(x - i)}{dx}\right|_{x = x_0}
%= \sum_i v[i] h'(x_0 - i)
%= (v * h')(x_0) \; .
%\end{equation*}%

Finally, while the description above has focused on scalar-valued (grayscale)
images, much of modern imaging is producing more complex and
multi-variate images, such as vectors from particle image
velocimetry~\cite{GLK:Raffel2002} and diffusion tensors estimated from
diffusion-weighted MRI~\cite{GLK:Basser1994a}. In these situations,
the analysis algorithms may directly use the multi-variate data
values, but it is common to access the structure in the images through
some scalar quantity computed from the data, which effectively
condenses or simplifies the information.
In a vector field, one may
need only the vector magnitude (the speed) for finding features of
interest, but in a tensor field, various scalar-valued tensor invariants may
play a role in the analysis, such as the trace $\mathrm{tr}(\mathbf{D})$
or the determinant $\det(\mathbf{D})$
of a tensor $\mathbf{D}$.
Diffusion tensor analysis typically
involves a particular measure of the directional dependence of diffusivity,
termed Fractional Anisotropy (FA), which is defined as
\begin{equation*}
\mathrm{FA}(\mathbf{F})(\mathbf{x}) =
\sqrt{\frac{3 \, \mathrm{tr}(D D^T)}{2 \, \mathrm{tr}(\mathbf{D} \mathbf{D}^T)}} \in \mathbb{R}
\end{equation*}%
where $\mathbf{D} = \mathbf{F}(\mathbf{x})$ and $D = \mathbf{D} - \langle \mathbf{D} \rangle \mathbf{I}$
is the deviatoric tensor, with $\langle \mathbf{D} \rangle = \mathrm{tr}(\mathbf{D})/3$ the mean diffusivity.
Note that both vector magnitude $|\mathbf{v}|$ and tensor
fractional anisotropy $\mathrm{FA}(\mathbf{D})$ are non-linear
(\ie{}, $|\mathbf{v} + \mathbf{u}| \neq |\mathbf{v}| + |\mathbf{u}|$
and $\mathrm{FA}(\mathbf{D} + \mathbf{E}) \neq \mathrm{FA}(\mathbf{D}) + \mathrm{FA}(\mathbf{E})$),
which means that their spatial derivatives
are non-trivial functions of the derivatives of the image values.
The chain rule must be applied to determine the exact formulae,
and computing these correctly is one challenge in implementing image-analysis
methods that operate on vector or tensor fields~\cite{GLK:Kindlmann2007}.
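As a small illustration of that bookkeeping, the spatial derivative of the
magnitude of a vector field $\mathbf{v}(\mathbf{x})$ expands (by the chain
rule) to
\begin{equation*}
\frac{\partial |\mathbf{v}|}{\partial x_j}
= \frac{\sum_i v_i \, \partial v_i / \partial x_j}{|\mathbf{v}|}
= \frac{\mathbf{v}}{|\mathbf{v}|} \cdot \frac{\partial \mathbf{v}}{\partial x_j} \; ,
\end{equation*}%
mixing the reconstructed values with their first derivatives; the
corresponding expansion for $\nabla \mathrm{FA}$ is considerably longer.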
Having language-level support for evaluating these functions and their
spatial derivatives would simplify the implementation of analysis algorithms
on multi-variate images.

\subsection{Applications}
% isosurface
% creases (ridges & valleys)
% edges
% an overview of the computational model of image analysis
% local vs. global

The initial application of our domain-specific language focuses on
research areas in scientific visualization and image analysis, with
which we have previous experience.
In this section, we describe three such algorithms.
Volume rendering is included as a simple visualization method that
showcases convolution-based measurements of values and
derived attributes, a computational step that appears in the
inner loop of the other two algorithms.
Fiber tractography uses pointwise estimates
of axonal direction to trace paths of
possible neural connectivity in diffusion tensor fields.
Particle-based methods use a combination of energy terms and
feature constraints to compute uniform samplings of continuous
image features.

\subsection{Direct volume rendering}
\label{sec:dvr}

Direct volume rendering is a standard method for visually indicating
the overall structure in a volume dataset with a rendered image~\cite{GLK:drebin88,GLK:levoy88}.
It sidesteps explicitly computing
geometric (polygonal) models of features in the data, in contrast
to methods like Marching Cubes that create a triangular
mesh of an isosurface~\cite{GLK:Lorensen1987}.
Volume rendering maps more
directly from the image samples to the rendered image, by assigning
colors and opacities to the data values via a user-specified {\em transfer function}.
%
\begin{algorithm}
\caption{Direct volume rendering of a continuous field $V$, which maps
from position $\mathbf{x}$ to data values $V(\mathbf{x})$,
with transfer function $F$, which maps from data values to color and opacity.}
\label{alg:volrend}
\begin{algorithmic}
\FOR{samples $(i,j)$ \textbf{in} rendered image $m$}
\STATE $\mathbf{x}_0 \leftarrow$ origin of ray through $m[i,j]$
\STATE $\mathbf{r} \leftarrow $ unit-length direction of ray through $m[i,j]$
\STATE $t \leftarrow 0$ \COMMENT{initialize ray parametrization}
\STATE $\mathbf{C} \leftarrow \mathbf{C}_0$ \COMMENT{initialize color $\mathbf{C}$}
\WHILE{$\mathbf{x}_0 + t \mathbf{r}$ within far clipping plane}
\STATE $\mathbf{v} \leftarrow V(\mathbf{x}_0 + t \mathbf{r})$ \COMMENT{learn values and derived attributes at current ray position}
\STATE $\mathbf{C} \leftarrow \mathrm{composite}(\mathbf{C}, F(\mathbf{v}))$ \COMMENT{apply transfer function and accumulate color}
\STATE $t \leftarrow t + t_{step}$
\ENDWHILE
\STATE $m[i,j] \leftarrow \mathbf{C}$
\ENDFOR
\end{algorithmic}
\end{algorithm}%
%
Algorithm~\ref{alg:volrend} gives pseudocode for a basic volume
renderer. The inner loop depends on learning (via
convolution-based reconstruction) data values and
attributes in volume $V$ at position $\mathbf{x}_0 + t \mathbf{r}$. These are
then mapped through the transfer function $F$, which determines
the rendered appearance of the data values.
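The $\mathrm{composite}$ operation is left abstract in the pseudocode; one
common choice (a sketch of standard practice, not a requirement of the
algorithm) is front-to-back compositing with the \emph{over} operator,
where the accumulated color carries an opacity $\alpha$ and the transfer
function returns an opacity-weighted color:
\begin{equation*}
\mathrm{composite}((\mathbf{c}, \alpha), (\mathbf{c}_F, \alpha_F))
= (\mathbf{c} + (1 - \alpha)\,\mathbf{c}_F ,\;
   \alpha + (1 - \alpha)\,\alpha_F) \; .
\end{equation*}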

GPU-based methods for direct volume rendering are increasingly
common~\cite{GLK:Ikits2005}. The implementations typically involve
hand-coded GPU routines designed around particular hardware
architectures, with an eye towards performance rather than
flexibility. Although the outcome of our research may not outperform
hand-coded volume-rendering implementations, we plan to use volume
rendering as an informative and self-contained test case for
expressing high-quality convolution-based reconstruction in a
domain-specific language.

\subsection{Fiber tractography}
\label{sec:fibers}

Diffusion tensor magnetic resonance imaging (DT-MRI or DTI) uses a
tensor model of diffusion-weighted MRI to describe the
directional structure of tissue \emph{in vivo}~\cite{GLK:Basser1994}.
%
The coherent organization of axons in the white matter of the brain
tissue shapes the pattern of diffusion of water within
it~\cite{GLK:Pierpaoli1996}. The directional dependence of the
apparent diffusion coefficient is termed \emph{anisotropy}, and
quantitative measurements of anisotropy and its orientation have
enabled research on white matter organization in health and
disease~\cite{GLK:Bihan2003}.
%
Empirical evidence has shown that in much of the white matter, the
principal eigenvector $\mathbf{e}_1$ of the diffusion tensor $\mathbf{D}$
indicates the local direction of axon bundles~\cite{GLK:Beaulieu2002}.
%
Using integrals of the principal
eigenvector, we can compute paths through a diffusion tensor field that approximate axonal connectivity~\cite{GLK:conturo99,GLK:BasserMRM00}.
%
The path integration algorithm, termed \emph{tractography}, is shown
%
\begin{algorithm}
\caption{$\mathrm{tract}(\mathbf{p})$: integrate path everywhere tangent to
principal eigenvector of tensor field $\mathbf{D}(\mathbf{x})$ starting
at seed point $\mathbf{p}$}
\label{alg:tracto}
\begin{algorithmic}
\STATE $T_- \leftarrow [] ;\, T_+ \leftarrow []$ \COMMENT{initialize forward and backward halves of path}
\FOR{$d \in \{+1, -1\}$}
\STATE $\mathbf{x} \leftarrow \mathbf{p}$
\STATE $\mathbf{v} \leftarrow d \mathbf{e}_1(\mathbf{D}(\mathbf{p}))$ \COMMENT{use $d$ to determine starting direction from $\mathbf{p}$}
\REPEAT
\STATE $\mathbf{x} \leftarrow \mathbf{x} + s \mathbf{v}$ \COMMENT{Euler integration step}
\STATE $\mathbf{u} \leftarrow \mathbf{e}_1(\mathbf{D}(\mathbf{x}))$
\IF {$\mathbf{u} \cdot \mathbf{v} < 0$}
\STATE $\mathbf{u} \leftarrow -\mathbf{u}$ \COMMENT{fix direction at new position to align with incoming direction}
\ENDIF
\STATE $\mathrm{append}(T_d, \mathbf{x})$ \COMMENT{save new position to path so far}
\STATE $\mathbf{v} \leftarrow \mathbf{u}$ \COMMENT{propagate direction to next step}
\UNTIL {$\mathrm{CL}(\mathbf{D}(\mathbf{x})) < \mathrm{CL}_{\mathrm{min}}$} \COMMENT{anisotropy measure CL quantifies numerical certainty in $\mathbf{e}_1$}
\ENDFOR
\RETURN $\mathrm{concat}(\mathrm{reverse}(T_-), [\mathbf{p}], T_+)$
\end{algorithmic}
\end{algorithm}
%
in Algorithm~\ref{alg:tracto}. Euler integration is used here for
illustration, but Runge-Kutta integration is also common~\cite{GLK:NR}.
The termination condition for tractography is when the numerical
certainty of the principal eigenvector $\mathbf{e}_1$ is too low to
warrant further integration. The CL anisotropy measure is, like FA, a
scalar-valued tensor invariant~\cite{GLK:Westin2002}. The result of
tractography is a list of vertex locations along a polyline that
roughly indicates the course of axon bundles through the brain.
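
In the same Python sketch style as above, and assuming a hypothetical
callable \texttt{field(x)} that performs the convolution-based tensor
reconstruction at a world-space position, Algorithm~\ref{alg:tracto} can be
condensed to:
\begin{verbatim}
import numpy as np

def principal_evec(D):
    # unit eigenvector for the largest eigenvalue of a symmetric 3x3 tensor
    w, V = np.linalg.eigh(D)            # eigenvalues in ascending order
    return V[:, -1]

def cl(D):
    # one common form of the CL (linear anisotropy) invariant:
    # (lambda_1 - lambda_2) / (lambda_1 + lambda_2 + lambda_3)
    w = np.sort(np.linalg.eigvalsh(D))[::-1]
    return (w[0] - w[1]) / max(w.sum(), 1e-12)

def tract(field, seed, step=0.5, cl_min=0.1, max_steps=1000):
    halves = []
    for d in (+1.0, -1.0):              # integrate in both directions
        x = np.asarray(seed, float)
        v = d * principal_evec(field(x))
        half = []
        for _ in range(max_steps):
            x = x + step * v            # Euler integration step
            u = principal_evec(field(x))
            if np.dot(u, v) < 0:        # keep eigenvector sign consistent
                u = -u
            half.append(x.copy())
            v = u
            if cl(field(x)) < cl_min:   # stop where e1 is unreliable
                break
        halves.append(half)
    return list(reversed(halves[1])) + [np.asarray(seed, float)] + halves[0]
\end{verbatim}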

\subsection{Particle systems}
\label{sec:particles}

Some tasks can be naturally decomposed into computational units, which
can be termed \emph{particles}, that individually implement some
uniform behavior with respect to their environment so that their
iterated collective action represents the numerical solution to an
optimization or simulation. Such particle systems have a long history
in computer graphics for, among other applications, the simulation and
rendering of natural phenomena~\cite{GLK:Reeves1983}, the organic
generation of surface textures~\cite{GLK:Turk1991}, and the
interactive sculpting and manipulation of surfaces in
three dimensions~\cite{GLK:Szeliski1992,GLK:Witkin1994}. Other recent
work, described in more detail below, solves biomedical image-analysis
tasks with data-driven particle systems, indicating their promise for
a broader range of scientific applications. Particle systems
represent a challenging but rewarding target for our proposed
domain-specific language. While the rays and paths of volume rendering
(\secref{sec:dvr}) and fiber tractography
(\secref{sec:fibers}) do not interact with each other and thus
can easily be computed in parallel, particle systems do require
inter-particle interactions, which constrain the structure of
parallel implementations.

Particle systems recently developed for image analysis are designed to
compute a uniform sampling of the spatial extent of an image feature,
where the feature is defined as the set of locations satisfying some
equation involving locally measured image attributes. For example,
the isosurface $S_v$ (or implicit surface) of a scalar field
$f(\mathbf{x})$ at isovalue $v$ is defined by $S_v = \{\mathbf{x} |
f(\mathbf{x}) = v\}$. Particle systems for isosurface features
have been studied in
visualization~\cite{GLK:Crossno1997,GLK:Meyer2005,GLK:Meyer2007a}, and
have been adapted in medical image analysis for group studies of shape
variation of pre-segmented objects~\cite{GLK:Cates2006,GLK:Cates2007}.
In this approach, the particle system is a combination of two components.
First, particles are constrained to the isosurface by application
of Algorithm~\ref{alg:iso-newton},
%
\begin{algorithm}
\caption{$\mathrm{constr}_{v_0}(\mathbf{x})$: move position $\mathbf{x}$ onto isosurface
$S_{v_0} = \{\mathbf{x} | f(\mathbf{x}) = v_0\}$}
\label{alg:iso-newton}
\begin{algorithmic}
\REPEAT
\STATE $(v,\mathbf{g}) \leftarrow (f(\mathbf{x}) - v_0,\nabla f(\mathbf{x}))$
\STATE $\mathbf{s} \leftarrow v \frac{\mathbf{g}}{\mathbf{g}\cdot\mathbf{g}}$ \COMMENT{Newton-Raphson step}
\STATE $\mathbf{x} \leftarrow \mathbf{x} - \mathbf{s}$
\UNTIL{$|\mathbf{s}| < \epsilon$}
\RETURN $\mathbf{x}$
\end{algorithmic}
\end{algorithm}
%
a numerical search computed independently for each particle. Second, particles
are endowed with a radially decreasing potential energy function $\phi(r)$, so that
the minimum energy configuration is a spatially uniform sampling of the surface
(according to Algorithm~\ref{alg:particles} below).
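
A minimal sketch of the constraint step of Algorithm~\ref{alg:iso-newton},
in the same style as above and assuming hypothetical callables \texttt{f}
and \texttt{grad\_f} for the reconstructed field and its gradient, is:
\begin{verbatim}
import numpy as np

def constrain_to_isosurface(f, grad_f, x, v0, eps=1e-6, max_iter=50):
    # Newton-Raphson projection of x onto the isosurface f(x) = v0
    x = np.asarray(x, float)
    for _ in range(max_iter):
        g = grad_f(x)
        s = (f(x) - v0) * g / np.dot(g, g)   # Newton-Raphson step
        x = x - s
        if np.linalg.norm(s) < eps:
            break
    return x
\end{verbatim}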

Our recent work has extended this methodology to a larger set of image
features, ridges and valleys (both surfaces and lines), in order to detect and sample image
features in unsegmented data, which is more computationally demanding
because of the feature definition~\cite{GLK:Kindlmann2009}. Ridges
and valleys of the scalar field $f(\mathbf{x})$ are defined in terms of
the gradient $\nabla f$ and the eigenvectors $\{\mathbf{e}_1,
\mathbf{e}_2, \mathbf{e}_3\}$ of the Hessian $\mathbf{H} f$ with
corresponding eigenvalues $\lambda_1 \geq \lambda_2 \geq \lambda_3$
($\mathbf{H} \mathbf{e}_i = \lambda_i
\mathbf{e}_i$)~\cite{GLK:Eberly1996}. A ridge surface $S_r$, for
example, is defined in terms of the gradient and the minor eigenvector
$\mathbf{e}_3$ of the Hessian: $S_r = \{\mathbf{x} | \mathbf{e}_3(\mathbf{x}) \cdot \nabla f(\mathbf{x}) = 0 \}$.
Intuitively, $S_r$ is the set of locations where motion
along $\mathbf{e}_3$ cannot further increase the value of $f$.
%
\begin{algorithm}
\caption{$\mathrm{constr}_r(\mathbf{x})$: move position $\mathbf{x}$ onto ridge surface
$S_r = \{\mathbf{x} | \mathbf{e}_3(\mathbf{x}) \cdot \nabla f(\mathbf{x}) = 0 \}$}
\label{alg:sridge}
\begin{algorithmic}
\REPEAT
\STATE $(\mathbf{g},\mathbf{H}) \leftarrow (\nabla f(\mathbf{x}),
\mathbf{H} f(\mathbf{x}))$
% \STATE $(\{\lambda_1,\lambda_2,\lambda_3\},
% \{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}) = \mathrm{eigensolve}(\mathbf{H})$
\STATE $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\} \leftarrow \mathrm{eigenvectors}(\mathbf{H})$
\STATE $\mathbf{P} \leftarrow \mathbf{e}_3 \otimes \mathbf{e}_3$ \COMMENT{tangent to allowed motion}
\STATE $\mathbf{n} \leftarrow \mathbf{P} \mathbf{g} / |\mathbf{P} \mathbf{g}|$ \COMMENT{unit-length direction of constrained gradient ascent}
\STATE $(f',f'') \leftarrow (\mathbf{g} \cdot \mathbf{n}, \mathbf{n} \cdot \mathbf{H} \mathbf{n})$ \COMMENT{directional derivatives of $f$ along $\mathbf{n}$}
\IF {$f'' \geq 0$}
\STATE $s \leftarrow s_0 |\mathbf{P} \mathbf{g}|$ \COMMENT{regular gradient ascent if concave-up}
\ELSE
\STATE $s \leftarrow -f'/f''$ \COMMENT{aim for top of parabola (2nd-order fit) if concave-down}
\ENDIF
\STATE $\mathbf{x} \leftarrow \mathbf{x} + s \mathbf{n}$
\UNTIL{$s < \epsilon$}
\RETURN $\mathbf{x}$
\end{algorithmic}
\end{algorithm}
%
The numerical search in Algorithm~\ref{alg:sridge} to locate the ridge surface
is more complex than Algorithm~\ref{alg:iso-newton} for isosurfaces, but it
plays the same role in the overall particle system.
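
Under the same assumptions, with hypothetical callables \texttt{grad\_f} and
\texttt{hess\_f} for the reconstructed gradient and Hessian, a sketch of
Algorithm~\ref{alg:sridge} is:
\begin{verbatim}
import numpy as np

def constrain_to_ridge(grad_f, hess_f, x, s0=0.5, eps=1e-6, max_iter=100):
    x = np.asarray(x, float)
    for _ in range(max_iter):
        g, H = grad_f(x), hess_f(x)
        w, V = np.linalg.eigh(H)          # eigenvalues in ascending order
        e3 = V[:, 0]                      # minor eigenvector of the Hessian
        Pg = np.outer(e3, e3) @ g         # gradient projected onto e3
        mag = np.linalg.norm(Pg)
        if mag < 1e-12:                   # already on the ridge surface
            break
        n = Pg / mag                      # constrained ascent direction
        f1, f2 = np.dot(g, n), n @ H @ n  # directional derivatives along n
        s = s0 * mag if f2 >= 0 else -f1 / f2
        x = x + s * n
        if abs(s) < eps:
            break
    return x
\end{verbatim}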

After the definition of the feature constraint, the other major piece
in the system is the per-particle energy function $\phi(r)$, which
completely determines the particle system energy.
$\phi(r)$ is maximal at $r=0$ and decreases to $\phi(r)=0$ beyond a cutoff
$r_{\mathrm{max}}$, which determines the radius of influence of a single particle.
Particles move in the system only to minimize the energy due to their neighbors.
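
As one deliberately simple, hypothetical example of a potential with these
properties (the published systems use more carefully tuned functions),
consider $\phi(r) = (1 - r/r_{\mathrm{max}})^4$ for $r < r_{\mathrm{max}}$
and $\phi(r) = 0$ otherwise; in the sketch style used above:
\begin{verbatim}
def phi(r, r_max=1.0):
    # radially decreasing potential: maximal at r = 0, zero at and past r_max
    return (1.0 - r / r_max) ** 4 if r < r_max else 0.0

def dphi(r, r_max=1.0):
    # derivative of phi, used to evaluate the inter-particle force
    return -4.0 / r_max * (1.0 - r / r_max) ** 3 if r < r_max else 0.0
\end{verbatim}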
%
%\begin{algorithm}
%\caption{$\mathrm{push}(\mathbf{p}, \mathcal{R})$: Sum energy $e$ and force $\mathbf{f}$
%at $\mathbf{p}$ from other particles in set $\mathcal{R}$ due to potential function $\phi(r)$}
%\label{alg:push}
%\begin{algorithmic}
%\STATE $(e,\mathbf{f}) \leftarrow (0,\mathbf{0})$
%\FOR{ $\mathbf{r}$ \textbf{in} $\mathcal{R}$ }
% \STATE $\mathbf{d} \leftarrow \mathbf{p} - \mathbf{r}$
% \STATE $e \leftarrow e + \phi(|\mathbf{d}|)$
% \STATE $\mathbf{f} \leftarrow \mathbf{f} - \phi'(|\mathbf{d}|)\frac{\mathbf{d}}{|\mathbf{d}|}$ \COMMENT{force is negative spatial gradient of energy}
%\ENDFOR
%\RETURN $(e,\mathbf{f})$
%\end{algorithmic}
%\end{algorithm}
%
\begin{algorithm}
\caption{Subject to constraint feature $\mathrm{constr}$,
move particle set $\mathcal{P}$ to energy minima, given
potential function $\phi(r)$, $\phi(r) = 0$ for $r > r_{\mathrm{max}}$}
\label{alg:particles}
\begin{algorithmic}
\REPEAT
\STATE $E \leftarrow 0$ \COMMENT{total system energy sum}
\FOR{ $\mathbf{p}_i \in \mathcal{P}$}
\STATE $\mathcal{R} \leftarrow \mathrm{neighb}(\mathbf{p}_i)$ \COMMENT{$\mathcal{R} \subset \mathcal{P}$ is all particles within $r_{\mathrm{max}}$ of $\mathbf{p}_i$}
\STATE $e \leftarrow \sum_{\mathbf{r} \in \mathcal{R}}{\phi(|\mathbf{p}_i - \mathbf{r}|)}$ \COMMENT{sum energy contributions from neighbors}
\STATE $\mathbf{f} \leftarrow - \sum_{\mathbf{r} \in \mathcal{R}}{\phi'(|\mathbf{p}_i - \mathbf{r}|)\frac{\mathbf{p}_i - \mathbf{r}}{|\mathbf{p}_i - \mathbf{r}|}}$ \COMMENT{force is negative spatial gradient of energy}
\REPEAT
\STATE $\mathbf{p}' \leftarrow \mathbf{p}_i + s_i \mathbf{f}$ \COMMENT{particle $\mathbf{p}_i$ maintains a gradient descent step size $s_i$}
\STATE $\mathbf{p}' \leftarrow \mathrm{constr}(\mathbf{p}')$ \COMMENT{re-apply constraint for every test step}
\STATE $e' \leftarrow \sum_{\mathbf{r} \in \mathrm{neighb}(\mathbf{p}')}{\phi(|\mathbf{p}' - \mathbf{r}|)}$
\IF {$e' > e$}
\STATE $s_i \leftarrow s_i / 5$ \COMMENT{decrease step size if energy increased}
\ENDIF
\UNTIL {$e' < e$}
\STATE $\mathbf{p}_i \leftarrow \mathbf{p}'$ \COMMENT{save new position at lower energy}
\STATE $E \leftarrow E + e'$
\ENDFOR
\UNTIL {$\Delta E < \epsilon$}
\end{algorithmic}
\end{algorithm}
%
Each iteration of Algorithm~\ref{alg:particles} moves particles individually
(and asynchronously) to lower their own energy, ensuring
that the total system energy is consistently reduced, until the system
reaches a local minimum in its energy configuration. Experience has shown that
the local minima found by this gradient descent approach are
good enough for the intended applications of the system.
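
In the sketch style used above (hypothetical helpers: \texttt{neighb(x, P)}
returns the particles within $r_{\mathrm{max}}$ of \texttt{x}, and
\texttt{constr(x)} applies the feature constraint), one pass of
Algorithm~\ref{alg:particles} might be condensed as:
\begin{verbatim}
import numpy as np

def iterate(P, step, phi, dphi, neighb, constr):
    # P: list of particle positions (numpy vectors); step: per-particle
    # gradient-descent step sizes; returns the total system energy E
    E = 0.0
    for i, p in enumerate(P):
        R = neighb(p, P)
        e = sum(phi(np.linalg.norm(p - r)) for r in R)
        f = -sum(dphi(np.linalg.norm(p - r)) * (p - r) / np.linalg.norm(p - r)
                 for r in R)             # force: negative gradient of energy
        while True:
            q = constr(p + step[i] * f)  # test step, re-constrained to feature
            e2 = sum(phi(np.linalg.norm(q - r)) for r in neighb(q, P))
            if e2 < e or step[i] < 1e-12:
                break                    # accept, or give up on a tiny step
            step[i] /= 5.0               # shrink step size if energy rose
        P[i] = q                         # save new position at lower energy
        E += e2
    return E                             # caller repeats until E stops changing
\end{verbatim}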
%
\begin{figure}[ht]
%\def\subfigcapskip{-1pt} % between subfig and its own caption
%\def\subfigtopskip{4pt} % between subfig caption and next subfig
%\def\subfigbottomskip{-4pt} % also between subfig caption and next subfig
%\def\subfigcapmargin{0pt} % margin on subfig caption
\centering{
\subfigure[Lung airway segmentation]{
\includegraphics[width=0.264\columnwidth]{pictures/lung-ccpick-iso}
\label{fig:lung}
}
\hspace{0.02\columnwidth}
\subfigure[Brain white matter skeleton]{
\includegraphics[width=0.6292\columnwidth]{pictures/wb}
\label{fig:brain}
}
}
\caption{Example results from particle systems in lung CT (a) and
brain DTI (b).}
\label{fig:lung-brain}
\end{figure}
%
Figure~\ref{fig:lung-brain} shows some results with our current particle system~\cite{GLK:Kindlmann2009}.
The valley lines of a CT scan, sampled and displayed in Fig.~\ref{fig:lung},
successfully capture the branching airways of the lung (as compared to a previous
segmentation method shown with a white semi-transparent surface). The ridge surfaces
of FA in a brain DTI scan, displayed in Fig.~\ref{fig:brain}, capture the
major pieces of white matter anatomy, and can serve as a structural skeleton for white matter analysis.
Further improvement of the method is currently hindered by the complexity of its C implementation (in Teem~\cite{teem})
and by the long execution times on a single CPU.

\subsection{Future analysis tasks}
\label{sec:future}

Image analysis remains an active research area, and many experimental
algorithms diverge from those shown above. Many research methods,
however, conform to or build upon the basic structure of these
algorithms, and our proposed language will facilitate
their implementation and experimental application. For example,
the fiber tractography in \secref{sec:fibers} computes
paths through a pre-computed field of diffusion tensors, which
were estimated per-sample from a larger collection of diffusion-weighted
magnetic resonance images (DW-MRI).
State-of-the-art tractography algorithms interpolate the DW-MRI values,
and perform the per-point model-fitting and fiber orientation computations inside the
path integration~\cite{GLK:Schultz2008,GLK:Qazi2009}. However, these advanced methods
can also be expressed as a convolution-based reconstruction followed by
computation of non-linear derived attributes, so the basic structure of the
tractography Algorithm~\ref{alg:tracto} is preserved. Being able to
rapidly create new tractography methods with different models for diffusion
and fiber orientation would accelerate research in determining brain
structure from diffusion imaging.

Many other types of image features can conceivably be recovered with
the types of particle systems described in Section~\ref{sec:particles}
and Algorithm~\ref{alg:particles}. Essentially, implementing the
routine for constraining particles to stay within a new feature type
(analogous to Algorithms~\ref{alg:iso-newton} and~\ref{alg:sridge} for
isosurfaces and ridges) is the hard part of creating a particle system
to detect and sample the feature. One classical type of image feature
is Canny edges, which are essentially ridge surfaces of gradient
magnitude~\cite{GLK:Canny1986}.
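In three dimensions, one common formalization of such edges is as
zero-crossings of the derivative of gradient magnitude along the gradient
direction,
\begin{equation*}
S_e = \{\mathbf{x} \,|\, \hat{\mathbf{g}}(\mathbf{x}) \cdot \nabla|\nabla f|(\mathbf{x}) = 0\} ,
\qquad \hat{\mathbf{g}} = \nabla f / |\nabla f| \; ,
\end{equation*}%
a constraint with the same overall form as the ridge-surface definition
$S_r$ above.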
Although these are well understood
for two-dimensional images, they have been applied less frequently in
three-dimensional medical images, and very rarely to derived
attributes of multi-variate data. One could imagine, for example, a
particle system that computes a brain white matter segmentation by finding Canny
edges of the fractional anisotropy (FA) of a diffusion tensor field.
The variety of attributes (and their spatial derivatives) available in
modern multi-variate medical images suggests that there are many
analysis algorithms that have yet to be explored because of the
daunting implementation complexity and computational cost.
