%!TEX root = paper.tex
%
\section{Image analysis}
\label{sec:image}
% provide rationale for why the language looks the way it does
% e.g. why do we fundamentally need derivatives
% talk about general structure of these algorithms
An overall goal in image-analysis methods is to correlate the
physical, biological, and anatomical structure of the objects being
scanned with the mathematical structure that is accessible in the
digital image.
Quantitative analysis of the scanned objects is computed
from their measured image representations.
We propose to simplify the
work of developing, implementing, and applying image-analysis
algorithms by including fundamental image operations directly in a
domain-specific language. One elementary operation in a variety of
analysis algorithms is convolution-based reconstruction of a
continuous image domain from the discretely sampled values, which
usefully mimics the underlying continuity of the measured space. A
related operation is the point-wise measurement of derivatives and
other locally-determined image attributes, which can support a richer
mathematical vocabulary for detecting and delineating image features
of interest. Matching the increasing flexibility of newer image
acquisitions, the image values involved may not be grayscale scalar
values, but multivariate vector or tensor values.
As we illustrate
below, with the abstraction of the continuous image and its derived
attributes in place, it is straightforward to describe a variety of
analysis algorithms in terms of potentially parallel threads of
computation. Based on our previous experience, our proposed initial
work focuses on direct volume rendering, fiber tractography, and
particle-based feature sampling.
\subsection{Continuous reconstruction and derived attributes}
At the scale of current imaging, the physical world is continuous, so
it is appropriate to build analysis methods upon abstractions that
represent the image as a continuous domain. The boundaries and
structures in the scanned object are generally not aligned in
any consistent way with the image sampling grid, so visualization and
analysis methods tend to work in the continuous \emph{world space}
associated with the scanned object, rather than the discrete {\em
index space} associated with the raster ordering of the image
samples. A rich literature on signal processing provides us with a
machinery for reconstructing continuous domains from discrete data via
convolution~\cite{GLK:unser00,GLK:Meijering2002}.
Rather than inventing
information where it is not known, established methods for
convolution-based reconstruction start with the recognition that some
amount of blurring is intrinsic to any image measurement process: each
sampled value records an integration over space as governed by a
\emph{point-spread function}~\cite{GLK:Gonzalez2002}. Subsequent convolution-based
reconstruction can use the same point-spread function or some
approximation of it to recover the continuous field from which the
discrete samples were drawn.
%
%The definition of convolution-based reconstruction starts with
%representing the sequence of sampled image values $v[i]$,
%defined for some range of integral values $i$, in a
%one-dimensional continuous domain
%as $v(x) = \sum_i v[i] \delta(x-i)$, where $\delta(x)$
%is the Dirac delta function.
%
The convolution of one-dimensional discrete data $v[i]$ with
the continuous reconstruction kernel $h(x)$ is defined as~\cite{GLK:Gonzalez2002}
\begin{equation*}
(v * h)(x) = \int v(\xi) h(x - \xi) d\xi % _{-\infty}^{+\infty}
= \int \sum_i v[i] \delta(\xi-i) h(x - \xi) d\xi % _{-\infty}^{+\infty}
= \sum_i v[i] h(x - i) ,
\end{equation*}
% -------------------
\begin{figure}[t]
\vspace*{-4em}
\begin{center}
\begin{tabular}{ccccc}
\includegraphics[width=0.28\hackwidth]{pictures/convo/data}
&
\raisebox{0.072\hackwidth}{\scalebox{1.2}{*}}
&
\includegraphics[width=0.28\hackwidth]{pictures/convo/kern0-bspl}
&
\raisebox{0.08\hackwidth}{\scalebox{1.2}{=}}
&
\includegraphics[width=0.28\hackwidth]{pictures/convo/data0-bspl} \\
$v[i]$
&&
$h(x)$
&&
$f(x) = (v * h)(x)$
\end{tabular}
\end{center}%
\caption{Reconstructing a continuous $f(x)$ by
convolving discrete data $v[i]$ with kernel $h(x)$}
\label{fig:convo-demo}
\end{figure}%
% -------------------
which is illustrated in Figure~\ref{fig:convo-demo}.
In practice, the bounds of the summation will be limited by the
\emph{support} of $h(x)$: the range of positions for which $h(x)$
is nonzero.
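As a concrete illustration of this summation (not part of the proposed language itself), a minimal NumPy sketch follows; the tent (linear) kernel is an arbitrary example of a compactly supported $h$:

```python
import numpy as np

def tent(x):
    """Tent (linear) kernel h(x): support (-1, 1)."""
    ax = np.abs(x)
    return np.where(ax < 1.0, 1.0 - ax, 0.0)

def reconstruct(v, h, x, support=1):
    """Evaluate f(x) = sum_i v[i] h(x - i), limiting the sum to
    the indices where h(x - i) can be nonzero (the support of h)."""
    total = 0.0
    i0 = int(np.floor(x))
    for i in range(i0 - support + 1, i0 + support + 1):
        if 0 <= i < len(v):
            total += v[i] * h(x - i)
    return float(total)

v = np.array([0.0, 1.0, 3.0, 2.0])
print(reconstruct(v, tent, 2.0))  # 3.0: tent kernel interpolates the samples
print(reconstruct(v, tent, 1.5))  # 2.0: linear blend of v[1] and v[2]
```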
% The result is the same as adding together copies
%of $h(-x)$ located at the integers $i$, and scaled by $v[i]$.
Using {\em separable} convolution, a one-dimensional reconstruction kernel $h(x)$ generates a
three-dimensional reconstruction kernel $h(x,y,z) = h(x)h(y)h(z)$.
The three-dimensional separable convolution sums over three image indices
of the sampled data $v[i,j,k]$:
\begin{eqnarray*}
(v * h)(x,y,z) &=& \sum_{i,j,k} v[i,j,k] h(x - i) h(y - j) h(z - k) \; .
\end{eqnarray*}%
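The separable form can be sketched the same way; factoring the triple sum into per-axis weight vectors is exactly what makes separable reconstruction cheap. The tent kernel is again only an illustrative choice:

```python
import numpy as np

def tent(x):
    """Tent (linear) kernel, support (-1, 1)."""
    ax = np.abs(x)
    return np.where(ax < 1.0, 1.0 - ax, 0.0)

def reconstruct3(v, h, x, y, z):
    """Separable 3D reconstruction: the triple sum
    sum_{i,j,k} v[i,j,k] h(x-i) h(y-j) h(z-k)
    factors into independent per-axis weight vectors."""
    ni, nj, nk = v.shape
    wi = h(x - np.arange(ni))   # weights along each axis
    wj = h(y - np.arange(nj))
    wk = h(z - np.arange(nk))
    return float(np.einsum('ijk,i,j,k->', v, wi, wj, wk))

v = np.zeros((3, 3, 3))
v[1, 1, 1] = 8.0
print(reconstruct3(v, tent, 1.0, 1.0, 1.0))  # 8.0: on-sample evaluation
print(reconstruct3(v, tent, 1.5, 1.0, 1.0))  # 4.0: halfway to a zero sample
```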
Many traditional image-analysis methods rely on measurements of the
local rates of changes in the image, commonly quantified with the
first derivative or \emph{gradient} and the second derivative or {\em
Hessian}, which are vector and second-order tensor quantities,
respectively~\cite{GLK:Marsden1996}. Our work focuses on
three-dimensional image domains, in which the gradient is represented as a
3-vector, and the Hessian as a $3 \times 3$ symmetric matrix:
\begin{equation*}
\nabla f = \begin{bmatrix}
\frac{\partial f}{\partial x} \\
\frac{\partial f}{\partial y} \\
\frac{\partial f}{\partial z}
\end{bmatrix} ; \,
\mathbf{H} f = \begin{bmatrix}
\frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} & \frac{\partial^2 f}{\partial x \partial z} \\
& \frac{\partial^2 f}{\partial y^2} & \frac{\partial^2 f}{\partial y \partial z} \\
(sym) & & \frac{\partial^2 f}{\partial z^2}
\end{bmatrix}
\end{equation*}%
The gradient plays a basic role in many edge detection algorithms, and the
Hessian figures in ridge and valley detection. The constituent
partial derivatives are measured by convolving with the derivatives of a reconstruction kernel, due to the
linearity of differentiation and convolution:
\begin{eqnarray*}
\frac{\partial(v * h)(x,y,z)}{\partial x} &=& \sum_{i,j,k} v[i,j,k] h'(x - i) h(y - j) h(z - k) \\
\frac{\partial^2(v * h)(x,y,z)}{\partial x \partial y} &=& \sum_{i,j,k} v[i,j,k] h'(x - i) h'(y - j) h(z - k) .
\end{eqnarray*}%
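By the linearity noted above, measuring a partial derivative only requires swapping in the kernel derivative along one axis. A minimal NumPy sketch, with the tent kernel and its (piecewise-constant) derivative as illustrative choices:

```python
import numpy as np

def tent(x):
    ax = np.abs(x)
    return np.where(ax < 1.0, 1.0 - ax, 0.0)

def dtent(x):
    """Derivative h'(x) of the tent kernel: +1 on (-1,0), -1 on (0,1)."""
    return np.where(np.abs(x) < 1.0, -np.sign(x), 0.0)

def gradient(v, h, dh, x, y, z):
    """Gradient of the reconstructed field: differentiate the kernel
    along one axis at a time, by linearity of convolution."""
    ni, nj, nk = v.shape
    w  = [h(x - np.arange(ni)),  h(y - np.arange(nj)),  h(z - np.arange(nk))]
    dw = [dh(x - np.arange(ni)), dh(y - np.arange(nj)), dh(z - np.arange(nk))]
    gx = np.einsum('ijk,i,j,k->', v, dw[0], w[1],  w[2])
    gy = np.einsum('ijk,i,j,k->', v, w[0],  dw[1], w[2])
    gz = np.einsum('ijk,i,j,k->', v, w[0],  w[1],  dw[2])
    return np.array([gx, gy, gz])

# sanity check: a linear ramp along x has gradient (1, 0, 0)
ramp = np.broadcast_to(np.arange(4.0)[:, None, None], (4, 4, 4))
print(gradient(ramp, tent, dtent, 1.5, 1.5, 1.5))  # [1. 0. 0.]
```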
%\begin{equation*}
%\left. \frac{d(v * h)(x)}{dx} \right|_{x = x_0}
%= \left. \frac{d \sum_i v[i] h(x - i)}{dx}\right|_{x = x_0}
%= \sum_i v[i] h'(x_0 - i)
%= (v * h')(x_0) \; .
%\end{equation*}%
Finally, while the description above has focused on scalar-valued (grayscale)
images, much of modern imaging is producing more complex and
multi-variate images, such as vectors from particle image
velocimetry~\cite{GLK:Raffel2002} and diffusion tensors estimated from
diffusion-weighted MRI~\cite{GLK:Basser1994a}. In these situations,
the analysis algorithms may directly use the multi-variate data
values, but it is common to access the structure in the images through
some scalar quantity computed from the data, which effectively
condenses or simplifies the information.
In a vector field, one may
need only the vector magnitude (the speed) for finding features of
interest, but in a tensor field, various scalar-valued tensor invariants may
play a role in the analysis, such as the trace $\mathrm{tr}(\mathbf{D})$
or the determinant $\det(\mathbf{D})$
of a tensor $\mathbf{D}$.
Diffusion tensor analysis typically
involves a particular measure of the directional dependence of diffusivity,
termed \emph{fractional anisotropy} (FA), which is defined as
\begin{equation*}
\mathrm{FA}(\mathbf{D}) =
\sqrt{\frac{3 \, \mathrm{tr}(\widetilde{\mathbf{D}} \widetilde{\mathbf{D}}^T)}{2 \, \mathrm{tr}(\mathbf{D} \mathbf{D}^T)}} \in \mathbb{R} \; ,
\end{equation*}%
where $\widetilde{\mathbf{D}} = \mathbf{D} - \frac{\mathrm{tr}(\mathbf{D})}{3} \mathbf{I}$
is the anisotropic (deviatoric) part of the tensor $\mathbf{D}$.
Note that both vector magnitude $|\mathbf{v}|$ and tensor
fractional anisotropy $\mathrm{FA}(\mathbf{D})$ are non-linear
(\ie{}, $|\mathbf{v} + \mathbf{u}| \neq |\mathbf{v}| + |\mathbf{u}|$
and $\mathrm{FA}(\mathbf{D} + \mathbf{E}) \neq \mathrm{FA}(\mathbf{D}) + \mathrm{FA}(\mathbf{E})$),
which means that their spatial derivatives
are non-trivial functions of the derivatives of the image values.
The chain rule must be applied to determine the exact formulae,
and computing these correctly is one challenge in implementing image-analysis
methods that operate on vector or tensor fields~\cite{GLK:Kindlmann2007}.
Having language-level support for evaluating these functions and their
spatial derivatives would simplify the implementation of analysis algorithms
on multi-variate images.
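As a sanity check on tensor invariants such as FA, a minimal NumPy sketch; the helper name \texttt{fa} is ours, and the deviatoric part $\widetilde{\mathbf{D}} = \mathbf{D} - (\mathrm{tr}\,\mathbf{D}/3)\mathbf{I}$ follows the standard definition:

```python
import numpy as np

def fa(D):
    """Fractional anisotropy of a symmetric 3x3 tensor D:
    FA = sqrt( 3 tr(Dt Dt^T) / (2 tr(D D^T)) ),
    where Dt = D - (tr(D)/3) I is the deviatoric part."""
    Dt = D - (np.trace(D) / 3.0) * np.eye(3)
    return float(np.sqrt(1.5 * np.trace(Dt @ Dt.T) / np.trace(D @ D.T)))

print(fa(np.eye(3)))                             # 0.0: isotropic tensor
print(round(fa(np.diag([1.0, 0.0, 0.0])), 9))    # 1.0: maximally anisotropic
```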
\subsection{Applications}
% isosurface
% creases (ridges & valleys)
% edges
% an overview of the computational model of image analysis
% local vs. global
The initial application of our domain-specific language focuses on
research areas in scientific visualization and image analysis, with
which we have previous experience.
In this section, we describe three such algorithms.
Volume rendering is included as a simple visualization method that
showcases convolution-based measurements of values and
derived attributes, a computational step that appears in the
inner loop of the other two algorithms.
Fiber tractography uses pointwise estimates
of axonal direction to trace paths of
possible neural connectivity in diffusion tensor fields.
Particle-based methods use a combination of energy terms and
feature constraints to compute uniform samplings of continuous
image features.
\subsection{Direct volume rendering}
\label{sec:dvr}
Direct volume rendering is a standard method for visually conveying
the overall structure of a volume dataset in a rendered image~\cite{GLK:drebin88,GLK:levoy88}.
It sidesteps explicitly computing
geometric (polygonal) models of features in the data, in contrast
to methods like Marching Cubes that create a triangular
mesh of an isosurface~\cite{GLK:Lorensen1987}.
Volume rendering maps more
directly from the image samples to the rendered image, by assigning
colors and opacities to the data values via a user-specified {\em transfer function}.
%
\begin{algorithm}
\caption{Direct volume render continuous field $V$, which maps
from position $\mathbf{x}$ to data values $V(\mathbf{x})$,
with transfer function $F$, which maps from data values to color and opacity.}
\label{alg:volrend}
\begin{algorithmic}
\FOR{samples $(i,j)$ \textbf{in} rendered image $m$}
\STATE $\mathbf{x}_0 \leftarrow$ origin of ray through $m[i,j]$
\STATE $\mathbf{r} \leftarrow $ unit-length direction of ray through $m[i,j]$
\STATE $t \leftarrow 0$ \COMMENT{initialize ray parametrization}
\STATE $\mathbf{C} \leftarrow \mathbf{C}_0$ \COMMENT{initialize color $\mathbf{C}$}
\WHILE{$\mathbf{x}_0 + t \mathbf{r}$ within far clipping plane}
\STATE $\mathbf{v} \leftarrow V(\mathbf{x}_0 + t \mathbf{r})$ \COMMENT{learn values and derived attributes at current ray position}
\STATE $\mathbf{C} \leftarrow \mathrm{composite}(\mathbf{C}, F(\mathbf{v}))$ \COMMENT{apply transfer function and accumulate color}
\STATE $t \leftarrow t + t_{step}$
\ENDWHILE
\STATE $m[i,j] \leftarrow \mathbf{C}$
\ENDFOR
\end{algorithmic}
\end{algorithm}%
%
Algorithm~\ref{alg:volrend} gives pseudocode for a basic volume
renderer. The inner loop depends on learning (via
convolution-based reconstruction) data values and
attributes in volume $V$ at position $\mathbf{x}_0 + t \mathbf{r}$. These are
then mapped through the transfer function $F$, which determines
the rendered appearance of the data values.
GPU-based methods for direct volume rendering are increasingly
common~\cite{GLK:Ikits2005}. The implementations typically involve
hand-coded GPU routines designed around particular hardware
architectures, with an eye towards performance rather than
flexibility. Although the outcome of our research may not out-perform
hand-coded volume-rendering implementations, we plan to use volume
rendering as an informative and self-contained test-case for
expressing high-quality convolution-based reconstruction in a
domain-specific language.
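To make the structure of Algorithm~\ref{alg:volrend} concrete, here is a single-ray sketch in NumPy; the toy field $V$, transfer function $F$, and front-to-back ``over'' compositing rule are illustrative stand-ins, not prescribed by the algorithm:

```python
import numpy as np

def composite(C, rgba):
    """Front-to-back 'over' compositing of color and opacity."""
    rgb, a = C
    rgb2, a2 = rgba
    return (rgb + (1.0 - a) * a2 * np.asarray(rgb2), a + (1.0 - a) * a2)

def render_ray(V, F, x0, r, t_far, t_step):
    """One pixel of Algorithm 1: march along x0 + t*r, sampling the
    continuous field V and accumulating color via transfer function F."""
    C = (np.zeros(3), 0.0)      # initial color C0: black, fully transparent
    t = 0.0
    while t < t_far:
        v = V(x0 + t * r)       # value at current ray position
        C = composite(C, F(v))  # apply transfer function, accumulate
        t += t_step
    return C

# toy scene: Gaussian blob at the origin, opacity proportional to value
V = lambda x: float(np.exp(-np.dot(x, x)))
F = lambda v: ((1.0, 1.0, 1.0), 0.5 * v)    # white, semi-transparent
rgb, alpha = render_ray(V, F, np.array([0.0, 0.0, -2.0]),
                        np.array([0.0, 0.0, 1.0]), 4.0, 0.1)
print(0.0 < alpha < 1.0)  # True: the ray accumulates partial opacity
```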
\subsection{Fiber tractography}
\label{sec:fibers}
Diffusion tensor magnetic resonance imaging (DT-MRI or DTI) uses a
tensor model of diffusion-weighted MRI to describe the
directional structure of tissue \emph{in vivo}~\cite{GLK:Basser1994}.
%
The coherent organization of axons in the white matter of the brain
tissue shapes the pattern of diffusion of water within
it~\cite{GLK:Pierpaoli1996}. The directional dependence of the
apparent diffusion coefficient is termed \emph{anisotropy}, and
quantitative measurements of anisotropy and its orientation have
enabled research on white matter organization in health and
disease~\cite{GLK:Bihan2003}.
%
Empirical evidence has shown that in much of the white matter, the
principal eigenvector $\mathbf{e}_1$ of the diffusion tensor $\mathbf{D}$
indicates the local direction of axon bundles~\cite{GLK:Beaulieu2002}.
%
Using integrals of the principal
eigenvector, we can compute paths through a diffusion tensor field that approximate axonal connectivity~\cite{GLK:conturo99,GLK:BasserMRM00}.
%
The path integration algorithm, termed \emph{tractography}, is shown
%
\begin{algorithm}
\caption{$\mathrm{tract}(\mathbf{p})$: integrate path everywhere tangent to
principal eigenvector of tensor field $\mathbf{D}(\mathbf{x})$ starting
at seed point $\mathbf{p}$}
\label{alg:tracto}
\begin{algorithmic}
\STATE $T_- \leftarrow [] ;\, T_+ \leftarrow []$ \COMMENT{initialize forward and backward halves of path}
\FOR{$d \in \{+1, -1\}$}
\STATE $\mathbf{x} \leftarrow \mathbf{p}$
\STATE $\mathbf{v} \leftarrow d \mathbf{e}_1(\mathbf{D}(\mathbf{p}))$ \COMMENT{use $d$ to determine starting direction from $\mathbf{p}$}
\REPEAT
\STATE $\mathbf{x} \leftarrow \mathbf{x} + s \mathbf{v}$ \COMMENT{Euler integration step}
\STATE $\mathbf{u} \leftarrow \mathbf{e}_1(\mathbf{D}(\mathbf{x}))$
\IF {$\mathbf{u} \cdot \mathbf{v} < 0$}
\STATE $\mathbf{u} \leftarrow -\mathbf{u}$ \COMMENT{fix direction at new position to align with incoming direction}
\ENDIF
\STATE $\mathrm{append}(T_d, \mathbf{x})$ \COMMENT{save new position to path so far}
\STATE $\mathbf{v} \leftarrow \mathbf{u}$ \COMMENT{carry the aligned direction into the next step}
\UNTIL {$\mathrm{CL}(\mathbf{D}(\mathbf{x})) < \mathrm{CL}_{\mathrm{min}}$} \COMMENT{anisotropy measure CL quantifies numerical certainty in $\mathbf{e}_1$}
\ENDFOR
\RETURN $\mathrm{concat}(\mathrm{reverse}(T_-), [\mathbf{p}], T_+)$
\end{algorithmic}
\end{algorithm}
%
in Algorithm~\ref{alg:tracto}. Euler integration is used here for
illustration, but Runge-Kutta integration is also common~\cite{GLK:NR}.
The termination condition for tractography is when the numerical
certainty of the principal eigenvector $\mathbf{e}_1$ is too low to
warrant further integration. The CL anisotropy measure is, like FA, a
scalar-valued tensor invariant~\cite{GLK:Westin2002}. The result of
tractography is a list of vertex locations along a polyline that
roughly indicates the course of axon bundles through the brain.
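A compact NumPy rendition of Algorithm~\ref{alg:tracto} follows; the \texttt{stop} predicate is a hypothetical placeholder for the CL-based termination test, and the uniform tensor field is only a test case:

```python
import numpy as np

def e1(D, prev=None):
    """Principal eigenvector of symmetric D; its sign is arbitrary,
    so align it with the incoming direction prev when given."""
    _, V = np.linalg.eigh(D)    # eigh sorts eigenvalues ascending
    u = V[:, -1]
    if prev is not None and np.dot(u, prev) < 0:
        u = -u                  # fix direction to match incoming direction
    return u

def tract(Dfield, p, s=0.1, n_max=100, stop=lambda D: False):
    """Euler integration both ways from seed p (Algorithm 2); stop()
    stands in for the CL < CL_min termination test."""
    halves = []
    for d in (+1.0, -1.0):
        T, x = [], p.copy()
        v = d * e1(Dfield(p))   # use d to pick the starting direction
        for _ in range(n_max):
            x = x + s * v                 # Euler integration step
            v = e1(Dfield(x), prev=v)
            T.append(x.copy())
            if stop(Dfield(x)):
                break
        halves.append(T)
    T_plus, T_minus = halves
    return list(reversed(T_minus)) + [p] + T_plus

# uniform tensor field with principal direction along x: a straight path
D = np.diag([2.0, 1.0, 0.5])
path = tract(lambda x: D, np.zeros(3), s=0.1, n_max=5)
print(len(path))  # 11: five steps back, the seed, five steps forward
print(np.allclose(np.abs(path[-1]), [0.5, 0.0, 0.0]))  # True (e1 sign arbitrary)
```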
\subsection{Particle systems}
\label{sec:particles}
Some tasks can be naturally decomposed into computational units, which
can be termed \emph{particles}, that individually implement some
uniform behavior with respect to their environment so that their
iterated collective action represents the numerical solution to an
optimization or simulation. Such particle systems have a long history
in computer graphics for, among other applications, the simulation and
rendering of natural phenomena~\cite{GLK:Reeves1983}, the organic
generation of surface textures~\cite{GLK:Turk1991}, and the
interactive sculpting and manipulation of surfaces in
three dimensions~\cite{GLK:Szeliski1992,GLK:Witkin1994}. Other recent
work, described in more detail below, solves biomedical image-analysis
tasks with data-driven particle systems, indicating their promise for
a broader range of scientific applications. Particle systems
represent a challenging but rewarding target for our proposed
domain-specific language. While the rays and paths of volume rendering
(\secref{sec:dvr}) and fiber tractography
(\secref{sec:fibers}) do not interact with each other and thus
can easily be computed in parallel, particle systems do require
inter-particle interactions, which constrains the structure of
parallel implementations.
Particle systems recently developed for image analysis are designed to
compute a uniform sampling of the spatial extent of an image feature,
where the feature is defined as the set of locations satisfying some
equation involving locally measured image attributes. For example,
the isosurface $S_v$ (or implicit surface) of a scalar field
$f(\mathbf{x})$ at isovalue $v$ is defined by $S_v = \{\mathbf{x} \,|\,
f(\mathbf{x}) = v\}$. Particle systems for isosurface features
have been studied in
visualization~\cite{GLK:Crossno1997,GLK:Meyer2005,GLK:Meyer2007a}, and
have been adapted in medical image analysis for group studies of shape
variation of pre-segmented objects~\cite{GLK:Cates2006,GLK:Cates2007}.
In this approach, the particle system is a combination of two components.
First, particles are constrained to the isosurface by application
of Algorithm~\ref{alg:iso-newton},
%
\begin{algorithm}
\caption{$\mathrm{constr}_{v_0}(\mathbf{x})$: move position $\mathbf{x}$ onto isosurface
$S_{v_0} = \{\mathbf{x} \,|\, f(\mathbf{x}) = v_0\}$}
\label{alg:iso-newton}
\begin{algorithmic}
\REPEAT
\STATE $(v,\mathbf{g}) \leftarrow (f(\mathbf{x}) - v_0,\nabla f(\mathbf{x}))$
\STATE $\mathbf{s} \leftarrow v \frac{\mathbf{g}}{\mathbf{g}\cdot\mathbf{g}}$ \COMMENT{Newton-Raphson step}
\STATE $\mathbf{x} \leftarrow \mathbf{x} - \mathbf{s}$
\UNTIL{$|\mathbf{s}| < \epsilon$}
\RETURN $\mathbf{x}$
\end{algorithmic}
\end{algorithm}
%
a numerical search computed independently for each particle. Second, particles
are endowed with a radially decreasing potential energy function $\phi(r)$, so that
the minimum energy configuration is a spatially uniform sampling of the surface
(according to Algorithm~\ref{alg:particles} below).
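Algorithm~\ref{alg:iso-newton} translates almost line-for-line into NumPy; the analytic sphere below is merely a convenient test case, not anything prescribed by the method:

```python
import numpy as np

def constr_iso(f, grad_f, x, v0, eps=1e-8, n_max=50):
    """Newton-Raphson iteration moving x onto the isosurface
    f(x) = v0 (Algorithm 3)."""
    for _ in range(n_max):
        v, g = f(x) - v0, grad_f(x)
        s = v * g / np.dot(g, g)   # Newton-Raphson step
        x = x - s
        if np.linalg.norm(s) < eps:
            break
    return x

# test case: f(x) = |x|^2 with isovalue 1 (the unit sphere)
f = lambda x: float(np.dot(x, x))
grad_f = lambda x: 2.0 * x
x = constr_iso(f, grad_f, np.array([2.0, 0.0, 0.0]), 1.0)
print(round(float(np.linalg.norm(x)), 6))  # 1.0
```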
Our recent work has extended this methodology to a larger set of image
features, ridges and valleys (both surfaces and lines), in order to detect and sample image
features in unsegmented data, which is more computationally demanding
because of the feature definition~\cite{GLK:Kindlmann2009}. Ridges
and valleys of the scalar field $f(\mathbf{x})$ are defined in terms of
the gradient $\nabla f$ and the eigenvectors $\{\mathbf{e}_1,
\mathbf{e}_2, \mathbf{e}_3\}$ of the Hessian $\mathbf{H} f$ with
corresponding eigenvalues $\lambda_1 \geq \lambda_2 \geq \lambda_3$
($\mathbf{H} \mathbf{e}_i = \lambda_i
\mathbf{e}_i$)~\cite{GLK:Eberly1996}. A ridge surface $S_r$, for
example, is defined in terms of the gradient and the minor eigenvector
$\mathbf{e}_3$ of the Hessian: $S_r = \{\mathbf{x} | \mathbf{e}_3(\mathbf{x}) \cdot \nabla f(\mathbf{x}) = 0 \}$.
Intuitively, $S_r$ is the set of locations where motion
along $\mathbf{e}_3$ cannot further increase the value of $f$.
%
\begin{algorithm}
\caption{$\mathrm{constr}_r(\mathbf{x})$: move position $\mathbf{x}$ onto ridge surface
$S_r = \{\mathbf{x} | \mathbf{e}_3(\mathbf{x}) \cdot \nabla f(\mathbf{x}) = 0 \}$}
\label{alg:sridge}
\begin{algorithmic}
\REPEAT
\STATE $(\mathbf{g},\mathbf{H}) \leftarrow (\nabla f(\mathbf{x}),
\mathbf{H} f(\mathbf{x}))$
% \STATE $(\{\lambda_1,\lambda_2,\lambda_3\},
% \{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}) = \mathrm{eigensolve}(\mathbf{H})$
\STATE $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\} \leftarrow \mathrm{eigenvectors}(\mathbf{H})$
\STATE $\mathbf{P} \leftarrow \mathbf{e}_3 \otimes \mathbf{e}_3$ \COMMENT{tangent to allowed motion}
\STATE $\mathbf{n} \leftarrow \mathbf{P} \mathbf{g} / |\mathbf{P} \mathbf{g}|$ \COMMENT{unit-length direction of constrained gradient ascent}
\STATE $(f',f'') \leftarrow (\mathbf{g} \cdot \mathbf{n}, \mathbf{n} \cdot \mathbf{H} \mathbf{n})$ \COMMENT{directional derivatives of $f$ along $\mathbf{n}$}
\IF {$f'' \geq 0$}
\STATE $s \leftarrow s_0 |\mathbf{P} \mathbf{g}|$ \COMMENT{regular gradient ascent if concave-up}
\ELSE
\STATE $s \leftarrow -f'/f''$ \COMMENT{aim for top of parabola (2nd-order fit) if concave-down}
\ENDIF
\STATE $\mathbf{x} \leftarrow \mathbf{x} + s \mathbf{n}$
\UNTIL{$s < \epsilon$}
\RETURN $\mathbf{x}$
\end{algorithmic}
\end{algorithm}
%
The numerical search in Algorithm~\ref{alg:sridge} to locate the ridge surface
is more complex than Algorithm~\ref{alg:iso-newton} for isosurfaces, but it
plays the same role in the overall particle system.
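A NumPy sketch of Algorithm~\ref{alg:sridge} follows; the quadratic $f(x,y,z) = -x^2$, whose ridge surface is the plane $x = 0$, is a hypothetical test case, and an extra early exit (our addition) guards against a vanishing constrained gradient:

```python
import numpy as np

def constr_ridge(grad_f, hess_f, x, s0=0.1, eps=1e-8, n_max=100):
    """Move x onto the ridge surface { x | e3(x) . grad f(x) = 0 }
    by constrained gradient ascent (Algorithm 4)."""
    for _ in range(n_max):
        g, H = grad_f(x), hess_f(x)
        w, V = np.linalg.eigh(H)       # ascending order: column 0 is e3
        e3 = V[:, 0]
        P = np.outer(e3, e3)           # projector: tangent to allowed motion
        Pg = P @ g
        if np.linalg.norm(Pg) < eps:   # constrained gradient vanished: on ridge
            break
        n = Pg / np.linalg.norm(Pg)    # unit constrained-ascent direction
        f1, f2 = np.dot(g, n), n @ H @ n
        if f2 >= 0:
            s = s0 * np.linalg.norm(Pg)  # plain gradient ascent if concave-up
        else:
            s = -f1 / f2                 # aim for top of parabola if concave-down
        x = x + s * n
        if s < eps:
            break
    return x

# f(x,y,z) = -x^2 has its ridge surface at x = 0
grad_f = lambda x: np.array([-2.0 * x[0], 0.0, 0.0])
hess_f = lambda x: np.diag([-2.0, 0.0, 0.0])
x = constr_ridge(grad_f, hess_f, np.array([0.7, 0.3, 0.1]))
print(abs(x[0]) < 1e-6)  # True: pulled onto the x = 0 plane
```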
After the definition of the feature constraint, the other major piece
in the system is the per-particle energy function $\phi(r)$, which
completely determines the particle system energy.
$\phi(r)$ is maximal at $r=0$ and decreases to $\phi(r)=0$ at and beyond a
cutoff radius $r_{\mathrm{max}}$, which determines the radius of influence
of a single particle. Particles move
in the system only to minimize the energy due to their neighbors.
%
%\begin{algorithm}
%\caption{$\mathrm{push}(\mathbf{p}, \mathcal{R})$: Sum energy $e$ and force $\mathbf{f}$
%at $\mathbf{p}$ from other particles in set $\mathcal{R}$ due to potential function $\phi(r)$}
%\label{alg:push}
%\begin{algorithmic}
%\STATE $(e,\mathbf{f}) \leftarrow (0,\mathbf{0})$
%\FOR{ $\mathbf{r}$ \textbf{in} $\mathcal{R}$ }
% \STATE $\mathbf{d} \leftarrow \mathbf{p} - \mathbf{r}$
% \STATE $e \leftarrow e + \phi(|\mathbf{d}|)$
% \STATE $\mathbf{f} \leftarrow \mathbf{f} - \phi'(|\mathbf{d}|)\frac{\mathbf{d}}{|\mathbf{d}|}$ \COMMENT{force is negative spatial gradient of energy}
%\ENDFOR
%\RETURN $(e,\mathbf{f})$
%\end{algorithmic}
%\end{algorithm}
%
\begin{algorithm}
\caption{Subject to feature constraint $\mathrm{constr}$,
move particle set $\mathcal{P}$ to energy minima, given
potential function $\phi(r)$, $\phi(r) = 0$ for $r > r_{\mathrm{max}}$}
\label{alg:particles}
\begin{algorithmic}
\REPEAT
\STATE $E \leftarrow 0$ \COMMENT{total system energy sum}
\FOR{ $\mathbf{p}_i \in \mathcal{P}$}
\STATE $\mathcal{R} \leftarrow \mathrm{neighb}(\mathbf{p}_i)$ \COMMENT{$\mathcal{R} \subset \mathcal{P}$ is all particles within $r_{\mathrm{max}}$ of $\mathbf{p}_i$}
\STATE $e \leftarrow \sum_{\mathbf{r} \in \mathcal{R}}{\phi(|\mathbf{p}_i - \mathbf{r}|)}$ \COMMENT{sum energy contributions from neighbors}
\STATE $\mathbf{f} \leftarrow - \sum_{\mathbf{r} \in \mathcal{R}}{\phi'(|\mathbf{p}_i - \mathbf{r}|)\frac{\mathbf{p}_i - \mathbf{r}}{|\mathbf{p}_i - \mathbf{r}|}}$ \COMMENT{force is negative spatial gradient of energy}
\REPEAT
\STATE $\mathbf{p}' \leftarrow \mathbf{p}_i + s_i \mathbf{f}$ \COMMENT{particle $\mathbf{p}_i$ maintains a gradient descent step size $s_i$}
\STATE $\mathbf{p}' \leftarrow \mathrm{constr}(\mathbf{p}')$ \COMMENT{re-apply constraint for every test step}
\STATE $e' \leftarrow \sum_{\mathbf{r} \in \mathrm{neighb}(\mathbf{p}')}{\phi(|\mathbf{p}' - \mathbf{r}|)}$
\IF {$e' \geq e$}
\STATE $s_i \leftarrow s_i / 5$ \COMMENT{decrease step size if energy did not decrease}
\ENDIF
\UNTIL {$e' < e$}
\STATE $\mathbf{p}_i \leftarrow \mathbf{p}'$ \COMMENT{save new position at lower energy}
\STATE $E \leftarrow E + e'$
\ENDFOR
\UNTIL {$\Delta E < \epsilon$}
\end{algorithmic}
\end{algorithm}
%
Each iteration of Algorithm~\ref{alg:particles} moves particles individually
(and asynchronously) to lower their own energy, ensuring
that the total system energy is consistently reduced, until the system
reaches a local minimum of its energy configuration. Experience has shown that
the local minima found by this gradient-descent approach are good enough
for the intended applications of the system.
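The per-particle energy and force sums in the inner loop of Algorithm~\ref{alg:particles} look like this in NumPy; the quadratic-falloff $\phi$ is an illustrative choice of radially decreasing potential with compact support:

```python
import numpy as np

R_MAX = 1.0

def phi(r):
    """Radially decreasing potential, zero at and beyond R_MAX."""
    return np.where(r < R_MAX, (1.0 - r / R_MAX) ** 2, 0.0)

def dphi(r):
    """Derivative of phi with respect to r."""
    return np.where(r < R_MAX, -2.0 * (1.0 - r / R_MAX) / R_MAX, 0.0)

def energy_force(p, others):
    """Energy and force on particle p from neighbors within R_MAX
    (the per-particle sums in Algorithm 5)."""
    e, f = 0.0, np.zeros_like(p)
    for r in others:
        d = p - r
        dist = float(np.linalg.norm(d))
        if 0.0 < dist < R_MAX:
            e += phi(dist)
            f -= dphi(dist) * d / dist  # force = -spatial gradient of energy
    return e, f

# two symmetric neighbors: energies add, forces cancel
p = np.array([0.0, 0.0])
others = [np.array([0.5, 0.0]), np.array([-0.5, 0.0])]
e, f = energy_force(p, others)
print(round(float(e), 4))   # 0.5: two contributions of phi(0.5) = 0.25
print(np.allclose(f, 0.0))  # True: symmetric neighbors cancel
```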
%
\begin{figure}[ht]
%\def\subfigcapskip{-1pt} % between subfig and its own caption
%\def\subfigtopskip{4pt} % between subfig caption and next subfig
%\def\subfigbottomskip{-4pt} % also between subfig caption and next subfig
%\def\subfigcapmargin{0pt} % margin on subfig caption
\centering{
\subfigure[Lung airway segmentation]{
\includegraphics[width=0.264\columnwidth]{pictures/lung-ccpick-iso}
\label{fig:lung}
}
\hspace{0.02\columnwidth}
\subfigure[Brain white matter skeleton]{
\includegraphics[width=0.6292\columnwidth]{pictures/wb}
\label{fig:brain}
}
}
\caption{Example results from particle systems in lung CT (a) and
brain DTI (b).}
\label{fig:lung-brain}
\end{figure}
%
Figure~\ref{fig:lung-brain} shows some results with our current particle system~\cite{GLK:Kindlmann2009}.
The valley lines of a CT scan, sampled and displayed in Fig.~\ref{fig:lung},
successfully capture the branching airways of the lung (as compared to a previous
segmentation method, shown with a white semi-transparent surface). The ridge surfaces
of FA in a brain DTI scan, displayed in Fig.~\ref{fig:brain}, capture the
major pieces of white matter anatomy, and can serve as a structural skeleton for white matter analysis.
Further improvement of the method is currently hindered by the complexity of its C implementation (in Teem~\cite{teem}),
and the long execution times on a single CPU.
\subsection{Future analysis tasks}
\label{sec:future}
Image analysis remains an active research area, and many experimental
algorithms diverge from those shown above. Many research methods,
however, conform to or build upon the basic structure of the
algorithms shown above, and our proposed language will facilitate
their implementation and experimental application. For example,
the fiber tractography in \secref{sec:fibers} computed
paths through a pre-computed field of diffusion tensors, which
were estimated per-sample from a larger collection of diffusion-weighted
magnetic resonance images (DW-MRI).
State-of-the-art tractography algorithms interpolate the DW-MRI values,
and perform the per-point model-fitting and fiber orientation computations inside the
path integration~\cite{GLK:Schultz2008,GLK:Qazi2009}. However, these advanced methods
can also be expressed as a convolution-based reconstruction followed by
computation of non-linear derived attributes, so the basic structure of the
tractography Algorithm~\ref{alg:tracto} is preserved. Being able to
rapidly create new tractography methods with different models for diffusion
and fiber orientation would accelerate research in determining brain
structure from diffusion imaging.
Many other types of image features can conceivably be recovered with
the types of particle systems described in Section~\ref{sec:particles}
and Algorithm~\ref{alg:particles}. Essentially, implementing the
routine for constraining particles to stay within a new feature type
(analogous to Algorithms~\ref{alg:iso-newton} and~\ref{alg:sridge} for
isosurfaces and ridges) is the hard part of creating a particle system
to detect and sample the feature. One classical type of image feature
is Canny edges, which are essentially ridge surfaces of gradient
magnitude~\cite{GLK:Canny1986}. Although these are well-understood
for two-dimensional images, they have been applied less frequently in
three-dimensional medical images, and very rarely to derived
attributes of multi-variate data. One could imagine, for example, a
particle system that computes a brain white matter segmentation by finding Canny
edges of the fractional anisotropy (FA) of a diffusion tensor field.
The variety of attributes (and their spatial derivatives) available in
modern multi-variate medical images suggests that there are many
analysis algorithms that have yet to be explored because of the
daunting implementation complexity and computation cost.