SHORT TERM ============= (*needed* for streamlines & tractography)
========================


Remove CL from compiler

[GLK:3] Add sequence types (needed for evals & evecs)
syntax
types: ty '{' INT '}'

SHORTISH TERM ========= (to make using Diderot less annoying to
======================== program in, and slow to execute)


value-numbering optimization [DONE]


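The value-numbering item above is a classic redundancy-elimination pass. A minimal sketch in Python (all names hypothetical, not the Diderot compiler's actual IR): each distinct (operator, operand-value-numbers) expression gets one value number, so a recomputation of an already-known value is detected.

```python
def value_number(instructions):
    """Local value numbering over (dest, op, args) triples: map each
    (op, operand value numbers) key to a single value number, so a
    destination that recomputes a known value is flagged as redundant."""
    table = {}    # (op, vn of each arg) -> value number
    env = {}      # variable name -> value number
    next_vn = 0
    redundant = []
    for dest, op, args in instructions:
        key = (op,) + tuple(env.get(a, a) for a in args)
        if key in table:
            redundant.append(dest)   # same value already computed
        else:
            table[key] = next_vn
            next_vn += 1
        env[dest] = table[key]
    return redundant

# 'd' recomputes a+b, so it is the one redundant definition:
prog = [("c", "+", ("a", "b")),
        ("d", "+", ("a", "b")),
        ("e", "*", ("c", "d"))]
```
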
Allow ".ddro" file extensions in addition to ".diderot"


Be able to output values of type tensor[2,2] and tensor[3,3];
(currently only scalars & vectors). Want to add some regression tests
based on this and currently can't


[GLK:1] Add a clamp function, which takes three arguments; either
three scalars:
One question: clamp(x, lo, hi) is the argument order used in OpenCL
and other places, but clamp(lo, hi, x) is much more consistent with
lerp(lo, hi, x), hence GLK's preference
[DONE]


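The clamp/lerp argument-order question above is easy to see side by side. A sketch in Python (illustrative only, not Diderot's implementation) of the clamp(lo, hi, x) order GLK prefers, chosen so that the range arguments sit in the same positions as in lerp:

```python
def lerp(lo, hi, x):
    """Linear interpolation: x = 0 gives lo, x = 1 gives hi."""
    return lo + (hi - lo) * x

def clamp(lo, hi, x):
    """Restrict x to [lo, hi]; argument order matches lerp(lo, hi, x),
    unlike the OpenCL convention clamp(x, lo, hi)."""
    return max(lo, min(hi, x))
```

With this ordering, clamp(0.0, 1.0, lerp(0.0, 1.0, t)) reads uniformly: the output range comes first in both calls.
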
[GLK:2] Proper handling of stabilize method


to index into complete list


[GLK:6] Use of Teem's "hest" command-line parser for getting
any "input" variables that are not defined in the source file.


[GLK:7] ability to declare a field so that probe positions are
*always* "inside"; with various ways of mapping the known image values


"initially" supports lists


"initially" supports lists of positions output from a different
initialization Diderot program (or output from the same program;
e.g. using output of iso2d.diderot for one isovalue to seed the input
to another invocation of the same program)


Communication between strands: they have to be able to learn each
other's state (at the previous iteration). Early version of this can

Allow X *= Y, X /= Y, X += Y, X -= Y to mean what they do in C,
provided that X*Y, X/Y, X+Y, X-Y are already supported.
Nearly every Diderot program would be simplified by this.
[DONE]


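The augmented-assignment item above is purely syntactic: X op= Y can be rewritten to X = X op Y before any further checking. A toy Python sketch of that desugaring (string-level only, hypothetical and far simpler than a real parser):

```python
def desugar(stmt):
    """Rewrite 'X op= Y' into 'X = X op Y' for op in * / + -.
    Works on a single simple statement; a real compiler would do
    this on the AST, after checking that X op Y is well-typed."""
    lhs, rhs = stmt.split("=", 1)
    lhs, rhs = lhs.strip(), rhs.strip()
    if lhs and lhs[-1] in "*/+-":
        op, var = lhs[-1], lhs[:-1].strip()
        return f"{var} = {var} {op} {rhs}"
    return stmt   # plain assignment: leave unchanged
```
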
Put small 1D and 2D fields, when reconstructed specifically by tent
and when differentiation is not needed, into faster texture buffers.
test/illustvr.diderot is a good example of a program that uses multiple
such 1D fields basically as lookup-table-based function evaluation


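The lookup-table use of 1D tent-reconstructed fields amounts to linear interpolation between stored samples, which is exactly what texture hardware does. A minimal Python sketch (illustrative, not the Diderot runtime):

```python
import math

def tent_lut(samples, x):
    """Evaluate a 1D field stored as a sample table, reconstructed by
    the tent kernel, i.e. linear interpolation between neighbors.
    x is in index space, 0 .. len(samples)-1; out-of-range positions
    clamp to the end samples."""
    n = len(samples)
    if x <= 0:
        return samples[0]
    if x >= n - 1:
        return samples[-1]
    i = int(math.floor(x))
    t = x - i
    return (1 - t) * samples[i] + t * samples[i + 1]
```
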
expand trace in mid-to-low translation [DONE]


extend norm (exp) to all tensor types [DONE for vectors and matrices]


There is value in having these, even if the differentiation of them is
not supported (hence the indication of "field#0" for these above)


Introduce region types (syntax region(d), where d is the dimension of the
region). One useful operator would be

dom : field#k(d)[s] -> region(d)

Then the inside test could be written as

pos ∈ dom(F)

We could further extend this approach to allow geometric definitions of
regions. It might also be useful to do inside tests in world space,
instead of image space.


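A sketch of what dom and the inside test could mean, in Python with a hypothetical axis-aligned Region type (the real design would presumably support general geometric regions and the distinction between world and image space):

```python
class Region:
    """Axis-aligned box in world space; dim plays the role of d in region(d)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.dim = len(lo)
    def inside(self, pos):
        return all(l <= p <= h for l, p, h in zip(self.lo, pos, self.hi))

def dom(field_extent):
    """dom : field -> region; here a 'field' is reduced to just its
    world-space extent, a (lo, hi) pair of corner points."""
    lo, hi = field_extent
    return Region(lo, hi)

# 'pos ∈ dom(F)' then reads as:
F = ((0.0, 0.0), (10.0, 5.0))   # hypothetical 2D field extent
in_test = dom(F).inside((3.0, 4.0))
```
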
co vs contra index distinction


Permit field composition:

field#2(3)[] F = bspln3 ⊛ img;
or, as a tensor product of kernels, one for each axis, e.g.
field#0(3)[] F = (bspln3 ⊗ bspln3 ⊗ tent) ⊛ img;
This is especially important for things like time-varying fields
and the use of scale-space in field visualization: one axis of the
domain must be convolved with a different kernel during probing.
What is very unclear is how, in such cases, we should notate the
gradient, when we only want to differentiate with respect to some
subset of the axes. One ambitious idea would be:
field#0(3)[] Ft = (bspln3 ⊗ bspln3 ⊗ tent) ⊛ img; // 2D time-varying field
field#0(2)[] F = lambda([x,y], Ft([x,y,42.0])) // restriction to time=42.0
vec2 grad = ∇F([x,y]); // 2D gradient


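The tensor-product-of-kernels idea above can be sketched directly: reconstructing at a continuous position weights each grid sample by the product of per-axis kernel values, so each axis may use a different kernel. A small Python illustration (not Diderot's implementation; kernel and function names are ad hoc):

```python
def tent(t):
    """Tent (linear) reconstruction kernel, support [-1, 1]."""
    return max(0.0, 1.0 - abs(t))

def probe(img, kernels, pos):
    """Reconstruct a value at continuous position pos = (x, y) from a
    2D grid img (indexed img[j][i]), using one 1D kernel per axis --
    the weight of sample (i, j) is kx(x - i) * ky(y - j), i.e. the
    tensor product of the two kernels."""
    x, y = pos
    kx, ky = kernels
    total = 0.0
    for j, row in enumerate(img):
        for i, v in enumerate(row):
            total += v * kx(x - i) * ky(y - j)
    return total
```

Using a different kernel on, say, the time axis is then just probe(img, (bspln3, tent), pos) in this toy notation; the open question in the text is how the surface syntax should expose per-axis differentiation.
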
Tensors of order 3 (e.g. gradients of diffusion tensor fields, or
Hessians of vector fields) and order 4 (e.g. Hessians of diffusion
tensor fields).


representation of tensor symmetry
(have to identify the group of index permutations that are symmetries)


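Identifying the symmetry group of a concrete tensor can be sketched by brute force: test every permutation of the index positions and keep those that leave all components unchanged. A small Python illustration (hypothetical helper, exponential in tensor order, fine for orders 2-4):

```python
from itertools import permutations

def symmetry_group(T, order, dim):
    """Return the index permutations p such that
    T[i1]...[ik] == T[i_p(1)]...[i_p(k)] for all index tuples.
    T is a nested list with `order` indices, each ranging over `dim`."""
    def get(t, idx):
        for i in idx:
            t = t[i]
        return t
    def indices(k):
        if k == 0:
            yield ()
        else:
            for rest in indices(k - 1):
                for i in range(dim):
                    yield rest + (i,)
    group = []
    for p in permutations(range(order)):
        if all(get(T, idx) == get(T, tuple(idx[p[i]] for i in range(order)))
               for idx in indices(order)):
            group.append(p)
    return group
```

The result always contains the identity and is closed under composition, i.e. it really is the group the item asks for.
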
outer works on all tensors


Help for debugging Diderot programs: need to be able to uniquely
identify strands, and for particular strands that are known to behave
badly, do something like printf or other logging of their computations
and updates.


Permit writing dimensionally general code: have some statement of the
dimension of the world "W" (or have it be learned from one particular
field of interest), and then be able to write "vec" instead of
"vec2/vec3", and perhaps "tensor[W,W]" instead of
"tensor[2,2]/tensor[3,3]"


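The flavor of dimension-general code the item above asks for is easy to mimic in a dynamic language: write once against an unknown W and instantiate for 2D or 3D. A Python sketch (illustrative only; Diderot would need this statically typed):

```python
def norm(v):
    """Euclidean norm of a 'vec' of any world dimension W."""
    return sum(x * x for x in v) ** 0.5

def identity(W):
    """A tensor[W,W] identity, usable for W = 2 or W = 3 alike."""
    return [[1.0 if i == j else 0.0 for j in range(W)] for i in range(W)]
```

The same two definitions serve both the vec2/tensor[2,2] and vec3/tensor[3,3] cases, which is exactly the duplication the proposal would remove.
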
Traits: all things that have boilerplate code (especially
volume rendering) should be expressed in terms of the unique
computational core. Different kinds of streamline/tractography
computation will be another example, as well as particle systems.


Einstein summation notation


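In Einstein notation a repeated index is summed implicitly, e.g. C_ik = A_ij B_jk sums over j. Spelled out as explicit loops in Python (illustrative; this is just matrix multiplication):

```python
def contract(A, B):
    """C_ik = A_ij B_jk: the repeated index j is summed implicitly in
    Einstein notation; here the sum over j is written out explicitly."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][j] * B[j][k] for j in range(m)) for k in range(p)]
            for i in range(n)]
```
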
"tensor comprehension" (like list comprehension)


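A tensor comprehension would build a tensor from an expression over its indices, the way a list comprehension builds a list from an expression over its elements. A Python analogue (the Diderot-style syntax in the comment is purely hypothetical):

```python
# Hypothetical surface syntax might look like
#   tensor[3,2] T = { u[i] * v[j] | i in 0..2, j in 0..1 };
# the Python analogue is a nested list comprehension:
u = (1.0, 2.0, 3.0)
v = (4.0, 5.0)
T = [[u[i] * v[j] for j in range(2)] for i in range(3)]   # outer product u ⊗ v
```
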
Fields coming from different sources of data:
* triangular or tetrahedral meshes over 2D or 3D domains (of the
  sort produced by finite-element codes; these will come with their
  own specialized kinds of reconstruction kernels, called "basis
  functions" in this context)
* Large point clouds, with some radial basis function around each point,
  which will be tuned by parameters of the point (at least one parameter
  giving some notion of radius)


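The point-cloud case above can be sketched concretely: the field value at a position is a sum of per-point radial bumps, each scaled by that point's own radius parameter. A Python illustration (kernel choice and parameter layout are assumptions, not a committed design):

```python
def rbf_field(points, pos):
    """Evaluate a field defined by a point cloud.  Each point is
    (center, radius, value): it contributes value * bump(d / radius),
    where d is the distance from pos to center, so radius is the
    per-point parameter controlling the extent of its influence."""
    def bump(r):
        # compactly supported kernel: nonzero only for r < 1
        return (1.0 - r) ** 2 if r < 1.0 else 0.0
    total = 0.0
    for center, radius, value in points:
        d = sum((p - c) ** 2 for p, c in zip(pos, center)) ** 0.5
        total += value * bump(d / radius)
    return total
```
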
======================
BUGS =================
======================