During the first three weeks of August, I had the opportunity to do another internship, this time at Stanford University in California. With Stanford being one of the world's finest technology universities, located in the heart of Silicon Valley, home to companies such as Apple, I was lucky to find a room to rent in Palo Alto, within biking distance of the campus.
The internship itself took place at the Department of Bioengineering under Prof. Pelc. Because he was not at the university in the first week (multiple conferences overlapped with my internship), I was supervised by a group of other researchers working with him. After a brief introduction to the technology of CT scans (sometimes called CAT scans) and the physics behind them, they tasked me with creating a brain phantom for a GE CT scan simulation software in order to simulate brain perfusion scanning.
A phantom is a digital 3D model used in the simulation to approximate a real patient in the scanner. This phantom was supposed to emulate a human brain, in particular the distinction between white and grey matter, two different tissues in the brain. Traditionally, grey matter was considered the part of the brain we actually think with, while white matter was seen as mere wiring between regions, but recent research suggests that white matter plays a bigger role than previously assumed. In the phantom format I was developing for, a phantom is built out of so-called primitives, such as spheres, cubes, or ellipsoids.
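To give an idea of what "built out of primitives" means, here is a minimal sketch of how such a primitive could be modeled in Python. The class name, fields, and example values are illustrative assumptions, not the actual FORBILD syntax:

```python
from dataclasses import dataclass

# Hypothetical sketch: a primitive as a center plus semi-axes, with a
# point-membership test. A real phantom would be a list of such objects.
@dataclass
class Ellipsoid:
    cx: float; cy: float; cz: float   # center coordinates
    ax: float; ay: float; az: float   # semi-axes along x, y, z

    def contains(self, x, y, z):
        # A point lies inside the ellipsoid if the sum of the normalized
        # squared offsets from the center is at most 1.
        return ((x - self.cx) / self.ax) ** 2 \
             + ((y - self.cy) / self.ay) ** 2 \
             + ((z - self.cz) / self.az) ** 2 <= 1.0

# A sphere is just an ellipsoid with three equal semi-axes;
# the dimensions below are made up for illustration.
head = Ellipsoid(0, 0, 0, 9.6, 12.0, 9.6)
print(head.contains(0, 0, 0))  # the center is inside, so this prints True
```

A complete brain phantom would then layer many such primitives (skull, white matter, grey matter regions) on top of each other.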
The simulation software was written in MATLAB and C, but because of compatibility issues with the computer I was working on, I had to use an old version running under FreeMat (a free environment very similar to MATLAB). As this old version did not support some features of the FORBILD phantom file specification (specifically, cutting primitives at arbitrary planes in space to produce other shapes, which the old version only allowed at x = 0, y = 0, or z = 0), I ended up writing a Python-based renderer for these phantom files myself. While it did not perform a complete simulation like the GE software and ignored material properties (each primitive is assigned a material, e.g. "bone" or "skin") altogether, it proved to be a faster way to take a look at the current model (45 seconds compared to between 2 and 5 minutes).
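The core idea behind such a renderer can be sketched in a few lines: rasterize one axial slice (fixed z) by testing each pixel center against every primitive. This is a simplified illustration under my own assumptions, not the actual code from the internship; materials are ignored, and a primitive is modeled as a plain callable:

```python
# Sketch of a slice renderer: each pixel records whether any primitive
# covers its center. Names, grid size, and extent are illustrative.

def render_slice(primitives, z, size=64, extent=10.0):
    """Return a size x size binary image of the plane at height z,
    covering [-extent, extent] in both x and y."""
    step = 2 * extent / size
    image = []
    for row in range(size):
        y = -extent + (row + 0.5) * step
        line = []
        for col in range(size):
            x = -extent + (col + 0.5) * step
            # Pixel is "on" if its center lies inside any primitive.
            line.append(1 if any(p(x, y, z) for p in primitives) else 0)
        image.append(line)
    return image

# A primitive here is any callable (x, y, z) -> bool; e.g. a sphere:
def sphere(cx, cy, cz, r):
    return lambda x, y, z: (x - cx)**2 + (y - cy)**2 + (z - cz)**2 <= r * r

img = render_slice([sphere(0, 0, 0, 5)], z=0)
print(sum(map(sum, img)))  # number of pixels covered by the sphere
```

A real renderer would additionally evaluate the clipping planes from the phantom file and stack slices into a 3D volume, but the per-pixel membership test above is what makes a quick preview so much cheaper than a full scanner simulation.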