16 July

what is holography?
Perhaps you’ve never heard of “holography”, but chances are you have heard of a “hologram”. It may sound like a complicated process or futuristic technology, but in reality the idea is simple. You don’t need a PhD in optics to understand how it works.
The example below details how holography works when incorporated into a microscope…
the physical set-up
The set-up is simple: a camera faces a light (a “point source” of light – one beam that fans out; think of a very tiny flashlight), and the camera takes a single image (or picture), capturing everything between the light and the camera. Objects located within the light cone (green) will be imaged by the camera.
Figure 1: the physical set-up of holography: the point source & camera, between which lies the Sample Space
holography? sounds complicated…
The image captured by the camera is called a “hologram”: three-dimensional (3D) information, stored in a two-dimensional (2D) space.
The real power of holography (i.e. capturing holograms) is that it provides a depth of focus (in the z-direction, or the “up/down” direction) 1000 times larger than that of traditional microscopes.
Figure 2: the hologram (left) and an object in 3D view from the hologram (right)
Each of the “ripples” in the hologram (left) represents an object in the volume. Recall that though the hologram is 2D, the information contained in it is 3D. Thus, for each object in the hologram (again, represented by the ripples), holography provides 3D data: we know where the object is located within the volume (again, think of the green cone of light in Figure 1), and we can use information about how “bright” the object looks to get an idea of its thickness.
What holography is not (yet)
Unlike in Star Wars, holograms (or at least ours) cannot recreate a multi-angle 3D object. It is possible to use more than one camera to capture images from different angles and recreate a hologram from more than one side (it’s an emerging technique); however, we image one side, so we cannot see the “back” of the object. Recall, however, that with a traditional microscope you cannot see the back of the object either.
well that looks easy…
Really, holography isn’t that complicated, so why haven’t you heard much about “real” (not Star Wars) holography?
Well, the complicated part lies in the algorithm (i.e. the math). Recall from Figure 2 that each ripple is an object. However, looking at the hologram, can you tell what kinds of objects are in the volume? To me, it looks like a bunch of circles, and none of the circles looks like it could be the complicated object in the picture in the right panel of Figure 2.
The algorithm (again, math) allows the user to see a slice of the volume; just a snapshot of the entire hologram (again, 3D in 2D). So as the user moves through the volume (which is the same as focusing with a traditional microscope), they are shown one slice or “reconstruction”. The complicated math (algorithm) is what allows the software to quickly (in milliseconds) show the user each plane. Figure 3 shows different planes in the volume, and what a plane/reconstruction looks like.
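For the curious, one standard way to compute such a slice is the angular spectrum method. The sketch below (Python with NumPy) illustrates the general idea only – it is not the algorithm of any particular product – and the laser wavelength, pixel size and slice depth are made-up example values.

```python
import numpy as np

def reconstruct_slice(hologram, z, wavelength, pixel_size):
    """Propagate a recorded hologram to a plane at distance z
    using the angular spectrum method (one standard approach)."""
    ny, nx = hologram.shape
    # Spatial-frequency grids matching the camera's pixel spacing
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    # Free-space propagation transfer function (evanescent waves clipped)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    # FFT -> multiply by transfer function -> inverse FFT = in-focus slice
    return np.fft.ifft2(np.fft.fft2(hologram) * H)

# Example: a 512x512 hologram, 405 nm laser, 1.1 um pixels, slice 2 mm away
holo = np.random.rand(512, 512)  # stand-in for a real recorded hologram
slice_2mm = reconstruct_slice(holo, z=2e-3, wavelength=405e-9, pixel_size=1.1e-6)
```

Because the heavy lifting is two FFTs, re-running this for a new depth z is fast – which is why the software can step through planes in milliseconds.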
Figure 3: A visual representation of the volume through slices
that’s cool, but how useful is it?
Okay, no more mention of math…
While this is a really cool technology, more importantly, it is a very USEFUL technology.
With the large volume (due to the large depth of focus inherent to holography) and a fast sample rate (22 frames per second), users can save time by collecting data very quickly; 1 L of liquid can be sampled in 8 minutes. Comparatively, looking at 1 L of water with a traditional microscope (assuming that it takes 10 minutes to look at 15 mL of liquid) takes over 11 hours!
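The arithmetic behind that comparison is easy to check. The only inputs are the figures quoted above (8 minutes holographically, and an assumed 10 minutes per 15 mL on a traditional microscope):

```python
# Back-of-the-envelope comparison for sampling 1 L of liquid.
total_ml = 1000                    # 1 L = 1000 mL
trad_min_per_15ml = 10             # assumed traditional-microscope rate
holo_minutes = 8                   # holographic figure quoted in the post

trad_minutes = total_ml / 15 * trad_min_per_15ml   # ~666.7 minutes
print(f"traditional: {trad_minutes / 60:.1f} hours")   # ~11.1 hours
print(f"holographic: {holo_minutes} minutes")
print(f"speed-up: ~{trad_minutes / holo_minutes:.0f}x")
```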
The simplicity of the system also removes the need for sample preparation – no filtering or staining – saving users even more time.
Finally, users can save time with automatic object classification; train the system to recognize certain particles and it will process data and sort particles into groups, with high accuracy.
Figure 4: Two different ocean algae. I can’t tell the difference between them, but Stingray (the automatic object classifier) can
Holographic microscopes can be used to replace traditional microscopes, particularly for objects 2 mm and smaller (around the size of a grain of rice). Underwater holographic microscopes (yes, very cool: http://4-deep.com/products/submersible-microscope/ ) allow users to put the microscope directly into liquids: water, for environmental and ocean sampling, or beverages like wine, beer and soft drinks, for liquid quality control.
Users can image a large range of particle sizes with one device – there are no lenses or objectives to change in order to capture all the small particles in a particular sample.
accurate measurements (quantification)
Holography allows users to observe and measure particle positions and velocities in 3D and through time. This makes it possible to understand how microorganisms move.
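As a sketch of what that measurement looks like, suppose the software has already tracked one particle across a few frames. The positions below are hypothetical example values; the 22 fps frame rate is the sample rate quoted earlier:

```python
import numpy as np

# Hypothetical track of one particle: (x, y, z) positions in micrometres,
# one row per camera frame, recorded at 22 frames per second.
positions = np.array([
    [10.0, 5.0, 100.0],
    [12.0, 5.5, 104.0],
    [14.1, 6.1, 108.2],
])
fps = 22
dt = 1.0 / fps                               # time between frames, seconds

# Displacement between consecutive frames gives a 3D velocity vector
velocities = np.diff(positions, axis=0) / dt  # um/s, one vector per step
speeds = np.linalg.norm(velocities, axis=1)   # scalar swimming speed
```

Note that the z-coordinate comes for free: because each frame is a full hologram, the particle's depth is recovered by the reconstruction step, with no refocusing between frames.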
view the natural environment
Using holography allows users to track and view objects in their natural environment. The environment is sampled directly (which is called in-situ) and the data can be accessed very nearly real-time, providing critical information in a practical amount of time.
useful terms

point source: the light that faces the camera, lighting up (illuminating) whatever is being imaged. In more science-y words, it’s a laser that emits a particular wavelength of light
holography: collecting holograms
hologram: 3D information stored in a 2D image
in-situ: observing in the natural environment; putting a device right into the sample