The first image is part of a large collection of immunohistochemistry images of cell-surface antigens generated by the Stem Cell Genome Anatomy Projects (SCGAP) Urologic Epithelial Stem Cells (UESC) Project. Attribution: Larry True, Eric Deutsch, Laura Pascal, Tracy Sherertz, Laura Walashek, David Campbell and Alvin Liu. http://www.cellimagelibrary.org/images/33677
Early in his career, Matthew Fronheiser, Ph.D., now a senior biomedical engineer at Bristol-Myers Squibb, worked for a medical device company, developing tools for scientists. "We used pretty images of cells to draw people's attention, but I learned that the image wasn't really what they were looking for," he says. "They wanted quantitative information."
That realization stayed with Fronheiser and is part of the motivation for a new SLAS2015 Short Course, Digital Image Processing and Analysis for the Laboratory Scientist: Theory and Application. In collaboration with his colleague Mark Russo, Ph.D., also of Bristol-Myers Squibb, Fronheiser will share his enthusiasm and knowledge about how to "mine" images for deep and detailed information about cellular processes and structures.
As Fronheiser explains, "learning how to do at least some level of image analysis will enable researchers to get answers faster, and perhaps spur investigations into more complex areas that they cannot explore with their current tools."
Why is biological image processing and analysis a hot topic right now?
Researchers are beginning to realize there is a lot of content within an image, and they can get much more from it than just a cell count, which is what is typically done. If they stain cells with a dye, for example, they can do a simple manual count of the cells that absorbed/retained the dye; or they can do some image analysis to see whether there is something different about the cells that took up the dye versus the ones that didn't. Computer power and some scripting allow them to look at large numbers of samples quickly, rather than having someone sit at a bench and look at a screen.

This is a significant benefit, because sample size matters. If they look at 10 slides, they'll get some idea of what might differentiate the stained cells; but if they can analyze 100 slides, they'll have a better idea of what's going on because they'll be evaluating a larger population of cells. To do that, they either have to spend a lot of time looking at individual cells or they need to do some image analysis with a computer.
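The scripted counting Fronheiser describes can be sketched in a few lines. This is a minimal illustration, not the course's actual method: the threshold, minimum size, and the scipy-based connected-component approach are all illustrative assumptions, and a real assay would also need background correction and splitting of touching cells.

```python
import numpy as np
from scipy import ndimage

def count_stained_cells(image, threshold=0.5, min_size=5):
    """Count bright (dye-retaining) regions in a grayscale image.

    A minimal sketch: threshold and min_size are arbitrary here and
    would be tuned per assay in practice.
    """
    mask = image > threshold                      # binarize: stained vs. background
    labels, n = ndimage.label(mask)               # connected-component labeling
    # drop specks smaller than min_size pixels (likely noise)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return int(np.sum(sizes >= min_size))

# synthetic test image: two bright blobs on a dark background
img = np.zeros((20, 20))
img[2:6, 2:6] = 1.0
img[10:15, 10:15] = 1.0
print(count_stained_cells(img))  # → 2
```

Run over a folder of images, a loop around a function like this replaces hours of someone counting at a screen.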
What else can researchers learn from image analysis?
Image analysis can keep people from drawing the wrong conclusions about what they see. For example, researchers often will try to determine whether a potential inhibitor works by taking a picture of different types of cells in a well, and looking at the area covered by the target cells. They apply a therapeutic agent, then look to see if the amount of target cells is affected—i.e., whether the area covered by the cells gets smaller after adding an inhibitor, versus a control substance that allows the cells to continue to grow.
That approach might work in some situations. However, some cells might be affected by the inhibitor and still continue to grow. Crypt cells in the intestinal epithelium are one example. Blocking one pathway does not prevent crypt cells from growing; they just grow differently, using another pathway. If you were to look only at the amount of cell coverage in a target area, you would get a similar readout for the inhibitor and the control, and possibly conclude that the inhibitor had no effect. But if you were to look more deeply at the morphology of the cells, you would recognize that something is indeed happening. Image analysis allows you to go beyond the idea of just looking at the size of a specific area in a well; it enables you to see what is actually going on in the cells in that area.
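The point about morphology can be made concrete with one simple shape descriptor. The sketch below (an illustrative choice, not anything specific to the crypt-cell example) measures elongation from the second moments of a region's pixel coordinates, showing how two regions with identical area, and therefore identical coverage readouts, can still be distinguished.

```python
import numpy as np

def shape_elongation(mask):
    """Elongation of a binary region: ratio of principal-axis variances.

    ~1.0 means roughly isotropic; larger means more elongated.
    Computed from the covariance of the region's pixel coordinates.
    """
    ys, xs = np.nonzero(mask)
    coords = np.stack([ys, xs]).astype(float)
    cov = np.cov(coords)                          # 2x2 covariance of pixel positions
    eigvals = np.sort(np.linalg.eigvalsh(cov))
    return float(eigvals[1] / max(eigvals[0], 1e-12))

# two regions with identical area (24 pixels) but different shapes
square = np.zeros((10, 10), bool)
square[2:8, 2:6] = True       # compact 6x4 block
strip = np.zeros((10, 30), bool)
strip[4:5, 2:26] = True       # thin 1x24 line

# an area-only readout cannot tell them apart; a shape feature can
assert square.sum() == strip.sum() == 24
print(shape_elongation(square) < shape_elongation(strip))  # → True
```

Real pipelines would combine several such features (area, elongation, texture, intensity) per cell rather than relying on any single one.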
Why can't researchers rely on vendors to produce the applications they need, instead of trying to learn image analysis themselves?
It's true that some companies are dedicating resources to bringing new biological imaging platforms to market. But creating relevant applications takes time, and even if those applications are ready to go, they may not meet your needs. What if a vendor hasn't thought of the application you want? Or, what if you are doing a one-off—something that is so small, a vendor doesn't see value in developing an entire platform for it?
Right now, there is a lot of room for investigators to collaborate with vendors to produce useful products. For example, several years ago, I worked with a vendor that had developed a great imaging device that they wanted to use as an adjunct to histology. The idea was that the device would enable researchers to do 3D analyses of tissue without going through the laborious process of doing a complete histology. We thought we could do some morphology-type analysis to extract more content from the tissue images. But we found that looking at the structure was not enough; we needed the capability of tagging the tissue. That was a gap. The vendor had originally developed the device to look at skin and some other well-defined areas, whereas we saw its potential as a life sciences application—to investigate cell formation in the extracellular matrix, for example, and for vascular network analysis and investigating structures in the central nervous system. We recognized that there were gaps that had to be dealt with before we could do those things. So we ended up doing a six-month evaluation of the device, working with the vendor to fill in those gaps.
Many vendors come into an organization and immediately try to sell you a device or do a demonstration. But if you let them know exactly what you need, they might ask questions aimed at trying to create a solution. They might then feed the responses back to their marketing departments to find out how many other users might want a similar application. Very often, what starts as a custom application for a particular group ends up becoming a platform application later in the development process.
Sometimes, organizations can pay a vendor who has a product development team to develop a particular module. But for these collaborations to move forward quickly, it helps if you've been able to do some of the image analysis on your own, so you can give the team a framework in a way that they'll understand. Everything will move more smoothly if you have some image analysis knowledge on your side.
What about collaborations among different groups? Is there an issue of compatibility among image processing and analysis tools?
There can be. In the histology space, for example, several different slide scanners are available, and some have proprietary formats. So if two groups are using different hardware, it might be difficult to get everything into a standard format. Vendors are beginning to understand that having a completely closed system may be limiting, but right now, it's a fact of life in certain research areas. In the medical imaging field, most people are using the DICOM standard, and the preclinical imaging that needs to be done in that arena tends to follow the same standard. But in other areas of preclinical imaging, such as microscopy, there are no standards yet, and so compatibility can be an issue.
Should vendors be developing products that are more compatible with each other?
That depends. There is something to be said for a vendor having everything in place—hardware and software—in a single, streamlined proprietary package. The big take-home here is that researchers should take compatibility into consideration when evaluating these technologies, particularly if they want to collaborate with other labs or even across an organization. In a big organization, there may be pockets of people who don't know what others are doing or using, and that can affect the ability to share information. It depends on your application and how you will be using your tools.
What other challenges do you see as image analysis begins to play a larger role in life sciences R&D?
As people do more batch processing to speed image analyses, storage becomes an issue. It's important to be able to store and retrieve information quickly and easily. Part of this is the actual storage space. If you look at Amazon cloud and Google cloud, for example, rates for storage seem to be coming down. That means organizations can store relatively large amounts of data at a relatively low cost. To reduce the possibility that data could be compromised in that situation, cloud storage companies often can set up a dedicated space, isolated from other parts of the cloud, and implement all the security measures that a company has in place for its own infrastructure.
But the other important part is data retrieval. Scientists have to get away from naming files in idiosyncratic, one-off ways; instead, data needs to be tagged so that every image is named according to the same scheme. Some companies are putting protocols in place to help ensure that all members of the organization are doing this.
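A consistent naming scheme is easy to script. The convention below is purely hypothetical (the field names, order, and format are illustrative, not any organization's actual protocol), but it shows the idea: if every image name carries the same ordered tags, any batch script can parse any file the same way.

```python
def tagged_name(project, acquired, plate, well, channel):
    """Build a filename from structured tags instead of ad-hoc naming.

    Hypothetical convention: project_date_plateNNN_well_channel.tif
    """
    return f"{project}_{acquired}_plate{plate:03d}_{well}_{channel}.tif"

def parse_tags(name):
    """Recover the tags from a name built by tagged_name()."""
    stem = name.rsplit(".", 1)[0]
    project, acquired, plate, well, channel = stem.split("_")
    return {"project": project, "acquired": acquired,
            "plate": int(plate[len("plate"):]),
            "well": well, "channel": channel}

name = tagged_name("INH42", "2014-12-01", 7, "B03", "DAPI")
print(name)                       # → INH42_2014-12-01_plate007_B03_DAPI.tif
print(parse_tags(name)["plate"])  # → 7
```

In practice the same tags would also go into a database or image-management system, so retrieval does not depend on filenames alone.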
What else do you foresee on the near horizon?
Automation will play an increasingly large role in image processing and analysis. Several vendors already have automated platforms for their well-based imaging systems. The plate maps are included in the product, so when you put your plates in, the application does a lot of the imaging. Image analysis tools are also built in, so again, provided you know how to use those tools, you can really start extracting some useful information. Eventually, there are likely to be products that have a robotic arm that loads plates automatically, and that would significantly increase the amount of data that's generated. That capability makes it even more important to deal with storage and retrieval issues, as well as how you will handle image analysis.
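The batch-processing-with-automated-output pattern can be sketched as a simple loop: apply one analysis function to every image in a folder and write one results row per image. Everything here is illustrative (the `.npy` format, the object-counting step, and the CSV layout are assumptions for the sketch, not a vendor's platform).

```python
import csv
import pathlib
import tempfile

import numpy as np
from scipy import ndimage

def count_objects(image, threshold=0.5):
    """Per-image analysis step: count connected bright regions."""
    _, n = ndimage.label(image > threshold)
    return n

def batch_analyze(folder, out_csv):
    """Apply the same analysis to every image in a folder and write
    one results row per image — no one sitting at a screen."""
    with open(out_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["image", "count"])
        for path in sorted(pathlib.Path(folder).glob("*.npy")):
            writer.writerow([path.name, count_objects(np.load(path))])

# demo on a synthetic "plate" saved as .npy files in a temp folder
tmp = pathlib.Path(tempfile.mkdtemp())
img = np.zeros((10, 10))
img[1:3, 1:3] = 1.0   # first object
img[6:8, 6:8] = 1.0   # second object
np.save(tmp / "well_A01.npy", img)

batch_analyze(tmp, tmp / "results.csv")
with open(tmp / "results.csv", newline="") as fh:
    rows = list(csv.reader(fh))
print(rows)  # → [['image', 'count'], ['well_A01.npy', '2']]
```

With a robotic arm feeding plates, the same loop simply runs over far more files, which is exactly why the storage and retrieval issues above become pressing.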
What can the short course offer people working at different levels in an organization?
Scientists often recognize certain features in an image that may be important to quantify, but they might not have the tools or skills necessary to extract that information from the image into a format that can be readily quantified. The goal of our course is to show participants how to use a freely available image analysis package to extract useful quantitative information from experimental images. We will also provide the notes and scripts used in the class so that participants have a starting point when applying the skills learned to their particular project.
Those with a limited image analysis background will learn the core concepts and have an opportunity to perform hands-on analysis. Those currently performing analysis on individual images will learn how to perform batch processing of images with automated data output, thereby improving their workflow. For scientists reviewing and presenting data generated by image analysis, the course provides context into the methods used to generate the data. This will enable them to better understand and explain results to audiences that have a limited image analysis background.
You're clearly passionate about biomedical imaging. How did you get into this field?
I've always been a visual person so that's probably part of it. I originally focused on medical imaging, and obtained my Ph.D. in ultrasound technologies. Before arriving at BMS, I worked on developing a photoacoustic imaging system, where laser light goes in, is absorbed by tissue and releases an ultrasonic wave. That was a combination of optical and acoustic imaging, and I saw many benefits in combining the two technologies. But my undergraduate training was in biomedical engineering, which is more general, and I had an opportunity in my current position to do a bit of both. I work with the preclinical imaging group, looking at MRI and CT data, but I can also take a step back in a broader biomedical engineering role when I'm looking at cellular images. I really like that aspect because the field is growing and I have the opportunity to learn more every day.
On Saturday, February 7, Fronheiser and biochemical engineer Mark Russo, Ph.D., an associate director at Bristol-Myers Squibb and computer science instructor at Rowan University in Glassboro, NJ, offer a practical, hands-on approach to the application of digital image processing and analysis in a life sciences laboratory. Diverse techniques, image formats, file types and applications will be covered. Participants will learn various image processing and analysis functions, including thresholding, smoothing, sharpening, edge detection, noise removal, segmentation, particle analysis and feature extraction.
December 1, 2014