Drug Designing: Open Access

ISSN: 2169-0138

Review Article - (2016) Volume 5, Issue 2

Fingerprints, Facial Recognition and Cancer

Roy J Vaz*
Division of Graduate Professional Studies, Rabb School of Continuing Studies, Brandeis University, Waltham, MA 02453, USA
*Corresponding Author: Roy J Vaz, Division of Graduate Professional Studies, Rabb School of Continuing Studies, Brandeis University, Waltham, MA 02453, United States, Tel: 781 434 3413

Abstract

Images are unstructured data and are large in terms of computer storage. Standard processes for fingerprint analysis, facial recognition and the treatment of histopathological images for diagnosis and prognosis are used to show that an image can be processed to yield structured data. This structured data is usually subjected to machine learning or artificial intelligence analysis run on the associated server, while the images themselves can be stored at locations where access need not be as fast. The latest artificial intelligence tool, deep learning, is quite impressive and has been used in a number of applications. Its promise with histopathological images could revolutionize the field of breast cancer prognosis and perhaps even lead to better treatments.

Introduction

Most Big Data sources are unstructured [1]: “Virtually no analytics directly analyze unstructured data. Unstructured data may be an input to an analytic process.” Franks [1] gives the example of television shows like CSI, where direct matching of fingerprints and facial images is shown to happen often. Fingerprint images cannot be compared directly, since fingerprint images are unstructured data; a high-quality fingerprint image can also be quite large. So, unlike on CSI, fingerprints are first analyzed and a set of important points is identified on each print. These points are used to create a graph, and it is the graphs from different fingerprints that are matched. The graph is fully structured and smaller in size. While the unstructured prints are an input to the process, the actual analysis to match them up does not use the unstructured images but rather the structured information extracted from them. The images themselves can be stored on slower disk drives and used for verification once a match is obtained, while the graphs are stored in databases and used for matching.

Facial images have also been in the news recently. Science, published by the American Association for the Advancement of Science, ran a special issue on Jan 31, 2015 entitled “The End of Privacy”, with a collection of topics [2] related to privacy issues. The second article [3] in that collection described Facebook’s DeepFace system, directed by New York University’s Yann LeCun, and reports a predictive accuracy of 97.35% on the Labeled Faces in the Wild (LFW) face dataset, compared to humans, who get it right 98% of the time. The current use of DeepFace is limited to the confines of Facebook.

In this paper, how a facial image is used to extract a 3D representation will be discussed in more detail. As with fingerprints, matching is done using the 3D information, and the 3D representation is usually stored online, whereas the actual facial images can be stored for slower retrieval, perhaps for verification as with fingerprints. Some of the artificial intelligence tools, including the machine learning tools and techniques used for matching the maps/polygons (also called graph matching), will be mentioned.

While recognizing that an image contains or is a face is a challenge in itself (“face recognition” is the term reserved here for recognizing that a subsection of an image is a human face), it is not as complicated as matching faces to individuals, i.e., facial recognition. Only facial recognition (and not face recognition) will be discussed in this paper, although the efforts that went into face recognition did help progress facial recognition.

Similar methodology is also used for the diagnosis (computer-aided diagnosis, CAD) and prognosis (computer-aided prognosis, CAP) of breast cancer using tissue samples, i.e., histopathology. Histopathology is the study of the signs of disease by microscopic examination of a biopsy that is processed and fixed onto glass slides [4]. Cytopathology is the related study of cells in terms of structure, function and chemistry. Histopathology and cytopathology imagery are the most commonly encountered for both disease screening and biopsy purposes. To visualize different components of the tissue under a microscope, tissue samples are stained with one or more dyes, fluorescent agents and other biological tools. Some dyes and fluorescent agents reveal cellular components and proteins in quantifiable amounts, and counter-staining is used to provide contrast. Technology in both areas – dyes and fluorescent agents, as well as biological methods whereby single cells can be isolated and imaged [5] – has improved tremendously. There has been much progress using histopathological and cytopathological imaging, especially for CAP, but much more can be accomplished in this arena. Learning from and applying what has been done in the areas of fingerprint and facial recognition could help.

Some basic information about the parts of the cell (almost every cell has the same parts, also called organelles) and two of the four most common cell types, epithelial and stromal cells, is shown in Figure 1 and described briefly below:

Figure 1: Representation of a cell with organelles (nucleus is the primary organelle). Epithelial cells and stromal cells (connective tissue) – 2 out of the 4 most common cell types in the body (Cell Image, epithelial cell definition, stromal cell definition).

Epithelial cells are the cells that make up the lining of cavities in our bodies and that also cover flat surfaces of the body [6,7].

Stromal cells form most of the connective tissue in the body such as bone, muscle and adipose tissue [8].

Again, structured data will be obtained from unstructured data via analysis, and the structured data then utilized. Obtaining these images and the analytics used to translate the unstructured data into structured data, as well as improving the machine learning tools for better prediction, hopefully to the same level of accuracy for both CAD and CAP, would yield a lifesaving set of tools [9]. Raising awareness among regulatory and other bodies and increasing public-private partnerships in technology growth, and in particular image diagnostics, not only for breast cancer but for cancer in general, would benefit the patient. Bridging the commonality of methods and the progress in analytics from the other two image types, toward increasing the size of tissue-image databases as well as improving prediction quality, would be tremendous and is a worthwhile goal for this paper.

These three image types therefore set the scene for the exploration of other captured images as well.

Captured images could be those from imprints (such as fingerprints) or from cameras, probes or microscopes. Hand drawings or paintings, especially of objects such as faces used for facial recognition, could be of interest and could be subjected to some of the analytics discussed in this paper, but they will not be addressed, nor will computer-generated drawings and images. Only the three types of images mentioned – fingerprints, facial images and histopathological images – will be used to exemplify the analytics. With the progression of computational power, the granularity of the analytics could be reduced to pixel-level detail and the corresponding matching algorithms parallelized to improve accuracy and speed.

Recent advances in machine learning, in particular deep learning, have been applied to various areas of science [10] and have shown distinct advantages over older algorithms.

History

History of fingerprints

Images of the three different types each have their own history. Fingerprints originated from handprints, which were used by Sir WJ Herschel in 1858 in India by the ruling British to “scare” the local people out of repudiating a contract [11]. Herschel later changed to taking only the right index and middle finger prints. The first published article appeared in the journal Nature in 1880, by Faulds and Tradoux [12]. In the US, Thompson was the first to use his thumbprint on a document to prevent forgery, in 1882 [11]. In 1892 the first crime was solved using fingerprints, and Sir F Galton (a relative of Charles Darwin) established the uniqueness as well as the permanence of fingerprints.

Starting in 2007, the US Department of Homeland Security (USDHS) required all visitors to the US to be fingerprinted (all 10 fingers). Its fingerprint database, the Automated Fingerprint Identification System (AFIS), contains over 100 million records, both 2-finger and 10-finger fingerprints. The 2-finger fingerprint method and image is not recognized by Interpol or the Federal Bureau of Investigation (FBI), which maintains its own Integrated AFIS containing 10-finger “rolled” [13] fingerprints for about 60 million individuals (criminal as well as civil applicants). Each state currently has its own AFIS, containing fingerprints of individuals that do not occur in other databases [11].

History of facial recognition

While fingerprints are a permanent feature of a person, a person’s face can change over time. In addition, issues such as head pose and tilt, illumination, facial expression and occlusion due to objects such as scarves, sunglasses, hats or facial hair complicate facial recognition. The challenge is therefore greater than that of treating fingerprint images. Early facial recognition required manual intervention and dates back only to the 1960s [14].

The primary advances came from seminal work applying an unsupervised technique, Principal Component Analysis (PCA), to a small set of facial images, described in a paper by Sirovich and Kirby [15]. This technique, called eigenfaces, utilized all the pixels and the associated grayscale values of a two-dimensional (2D) image of a face, and it required that the picture be a frontal image. Turk and Pentland [16] showed that eigenfaces could be used for recognition. The advent of Elastic Bunch Graph Matching set the stage for three-dimensional (3D) models and a representation different from graphs. In addition, a 3D model (called an avatar) needed to be constructed from the 2D image. The surface of the model is enhanced for A-PIEO [3] – aging, pose, illumination, expression and occlusion – and the 3D model of the face is then subjected to analytics and the representation stored. Again, unstructured data is converted to structured data, which is utilized for storage and learning.
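As a concrete illustration of the eigenfaces idea, the following minimal Python sketch computes the principal components of a set of flattened grayscale frontal images and projects a face onto them. It is a sketch only: the function names, the use of numpy’s SVD and the assumed `faces` array are illustrative choices, not the exact procedure of [15,16].

```python
# Minimal eigenfaces sketch (illustrative; not the exact procedure of [15,16]).
# Assumes `faces` is an (N, P) array: N grayscale frontal images, each
# flattened to a vector of P pixel values.
import numpy as np

def eigenfaces(faces: np.ndarray, k: int = 35):
    """Return the mean face and the top-k principal components (eigenfaces)."""
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # SVD of the centered data matrix; the rows of vt are the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:k]

def project(face: np.ndarray, mean_face: np.ndarray, components: np.ndarray):
    """Reduce one face to k coefficients -- the structured data that is stored."""
    return components @ (face - mean_face)
```

Faces are then compared by the distance between their small coefficient vectors rather than pixel by pixel, which is what makes the representation structured and compact.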

With respect to databases of facial images, there are many public domain datasets [17] so that algorithms can be developed and tested, as for fingerprints. The available databases include 2D and 3D, aligned and non-aligned images, including those (Labeled Faces in the Wild, LFW) utilized in DeepFace for comparative validation.

History of histopathological images

Even though this report will focus on Breast Cancer, other types of cancer as well as the use of images in those cancer types will be mentioned.

Imaging in the medical diagnostics area has always been, and still is, dominated by radiography. Computer-aided analysis of medical images can be traced back to digital mammography in the 1990s [18].

Radiography data, however, is limited in spatial resolution: “in mammography, CAD methods have been developed to automatically identify or classify mammographic lesions. In histopathology, on the other hand, simply identifying presence or absence of cancer or even the precise spatial extent of cancer may not hold as much interest as more sophisticated questions such as: what is the grade of the cancer? Further, at the histological (microscopic) scale one can begin to distinguish between subtypes of cancer, which is quite impossible (or at the very least difficult) at the coarser radiological scale” [19]. As has been previously mentioned and shown, the spatial resolution of histopathological and cytopathological images extends not only to single cells but to the nuclei of the cells themselves. Hence this report will focus on processing histopathological images, more towards CAP. CAD derived from histopathological images is the gold standard for most types of cancer – in particular, certain characteristics of nuclei are hallmarks of cancerous conditions [19]. Cytopathology (cell-derived) images have helped in general: quantitative metrics for cancerous nuclei were developed to “appropriately encompass the general observations of the experienced pathologist” and were tested on cytopathology imagery. Early work in analyzing breast tissue images was pioneered in the 1990s [20-22].

Details related to image processing and the machine learning methods will be described in the next section.

Unstructured to Structured

Fingerprint image processing

Of the three types of images described in this paper, the fingerprint image has been given the most attention. Technically, the problem has been reduced to a 2D image thanks to the “rolled” fingerprint images stored by the FBI [13]. Although there are issues with matching latent, skin-distorted fingerprints [23,24], they have been overcome, thereby keeping the problem in 2D. A fingerprint is usually composed of a set of parallel lines (ridges) that “flow” in the same direction, forming a ridge pattern. These patterns have characteristics, shown in Figure 2, that are unique to every fingerprint. The common steps involved in fingerprint image storage for database creation are:

Figure 2: Three levels of features, the corresponding graph constructed and the graph matching to identify fingerprints.

• Fingerprint image capture

• Fingerprint image enhancement using feature extraction

• Database storage

Fingerprint image capture: There are many instruments used to do this; the standard operating procedures are well documented and will not be described further [13].

Fingerprint image enhancement using feature extraction: This step involves several iterations whereby the initial image is subjected to image enhancement and features, categorized into 3 levels (Figure 2), are extracted. The first level of features includes whorls, deltas and other general patterns; the second level includes the directionality of the ridgelines as well as bifurcations and terminations; and the third level (depending on the image resolution) may include pores, incipient ridges and creases [25]. The features are then assembled to create a graph. A graph is a collection of indices (nodes) and paths (lines joining the indices) (Figure 3). The indices can have properties associated with them, and so can the paths. If needed, a fingerprint image can be reconstructed from the features and compared to the original image in cases where the image is of poor quality.

Figure 3: Graph G1 is matched as a sub-graph of G2.
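A minimal sketch of the structured form such a graph could take follows; the field names are illustrative assumptions, not a standard fingerprint template format such as those used by the FBI.

```python
# Hedged sketch of a fingerprint graph (field names are illustrative only).
from dataclasses import dataclass, field

@dataclass
class Minutia:            # a node (index) of the graph
    x: float              # position in the image
    y: float
    angle: float          # local ridge direction, in radians
    kind: str             # e.g. "bifurcation" or "termination"

@dataclass
class FingerprintGraph:
    nodes: list = field(default_factory=list)  # Minutia objects
    edges: list = field(default_factory=list)  # (i, j, ridge_count) index pairs

# It is this small, structured object -- not the image -- that the
# database stores and that the matching analytics operate on.
```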

Database storage: The original image and the extracted information (structured information) can be stored in two separate locations – the extracted information, i.e., the features and graphs, on network-attached storage (NAS), where the analytics are run.

The graphs are structured information and are usually small in size. The original fingerprint images and the enhanced images themselves can be stored on disks or media that do not have to meet the access speeds of the NAS; these would be retrieved only for verification and validation.

An unknown, latent fingerprint is subjected to a similar analysis, as shown in Figure 2. The problem of fingerprint identification can then be seen as a subgraph matching problem (Figure 3): does the graph representing the unknown latent fingerprint exist, in whole or in part, among the graphs stored in the fingerprint database?

Once the graph derived from the latent print is matched, in part or in whole, to a graph in the database, the corresponding fingerprint image can be retrieved and overlaid on the matching graph. The latent fingerprint graph could match one or many graphs from the fingerprint database, depending on how much information was obtained from the unknown latent print.
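As a toy illustration of the subgraph matching question, the sketch below uses the networkx library’s exact (sub)graph isomorphism matcher. Real AFIS matchers must tolerate noise, distortion and missing minutiae rather than demand exact isomorphism, so treat this purely as a statement of the problem, not an operational matcher.

```python
# Toy subgraph-matching check (exact isomorphism; real matchers are inexact).
import networkx as nx
from networkx.algorithms import isomorphism

def latent_matches(latent: nx.Graph, enrolled: nx.Graph) -> bool:
    """Does the latent-print graph occur as a subgraph of an enrolled print?"""
    matcher = isomorphism.GraphMatcher(
        enrolled, latent,
        node_match=lambda a, b: a.get("kind") == b.get("kind"))
    return matcher.subgraph_is_isomorphic()
```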

The machine learning tools required for the methods described are well documented by Irniger [26].

Instead of a graph representation, newer representations using full image information, such as those described for facial recognition below, are currently being evaluated [27].

Perhaps utilizing the methods and technology from facial recognition could make fingerprint identification more dependable and quicker. In any case, an unstructured image is converted to structured data, which can then be searched using artificial intelligence tools.

Facial recognition process

Facial recognition technology started off along the same lines as fingerprint analysis, but researchers in the field realized that keeping the image in the 2D realm did not really help with prediction accuracy. The 2D techniques utilized methods similar to fingerprint recognition.

A set of 35 features derived from the eigenfaces [16] and other efforts, selected manually, together with a modified graph matching algorithm (elastic graph matching, Figure 4) to allow for the different orientations and expressions a face can show, increased the accuracy, but automation of the process suffered.

Figure 4: 2D with EBGM facial matching.

Also, several images of the same face at different angles, from one profile to the other, with similar lighting as well as tilt, were needed to make the method reliable [28]. This led to different artificial intelligence approaches being used after breaking the image up into almost pixel-level sub-images.

However, this only improved prediction when the same face was pictured in different orientations with the same tilt, expression and lighting; otherwise the predictions were worse. This therefore led to methods to remove the issues related to A-PIEO [3].

It was about this time that 3D models for anatomical modeling were being developed [29]. DeepFace [30,31], in addition to building a 3D model (an avatar), ensures correct surface texture, as shown in Figure 5. The steps involved are:

Figure 5: Steps involved in creating the 3D aligned image for deposition in the database.

• Face detection in an image

• Building a 3D model with correct surface texture to match the original image

• Frontal alignment

Once these steps are completed, the aligned face is stored as 152 × 152 pixels together with the red, green and blue color values of each pixel. The structured data therefore comprises a 3D matrix of 152 × 152 × 3.

For facial recognition, the “unknown” face is subjected to the same steps, and an even more advanced artificial intelligence technique called deep learning is applied [32].
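To make the idea concrete, here is a hedged PyTorch sketch of a small convolutional face-embedding network. The layer sizes and the 128-dimensional embedding are arbitrary illustrative choices; the actual DeepFace architecture, which uses locally connected layers, is described in Taigman et al. [30].

```python
# Illustrative convolutional face embedder (not the DeepFace architecture [30]).
import torch
import torch.nn as nn

class FaceEmbedder(nn.Module):
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(embed_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: a batch of frontally aligned faces, shape (N, 3, 152, 152)
        return self.head(self.features(x))

# Two faces are declared a match when their embeddings are close,
# e.g. by cosine similarity between the two 128-dimensional vectors.
```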

This algorithm has been enriched and tested using the ImageNet database of images described in the TED lecture released on March 23, 2015 [33].

It is interesting to note that, on analyzing the false negatives (1.65%) for facial images from the LFW, those faces had issues related to A-PIEO; there seemed to be no trend for the facial images among the false positives (1%) [30].

An even more detailed analysis, based on dividing the image into sub-images, applying a neural net algorithm and further subdividing the sub-images, has been performed, and the resulting accuracy on the LFW dataset is even higher than that of DeepFace [34].

Histopathological image processing

One of the biggest steps in increasing and improving the information derived from histopathological images has come not from microscopy or computational advances but from laboratory progress.

The use of sequential labeling and imaging of different fluorescent agents and corresponding cellular proteins from the same sample was a huge step forward [4,35]. This allows an order-of-magnitude increase in the number of molecular markers that can be imaged for the same tissue section.

Cancerous epithelial cells typically have large round nuclei, and a high count of cells having these types of nuclei is a primary diagnostic [36]. This is very well documented and validated, is obtained in a very simple manner, and is used as the gold standard for breast cancer CAD.
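A hedged sketch of this count using scikit-image follows. The Otsu threshold, area cutoff and roundness (eccentricity) cutoff are placeholder values for illustration; a real CAD pipeline involves far more careful stain handling and segmentation.

```python
# Illustrative "large round nuclei" count (thresholds are placeholders).
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def count_large_round_nuclei(gray: np.ndarray,
                             min_area: int = 200,
                             max_eccentricity: float = 0.5) -> int:
    """Count segmented blobs that are both large and approximately round."""
    mask = gray < threshold_otsu(gray)   # assumes nuclei stain darker
    count = 0
    for region in regionprops(label(mask)):
        if region.area >= min_area and region.eccentricity <= max_eccentricity:
            count += 1
    return count
```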

It is well established that breast cancer manifests itself in the mammary epithelium, but there is evidence that mammary stromal cells play an important role in tumorigenesis as well [37]. An attempt to automate image processing to utilize this information as a prognosis and treatment marker, called Computational Pathologist (C-Path), used the following steps [38]:

248 tissue samples from the Netherlands (NKI) and another 328 from Vancouver (VGH) were used as the learning and test sets, respectively. Both sets of images were subjected to the following procedure:

• Stromal cells (green in Figure 6) were distinguished from epithelial cells (red in Figure 6)

Figure 6: The three steps in a CAP study.

• Markers from the epithelial cells and stromal cells were used to obtain information such as the following (a sketch of one such feature appears after this list):

• Epithelial cell nuclear neighbor distance

• Relationship (distance) between morphologically regular and irregular epithelial nuclei

• Relationship between epithelial nuclei and the rest of the same cell

• Characteristics of stromal nuclei, the cell and rest of the stromal cells
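As promised above, here is a sketch of one such feature, the epithelial cell nuclear neighbor distance, computed with a k-d tree over already-segmented nucleus centroids. It illustrates the kind of quantity involved; it is not the published C-Path feature definition [38].

```python
# Illustrative nuclear-neighbor-distance feature (not the C-Path definition).
import numpy as np
from scipy.spatial import cKDTree

def mean_nuclear_neighbor_distance(centroids: np.ndarray) -> float:
    """centroids: (N, 2) array of epithelial nucleus positions in the image."""
    tree = cKDTree(centroids)
    # k=2 because each centroid's nearest neighbor at k=1 is itself.
    dists, _ = tree.query(centroids, k=2)
    return float(dists[:, 1].mean())
```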

The tissue samples were further classified according to whether the patient was alive at a 5-year time point after the tissue sample was obtained. Using the NKI set, classification with the tissue-derived markers was attempted to see whether marker characteristics differed between patients who were or were not alive at the 5-year time point. The classification scheme was then applied to the VGH set to see how predictive it was.
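Schematically, the protocol is train-on-NKI, test-on-VGH. The sketch below uses a generic scikit-learn classifier and synthetic placeholder features, since the study’s own regularized model and feature matrices are not reproduced here [38].

```python
# Schematic train/test protocol (generic classifier, synthetic placeholder data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Placeholders standing in for the image-derived markers and 5-year
# survival labels: 248 NKI training samples, 328 VGH test samples.
X_nki, y_nki = rng.normal(size=(248, 10)), rng.integers(0, 2, 248)
X_vgh, y_vgh = rng.normal(size=(328, 10)), rng.integers(0, 2, 328)

model = LogisticRegression(max_iter=1000).fit(X_nki, y_nki)
print("VGH accuracy:", accuracy_score(y_vgh, model.predict(X_vgh)))
```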

In addition to being quite predictive (95%), the study found that several features related to stromal cell spatial patterns were prognosticators of patient survival (as opposed to features from the actual cancerous epithelial cells), as had previously been postulated [37].

In addition to the two cell types, white blood cells – disease-fighting cells that are part of the body’s immune system (here, lymphocytes) – can also be distinguished in tissues. A newer study including this cell type also appears to improve prognosis [36]. A summary of the process is shown in Figure 7.

Figure 7: A three-cell analysis for breast CAP (Yuan et al. [36]) (a-d); further work (bottom panel) incorporating genomic analysis was very promising.

The colors for stromal (red) and epithelial (green) cells are different from the other study [38]. In the study reported by Yuan et al. [36], additional information derived from genomic characterization of the tissue (only a few cells are needed to derive this information) was found to complement the image-derived information in improving prognosis (Figure 7, bottom). However, these data were not subjected to the full suite of artificial intelligence tools currently available.

Conclusions and Hopes

Clearly, in all three image types, structured information was derived from unstructured images after processing. The sub-conclusions and hopes are listed in point form below:

Facial recognition clearly gained from fingerprint recognition. This is most likely due to the co-localization of researchers in these fields in computer vision departments at several universities. Collaborations between image researchers and artificial intelligence researchers have also taken this field to new levels, where predictive ability can reach that of humans.

Hope: Can the deep learning algorithm be used on fingerprint recognition with better success [10]?

The TED talk from March 23, 2015 [33] is a brilliant example of teaching a computer image recognition. What it does not mention is the deep learning algorithm used in DeepFace, which has been described by LeCun [32] as being very successful. Li [33] makes the statement, “Vision begins with the eyes but truly takes place in the brain.” Image recognition is just starting. Could 3D model development of objects account for the A-PIEO issues and improve recognition?

Hope: The TED talk mentions using instant image recognition to distinguish between crumpled paper and rocks once autonomous cars finally become a reality, and, similarly, the ability of drones to recognize objects automatically with more certainty. Can cameras that monitor streets in some neighborhoods automatically recognize guns and crime scenes in real time and summon appropriate law enforcement? Can these cameras recognize other scenes, such as people in an accident or a drowning victim, and summon appropriate help?

Finally, with image analytics related to cancer diagnosis and cancer prognosis, open data has to be considered. This would improve not only the image processing but also the application of the best artificial intelligence analytics, which would help tremendously. One of the study leaders, Beck AH of Stanford University, has recently made a plea to increase open access to histopathology images and other data for various cancer types [39].

Hope: Breast and other cancer detection and prognosis become quicker, earlier and more consistent. The ability to distinguish the various cancer subtypes, using more information including other cell types and genomic and proteomic information, even among breast cancer patients, in a quicker manner might help make combination therapy (personalized medicine) a reality.

References

  1. Enserink M, Chin G (2015) The end of privacy. Introduction. Science 347: 490-491.
  2. Gurcan MN, Boucheron LE, Can A, Madabhushi A, Rajpoot NM, et al. (2009) Histopathological image analysis: A review. IEEE Rev Biomed Eng 2: 147-171.
  3. Mullassery D, Horton CA, Wood CD, White MR (2008) Single live-cell imaging for systems biology. Essays Biochem 45: 121-133.
  4. Schneider I (2015) Increased use of tissue data in oncology: enhancing diagnostics, cutting costs and improving patients’ lives.
  5. Turk MA, Pentland AP (1991) Face recognition using eigenfaces. Proc IEEE: 586-591.
  6. Mendez AJ, Tahoces PG, Lado MJ, Souto M, Vidal JJ (1998) Computer-aided diagnosis: Automatic detection of malignant masses in digitized mammograms. Med Phys 25: 957-964.
  7. Sirovich L, Kirby M (1987) Low-dimensional procedure for the characterization of human faces. J Opt Soc Am A 4: 519-524.
  8. Weind KL, Maier CF, Rutt BK, Moussa M (1998) Invasive carcinomas and fibroadenomas of the breast: Comparison of microvessel distributions – implications for imaging modalities. Radiology 208: 477-483.
  9. Gurcan MN, Boucheron LE, Can A, Madabhushi A, Rajpoot NM, et al. (2009) Histopathological image analysis: A review. IEEE Rev Biomed Eng 2: 147-171.
  10. Bartels PH, Thompson D, Bibbo M, Weber JE (1992) Bayesian belief networks in quantitative histopathology. Anal Quant Cytol Histol 14: 459-473.
  11. Hamilton PW, Anderson N, Bartels PH, Thompson D (1994) Expert system support using Bayesian belief networks in the diagnosis of fine needle aspiration biopsy specimens of the breast. J Clin Pathol 47: 329-336.
  12. Zaeri N (2011) Minutiae-based fingerprint extraction and recognition. In: Yan J (ed.) Biometrics. InTech.
  13. Jafri R, Arabnia HR (2009) A survey of face recognition techniques. J Inf Process Systems 5: 41-68.
  14. Irniger CAM (2005) Graph matching: Filtering databases of graphs using machine learning techniques.
  15. Yan J (2011) Non-minutiae based fingerprint descriptor. In: Biometrics. InTech.
  16. Miller MI, Trouve A, Younes L (2002) On the metrics and Euler-Lagrange equations of computational anatomy. Annu Rev Biomed Eng 4: 375-405.
  17. Taigman Y, Yang M, Ranzato MA, Wolf L (2014b) DeepFace: Closing the gap to human-level performance in face verification. CVPR.
  18. Schubert W, Bonnekoh B, Pommer AJ, Philipsen L, Bockelmann R (2006) Analyzing proteome topology and function by automated fluorescence microscopy. Nat Biotechnol 24: 1270-1278.
  19. Yuan Y, Failmezger H, Rueda OM, Ali HR, Graf S, et al. (2012) Quantitative image analysis of cellular heterogeneity in breast tumors complements genomic profiling. Sci Transl Med 4: 1-10.
  20. Beck AH, Sangoi AR, Leung S, Marinelli RJ, Nielsen TO, et al. (2011) Systematic analysis of breast cancer morphology uncovers stromal features associated with survival. Sci Transl Med 3: 1-11.
  21. Lu C, Tang X (2014) Surpassing human-level face verification performance on LFW with GaussianFace. Proc 29th AAAI Conf on Artificial Intelligence.
Citation: Vaz RJ (2016) Fingerprints, Facial Recognition and Cancer. Drug Des 5:133.

Copyright: © 2016 Vaz RJ. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.