Scientists have developed an algorithm that automates a key step in forensic fingerprint analysis, a change that may make the process more reliable and efficient.
The first big case involving fingerprint evidence in the United States was the murder trial of Thomas Jennings in Chicago in 1911. Jennings had broken into a home in the middle of the night and, when discovered by the homeowner, shot the man dead. He was convicted based on fingerprints left at the crime scene, and for most of the next century, fingerprints were considered, both in the courts and in the public imagination, to be all but infallible as a method of identification.
More recently, however, research has shown that fingerprint examination can produce erroneous results. For instance, a 2009 report from the National Academy of Sciences found that results “are not necessarily repeatable from examiner to examiner,” and that even experienced examiners might disagree with their own past conclusions when they re-examine the same prints at a later date. Such errors can lead to innocent people being wrongly accused and to criminals remaining free to commit more crimes.
But scientists have been working to reduce the opportunities for human error. This week, scientists from the National Institute of Standards and Technology (NIST) and Michigan State University report that they have developed an algorithm that automates a key step in the fingerprint analysis process.
The researchers used machine learning to build their algorithm. Unlike traditional programming, in which a programmer writes out explicit instructions for the computer to follow, machine learning trains the computer to recognize patterns by showing it examples.
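To make that distinction concrete, here is a minimal sketch of the train-by-example idea in Python, using scikit-learn. This is an illustration of machine learning in general, not the NIST and Michigan State system; the feature vectors and quality labels below are synthetic stand-ins for the kinds of measurements one might extract from fingerprint images.

```python
# A minimal, hypothetical sketch of machine learning: instead of writing
# explicit rules, we show a classifier labeled examples and let it learn
# the pattern. The features here are synthetic stand-ins, NOT data or
# methods from the NIST/MSU study.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Synthetic training data: 200 "prints", each described by 5 numeric
# features (imagine ridge-clarity scores or similar measurements).
# Label 1 = "usable print", label 0 = "not usable".
n_samples = 200
features = rng.normal(size=(n_samples, 5))
labels = (features[:, 0] + 0.5 * features[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)

# Training is the "showing it examples" step: the model infers the
# pattern from labeled data rather than from hand-coded instructions.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Evaluate on examples the model has never seen.
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The key point the example makes is that the same code works for any pattern present in the labeled data; nowhere do we spell out the rule that separates the two classes, and the model discovers it from the examples alone.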
News Source: https://www.eurekalert.org/pub_releases/2017-08/nios-afa081417.php