Deep Learning and Ultrasound
Current Status – 2017
Wayne Moore, B.Sc., MBA, FASE
March 14, 2017
Before going into detail in this month’s article on deep learning, I want to take a minute to recognize Moore’s Law (note: not Wayne Moore, rather Gordon Moore, circa 1970). Moore’s Law says that the number of transistors that can be packed into a given unit of space will roughly double every two years. As the reader knows, this prediction has held impressively true over the last ~45 years, a fact that has allowed for the economical creation of everything from the Kerbal Space Program to wireless hand-held ultrasound devices, and the continuing computerization of the economy in virtually every regard. At the risk of oversimplifying, massive computational power in 2017 is almost free compared to where it was 20 short years ago.

With massive amounts of computational power come massive amounts of data. But data isn’t information. To translate data into information, especially at the scale we are discussing, a new software paradigm needed to be brought to life. Deep-learning software (DLS) has been described as an attempt to mimic the activity in the frontal lobe of the brain, where complex thinking and decision making occur. In this sense DLS quite literally learns to recognize various patterns within a digital representation of an image. As I discussed in last month’s article, when it comes to ultrasound images our ability to spot certain patterns resident within the B-mode image is limited by our visual acuity – this is not an issue with DLS.
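To put Moore’s Law in concrete terms, here is a back-of-the-envelope calculation (my illustration, not part of the original argument): doubling every two years over roughly 45 years compounds to a growth factor in the millions.

```python
# Illustrative Moore's Law arithmetic: transistor density doubling
# roughly every two years. The 45-year span is the approximate
# period referenced above; the exact figures are illustrative.
years = 45
doubling_period = 2                  # years per doubling
doublings = years / doubling_period  # 22.5 doublings
growth_factor = 2 ** doublings       # roughly 5.9 million-fold

print(f"{doublings:.1f} doublings -> ~{growth_factor:,.0f}x more transistors")
```

A 22.5-fold doubling works out to roughly a six-million-fold increase in transistor density – which is why computation that was exotic 45 years ago is, as noted above, almost free today.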
Throwing off the constraints of visual acuity and specialty training holds the theoretical promise, coupled with other factors, of earlier detection of disease processes; more consistent analysis of an ultrasound image, i.e., reducing the subjectivity in the reading of an image; and even allowing non-specialty physicians or other healthcare professionals to acquire an ultrasound image anywhere in the world and have it instantly “read” by a DLS algorithm. DLS could also prompt the user to make slight angulations or rotations of the probe during the exam to produce a better image. Of the current imaging modalities, ultrasound is the most challenging for DLS. Part of the reason is that the quality of the image acquisition is driven primarily by the training, knowledge, skill set, and experience of the user. So, in addition to learning to read a clinically acceptable image, DLS will also have to learn what an unacceptable image looks like and, using that knowledge, guide the user during the examination toward acquiring a more recognizable image – no easy feat.
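The acceptable-versus-unacceptable distinction described above is, at its core, a supervised classification problem. The following toy sketch is my illustration only – real DLS would use a deep neural network on raw pixel data, not the simple logistic regression on two made-up quality features (brightness, sharpness) shown here – but it captures the idea of learning an image-quality gate from labeled examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-image quality features:
# column 0 = mean brightness, column 1 = edge sharpness.
# "Acceptable" scans cluster at higher sharpness (hypothetical data).
n = 200
acceptable   = rng.normal([0.6, 0.8], 0.05, size=(n, 2))
unacceptable = rng.normal([0.6, 0.3], 0.05, size=(n, 2))
X = np.vstack([acceptable, unacceptable])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = acceptable

# Logistic regression trained by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability
    w -= 0.5 * (X.T @ (p - y)) / len(y)      # gradient step on weights
    b -= 0.5 * np.mean(p - y)                # gradient step on bias

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) >= 0.5
accuracy = np.mean(pred == y)
print(f"training accuracy on toy data: {accuracy:.2f}")
```

In a real system, the same learned decision boundary could drive live feedback during scanning – flagging an image as unacceptable and prompting the probe adjustments described above.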
The good news is that there is a ton of data out there waiting to be mined and fed into DLS. For example, Centers for Medicare & Medicaid Services (CMS) data show that over 400 million imaging procedures are performed in the United States every year. That means millions of ultrasound examinations every year that DLS algorithms can capture and scan for image patterns, clinical indication, patient demographics, diagnosis, and patient outcome – and with this data get smarter and smarter. Data-driven clinical diagnosis and patient-management decision making can, at a minimum, enhance the physician’s confidence in their ultimate course with the patient. DLS has a home in diagnostic ultrasound; it is not a question of “if”, but “when”.
There is a lot of work to do in creating and deploying a practical implementation of DLS in the ultrasound arena, but a great deal of work is already being done with this modality. A friend of mine, Mark Michalski, MD, is the Director of the Massachusetts General Hospital and Brigham and Women’s Hospital Center for Clinical Data Science in Boston, and his team is making substantial strides in this area of study and development. 2017 will witness the beginning of the convergence of Moore’s Law computational power with ever-better DLS algorithms, nudging us closer to a new and amazing medical imaging paradigm: the actualization of AI in medicine. Buckle up, it’s going to be an interesting ride.
About the Author, G. Wayne Moore:
A 30-year veteran of the diagnostic ultrasound market, Wayne has held senior-level positions with several major medical equipment manufacturers, including Honeywell Medical Systems and Siemens Medical Solutions. Wayne has been directly involved in the development and commercialization of more than 15 technologically intensive ultrasound systems. He is widely published in the diagnostic ultrasound literature, is a sought-after speaker at medical imaging conferences, has served as an expert witness in multiple ultrasound litigations, and holds more than 16 U.S. ultrasound-related patents. Wayne obtained his MBA from the University of Denver – Daniels College of Business.
He was elected as a Fellow of the American Society of Echocardiography (FASE) in 2009.