Human Visual Acuity
The Human Eye versus AI
G. Wayne Moore, B.Sc., MBA, FASE
Medical image interpretation often comprises both subjective and objective components. This is because each imaging modality has strengths and weaknesses, ranging from the breadth of its clinical use to what humans perceive as image resolution. Medical imaging devices such as MRI, CT, X-ray, and ultrasound all have legacy architectural designs predicated on the system's ability to acquire, process, and display an image in such a manner that a properly trained physician can readily visualize an anomaly, and from that observation make an accurate diagnosis and begin treatment.
Contemporary medical imaging devices are capable of producing images at least an order of magnitude higher in resolution than the same system types in the early 2000s. Display monitors have also advanced dramatically over this time frame as the industry transitioned from CRTs to digital displays. CRT monitors had rather limited performance parameters, for example a narrow grayscale range of 8 bits, or 256 levels of gray, requiring a great deal of image compression to fit as much useful information as possible into this relatively small range. The human eye is also not very good at seeing small variances in gray (we are much, much better at resolving color); various studies over the years show that we max out at resolving somewhere between 16 and 32 distinct levels of gray. With grayscale images we are also prone to optical illusions (see below) and can miss subtle image variances. With the advent of LCD displays, the grayscale range went from 8-bit (CRT) to 14-bit and beyond, with a 14-bit display producing 16,384 levels of gray, well beyond human visual acuity1. Because human visual acuity has not evolved over that same time frame, we have dramatically exceeded the limits of what even the trained human eye can actually "resolve" in an MRI, CT, or ultrasound image. A new paradigm that does not rely on human visual acuity must therefore be adopted.
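The arithmetic behind this dynamic-range squeeze can be made concrete with a short sketch. The window/level mapping below is a generic illustration of how a high-bit-depth acquisition is compressed into a display range, not the algorithm of any particular scanner; the function name and parameter choices are mine.

```python
import numpy as np

def window_level(raw, center, width, out_bits=8):
    """Map high-bit-depth raw pixel values into a smaller display range.

    Values below (center - width/2) clip to black, values above
    (center + width/2) clip to white; the window in between is
    stretched linearly across the output grayscale range.
    """
    out_max = (1 << out_bits) - 1              # e.g. 255 for an 8-bit display
    lo = center - width / 2.0
    scaled = (raw.astype(np.float64) - lo) / width * out_max
    return np.clip(scaled, 0, out_max).astype(np.uint16)

# A 14-bit acquisition has 2**14 = 16,384 possible gray levels,
# but an 8-bit display can show at most 256 of them at once.
raw = np.arange(0, 1 << 14, dtype=np.uint16)
display = window_level(raw, center=8192, width=4096)
print(display.min(), display.max())   # 0 255
```

Everything outside the chosen window is discarded, which is exactly the kind of limiting processing that an algorithm reading the raw data would not need.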
It is partly because of these visual acuity limitations that AI may prove to be a far superior method to human eyesight for analyzing data from MRI, CT, and other modalities. In short, AI needs no monitor to look at and has no visual acuity limits; it has unfettered access to the raw data. I believe future imaging systems will be designed to provide all acquired data, without image compression or other limiting processing, to an AI/ML/DNN algorithm: an algorithm with virtually unlimited resolution, incredible speed, and high diagnostic accuracy. It will happen; it is simply a matter of time.
The optical illusion below relies on the insensitivity of our eyes to shades of gray. The horizontal gray bar is actually the same shade throughout, which you can verify by covering the area surrounding the central bar.
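This kind of simultaneous-contrast image is easy to construct. The NumPy sketch below is illustrative (it is not the exact figure reproduced in the article): a smooth background gradient with a central bar set to a single constant gray value.

```python
import numpy as np

H, W = 200, 400
# Background: smooth left-to-right gradient from black to white.
background = np.tile(np.linspace(0, 255, W), (H, 1)).astype(np.uint8)

# Central horizontal bar: one constant mid-gray value throughout.
img = background.copy()
img[80:120, :] = 128

# Every pixel in the bar is identical, yet against the gradient it
# appears lighter over the dark side and darker over the light side.
print(np.unique(img[80:120, :]))   # [128]
```

The pixel data confirm what covering the surround reveals by eye: the perceived variation is supplied entirely by the background, not the bar.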
Next month we will look at the medico-legal complications of such a new diagnostic paradigm.
1. Powis, R., PhD, Ultrasound Physics for the Fun of It, a Technicare publication, 1983.
About the Author, G. Wayne Moore:
A 30-year veteran of the diagnostic ultrasound market, Wayne has held senior-level positions with several major medical equipment manufacturers, including Honeywell Medical Systems and Siemens Medical Solutions. Wayne has been directly involved in the development and commercialization of more than 15 technologically intensive ultrasound systems. He is widely published in the diagnostic ultrasound literature, is a sought-after speaker at medical imaging conferences, has served as an expert witness in multiple ultrasound litigations, and holds more than 16 United States patents related to ultrasound. Wayne obtained his MBA from the University of Denver – Daniels College of Business.
He was elected as a Fellow of the American Society of Echocardiography (FASE) in 2009.