Enterprise Imaging: Not Business as Usual

A common complaint is the difficulty of getting traction for Enterprise Imaging (EI) systems in health care organizations, especially within IT departments. Three issues arise when the topic of EI comes up, usually in the following order. First, EI is treated as just another project: estimate the cost and the timeline, and it may get on the approved list. Next comes the question of ROI: why should we do this? The last hurdle may be a stipulation to use the existing imaging resources for EI.

Image from: Roth, C.J., Lannum, L.M. & Persons, K.R. Foundation for Enterprise Imaging: HIMSS-SIIM Collaborative White Paper. J Digit Imaging (2016) 29:530. https://doi.org/10.1007/s10278-016-9882-0

Enterprise imaging is not another project but an enterprise strategy. EI puts all of the images generated in the health system into an indexed, digital archive, where all clinicians can access them immediately along with relevant information on how, when, and where each image was acquired.
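
As an illustration, here is a minimal sketch of what one such index entry might capture; all field names and values are hypothetical, not taken from any standard or vendor:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class ImageIndexEntry:
        """Hypothetical index record for one study in an enterprise archive."""
        patient_id: str        # enterprise master patient index identifier
        department: str        # acquiring department, e.g. "Wound Care"
        modality: str          # acquisition technology, e.g. visible light photo
        acquired_at: datetime  # when the image was acquired
        acquired_where: str    # where it was acquired (a location, or "mobile")
        storage_uri: str       # where the object lives in the archive

    entry = ImageIndexEntry(
        patient_id="MRN-0001",
        department="Wound Care",
        modality="visible light photo",
        acquired_at=datetime(2018, 3, 1, 14, 30),
        acquired_where="Clinic 3B",
        storage_uri="archive://studies/2018/03/abc123",
    )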

An assessment of image generating departments can illustrate the magnitude of an EI program. Finding 60-plus departments acquiring images with hundreds of devices is common. EI is not just another radiology or cardiology PACS; it is an enterprise strategy and a commitment to 60 or more imaging projects over a period of years.

Enterprise imaging is a value proposition to both the image generating departments and the enterprise as a whole. Storing images with appropriate documentation allows these exams to be billed, recovering lost revenue that benefits the department and the enterprise.

Having the exams indexed in a central archive reduces cost by eliminating the manual searches departmental staff perform to retrieve images. These images are also viewable in the EMR by appropriate clinical staff, which benefits the enterprise.

Moving images out of departmental silos also increases security, avoiding potential breaches and fines; this cost avoidance benefits both the enterprise and the department. The increased revenue, decreased cost, and cost avoidance can be quantified for each department that moves to the EI system.
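
A back-of-the-envelope way to quantify those three streams for one department; the model and the figures are illustrative placeholders, not benchmarks:

    def annual_ei_value(recovered_billing, retrieval_hours_saved, hourly_rate,
                        expected_breach_cost_avoided):
        """Sum the three value streams for one department (hypothetical model).

        recovered_billing            -- newly billable exams per year, in dollars
        retrieval_hours_saved        -- manual image-search hours eliminated
        hourly_rate                  -- loaded cost of departmental staff time
        expected_breach_cost_avoided -- annual fines/breach costs avoided
        """
        return (recovered_billing
                + retrieval_hours_saved * hourly_rate
                + expected_breach_cost_avoided)

    # Example with modest placeholder figures for one department:
    print(annual_ei_value(120_000, 500, 40.0, 25_000))  # -> 165000.0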

The HIMSS Analytics Electronic Medical Record Adoption Model (EMRAM) was modified on January 1, 2018 to include patient-centric storage of non-DICOM images in Stage 1. The HIMSS Analytics Digital Imaging Adoption Model (DIAM) addresses Enterprise Imaging with an eight-stage model and is on track to be finalized by the end of 2018.

Most US hospitals participate in the EMRAM program, and all will need to address EI to continue progressing in it.

Enterprise imaging is not another PACS project or a series of PACS projects, nor is EI a radiology project. The workflows and the types of images do not follow the PACS paradigm: images are often taken at the point of care, without orders, using visible light devices that are not DICOM compatible. A separate Enterprise Imaging team and governance structure are needed to manage and implement these projects, and successful EI strategies usually have strong C-suite support.

The governance council will determine the EI roadmap, set priorities, and establish criteria for selecting candidates to move to the EI system. The council should include members from IT, operations, cardiology, radiology, and representatives from the other o’logies.

The governance council membership will evolve as the EI strategy progresses and other departments are brought onto the EI system. Initial members from the other o’logies are often the departments with the greatest interest and/or the greatest needs. Some health systems require a clinician champion from a department before starting the project that brings it onto the EI system.

Understanding the breadth of an EI strategy, the overall value to both the departments and the enterprise, and the importance of governance is crucial to the success of Enterprise Imaging.

Radiology: The FDA and Artificial Intelligence

Out of the dozens of AI companies in radiology, eight have apps cleared to market by the FDA (as of March 18, 2018). Almost all of those with FDA clearance applied successfully via the FDA 510(k) premarket notification process by identifying a similar device already cleared to market by the FDA, the “predicate device”.

One company has been FDA cleared to market by using the De Novo premarket review pathway. The De Novo process provides a pathway to classify novel medical devices for which controls provide reasonable assurance of safety and effectiveness for the intended use but for which there is no marketed predicate device. De Novo is a risk-based classification process. Devices that are classified into Class I or Class II may be marketed and used as predicates for future premarket notification (FDA 510(k)) submissions. The De Novo classification process was created in 1997 and expanded in 2012.

The FDA has been struggling to figure out how to handle AI apps. AI apps, especially those employing deep learning neural networks, present two major issues for the agency. One problem is the lack of transparency, since it cannot be determined how the AI app reached its conclusion. The other issue is that an AI app is dynamic, since it keeps learning: the app that was shipped on day 1 to hospital A will not be the same app on day 5. In addition, that app will also be different from the one shipped to hospital B, since both are learning on different data.
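
A toy illustration of why this worries regulators: two copies of an identical shipped model, each continuing to learn on its own local data, quickly stop being the same model. This is pure illustration, not any vendor’s algorithm:

    # Toy online learner: identical at "shipment", divergent after local updates.
    def update(weights, example, label, lr=0.1):
        """One perceptron-style update; returns new weights."""
        pred = sum(w * x for w, x in zip(weights, example))
        err = label - pred
        return [w + lr * err * x for w, x in zip(weights, example)]

    shipped = [0.0, 0.0]                # same model sent to both hospitals
    hosp_a, hosp_b = shipped[:], shipped[:]

    hosp_a = update(hosp_a, [1.0, 0.0], 1.0)  # hospital A's local case mix
    hosp_b = update(hosp_b, [0.0, 1.0], 1.0)  # hospital B's local case mix

    print(hosp_a == hosp_b)  # False: the "day 5" models differ at each site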

On July 27, 2017 the FDA published its “Digital Health Innovation Action Plan”. In this plan, the FDA recognized that its traditional approach to moderate-risk hardware-based medical devices is not well suited to the faster, iterative design, development, and validation methods employed for software-based technologies. The FDA also noted that traditional implementation of premarket requirements might impede or delay access to evolving software products.

In the “Digital Health Innovation Action Plan”, the FDA laid out a trial program to “pre-certify” digital health developers who demonstrate operational evidence that they excel in software design, development, and testing. Pre-certified developers could qualify to market their lower-risk devices without additional FDA review or with a more streamlined premarket review.

A Pre-cert pilot program was also launched on July 27, 2017 with the Digital Health Innovation Action Plan. On September 26, 2017 the nine companies selected to participate in the Pre-cert pilot program were announced: Apple, Fitbit, Johnson & Johnson, Pear Therapeutics, Phosphorus, Roche, Samsung, Tidepool, and Verily. Over 100 companies expressed interest in the program. The FDA provides status updates on the Pre-cert pilot and has published the Software Precertification Program Model.

The FDA also announced three new guidances, two draft and one final, addressing provisions of the 21st Century Cures Act: clarifying where the FDA does not need to be involved and defining its role where involvement is needed. The draft guidance on “Clinical and Patient Decision Support Software” outlines the FDA’s approach to Clinical Decision Support (CDS), which is germane to many of the AI applications in radiology.

The CDS draft guidance is intended to make clear what types of CDS would no longer be defined as a medical device, and thus would not be regulated by the agency. For example, CDS that allows the provider to independently review the basis for its recommendations is generally excluded from the FDA’s regulation. This type of CDS can include software that suggests a provider order liver function tests before starting statin medication, consistent with clinical guidelines and approved drug labeling.

However, the FDA will continue to enforce oversight of software programs that are intended to process or analyze medical images, signals from in vitro diagnostic devices, or patterns from a signal acquisition system such as an electrocardiograph, and that use analytical functionalities to make treatment recommendations, as these remain medical devices under the Cures Act. These are areas in which the information provided by the clinical decision software, if not accurate, has the potential for significant patient harm, and the FDA has an important role in ensuring the safety and effectiveness of these products.

Interestingly, these FDA documents make no mention of artificial intelligence. Nor do any of the FDA 510(k) summaries of AI applications that have been cleared to market. However, the FDA does address AI in the announcement of the Viz.AI Contact application, which was cleared to market under the De Novo process. The Contact application is a CDS package that analyzes CT scans and other information to notify providers of a potential stroke patient.

Another company has entered the De Novo process for an AI application. IDx filed its De Novo application in early February 2018 for its AI-based system for the autonomous detection of diabetic retinopathy.

The Viz.AI announcement stated that “The FDA is currently creating a regulatory framework for these products that encourages developers to create, adapt and expand the functionalities of their software to aid providers in diagnosing and treating diseases and conditions.” This may be a combination of the CDS draft guidance and the Pre-cert program. The FDA has included AI in its definition of Software as a Medical Device (SaMD), which is being addressed by the Pre-cert program model. Specific treatment of AI by the FDA is currently a work in progress. In the meantime, the De Novo process may become the path for clearance of new AI apps by the FDA.

Artificial Intelligence at RSNA 17: Some Observations

Artificial Intelligence (AI) continued to be a hot topic at the Radiological Society of North America (RSNA) 2017 annual meeting. It was reported that RSNA 2017 had four times the number of AI sessions as RSNA 2016. Two sessions that I attended were standing room only, with attendees turned away before the papers began.

One of the new introductions at this meeting was the AI imaging distribution platform. EnvoyAI (a TeraRecon company) announced its “EnvoyAI Exchange”, where end users can buy access to FDA 510(k)-cleared AI algorithms and where developers can test and refine their products. At RSNA 2017, three algorithms on the Exchange were FDA 510(k) cleared and available for purchase. A total of 35 algorithms from 14 developers were on the exchange in various stages of development.

Nuance had a work-in-progress demonstration of its “AI Marketplace”, which integrated AI applications from multiple vendors into its PowerShare image sharing platform. The marketplace will host multiple AI applications that the user can select to run on specific exams in PowerShare, with the results auto-populating PowerScribe reports. Commercial availability is yet to be determined.

Siemens is adding AI applications to its “Digital Ecosystem” platform and announced Arterys as a partner in February 2017. Arterys has received FDA 510(k) clearance for its web-based imaging interpretation platform, MICA, which supports interactive AI imaging applications. The Arterys MICA platform currently offers an AI assistant for cardiac MR image analysis that is FDA 510(k) cleared and has lung and liver analysis solutions pending FDA clearance.

Most PACS vendors were using third parties for AI applications, either in addition to or in lieu of their own development, and a number of partnerships had been announced by the end of RSNA 2017.

As of 12/28/2017, the partner companies with FDA 510(k) clearance to market for some of their AI applications were Arterys, DiA Imaging Analysis (formerly DiACardio), Imbio, and RadLogics.

Presenters cautioned that deep learning neural networks are very large models that are difficult to train. Large amounts of annotated and diverse data are required. The models are not transparent, meaning that one cannot see how the results are obtained. Validation of these models is nontrivial and will need to be done at multiple sites by clinicians. Overcoming these challenges will take time.

Developing an Enterprise Imaging System Plan: First Steps

Looking beyond radiology and cardiology to build a system to collect, index, manage, and communicate images of all types throughout the enterprise is a large and complex undertaking. Key steps in developing a plan are to size the project, establish a governance structure, and lay out a roadmap. Determining the magnitude of the Enterprise Imaging project provides information that will aid in developing the strategy, the governance structure, and the roadmap.

To size the project, first identify the image generating departments in the enterprise, such as respiratory care, otolaryngology, ophthalmology, pathology, dermatology, wound care, sleep labs, and many others. One institution identified 60 image generating departments, some at the outset of the project and others during its course. Even now, years into the project and with half of the departments on the enterprise system, they report that new requests keep coming in.

For each of these departments, basic information about the images needs to be collected (a structured way to record the answers is sketched after the list).
   • Image type: monochromatic/color; still; motion; motion with audio; motion with waveforms
   • Image sizes and frame rates (if motion)
   • Acquisition device: technology; manufacturer; model
   • Where images are acquired: location; mobile
   • Current storage technique and device: digital, paper, thermal printer, analog video, film, or none
   • Image formats: standards employed
   • Study sizes
   • Study volumes
   • Associated metadata: currently acquired; needed to be acquired
   • Archive requirements: medical and legal; all images or key images; how long
   • Viewing needs: identify the users; specialized processing needs; mobile needs
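
As a minimal sketch, the survey answers for each department could be recorded in a structure like the following; the field names and example values are hypothetical:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class DepartmentImagingSurvey:
        """Hypothetical record of the inventory items listed above."""
        department: str
        image_types: List[str]          # e.g. ["color still", "motion with audio"]
        acquisition_devices: List[str]  # technology; manufacturer; model
        acquisition_sites: List[str]    # fixed locations and/or "mobile"
        current_storage: str            # digital, paper, thermal print, film, none
        image_formats: List[str]        # standards employed, e.g. DICOM, JPEG
        avg_study_size_mb: float
        annual_study_volume: int
        metadata_needed: List[str]      # still to be captured for indexing
        retention_years: int            # medical/legal archive requirement
        viewers: List[str]              # who needs the images, and where

    survey = DepartmentImagingSurvey(
        department="Ophthalmology",
        image_types=["color still"],
        acquisition_devices=["fundus camera, vendor X, model Y"],
        acquisition_sites=["Eye Clinic"],
        current_storage="digital, departmental silo",
        image_formats=["JPEG"],
        avg_study_size_mb=15.0,
        annual_study_volume=8000,
        metadata_needed=["MRN", "laterality"],
        retention_years=7,
        viewers=["ophthalmologists", "EMR users"],
    )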

In addition, the needs of each area for image sharing should be identified, both import and export of images, to and from the enterprise.

Once the generators and consumers of the images have been identified, a governance structure can be proposed. This step can be delicate since it is moving imaging beyond cardiology and radiology to encompass the enterprise and may involve silos of image storage in other departments as well. To succeed, the governance structure needs to be broader than a single, image-generating department. Many organizations have established an Enterprise Imaging department and pulled in personnel from existing PACS support teams to staff it. A multi-disciplinary physician advisory group is important to provide guidance to the program and to help communicate the program to the enterprise before and during roll-outs.

Criteria for an implementation roadmap should be established and reviewed with the governing body. Consideration needs to be given to the enterprise strategic goals, the volume and nature of the requests for enterprise image access, and plans to replace existing PACS equipment. Often a first step in the roadmap is to bring in the established enterprise disciplines of radiology and cardiology.

The next departments to be added to the Enterprise Imaging system may be the low-hanging fruit: those that are already digital, interfaced with ADT, and perhaps even DICOM compatible. Some enterprises have also required a physician champion in a department before adding that department to the roadmap. The roadmap is not usually complete at the outset of the project and evolves as the project progresses. One large healthcare system that has been developing its enterprise imaging system for several years has reported that no one can say when it will be done; as the project progresses, new places emerge with image storage and communication needs.

Once the strategy is set, the size of the project estimated, and a roadmap in place, workflow analyses can be examined, indexing strategies started, system architectures proposed and analyzed, and schedules and budgets developed for the initial phase. Communication and promotion of the Enterprise Imaging system can then proceed. Integration of the Enterprise Imaging system with the EMR broadens access to all images and will benefit providers and quality of care both inside and outside the enterprise.

Look at the Dark Side of the Cloud Before Using It for Archiving Images

Introduction
The attractiveness of the economy of scale of cloud services has long drawn the attention of health system CIOs looking at medical image storage. Now that Enterprise Image Archives are coming, CIO interest in the cloud has increased, as has the number of companies offering cloud services to healthcare. When considering the cloud, it is important to look at the associated risks and take measures to mitigate them.

Security concerns about the cloud have prevented many healthcare organizations from signing up, and the new HIPAA rules make security an even bigger issue. Moving to the cloud can also mean giving up control of the image data, since it is on someone else's hardware. Service outages are another issue to be aware of, and retrieving the data upon termination of the service can be problematic as well. Current users of the cloud have run into all of these problems; healthcare providers can take advantage of their experiences.

Service Outages
All cloud services experience outages, and Service Level Agreements (SLAs) are often carefully written to exclude specific portions of the hardware and software in order to limit the provider's liability. In the first three months of 2013, Microsoft, Google, and Amazon, all of which offer major cloud storage services, had significant outages.

Microsoft’s Azure cloud storage service went down for 12 hours in February 2013. Google Drive cloud storage was down for 17 hours in March 2013. Amazon Web Services was down for almost an hour in January 2013. In December 2012 the Amazon service was down for 24 hours. In total, Amazon had four multi-hour outages in 2012.

In most cases, all the data affected in these outages was recovered. Although recovered, the data was often unavailable for some time after the outage.

Reasons for the outages vary.  Often the outage is due to an update of hardware or software in the network or servers that went awry.  Hardware failures also occur as data centers are pushed to ever increasing power densities.

Users of cloud storage services need contingency plans for outages. Cloud services offer redundant storage options, including storage in multiple data centers or availability zones. Not only do these options come at additional cost, they are not fail-safe either: sometimes the switchover to the other center takes some time to occur, or does not happen at all.
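
A contingency plan usually means the application layer, not just the vendor, handles failover. A minimal sketch under stated assumptions; the fetch callback and zone names are hypothetical stand-ins for whatever retrieval call the vendor's SDK offers:

    def fetch_study(study_id, zones, fetch, timeout_s=5):
        """Try each configured zone in order; raise only if all fail.

        fetch -- caller-supplied callable (zone, study_id, timeout_s) -> bytes,
                 standing in for the vendor SDK's retrieval call.
        """
        last_error = None
        for zone in zones:
            try:
                return fetch(zone, study_id, timeout_s)
            except Exception as exc:   # zone outage, network error, timeout, ...
                last_error = exc       # remember the failure, try the next zone
        raise RuntimeError(f"all zones failed for {study_id}") from last_error

    # Usage: fetch_study("CT-123", ["us-east", "us-west"], my_vendor_fetch)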

Data Loss
Losing control of one’s data can lead to losing the data as well. Millions of users of Megaupload’s file sharing service found this out in January 2012, when the FBI shut down the Megaupload web site and seized the servers Megaupload leased from a cloud hosting service in Virginia.

The servers were seized and the site shut down due to copyright violations involving music and movies stored on the servers. The fact that millions of files were legitimate did not matter, since they were commingled with the pirated files and could not be separated out.

As the case meandered through the justice system, the files remained frozen. The Dutch hosting service for Megaupload had never received a request to preserve the data. Thus, in February 2013, it re-provisioned 630 servers, deleting all the Megaupload data.

In the United States, the Department of Justice established a process for users to regain their data. It was so onerous and lengthy that few users were able to recover their data. As of October 2013, the hosting service in the US was told that the files were no longer needed and could be destroyed. However, the data could not be returned to its legitimate owners, even though an independent analysis demonstrated that the majority of the files were not pirated.

The Megaupload users learned that putting data into the cloud means losing control of the data. They had no control or knowledge of where it was physically stored or what other data was on the same servers. In the end, they no longer even had access to the data.

Amazon, Microsoft, and other major cloud service companies develop and control their own data centers for their cloud services. In addition, to maintain growth and handle spikes in demand, they lease additional capacity from other hosting services. The ultimate owner of the hardware has the most control over the data, and it is important to know who that is. The practice of leasing has implications for HIPAA compliance as well.

Security
Two of the motivating factors behind the development of the Internet by ARPA were to have a decentralized network and to enable resource sharing. Any two servers on the network could connect over multiple paths as opposed to a single, fixed point connection. Any attack that took out one path would not disrupt the communication.

As data and services move to large cloud services, the Internet is being centralized. One of the effects of this centralization is that there are fewer, more concentrated points of failure: one cloud service having an outage can bring down dozens of web services.

The cloud presents fewer and richer targets for hackers. In March 2013, Evernote was hacked, and the user names, emails, and encrypted passwords of all its users were accessed. In 2012, Dropbox, a file sharing and backup service, was similarly hacked. One of the more extreme examples was in 2011, when Sony’s PlayStation Network had 77 million accounts compromised.

For healthcare providers considering the cloud for medical image storage, the new HIPAA rules, enforced as of September 23, 2013, make security an even greater concern. The healthcare provider and the HIPAA Business Associate are both responsible if the Business Associate fails an audit or commits a breach. Over 20% of the reported data breaches since 2009 have been caused by Business Associates.

In addition, providers are responsible for ensuring that any subcontractors a HIPAA Business Associate uses are also compliant. Thus, the cloud vendor’s data center must have a risk assessment and be able to pass a HIPAA security audit, as must any hosting service the cloud vendor employs. As part of investigating cloud storage, healthcare providers need to know the locations of all data centers employed, the company owning the servers, and the company operating the servers, and examine the security risk analysis done by each entity.
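
In practice this means recording, for every data center in the vendor/subcontractor chain, who owns it, who operates it, and when its risk analysis was last examined. A hypothetical due diligence record:

    from dataclasses import dataclass

    @dataclass
    class DataCenterDueDiligence:
        """One entry per data center in the vendor/subcontractor chain."""
        location: str
        hardware_owner: str          # who ultimately owns the servers
        operator: str                # who runs them day to day
        baa_in_place: bool           # Business Associate Agreement signed?
        risk_analysis_examined: str  # date the security risk analysis was reviewed

    chain = [
        DataCenterDueDiligence(
            location="Virginia, US",
            hardware_owner="HostCo (lessor)",  # hypothetical hosting company
            operator="CloudVendor",            # hypothetical cloud vendor
            baa_in_place=True,
            risk_analysis_examined="2013-09-01",
        ),
    ]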

The security risk analysis must be kept current. This means that any change in the systems storing or transferring the images, by the cloud vendor or its subcontractors and their subcontractors, requires an update to the security risk analysis to account for any changes in risk.

Data Migration
There are data migration implications in the cloud just as anywhere else. Someday one may wish to change services or leave the cloud, or, as happened recently, the cloud could leave you.

Nirvanix was the cloud hosting company behind IBM’s SmartCloud Storage service, among others. In mid-September 2013, Nirvanix told its customers that, due to a failed funding round, it would be closing by the end of the month and customers should migrate their data within two weeks. IBM was not commenting, and Aorta Cloud, another large company using the Nirvanix service, announced that it had contingency plans for its own clients but could not help other large Nirvanix customers.

Nirvanix ended up staying open until October 15th. It partnered with IBM, CoreSite, and HP to get the data out and offered customers the option of either receiving their data back or transitioning to another service such as Amazon, Microsoft, or Google. No official notice was given on how long the partners could keep the Nirvanix servers up and the data transfers going.

The more common need for data migration is to change services or move to a different storage paradigm. A common practice is for the user to transfer all the data prior to terminating the service. Downloads are charged per gigabyte transferred. To speed up the process, some cloud services offer to bypass the Internet, either by transferring the data to a portable storage device or by providing a high-speed direct connection, at additional charge.

A cloud user’s data may be deleted immediately upon termination of the services. It is important to recover all data prior to termination, and it is equally essential to have the data format, transfer method, and time frame agreed upon in the SLA. There have been reports of data being returned encrypted on media that required special hardware to read.

Performance
With 1 Gbps and 10 Gbps networks common in healthcare systems, accessing images over the Internet will not be as fast as accessing images over the internal network. A PACS system getting a prior exam from the local cache may take a few seconds. Getting the same exam from the cloud depends on the provider’s connection to the Internet and how much of that bandwidth is available, that is, how many other applications are accessing the Internet at the time. It also depends on Internet latency, which is a function of the path the data takes. The same exam could take minutes to transfer instead of seconds.
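
The arithmetic behind “minutes instead of seconds” is straightforward; the link speeds and study size below are illustrative assumptions:

    def transfer_seconds(study_mb, link_mbps, share_available=1.0, latency_s=0.05):
        """Rough time to pull one study: size / effective bandwidth, plus latency."""
        effective_mbps = link_mbps * share_available
        return (study_mb * 8) / effective_mbps + latency_s

    study_mb = 500  # e.g. a large prior CT exam

    print(transfer_seconds(study_mb, 10_000))     # internal 10 Gbps LAN: ~0.45 s
    print(transfer_seconds(study_mb, 100, 0.25))  # shared 100 Mbps Internet link: ~160 s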

Amazon Web Services offers a direct connection to the cloud that bypasses the Internet. Options are available up to 10 Gbps, at a per-hour connect cost and a per-GB transfer cost. Even with the direct connection, the exam transfer will still be slower from the cloud than on site, depending on what format conversions are necessary in the cloud servers.
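
A rough cost model for such a connection; the rates in the example are placeholders, so check the vendor's current price sheet:

    def direct_connect_monthly_cost(port_hours, hourly_rate, gb_out, per_gb_rate):
        """Port-hours plus data transferred out (placeholder rates)."""
        return port_hours * hourly_rate + gb_out * per_gb_rate

    # Example: a port up all month (730 h) moving 2 TB of priors back on site.
    print(direct_connect_monthly_cost(730, 2.25, 2_000, 0.03))  # -> 1702.5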

Conclusion
A business continuity plan should be in place to assure that images required for priors or for use in procedures are available during a cloud outage. This plan may include increasing the size of the onsite image cache. To determine the size of the cache, each discipline using the images must determine how long image access is absolutely essential. For radiology it may be a period of years, while for wound care it may be a period of months. The size of the cache should also reflect performance: beyond what age can each discipline afford to wait for images to be retrieved from the cloud, and how long can they wait?
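
Sizing the cache is then simple arithmetic per discipline; the retention windows and volumes below are examples only:

    def cache_gb(studies_per_day, avg_study_gb, retention_days):
        """On-site cache needed to ride out a cloud outage for one discipline."""
        return studies_per_day * avg_study_gb * retention_days

    # Example: radiology needs two years of priors; wound care needs 90 days.
    total = cache_gb(400, 0.25, 730) + cache_gb(30, 0.02, 90)
    print(total)  # -> 73054.0 GB, dominated by radiology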

To protect against data loss, a disaster recovery system should be in place. This could employ a different cloud vendor, but one should verify that the two cloud vendors are not sharing the same hosting service. A better approach may be to use a data center that is off site and under the control of the health system. The disaster recovery system can be planned to avoid the issues associated with data migration, should one decide to change cloud vendors or should the cloud leave you suddenly.

Security issues will require the provider to carefully vet the cloud service and negotiate the SLA. The cloud service needs to be compliant with the HIPAA regulations as they are now, not as they used to be. Ask the cloud service how often they update their security analysis; if the answer is based on the calendar, e.g., once a year, there is a major problem. At that point the healthcare provider needs to assess how much time they can spend educating the cloud service and/or whether they should be considering alternatives.

The process for notification of breaches needs to be carefully spelled out in the SLA. The healthcare provider is responsible for notifying patients within 60 days and needs time to do so. This may mean that the cloud service needs to notify the healthcare provider within 10 days.
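
A simple way to encode that contractual math; the 10-day Business Associate window is the example above, not a regulatory requirement:

    from datetime import date, timedelta

    def notification_deadlines(breach_discovered, ba_notice_days=10,
                               provider_window_days=60):
        """Dates by which the Business Associate must notify the provider
        and the provider must notify patients (60-day HIPAA window)."""
        ba_deadline = breach_discovered + timedelta(days=ba_notice_days)
        patient_deadline = breach_discovered + timedelta(days=provider_window_days)
        return ba_deadline, patient_deadline

    print(notification_deadlines(date(2013, 10, 1)))
    # -> (datetime.date(2013, 10, 11), datetime.date(2013, 11, 30))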

Once all these measures are planned, the costs and risks associated with archiving medical images in the cloud may be reconsidered. It may not be as inexpensive as first thought. Contingency plans and well written SLAs are a must.

References

Outages
The worst cloud outages of 2013 (so far)
Cloud Computing – Outages: Analysis of Key Outages 2009-2012
The Most Recent Amazon Outage Exposes the Dark Side of the Cloud
Amazon Cloud Outage KOs Reddit, Foursquare & Others

Data Loss
More than 10 million legal files snared in Megaupload shutdown
LeaseWeb explains why it deleted Kim Dotcom’s MegaUpload data
The Dark Side of the Cloud: IBM Partner Gives Fold Two Weeks to Move Data
Nirvanix Shut-Down Sends Shockwaves through the Cloud Services Industry

Cloud Leasing Practices
Microsoft Accelerates Its Data Center Expansion
Cloud Builders Still Leasing Data Center Space

Migration
A Dark Side of the Cloud: Breaking Up is Hard to Do

Security
HIPAA Business Associate Myths & Facts
10 Myths about HIPAA’s Required Security Risk Analysis
New HIPAA rule could change BAA talks

Contracting
The Dark Side of the Cloud: How to Avoid the Pitfall of Cloud Computing Contracts for Your Business