Data Privacy and Artificial Intelligence in Healthcare

March 17, 2022 – The use of artificial intelligence (AI) in healthcare delivery continues to grow rapidly. Access to patient medical data is often central to the use of AI in healthcare delivery. As the exchange of medical information between patients, doctors and the healthcare team via AI products increases, protecting an individual’s information and privacy becomes even more important.

The expanded use of AI in healthcare has brought increased attention to the risks to, and safeguards for, the privacy and security of the underlying data, leading to greater scrutiny and enforcement. Entities that use or sell AI-enabled healthcare products should consider the federal and state laws and regulations that apply to the data they collect and use, that govern the protection and use of patient information, and that bear on other common practical issues facing AI-based healthcare products.

Below, we outline several important privacy and data security issues that must be considered when creating AI-based products or deciding to use such products in the delivery of healthcare.

Use of de-identified data

The collection and use of patient health information in AI products will often implicate the Health Insurance Portability and Accountability Act (HIPAA) and various state privacy and security laws and regulations. It is important for AI healthcare companies, as well as institutions using AI healthcare products, to understand whether HIPAA or other state laws apply to the data. One way to potentially avoid these regulations may be to de-identify the data before it is uploaded to an AI database.

What it means for data to be de-identified will vary depending on the laws and regulations applicable to it. For example, if patient information is covered by HIPAA, de-identifying protected health information (PHI) requires either the removal of certain enumerated identifiers (the "Safe Harbor" method) or an expert determination that the risk of re-identifying the data is very small. Even if data is initially de-identified under the applicable standard, AI products present unique challenges to keeping it that way.
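
As a concrete illustration of the Safe Harbor approach, the sketch below strips a handful of the 18 identifier categories from a simple patient record. The field names and the abbreviated identifier list are hypothetical; this is a minimal sketch, not a complete or compliant implementation.

```python
# Illustrative sketch of HIPAA Safe Harbor-style de-identification.
# Field names and the identifier list are hypothetical and incomplete;
# the Safe Harbor method enumerates 18 identifier categories.

from datetime import date

# A few Safe Harbor identifier categories, as hypothetical field names.
DIRECT_IDENTIFIERS = {
    "name", "street_address", "phone", "email", "ssn",
    "medical_record_number", "health_plan_id", "device_serial",
}

def deidentify(record: dict) -> dict:
    """Return a copy of `record` with direct identifiers dropped and
    dates and ZIP codes generalized in the Safe Harbor style."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

    # Safe Harbor removes all date elements more specific than the year.
    if isinstance(out.get("birth_date"), date):
        out["birth_year"] = out.pop("birth_date").year

    # ZIP codes are generalized to the first three digits (certain sparse
    # three-digit prefixes must be suppressed entirely -- omitted here).
    if "zip" in out:
        out["zip"] = str(out["zip"])[:3]

    return out

record = {
    "name": "Jane Doe",
    "ssn": "123-45-6789",
    "birth_date": date(1950, 6, 1),
    "zip": "10001",
    "diagnosis_code": "E11.9",
}

print(deidentify(record))
# {'zip': '100', 'diagnosis_code': 'E11.9', 'birth_year': 1950}
```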

As an AI product develops and expands, new data elements are often added to the AI system, or the amount of data in a particular element increases, creating the potential for data confidentiality issues. In some cases, the additional data is collected to address potential algorithmic biases in the AI system, since a marketable AI product must be seen as reliable, effective, and fair.

AI-based products create additional privacy concerns, especially when de-identified data is used to address potential bias issues. As more data is added to AI systems, the potential for creating identifiable data also increases, particularly because the growing sophistication of AI systems makes it easier to create links between data where no such links existed before. As the amount of data and the number of data elements grow, it is important to continuously assess the risk that an AI system will re-identify patient data that was previously de-identified.
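
One practical way to watch for this risk as data elements accumulate is to measure how small the groups of records sharing the same quasi-identifiers become. The following minimal k-anonymity check assumes hypothetical column names and an illustrative threshold of k = 5; it is a sketch of one common technique, not a standard this article prescribes.

```python
# Minimal k-anonymity check: flag quasi-identifier combinations shared by
# fewer than k records, since small groups are easier to re-link to people.
# Column names and the k = 5 threshold are illustrative assumptions.

from collections import Counter

def smallest_group_size(records: list[dict], quasi_identifiers: list[str]) -> int:
    """Size of the smallest equivalence class over the quasi-identifiers."""
    groups = Counter(tuple(r.get(q) for q in quasi_identifiers) for r in records)
    return min(groups.values())

records = [
    {"birth_year": 1950, "zip3": "100", "sex": "F"},
    {"birth_year": 1950, "zip3": "100", "sex": "F"},
    {"birth_year": 1987, "zip3": "945", "sex": "M"},  # a group of one
]

K = 5  # assumed policy threshold
qis = ["birth_year", "zip3", "sex"]
if smallest_group_size(records, qis) < K:
    print("Re-identification risk: some groups are smaller than k =", K)
```

If any group falls below the threshold, those records are candidates for further generalization or suppression before they enter the AI system.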

Vendor due diligence — data access, data storage, and ransomware

It is essential to perform sufficient vendor due diligence before entrusting a third party with patient data, including PHI. How the data is collected (for example, directly from patient records) and where it is ultimately stored present two important due diligence points. In either case, failure to conduct proper due diligence can result in legal and monetary consequences.

In the case of collection, entities that grant access to their systems for data collection face not only legal requirements but also potentially significant liability if the data is not properly protected. AI technology is just as vulnerable to manipulation as any other technology, and the networks connecting patient data to patient care must be secure. In this era of increasing ransomware attacks and attackers' heightened focus on healthcare, all external access points should be thoroughly vetted and monitored to limit these threats.

In particular, reviewing how an entity manages data access, and ensuring that it institutes high-level data governance and management practices to protect and manage the processing of patient data, should be the norm in all due diligence efforts related to AI products. Another critical element that is often overlooked is adding a high-level risk assessment, along with potential risk-mitigation efforts, to determine whether the potential vulnerabilities outweigh the value of using such a product.

Healthcare entities should critically consider, without presuming, whether direct access is the only way the AI product can function or bring value to the healthcare entity, or whether cost-effective alternatives exist, such as a separate database of information that is pulled and populated in lieu of direct access to the main system.

In the case of storage, AI healthcare companies should weigh these same due diligence issues, as any obligations of the company will generally need to be passed on to its suppliers. Often these companies may not view these vulnerabilities as high risk and would prefer to spend their limited funds elsewhere. But if a supplier fails to perform as it should, the resulting reputational harm may prove that the cost of such due diligence is ultimately the better investment, if the alternative is a product rendered unsaleable by reputational damage.

Security Measures to Protect Healthcare Data

Privacy cannot be achieved without security. Appropriate security measures should be adopted to maintain confidentiality and build trust in the technology. Security measures that AI companies should consider include:

Improved compliance monitoring: Information systems and data should be audited and monitored regularly to determine whether any data has been compromised. Many affordable third-party products are available today to assist with this monitoring and should be considered as part of any information security program.

Access controls: It is essential to understand who will have access to data and algorithms and to ensure that strict controls appropriate to each level of access are in place (a minimal sketch follows this list).

Training: Staff and suppliers should be made aware of their access limitations, data use limitations, and data security obligations. In particular, this should include any limitations found in patient consents or authorizations.
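
To make the access-control and monitoring points above concrete, the sketch below gates operations on patient data by role and writes an audit entry for every attempt. The roles, permissions, and log format are hypothetical assumptions, not a prescribed design.

```python
# Hypothetical role-based access control with an audit trail.
# Roles, permissions, and the log format are illustrative assumptions.

import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("audit")

# Map roles to the data operations they are permitted to perform.
ROLE_PERMISSIONS = {
    "clinician": {"read_phi", "write_phi"},
    "data_scientist": {"read_deidentified"},
    "vendor": set(),  # no direct access by default
}

def access_patient_data(user: str, role: str, operation: str) -> bool:
    """Allow or deny an operation, recording every attempt for auditing."""
    allowed = operation in ROLE_PERMISSIONS.get(role, set())
    audit_log.info(
        "%s user=%s role=%s op=%s result=%s",
        datetime.now(timezone.utc).isoformat(), user, role, operation,
        "ALLOW" if allowed else "DENY",
    )
    return allowed

access_patient_data("jsmith", "data_scientist", "read_phi")           # denied
access_patient_data("jsmith", "data_scientist", "read_deidentified")  # allowed
```

Keeping the permission map explicit and logging denials as well as grants makes both the access-control and compliance-monitoring reviews above straightforward to audit.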

Conclusion

There is much excitement about the benefits that AI technologies can bring to healthcare. Protecting data privacy is an important part of ensuring the long-term use and success of these AI products. Without the confidence of patients and physicians that these AI-based products take into account and preserve the privacy of patient data, these advances could be short-lived.

The opinions expressed are those of the authors. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence and freedom from bias. Westlaw Today is owned by Thomson Reuters and operates independently of Reuters News.

Jason Johnson

Jason Johnson is a partner at the firm in the Health, Privacy and Cybersecurity, and Intellectual Property practice groups. He focuses his practice on the legal aspects of digital health innovations, data privacy and security under U.S. and European laws, and complex regulatory and compliance issues related to clinical research and commercial matters. His clients include academic medical centers, health technology companies, emerging and late-stage biotechnology companies, pharmaceutical and medical device companies, and other healthcare and research-related organizations. He can be contacted at [email protected]

Pralika Jain

Pralika Jain is a partner in the firm’s healthcare practice. She advises companies at all stages of their life cycle, from formation to investment and exit, including on building and protecting intellectual property, and provides strategic advice to founders and investors in complex transactions. As an advisor to companies and institutions in the health and technology sectors, she regularly advises on privacy and data law issues across jurisdictions. She can be contacted at [email protected]

Linda A. Malek

Linda A. Malek is a partner at Moses & Singer LLP and chair of the firm’s Health, Privacy and Cybersecurity practices. Her practice focuses on regulatory, technological and business issues in the healthcare industry. She can be reached at [email protected]
