This article was originally published on corporatecomplianceinsights.com
“Ethics is knowing the difference between what you have a right to do and what is right to do” – Potter Stewart
Is all this outcry about data regulations worth it? How much do we really care? When was the last time you pondered the ‘Terms and Conditions’ and ‘Data Permissions’ of a smartphone app, or took the time to read the ‘Privacy and Cookie Policies’ of a website? We inherently trust a multitude of websites and apps, relying on governance bodies to take care of concerning areas like information security and data privacy.
With the intertwined fates of data and healthcare, due diligence and compliance around the privacy and security of healthcare data are becoming increasingly challenging. In such a scenario, how do we ensure that healthcare providers can harness the power of AI while also adhering to the ethical and legal obligations of technology and data use?
In this article, we take a quick look at some of the key concerns often faced by healthcare organizations in realizing benefits from data.
Data Privacy and Security
Discussions around the security and risk implications of data are not new. While each organization claims to have ethical guiding principles around the fair use of data, the lack of a binding legal framework has made users more susceptible to data theft, hacking, and unauthorized use. Monetization of data through advertising and third-party sharing is the chief business model of so-called digital businesses today, and this is why people are skeptical about sharing personal data with organizations. These concerns are amplified in healthcare, since such data in the wrong hands could prove even costlier. If health insurance companies could access personal medical histories at a granular level, they could use intelligent predictions to screen out potentially costly individuals, putting the mediclaim policies of the neediest sections of society at risk.
There has been an ongoing debate around whether, in these times of COVID-19, a contact tracing app should be made mandatory for all. Such an app could alert the user (and possibly the authorities) if there is a likely patient in the user’s proximity. Without a doubt, the app would need access to the user’s location at all times. A few countries like South Korea and Taiwan have proved the effectiveness of such a digital, data-savvy approach in containing the pandemic. However, little has been done so far to address the concerns of data privacy advocates. Arguments from such groups - ranging from the possibility of data hacking to unrestricted use by governments for surveillance - pose a dilemma to policymakers: a trade-off between public health and privacy.
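To make the proximity idea concrete, here is a minimal sketch of the distance check such an app might run, assuming raw GPS coordinates and a hypothetical alert radius. Real exposure-notification systems typically rely on Bluetooth proximity rather than location data, partly because of the privacy concerns raised above.

```python
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def should_alert(user, patient, radius_m=100):
    """Flag when two reported locations fall within the alert radius."""
    return haversine_m(*user, *patient) <= radius_m

# Two points about a dozen meters apart trigger an alert; distant ones do not.
nearby = should_alert((52.5200, 13.4050), (52.5201, 13.4051))
far = should_alert((52.5200, 13.4050), (48.8566, 2.3522))
```

Note that even this toy version requires continuous, precise location from every user - exactly the data that privacy advocates argue should never be centralized.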
The underlying question leading to this discussion is – who owns a user’s data? Who governs the fair-usage rights of such data? It has often been voiced that users should be given enhanced, active controls to govern their data. However, many patients (or more broadly, users) are often (deliberately kept) unaware of such controls. Moreover, as often seen, such controls are almost always multi-layered and too complex for a layperson to understand. Hence, it becomes important to have discussions about which data can be used, how, and under what circumstances. Could a governance body or global law help with these concerns, thus allowing for the free flow of valuable data when, and where, it is most needed?
GDPR (General Data Protection Regulation) in Europe has been a step in that direction, and it is worth noting that GDPR allows the use of health data without consent where it is necessary for scientific research or in the interest of public health. More awareness, transparency, and conviction around how one’s data is anonymized, entrusted to good hands, and utilized for the greater good would encourage patients to share data. GDPR has set a good benchmark for data privacy and security standards in Europe (still a work in progress), and it is expected that other countries will soon follow suit.
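To illustrate what anonymization can look like in practice, here is a minimal pseudonymization sketch. The field names and salting scheme are illustrative assumptions; genuine GDPR- or HIPAA-grade de-identification removes far more quasi-identifiers than this.

```python
import hashlib
import secrets

# A per-dataset secret salt: without it, identifiers could be re-identified
# by brute-forcing hashes of known names or ID numbers.
SALT = secrets.token_hex(16)

def pseudonymize(record: dict) -> dict:
    """Replace the direct identifier with a salted hash and generalize age.

    The field names here are hypothetical; this is a sketch of the idea,
    not a compliant de-identification pipeline.
    """
    token = hashlib.sha256((SALT + record["patient_id"]).encode()).hexdigest()
    return {
        "pseudo_id": token,                            # stable within this dataset only
        "age_band": f"{(record['age'] // 10) * 10}s",  # generalize exact age
        "diagnosis": record["diagnosis"],              # retained for research value
    }

shared = pseudonymize({"patient_id": "P-1001", "age": 47, "diagnosis": "T2 diabetes"})
```

The design trade-off is visible even here: the more identifying detail is stripped or generalized, the less useful the record becomes for research - which is exactly the tension GDPR tries to balance.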
Fairness and Inclusiveness
While it is typically easy to obtain large, diverse, and balanced datasets in other industries, healthcare organizations face stringent regulations and organizational barriers when collecting clinical data. AI systems trained on such sparse and biased data are bound to fail. For example, a skin cancer detection algorithm trained on a sample of Caucasian males fails miserably when tried on samples of females or non-white populations. Such biases are not intrinsic to AI; they are introduced and reinforced through unintentional personal choices and data bias, further marginalizing minority or overlooked groups.
Many people might ask: who is going to benefit from this open-ended data sharing? Healthcare organizations will likely become more efficient and cost-effective, yes. But how does it impact society at large? There are apprehensions around AI leading to loss of jobs or a concentration of power and resources with a chosen few.
While AI might be perceived as an equivalent and replica of humans performing specific knowledge-based and skill-based tasks, so far, in most areas, it has proven to be a valuable ally and assistant: it takes up the boring, manual, and tedious tasks, allowing humans to focus on work requiring creativity and intellect.
Trust and Accountability
We all love transparency, don’t we? But in pursuit of highly sophisticated and accurate algorithms, we have lagged on explainability, turning AI into a black box. If all healthcare stakeholders are to buy in to the benign story of AI, they need to understand the underlying factors responsible for the decisions or recommendations of the very AI system they are relying upon. This applies not just to patients but to doctors as well, who are often not well-versed in interacting with AI systems and interpreting their outputs.
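One modest route out of the black box is to favor models whose predictions decompose into inspectable parts. The sketch below assumes a hypothetical linear risk model with made-up weights and features; it surfaces each feature’s additive contribution so a clinician can see what drove a score. More sophisticated explainability methods (such as SHAP) generalize this idea to complex models.

```python
# Hypothetical linear risk model: the weights and feature names are invented
# for illustration, not drawn from any real clinical model.
weights = {"age": 0.03, "bmi": 0.05, "hba1c": 0.9, "bias": -7.0}

def explain(patient: dict) -> dict:
    """Return each feature's additive contribution to the risk score."""
    contributions = {f: weights[f] * v for f, v in patient.items()}
    contributions["bias"] = weights["bias"]
    return contributions

patient = {"age": 60, "bmi": 31, "hba1c": 7.2}
parts = explain(patient)
score = sum(parts.values())

# Sorting by absolute contribution shows which factors drove this prediction.
ranked = sorted(parts.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

Because the score is just the sum of the parts, a doctor can challenge any single contribution - something a black-box model does not allow.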
At the heart of the widespread adoption of intelligent healthcare lies the question - how good is good enough? Should one trust a tumor removal surgery to a robot with a success rate of (only) 90 percent? Further, in the case of a disagreement between human and machine, who has the final say? How do we quantify our levels of confidence in each?
While some procedures or medical cases might be straightforward and simple enough for AI to handle, others will need human intervention. There are still many unanswered questions as to who shares the responsibility and accountability for data and intelligent systems in case things take a turn for the worse.
The Future of Ethical AI
AI, in its intrinsic nature, is no different from a scalpel. Like any tool intended for a noble cause such as surgery, its malicious uses cannot be ruled out. To sum up, there is a need to develop new frameworks for evaluating and ensuring the transparency, safety, and reliability of AI - frameworks that span the underlying data and technology, their impact, and their limitations. The constant exercise of monitoring, validation, and review is inevitable to keep up with evolving concerns.