The use of artificial intelligence in medicine is generating great excitement and hope for treatment advances. AI generally refers to computers’ ability to mimic human intelligence and to learn. For example, by using machine learning, scientists are working to develop algorithms that will help them make decisions about cancer treatment. They hope that computers will be able to analyze radiological images and discern which cancerous tumors will respond well to chemotherapy and which will not.
But AI in medicine also raises significant legal and ethical challenges. Among these are concerns about privacy, discrimination, psychological harm and the physician-patient relationship. A forthcoming article argues that policymakers should establish a number of safeguards around AI, much as they did when genetic testing became commonplace.
AI involves the analysis of very large amounts of data to discern patterns, which are then used to predict the likelihood of future occurrences. In medicine, the data sets can come from electronic health records and health insurance claims but also from several surprising sources. AI can draw upon purchasing records, income data, criminal records and even social media for information about an individual’s health.
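To make the idea concrete, here is a minimal sketch of the pattern described above: a tiny logistic-regression model, written from scratch, learns from a handful of records and then outputs a risk probability for new individuals. All of the data, the features (age, smoking status) and the training setup are fabricated for illustration; real medical AI systems use vastly larger data sets and far more sophisticated models.

```python
# A toy predictive model: learn a pattern from past records, then
# estimate risk for new patients. All data here is fabricated.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(records, labels, lr=0.1, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression."""
    w = [0.0] * len(records[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(records, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability that a patient with features x develops the condition."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Fabricated features: (age / 100, smoker 0/1); label 1 = developed disease.
records = [(0.30, 0), (0.40, 0), (0.65, 1), (0.70, 1), (0.55, 1), (0.35, 0)]
labels  = [0, 0, 1, 1, 1, 0]

w, b = train(records, labels)
low_risk  = predict(w, b, (0.32, 0))   # young non-smoker
high_risk = predict(w, b, (0.68, 1))   # older smoker
print(f"low-risk patient:  {low_risk:.2f}")
print(f"high-risk patient: {high_risk:.2f}")
```

The point of the sketch is only that the model's output is a probability inferred from patterns in past data, which is exactly why such predictions can end up in records, be sold, or be wrong.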
Multitude of medical conditions
Researchers are already using AI to predict a multitude of medical conditions. These include heart disease, stroke, diabetes, cognitive decline, future opioid abuse and even suicide. This predictive capability of artificial intelligence raises significant ethical concerns in health care. If AI generates predictions about your health, that information could one day be included in your electronic health records.
Anyone with access to your health records could then see predictions about cognitive decline or opioid abuse. Patients’ medical records are seen by dozens or even hundreds of clinicians and administrators in the course of medical treatment. Additionally, patients themselves often authorize others to access their records: for example, when they apply for employment or life insurance.
Data broker industry giants such as LexisNexis and Acxiom are also mining personal data and engaging in AI activities. They could then sell medical predictions to any interested third parties, including marketers, employers, lenders, life insurers and others. Because these businesses are not health care providers or insurers, the HIPAA Privacy Rule does not apply to them. Therefore, they do not have to ask patients for permission to obtain their information and can freely disclose it.
Artificial intelligence counselors
It is even possible that individuals will obtain health forecasts directly from commercial entities that bought their data. Imagine obtaining the news that you are at risk of dementia through an electronic advertisement urging you to buy memory-enhancing products. When it comes to genetic testing, patients are advised to seek genetic counseling so that they can thoughtfully decide whether to be tested and better understand test results. By contrast, there are no artificial intelligence counselors who provide similar services to patients.
These concerns are exacerbated by the fact that artificial intelligence predictions are far from infallible. Many factors can contribute to errors. If the data used to develop an algorithm are flawed, for instance, if they come from medical records that contain errors, the algorithm’s output will be incorrect. Therefore, patients may suffer discrimination or psychological harm when in fact they are not at risk of the predicted ailments.
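A toy example shows how easily flawed records distort a prediction. Here the same simple risk estimate, the share of a patient's visits flagged with an opioid prescription, is computed once from accurate records and once from records containing two data-entry errors. The records, patient ID and flag are all fabricated for illustration.

```python
# "Garbage in, garbage out": the same risk estimate computed from
# accurate records and from records with data-entry errors.

def opioid_risk(records, patient_id):
    """Fraction of a patient's visits that record an opioid prescription."""
    visits = [r for r in records if r["patient"] == patient_id]
    flagged = sum(1 for r in visits if r["opioid_rx"])
    return flagged / len(visits)

# Accurate chart: one opioid prescription across four visits.
clean = [
    {"patient": "A", "opioid_rx": False},
    {"patient": "A", "opioid_rx": False},
    {"patient": "A", "opioid_rx": True},
    {"patient": "A", "opioid_rx": False},
]

# Same chart with two mis-coded entries (e.g., wrong drug codes).
flawed = [dict(r) for r in clean]
flawed[0]["opioid_rx"] = True
flawed[1]["opioid_rx"] = True

print(opioid_risk(clean, "A"))   # 0.25
print(opioid_risk(flawed, "A"))  # 0.75, the errors triple the risk signal
```

Two clerical mistakes turn a low-risk profile into a high-risk one, which is precisely the kind of error that could trigger discrimination or psychological harm.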
Physicians who provide patients with artificial intelligence predictions should ensure that they are thoroughly educated about the pros and cons of such forecasts. Experts should counsel patients about AI just as trained professionals do about genetic testing. The prospect of AI can overawe people. Yet, to ensure that AI truly promotes patient welfare, physicians, researchers and policymakers must recognize its risks and proceed with caution.