Graduate School, Transfer (Insert-University), and Post-Baccalaureate Exams: English Question Bank

       Artificial intelligence (AI) has the potential to transform healthcare decision-making but also introduces novel challenges in patient safety. AI-embedded clinical decision support (CDS) can improve diagnosis, including the identification of rare diseases, and offer higher-value treatment options. However, it can also create harm. For example, AI performance may change when applied to populations different from those on which it was originally tested, potentially leading to incorrect diagnoses or treatments for certain demographic groups. Additionally, improper training can result in clinicians misusing AI, thus endangering patient safety. Moreover, implementation of new healthcare technology can improve safety but may also increase medical errors. Likewise, traditional decision support systems have resulted in alert fatigue, leading to medical errors. However, medical education has lagged in training individuals to integrate AI algorithms into medical decisions. Thus, implementation science and quality improvement programs are required to emphasize the importance of developing plans and using simulation to mitigate potential harms.
        The National Academies of Medicine (NAM) defines patient safety as “the prevention of harm to patients.” Despite two decades of focus, the effectiveness of patient safety efforts remains debated among experts. On October 30, 2023, President Biden issued an executive order on AI, mandating federal agencies to develop standards for AI applications in healthcare. The Department of Health and Human Services (HHS) created a task force to ensure that AI deployment reduces patient harm and encourages continuous learning. This includes roles for the US Food and Drug Administration (FDA) in software approval, the Office of the National Coordinator (ONC) in AI inclusion in electronic health records, and the Office for Civil Rights (OCR) in ensuring that AI algorithms do not violate civil rights.
       The requirement for hospitals to ensure patient safety is a condition of participation (CoP) in Medicare and Medicaid, as established by the Centers for Medicare & Medicaid Services (CMS). Section 1861(e) of the Social Security Act authorizes the Secretary to impose additional requirements if necessary for health and safety. This involves investigating harms to determine whether policies and procedures effectively protect patients and whether these measures minimize harm while maximizing safety. CMS, State Survey Agencies, or Accrediting Organizations investigate reports of abuse, neglect, or noncompliance with health and safety standards. They also investigate critical events such as unexpected deaths or serious injuries. Hospitals are obligated to conduct a Quality Assessment and Performance Improvement (QAPI) activity if harm occurs. Although there is no separate statutory authority to regulate AI in clinical care, CoPs for hospitals already require policies and procedures for AI use, detailing the qualifications and responsibilities of users and of those monitoring safety issues. Principles such as safety, transparency, accountability, equity, fairness, and usefulness should guide AI development and governance to ensure trustworthy solutions in patient care.
       The Biden-Harris administration’s Executive Order calls for national standards for trustworthy AI, developed through public-private partnerships. Local AI governance should provide organizational transparency on which AI solutions are used on which patient populations to avoid safety issues and inconsistent use. When organizations do not have the appropriate technical expertise to assure that AI is used appropriately, they can rely on independent entities such as the proposed assurance laboratories.
       The CoPs also mandate governance structures to monitor safety events. When patient harm is reported, the hospital should determine whether the patient was harmed through a medical error or had a poor outcome, and whether an AI tool or algorithm was a contributing factor in that harm. Although new AI regulations have been suggested, the CoPs already empower CMS and accrediting organizations to regulate AI at the bedside. If AI is a potential cause of harm, hospitals must identify whether the issue lies with the algorithm, hospital policies and procedures, or staff training.
       CMS can investigate hospitals and require corrective action plans if their processes and procedures do not protect patient safety. If an error is due to an intrinsic algorithm flaw, safety incidents, including non-harmful errors, should be reported to the manufacturer, with risks managed by the implementer. Poor implementation issues must be addressed through the QAPI process, and safety risks reported to the manufacturer. FDA-cleared AI technologies require medical harm reporting to the FDA and manufacturer. As for non-FDA-cleared AI technology, the healthcare ecosystem will need to establish a mechanism to report AI-influenced medical errors, with QAPI findings reported back to the FDA and manufacturer. CMS and HHS must use their existing authority under the CoPs to ensure safe AI implementation in hospitals, with algorithm assessment left to the FDA and other bodies. While AI has the potential to improve patient outcomes and care, the critical goal is to employ AI to enhance safety, not to create new sources of medical harm without a clear mechanism for continuously improving and learning from any medical errors.

【題組】50. What would be the most suitable title for the article?
(A) The Evolution of AI in Clinical Diagnostics
(B) AI and Patient Safety: Opportunities and Risks
(C) Traditional Decision Support Systems in Healthcare
(D) FDA Regulations on AI Technologies


Answer: (B)

[Site moderator] 摩檸Morning: Can any expert explain this one?
5 days remaining; 1 answer so far
毅誠, sophomore, first semester (2024/08/27):

50. What would be the most suitable title for the article?

 (A) The Evolution of AI in Clinical Diagnostics: the article covers more than AI's evolution; there is a better option.
 (B) AI and Patient Safety: Opportunities and Risks: correct; the article addresses both the benefits and the risks of AI for patient safety.
 (C) Traditional Decision Support Systems in Healthcare: the article goes beyond traditional decision support systems, also covering hospital governance; incorrect.
 (D) FDA Regulations on AI Technologies: the article covers more than FDA regulation; not this one.

