Artificial Intelligence for Diabetic Eye Disease

First published 2023

Diabetes is a widespread chronic condition, with an estimated 463 million adults affected globally in 2019, a number projected to rise to 600 million by 2040. The rate of diabetes among Chinese adults has escalated from 9.7% in 2010 to 12.8% in 2018. This condition can cause serious damage to various body systems, notably leading to diabetic retinopathy (DR), a major complication that affects approximately 34.6% of diabetic patients worldwide and is a leading cause of blindness in the working-age population. The prevalence of DR is significant in various regions, including China (18.45%), India (17.6%), and the United States (33.2%).

DR often goes unnoticed in its initial stages as it does not affect vision immediately, resulting in many patients missing early diagnosis and treatment, which are crucial for preventing vision impairment. The disease is characterised by distinct retinal vascular abnormalities and can be categorised based on severity into stages ranging from no apparent retinopathy to proliferative DR, the most advanced form. Diabetic macular edema (DME), another condition that can occur at any DR stage, involves fluid accumulation in the retina and is independently assessed due to its potential to impair vision severely.

Diagnosis of DR and DME is typically made through various methods such as ophthalmoscopy, biomicroscopy, fundus photography, optical coherence tomography (OCT), and other imaging techniques. While ophthalmoscopes and slit lamps are common due to their affordability, fundus photography is the international standard for DR screening. OCT, despite its higher cost, is increasingly recognised for its diagnostic value but is not universally accessible for screening purposes.

Current DR screening practice emphasises early detection to improve outcomes for diabetic patients. In the United States, the American Academy of Ophthalmology recommends annual eye exams for individuals with type 1 diabetes beginning five years after diagnosis, and annual exams for those with type 2 diabetes starting at diagnosis. Despite these guidelines, compliance with screening is low: a significant proportion of diabetic patients do not receive regular eye exams, and only a small percentage adhere to the recommended screening intervals.

In the United Kingdom, a national diabetic eye screening program initiated in 2003 has been credited with reducing DR as the leading cause of blindness among the working-age population. The program’s success is attributed to the high screening coverage of diabetic individuals nationwide.

Non-compliance with screening recommendations is attributed to factors such as a lack of disease awareness, limited access to medical resources, and insufficient medical insurance. Patients with more severe DR or those who already have vision impairment tend to comply more with screening, suggesting that the lack of symptoms in early DR leads to underestimation of the need for regular check-ups.

The use of telemedicine has been proposed to increase accessibility to screening, exemplified by the Singapore Integrated Diabetic Retinopathy Program, which remotely obtains fundus images for evaluation, reducing medical costs. Telemedicine has been found cost-effective, especially in large populations. Recently, the development of artificial intelligence (AI) has presented an alternative to enhance patient compliance and the efficiency of telemedicine in DR screening. AI can potentially streamline the grading of fundus images, reducing reliance on human resources and improving the screening process.

AI’s origins trace back to 1956, when John McCarthy first introduced the concept. Shortly after, in 1959, Arthur Samuel coined the term “machine learning” (ML), emphasising the ability of machines to learn from data without being explicitly programmed. Deep learning (DL), a subset of ML, uses multi-layer neural networks for learning; within this, convolutional neural networks (CNNs) are specialised for image processing, featuring layers designed for pattern recognition.

CNN architectures like AlexNet, VGGNet, and ResNet have been pivotal in advancing AI, achieving high accuracy through end-to-end training on labelled image datasets and optimising parameters via the backpropagation algorithm. Transfer learning, another ML technique, applies a model pre-trained on one domain to a new one, allowing effective learning from smaller datasets.
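
To make the transfer-learning idea concrete, here is a minimal sketch that fine-tunes an ImageNet-pretrained ResNet on a small folder of labelled fundus photographs. The folder path, the five-grade labelling, and the training settings are illustrative assumptions rather than details of any system discussed here.

```python
# Transfer-learning sketch: fine-tune an ImageNet-pretrained ResNet on fundus photographs.
# Paths, class count, and settings are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Preprocessing matched to the ImageNet statistics the backbone was trained with.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder of graded fundus photographs, one sub-folder per DR grade.
train_set = datasets.ImageFolder("fundus_photos/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for param in model.parameters():      # freeze the pre-trained feature extractor
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 5)   # new head for 5 DR severity grades

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:         # one illustrative pass over the data
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                   # backpropagation updates only the new head
    optimiser.step()
```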

In the medical field, AI’s image processing capabilities have significantly impacted radiology, dermatology, pathology, and ophthalmology. Specifically in ophthalmology, AI assists in diagnosing conditions like DR, glaucoma, and macular degeneration. The FDA’s 2018 approval of the first AI software for DR, IDx-DR, marked a milestone, using Topcon NW400 for capturing fundus images and analysing them via a cloud server to provide diagnostic guidance.

Further developments in AI for ophthalmology include EyeArt and Retmarker DR, both recognised for their high sensitivity and specificity in DR detection. These AI systems have demonstrated advantages in efficiency, accuracy, and reduced demand for human resources. They have been shown not only to expedite the screening process, as evidenced by an Australian study in which AI-based screening took about 7 minutes per patient, but also to outperform manual screening in both accuracy and patient preference.

AI’s ability to analyse fundus photographs or OCT images at primary care facilities simplifies the screening process, potentially improving patient compliance and significantly reducing ophthalmologists’ workloads. With AI providing immediate grading and recommendations for follow-up or referral, diabetic patients can more easily access and undergo screening, therefore enhancing the management of DR.

To ensure the efficacy and accuracy of AI-based diagnostic systems for diabetic retinopathy (DR), a well-structured dataset must be divided into separate, non-overlapping sets for training, validation, and testing, each with a specific function in developing the algorithm. The training set forms the foundation, where the AI algorithm learns to identify and interpret fundus photographs; it must be extensive and comprise high-quality images that have been carefully evaluated and labelled by expert ophthalmologists. Guidelines issued by the Chinese authorities state that, where a system uses fundus photographs, the images should be collected from at least two different medical institutions to ensure a varied and comprehensive learning experience. The validation set is used to tune the AI’s parameters, acting as a tool for algorithm optimisation during development. Finally, the testing set is essential for the real-world evaluation of the system’s clinical performance; to preserve the integrity of the results, it is kept separate from the training and validation sets, preventing biases that could overstate the system’s accuracy in practical applications.
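
A minimal sketch of such a split is shown below; it groups images by patient so that no patient contributes to more than one set. The file name, column names, and split ratios are assumptions made for illustration.

```python
# Patient-level split into non-overlapping training, validation, and testing sets.
# File and column names are assumptions for illustration.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

records = pd.read_csv("fundus_labels.csv")   # hypothetical columns: image_id, patient_id, dr_grade

# 70% of patients for training, the rest held out for tuning and final evaluation.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.30, random_state=42)
train_idx, rest_idx = next(splitter.split(records, groups=records["patient_id"]))
train, rest = records.iloc[train_idx], records.iloc[rest_idx]

# Split the held-out patients evenly into a validation set and a sequestered testing set.
splitter2 = GroupShuffleSplit(n_splits=1, test_size=0.50, random_state=42)
val_idx, test_idx = next(splitter2.split(rest, groups=rest["patient_id"]))
validation, testing = rest.iloc[val_idx], rest.iloc[test_idx]

# No patient appears in more than one set.
assert set(train["patient_id"]).isdisjoint(validation["patient_id"])
assert set(train["patient_id"]).isdisjoint(testing["patient_id"])
assert set(validation["patient_id"]).isdisjoint(testing["patient_id"])
```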

The training set should include a diverse range of images: at least 1,000 single-field fundus photographs (FPs) or 1,000 pairs of two-field FPs, 500 non-readable FP images or pairs, and 500 images or pairs showing fundus diseases other than DR. The images should be graded by at least three qualified ophthalmologists, with the majority opinion determining the final grade. For standard testing, a set should include 5,000 FPs or pairs, with no fewer than 2,500 images or pairs of DR stage I and above, and 500 images or pairs of other fundus diseases. A random selection of 2,000 images or pairs should be used to evaluate the AI system’s performance across the DR stages.
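
The majority-vote grading described above takes only a few lines to express. The sketch below assumes three graders per image, uses illustrative grade labels, and flags images with no majority for adjudication.

```python
# Majority-vote grading by three ophthalmologists (illustrative grade labels).
from collections import Counter

def final_grade(grades):
    """Return the majority grade, or None to flag the image for senior adjudication."""
    label, votes = Counter(grades).most_common(1)[0]
    return label if votes >= 2 else None

print(final_grade(["moderate NPDR", "moderate NPDR", "severe NPDR"]))  # -> "moderate NPDR"
print(final_grade(["no DR", "mild NPDR", "moderate NPDR"]))           # -> None (no majority)
```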

Current research has indicated some issues with the training sets used in existing AI systems. These include the use of FPs from a single source and the inclusion of fewer than the recommended 500 non-readable images or pairs. Furthermore, some training sets sourced from online datasets do not provide access to important patient demographics such as gender and age, which can be crucial for comprehensive training and accurate diagnostics.

The Iowa Detection Program (IDP) is an early example of an AI system for diabetic retinopathy (DR) screening that showed promise in Caucasian and African populations by grading fundus photographs (FP) and identifying characteristic lesions, albeit without employing deep learning (DL) techniques. Its sensitivity was commendable, but it suffered from low specificity. In contrast, IDx-DR incorporated a convolutional neural network (CNN) into the IDP framework, enhancing the specificity of DR detection. Clinical studies highlighted that while IDx-DR’s sensitivity in real-world settings didn’t quite match its testing set performance, it nonetheless demonstrated a satisfactory balance of sensitivity and specificity.

EyeArt expanded AI’s reach into mobile technology, becoming the first system to detect DR using smartphones. A study in India involving 296 type 2 diabetes patients revealed a very high sensitivity and reasonable specificity, demonstrating its potential for remote DR screening. Moreover, systems like Google’s AI for DR screening can adjust sensitivity and specificity thresholds to meet clinical needs, suggesting that a hybrid approach of AI and manual screening could maximise efficiency and minimise missed referable DR cases.
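
Adjusting a system’s operating point in this way amounts to choosing a probability threshold on the model’s output. The sketch below, using made-up labels and scores, picks the highest threshold that still meets a target sensitivity and reports the specificity that follows from it.

```python
# Choosing an operating threshold to meet a target sensitivity (toy labels and scores).
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                            # 1 = referable DR
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.90, 0.55, 0.60])   # model probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_score)

target_sensitivity = 0.90
meets_target = tpr >= target_sensitivity       # operating points that reach the target
best = np.argmax(meets_target)                 # highest threshold that still reaches it
print(f"threshold={thresholds[best]:.2f}  "
      f"sensitivity={tpr[best]:.2f}  specificity={1 - fpr[best]:.2f}")
```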

However, most AI systems for DR rely on FPs, which are limited to two dimensions and can only detect diabetic macular edema (DME) through the presence of hard exudates in the posterior pole, potentially missing some cases. Optical coherence tomography (OCT), with its higher detection rate for DME, offers a more advanced diagnostic tool. Combining OCT with AI has led to the development of systems with impressive sensitivity, specificity, and area under the curve (AUC) metrics, as reflected in various studies. Despite these advancements, accessibility remains a challenge in resource-limited areas: Hwang et al.’s AI system for OCT, for example, still requires OCT equipment and the transfer of images to a smartphone, so patients in underserved regions may continue to face barriers to screening.

The landscape of AI-based diagnostic systems for diabetic retinopathy (DR) is expansive, yet it confronts numerous challenges. Many systems are trained on online datasets such as Messidor and EyePACS, which are limited by homogeneity in image sources and quality, as well as disease scope. These datasets often fail to encapsulate the diversity of real-world clinical environments, leading to potential misdiagnoses. A lack of standardised protocols for algorithm training exacerbates this, with the variability in sample sizes, image quality, and study designs from different sources undermining the generalisability of these AI systems.

Furthermore, while most research adheres to the International Clinical Diabetic Retinopathy Severity Scale for classifying DR severity, debates continue about its suitability. Some argue that classifications like the Early Treatment Diabetic Retinopathy Study may be more appropriate, as they could reduce unnecessary referrals by better reflecting the slower progression of milder DR forms. Inconsistencies in classification standards among studies affect both algorithm validity and cross-study comparisons.

Compounding these issues is the absence of a unified criterion for evaluating AI algorithms, with significant discrepancies in testing sets and performance metrics such as sensitivity, specificity, and AUC across studies. Without universal benchmarks, comparing and validating these tools remains challenging. Moreover, AI diagnostics suffer from the “black box” phenomenon: the opaque nature of the decision-making process within AI systems. This obscurity impedes understanding and trust in the algorithms, as users cannot ascertain the rationale behind the AI’s assessments or intervene if necessary.
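
For reference, the sensitivity and specificity figures compared across these studies are defined from the counts of true and false positives and negatives, and the AUC is the area under the curve of sensitivity (true positive rate) plotted against 1 - specificity (false positive rate) as the decision threshold varies:

```latex
\text{sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{specificity} = \frac{TN}{TN + FP}, \qquad
\text{AUC} = \int_{0}^{1} \mathrm{TPR}\; d(\mathrm{FPR})
```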

Legal and ethical concerns also arise, particularly regarding liability for misdiagnoses. The responsibility cannot squarely fall on either the developers or the medical practitioners using AI systems, and at present this has restricted AI’s application primarily to DR screening. Where screening is complicated by obstacles such as cataracts, media opacities, or poor patient cooperation, reliance on AI is further reduced and ophthalmologist involvement becomes necessary.

Patient data security represents another critical issue. As AI systems for diabetes screening could process vast amounts of personal information, ensuring this data’s use solely for medical purposes and preventing breaches is paramount.

Finally, there’s the limitation of disease specificity in AI systems, where most are trained to detect only DR during fundus examinations. However, some studies have reported AI systems capable of identifying multiple conditions simultaneously, like age-related macular degeneration alongside DR, which could streamline diagnostic processes if widely adopted. Addressing these multifaceted challenges is crucial for the advancement and reliable integration of AI into ophthalmic diagnostics.

Artificial intelligence (AI) holds considerable promise in the field of diabetic retinopathy (DR) screening and diagnosis, with the potential to reshape current approaches significantly. The future could see the proliferation of AI systems designed for portable devices, such as smartphones, enabling patients to conduct DR screenings at home, which may drastically reduce the dependency on professional medical staff and advanced medical equipment. This shift could make DR screening much more accessible, particularly under the constraints imposed by events like the COVID-19 pandemic, where telemedicine’s importance has surged, providing vast benefits and convenience to both patients and healthcare providers.

Most AI-assisted DR screening systems currently rely on traditional fundus imaging. However, as newer examination techniques evolve, AI is expected to integrate with diverse types of ocular assessments, such as multispectral fundus imaging and optical coherence tomography (OCT), which could further enhance diagnostic accuracy. Beyond screening, AI is poised to play a crucial role in DR diagnosis. Some studies have already shown that AI can match or even surpass the sensitivity of human ophthalmologists, supporting the potential of AI-assisted systems to augment the diagnostic process with higher precision and efficiency.

Overall, in countries where DR screening programs are established, integrating AI-based diagnostic systems could significantly alleviate human resource burdens and boost operational efficiency. Despite the optimism, the datasets currently used to train AI algorithms are somewhat restricted in scope. For AI to be more broadly applicable in clinical settings, it’s essential to leverage diverse clinical resources to create more varied datasets and to refine standards for image quality and labelling, ensuring AI systems are both standardised and effective. At this juncture, the technology is not yet at a point where it can replace ophthalmologists entirely. Therefore, in the interim, a combined approach where AI complements the work of medical professionals may offer the most realistic and advantageous path forward for the clinical adoption of AI in DR management.

Links

https://www.gov.uk/guidance/diabetic-eye-screening-programme-overview

https://drc.bmj.com/content/5/1/e000333

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9559815/

https://www.mdpi.com/2504-2289/6/4/152

https://www.thelancet.com/journals/landig/article/PIIS2589-7500(20)30250-8/fulltext

https://diabetesatlas.org/

https://pubmed.ncbi.nlm.nih.gov/20580421/

https://www.aao.org/education/preferred-practice-pattern/diabetic-retinopathy-ppp

https://pubmed.ncbi.nlm.nih.gov/27726962/

https://onlinelibrary.wiley.com/doi/10.1046/j.1464-5491.2000.00338.x

https://iovs.arvojournals.org/article.aspx?articleid=2565719

The Advantages of Quantum Algorithms Over Classical Limitations of Computation

First published 2023

The dawn of the 21st century has witnessed technological advancements that are nothing short of revolutionary. In this cascade of innovation, quantum computing emerges as a frontier, challenging our conventional understanding of computation and promising to reshape industries. For countries aiming to be at the cutting edge of technological progress, quantum computing isn’t just a scientific endeavour; it’s a strategic imperative. The United Kingdom, with its rich history of pioneering scientific breakthroughs, has recognised this and has positioned itself as a forerunner in the quantum revolution. As the UK dives deep into research, development, and commercialisation of quantum technologies, it’s crucial to grasp how quantum algorithms differentiate themselves from classical ones and why they matter in the grander scheme of global competition and innovation.

In the world of computing, classical computers have been the backbone for all computational tasks for decades. These devices, powered by bits that exist in one of two states (0 or 1), have undergone rapid advancements, allowing for incredible feats of computation and innovation. However, despite these strides, there are problems that remain intractable for classical systems. This is where quantum computers, and the algorithms they utilise, offer a paradigm shift. They harness the principles of quantum mechanics to solve problems that are beyond the reach of classical machines.

At the heart of a quantum computer is the quantum bit, or qubit. Unlike the classical bit, which can be either 0 or 1, a qubit can exist in a superposition of both states simultaneously. This allows quantum computers to explore multiple possibilities at once. Furthermore, qubits exhibit another quantum property called entanglement, wherein the state of one qubit can be dependent on the state of another, regardless of the distance between them. These two properties—superposition and entanglement—enable quantum computers to perform certain calculations exponentially faster than their classical counterparts.
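
In standard Dirac notation, a single qubit is a weighted superposition of the two basis states (the amplitudes α and β are complex numbers), and a Bell pair is the textbook example of entanglement:

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \quad |\alpha|^{2} + |\beta|^{2} = 1
\qquad\text{and}\qquad
|\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)
```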

One of the most celebrated quantum algorithms is Shor’s algorithm, which factors large numbers exponentially faster than the best-known classical algorithms. Factoring may seem like a simple arithmetic task, but when numbers are sufficiently large, classical computers struggle to factor them in a reasonable amount of time. This is crucial in the world of cryptography, where the security of many encryption schemes relies on the difficulty of factoring large numbers. Should quantum computers scale up to handle large numbers, they could potentially break many of the cryptographic systems in use today.
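
A small worked example illustrates the period-finding idea at the core of Shor’s algorithm: the quantum part finds the period r of a^x mod N, and classical number theory then extracts the factors. Taking N = 15 and a = 7 (a deliberately tiny illustrative choice):

```latex
7^{1} \equiv 7,\;\; 7^{2} \equiv 4,\;\; 7^{3} \equiv 13,\;\; 7^{4} \equiv 1 \pmod{15}
\;\Longrightarrow\; r = 4, \qquad
\gcd(7^{2} - 1,\, 15) = 3, \quad \gcd(7^{2} + 1,\, 15) = 5, \quad 15 = 3 \times 5.
```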

Another problem where quantum computers show promise is in the simulation of quantum systems. As one might imagine, a quantum system is best described using the principles of quantum mechanics. Classical computers face challenges when simulating large quantum systems, such as complex molecules, because they do not naturally operate using quantum principles. A quantum computer, however, can simulate these systems more naturally and efficiently, which could lead to breakthroughs in fields like chemistry, material science, and drug discovery.

Delving deeper into the potential of quantum computing in chemistry and drug discovery, we find a realm of possibilities previously thought to be unreachable. Quantum simulations can provide insights into the behaviour of molecules at an atomic level, revealing nuances of molecular interactions, bonding, and reactivity. For instance, understanding the exact behaviour of proteins and enzymes in biological systems can be daunting for classical computers due to the vast number of possible configurations and interactions. Quantum computers can provide a more precise and comprehensive view of these molecular dynamics. Such detailed insights can drastically accelerate the drug discovery process, allowing researchers to predict how potential drug molecules might interact with biological systems, potentially leading to the creation of more effective and targeted therapeutic agents. Additionally, by simulating complex chemical reactions quantum mechanically, we can also uncover new pathways to synthesise materials with desired properties, paving the way for innovations in material science.

Furthermore, Grover’s algorithm is another quantum marvel. While its speedup is not exponential, the algorithm searches an unsorted database in a time roughly proportional to the square root of the database’s size, which is faster than any classical algorithm can achieve. This speedup, though moderate compared to the exponential gains of Shor’s algorithm, still showcases the unique advantages of quantum computation.
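
Quantitatively, an unstructured search over N items costs on the order of N queries classically (about N/2 on average), while Grover’s algorithm needs only about (π/4)√N queries:

```latex
\text{classical search: } O(N) \text{ queries}
\qquad\longrightarrow\qquad
\text{Grover: } \approx \frac{\pi}{4}\sqrt{N} \text{ queries} \;=\; O(\sqrt{N})
```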

However, it’s important to note that quantum computers aren’t simply “faster” versions of classical computers. They don’t speed up every computational task. For instance, basic arithmetic or word processing tasks won’t see exponential benefits from quantum computing. Instead, they offer a fundamentally different way of computing that’s especially suited to certain types of problems. One notable example is the quantum Fourier transform, a key component in Shor’s algorithm, which allows for efficient periodicity detection, a task that’s computationally intensive for classical machines. Another example is quantum annealing, which finds the minimum of a complex function, a process invaluable for optimisation problems. Quantum computers also excel in linear algebra operations, which can be advantageous in machine learning and data analysis. As the field of quantum computing progresses, alongside the discovery of more quantum algorithms like the Harrow-Hassidim-Lloyd (HHL) algorithm for systems of linear equations, we can expect to uncover an even broader range of problems for which quantum solutions provide a significant edge.
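
For completeness, the quantum Fourier transform referred to above maps a computational basis state |x⟩ into a superposition whose phases encode periodicity, which is exactly what Shor’s algorithm exploits (here N = 2^n for an n-qubit register):

```latex
\mathrm{QFT}\,|x\rangle \;=\; \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} e^{2\pi i x y / N}\, |y\rangle, \qquad N = 2^{n}
```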

In conclusion, the realm of quantum computing, driven by the unique properties of quantum mechanics, offers the potential to revolutionise how we approach certain computational problems. From cryptography to quantum simulation, quantum algorithms leverage the power of qubits to solve problems that remain intractable for classical machines. As our understanding and capabilities in this domain expand, the boundary between what is computationally possible and impossible may shift in ways we can’t yet fully predict.

Links

https://www.bcg.com/publications/2018/coming-quantum-leap-computing

https://research.ibm.com/blog/factor-15-shors-algorithm

https://aisel.aisnet.org/jais/vol17/iss2/3/

https://research.tudelft.nl/files/80143709/DATE_2020_Realizing_qalgorithms.pdf

https://ieeexplore.ieee.org/document/9222275

https://www.nature.com/articles/s41592-020-01004-3

The Implications of Artificial Intelligence Integration within the NHS

First published 2023

This CreateAnEssay4U special edition brings together the work of previous essays and provides a comprehensive overview of an important technological area of study. For source information, see also:

https://createanessay4u.wordpress.com/tag/ai/

https://createanessay4u.wordpress.com/tag/nhs/

The advent and subsequent proliferation of Artificial Intelligence (AI) have ushered in an era of profound transformation across various sectors. Notably, within the domain of healthcare, and more specifically within the context of the United Kingdom’s National Health Service (NHS), AI’s incorporation has engendered a myriad of both unparalleled opportunities and formidable challenges. From an academic perspective, there is a burgeoning consensus that AI might be poised to rank among the most salient and transformative developments in the annals of human progression. It is neither hyperbole nor mere conjecture to assert that the innovations stemming from AI hold the potential to redefine the contours of our societal paradigms. In the ensuing discourse, we shall embark on a rigorous exploration of the multifaceted impacts of AI within the NHS, striving to delineate the promise it holds while concurrently interrogating the potential pitfalls and challenges intrinsic to such profound technological integration.

Medical Imaging and Diagnostic Services play a pivotal role in the modern healthcare landscape, and the integration of AI within this domain has brought forth noteworthy advancements. AI’s robust capabilities for image analysis have not only enhanced the precision in diagnostics but also broadened the scope of early detection across a variety of diseases. Radiology professionals, for instance, increasingly leverage these advanced tools to identify diseases at early stages and thereby minimise diagnostic errors. Echocardiography, used to assess heart function and detect conditions such as ischaemic heart disease, is another beneficiary of AI’s analytical prowess. An example of this is the Ultromics platform, developed in Oxford, which employs AI to meticulously analyse echocardiography scans.

Moreover, the application of AI in diagnostics transcends cardiological needs. From detecting skin and breast cancer, eye diseases, pneumonia, to even predicting psychotic occurrences, AI’s potential in medical diagnostics is vast and promising. Neurological conditions like Parkinson’s disease can be identified through AI tools that examine speech patterns, predicting its onset and progression. In the realm of endocrinology, a study used machine learning models to foretell the onset of diabetes, revealing that a two-class augmented decision tree was most effective in predicting diabetes-associated variables.

Furthermore, the emergence of COVID-19 in late 2019 and the global threat that followed saw AI playing a crucial role in early detection and diagnosis. Numerous medical imaging tools, encompassing X-rays, CT scans, and ultrasounds, employed AI techniques to assist in the timely diagnosis of the disease. Recent studies have spotlighted AI’s efficacy in differentiating COVID-19 from other conditions, such as pneumonia, using imaging modalities like CT scans and X-rays. The surge in AI-based diagnostic tools, such as the deep learning model known as the transformer, facilitates efficient management of COVID-19 cases by offering rapid and precise analyses. Notably, an ImageNet-pretrained vision transformer was used to identify COVID-19 cases from chest X-ray images, showcasing the adaptability and precision of AI in response to pressing global health challenges.

Moreover, advancements in AI aren’t limited to diagnostic models alone. The field has seen the emergence of tools like Generative Adversarial Networks (GANs), which have considerably influenced radiological practices. Comprising a generator that produces images mirroring real ones, and a discriminator that differentiates between the two, GANs have the potential to redefine radiological operations. Such networks can replicate training images and create new ones with the training dataset’s characteristics. This technological advancement has not only aided in tasks like abnormal detection and image synthesis but has also posed challenges even for experienced radiologists, as discerning between GAN-generated and real images becomes increasingly intricate.
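
As a minimal sketch of the generator/discriminator pairing described above, the toy example below trains two tiny fully connected networks against each other. The dimensions and the random stand-in "images" are assumptions; a radiological GAN would use convolutional networks and real training data.

```python
# Toy GAN sketch: a generator and a discriminator trained adversarially.
# Sizes and data are illustrative stand-ins, not a radiology-grade model.
import torch
import torch.nn as nn

latent_dim, image_dim = 16, 64   # assumed sizes for illustration

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_images = torch.rand(32, image_dim) * 2 - 1   # stand-in batch of "real" images in [-1, 1]

for step in range(200):
    # 1) Train the discriminator to separate real images from generated ones.
    fake_images = generator(torch.randn(32, latent_dim)).detach()
    d_loss = bce(discriminator(real_images), torch.ones(32, 1)) + \
             bce(discriminator(fake_images), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to produce images the discriminator accepts as real.
    fake_images = generator(torch.randn(32, latent_dim))
    g_loss = bce(discriminator(fake_images), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```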

Education and research also stand to benefit immensely from such advancements. GANs have the potential to swiftly generate training material and simulations, addressing gaps in student understanding. As an example, if students struggle to differentiate between specific medical conditions in radiographs, GANs could produce relevant samples for clearer understanding. Additionally, GANs’ capacity to model placebo groups based on historical data can revolutionise clinical trials by minimising costs and broadening the scope of treatment arms.

Furthermore, the role of AI in offering virtual patient care cannot be overstated. In a time where in-person visits to medical facilities posed risks, AI-powered tools bridged the gap by facilitating remote consultations and care. Moreover, the management of electronic health records has been vastly streamlined due to AI, reducing the administrative workload of healthcare professionals. It’s also reshaping the dynamics of patient engagement, ensuring they adhere to their treatment plans more effectively.

The impact of AI on healthcare has transcended beyond diagnostics, imaging, and patient care, making significant inroads into drug discovery and development. AI-driven technologies, drawing upon machine learning, bioinformatics, and cheminformatics, are revolutionising the realm of pharmacology and therapeutics. With the increasing challenges and sky-high costs associated with drug discovery, these technologies streamline the processes and drastically reduce the time and financial investments required. Historical precedents, like the AI-based robot scientist named Eve, stand as a testament to this potential. Eve not only accelerated the drug development process but also ensured its cost-effectiveness.

AI’s capabilities are not just confined to the initial phase of scouting potential molecules in the field of drug discovery. There’s a promise that AI could engage more dynamically throughout the drug discovery continuum in the near future. The numerous AI-aided drug discovery successes in the literature are a testament to this potential. A notable instance is the work of the Toronto-based firm Deep Genomics. Harnessing the power of an AI workbench platform, it identified a novel genetic target and consequently developed the drug candidate DG12P1, aimed at treating a rare genetic variant of Wilson’s disease.

One of the crucial aspects of drug development lies in identifying novel drug targets, as this could pave the way for pioneering first-in-class clinical drugs. AI proves indispensable here. It not only helps in spotting potential hit and lead compounds but also facilitates rapid validation of drug targets and the subsequent refinement in drug structure design. Another noteworthy application of AI in drug development is its ability to predict potential interactions between drugs and their targets. This capability is invaluable for drug repurposing, enabling existing drugs to swiftly progress to subsequent phases of clinical trials.

Moreover, with the data-intensive nature of pharmacological research, AI tools can be harnessed to sift through massive repositories of scientific literature, including patents and research publications. By doing so, these tools can identify novel drug targets and generate innovative therapeutic concepts. For effective drug development, models can be trained on extensive volumes of scientific data, ensuring that the ensuing predictions or recommendations are rooted in comprehensive research.

Furthermore, AI’s applications aren’t just limited to drug discovery and design. It’s making tangible contributions in drug screening as well. Numerous algorithms, such as extreme learning machines, deep neural networks (DNNs), random forests (RF), support vector machines (SVMs), and nearest-neighbour classifiers, are now at the forefront of virtual screening. These are employed based on their synthesis viability and their capacity to predict in vivo toxicity and activity, thereby ensuring that potential drug candidates are both effective and safe.
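
As an illustration of this kind of virtual screening, the sketch below trains a random forest to rank candidate compounds by predicted activity. The 512-bit "fingerprints" and labels are randomly generated stand-ins; a real pipeline would compute fingerprints from chemical structures and use experimentally measured activity.

```python
# Virtual-screening sketch: rank candidate compounds with a random forest.
# Fingerprints and activity labels are random stand-ins for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(2000, 512))   # stand-in 512-bit molecular fingerprints
y = rng.integers(0, 2, size=2000)          # stand-in activity labels (1 = active)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

scores = clf.predict_proba(X_test)[:, 1]                   # ranking score per compound
print("held-out AUC:", round(roc_auc_score(y_test, scores), 3))

top_candidates = np.argsort(scores)[::-1][:10]             # compounds to prioritise for assay
```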

The proliferation of AI in various sectors has brought along with it a range of ethical and social concerns that intersect with broader questions about technology, data usage, and automation. Central among these concerns is the question of accountability. As AI systems become more integrated into decision-making processes, especially in sensitive areas like healthcare, who is held accountable when things go wrong? The possibility of AI systems making flawed decisions, often due to intrinsic biases in the datasets they are trained on, can lead to catastrophic outcomes. An illustration of such a flaw was observed in an AI application that misjudged pneumonia-related complications and potentially jeopardised patients’ health. These erroneous decisions, often opaque in nature due to the intricate inner workings of machine learning algorithms, further fuel concerns about transparency and accountability.

Transparency, or the lack thereof, in AI systems poses its own set of challenges. As machine learning models continually refine and recalibrate their parameters, understanding their decision-making process becomes elusive. This obfuscation, often referred to as the ‘black-box’ phenomenon, hampers trust and understanding. The branch of AI research known as “Explainable Artificial Intelligence (XAI)” attempts to remedy this by making the decision-making processes of AI models understandable to humans. Through XAI, healthcare professionals and patients can glean insights into the rationale behind diagnostic decisions made by AI systems. Furthermore, this enhances the trust quotient, as evidenced by studies that underscore the importance of visual feedback in fostering trust in AI models.
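
One concrete XAI technique in this spirit is a saliency map: the gradient of the model’s score with respect to the input image highlights the pixels that most influenced a prediction, which can then be shown to a clinician as visual feedback. A minimal sketch, using a generic pretrained classifier and a random stand-in image, follows.

```python
# Saliency-map sketch: which pixels most influence a classifier's prediction?
# Generic pretrained model and a random stand-in image, not a clinical system.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in for a medical image

output = model(image)
top_class = output.argmax(dim=1).item()
output[0, top_class].backward()                           # gradient of the top score w.r.t. pixels

saliency = image.grad.abs().max(dim=1).values             # per-pixel importance map, 224 x 224
print("most influential pixel index:", saliency.argmax().item())
```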

Another prominent concern is the potential reinforcement of existing societal biases. AI systems, trained on historically accumulated data, can inadvertently perpetuate and even amplify biases present in the data, leading to skewed and unjust outcomes. This is particularly alarming in healthcare, where decisions can be a matter of life and death. This threat is further compounded by data privacy and security issues. AI systems that process sensitive patient information become prime targets for cyberattacks, risking unauthorised access or tampering of data, with motives ranging from financial gain to malicious intent.

The rapid integration of AI technologies in healthcare underscores the need for robust governance. Proper governance structures ensure that regulatory, ethical, and trust-related challenges are proactively addressed, thereby fostering confidence and optimising health outcomes. On an international level, regulatory measures are being established to guide the application of AI in domains requiring stringent oversight, such as healthcare. The European Union, for instance, introduced the General Data Protection Regulation (GDPR), which came into force in 2018, setting forth data protection standards. More recently, the European Commission proposed the Artificial Intelligence Act (AIA), a regulatory framework designed to ensure the responsible adoption of AI technologies, mandating rigorous assessments for high-risk AI systems.

From a technical standpoint, there are further substantial challenges to surmount. For AI to be practically beneficial in healthcare settings, it needs to be user-friendly for healthcare professionals (HCPs). The technical intricacies involved in setting up and maintaining AI infrastructure, along with concerns of data storage and validity, often act as deterrents. AI models, while potent, are not infallible. They can manifest shortcomings, such as biases or a susceptibility to being easily misled. It is, therefore, imperative for healthcare providers to strategise effectively for the seamless implementation of AI systems, addressing costs, infrastructure needs, and training requirements for HCPs.

The perceived opaqueness of AI-driven clinical decision support systems often makes HCPs sceptical. This, combined with concerns about the potential risks associated with AI, acts as a barrier to its widespread adoption. It is thus imperative to emphasise solutions like XAI to bolster trust and overcome the hesitancy surrounding AI adoption. Furthermore, integrating AI training into medical curricula can go a long way in ensuring its safe and informed usage in the future. Addressing these challenges head-on, in tandem with fostering a collaborative environment involving all stakeholders, will be pivotal for the responsible and effective proliferation of AI in healthcare. Recent events, such as the COVID-19 pandemic and its global implications alongside the Ukraine war, underline the pressing need for transformative technologies like AI, especially when health systems are stretched thin.

Given these advancements, it is pivotal, however, to scrutinise the sources of this information. Although formal conflicts of interest should be declared in publications, authors may have subconscious biases for or against the implementation of AI in healthcare, which may influence their interpretation of the data. Discussions are inevitable regarding published research, particularly since the concept of ‘false positive findings’ came to the forefront in 2005 in a review by John Ioannidis (“Why Most Published Research Findings Are False”). The observation that journals are biased towards publishing papers with positive rather than negative findings both skews the total body of evidence and underscores the need for studies to be accurate, representative, and negligibly biased. When dealing with AI, where the risks are substantial, relying solely on justifiable scientific evidence becomes imperative. Studies used to support the implementation of AI systems should be mediated by a neutral and independent third party to ensure that any advances in AI system implementation are based solely on justified scientific evidence, and not on personal opinions, commercial interests, or political views.

The evidence reviewed undeniably points to the potential of AI in healthcare. There is no doubt that there is real benefit in a wide range of areas. AI can enable services to be run more efficiently, allow selection of patients who are most likely to benefit from a treatment, boost the development of drugs, and accurately recognise, diagnose, and treat diseases and conditions.

However, with these advancements come challenges. We identified some key areas of risk: the creation of good-quality big data and the importance of consent; data risks such as bias and poor data quality; the issue of the black box (lack of transparency of algorithms); data poisoning; and data security. Workforce issues were also identified: how AI works with the current workforce and the fear of workforce replacement; the risk of de-skilling; and the need for education, training, and embedding change. It was also identified that there is a current need for research into the use, cost-effectiveness, and long-term outcomes of AI systems. There will always be a risk of bias, error, and chance statistical improbability in research and published studies, fundamentally due to the nature of science itself. Yet the aim is to have a body of evidence that helps create a consensus of opinion.

In summary, the transformative power of AI in the healthcare sector is unequivocal, offering advancements that have the potential to reshape patient care, diagnostics, drug development, and a myriad of other domains. These innovations, while promising, come hand in hand with significant ethical, social, and technical challenges that require careful navigation. The dual-edged sword of AI’s potential brings to light the importance of transparency, ethical considerations, and robust governance in its application. Equally paramount is the need for rigorous scientific evaluation, with an emphasis on neutrality and comprehensive evidence to ensure AI’s benefits are realised without compromising patient safety and care quality. As the healthcare landscape continues to evolve, it becomes imperative for stakeholders to strike a balance between leveraging AI’s revolutionary capabilities and addressing its inherent challenges, all while placing the well-being of patients at the forefront.

Links

https://www.gs1ca.org/documents/digital_health-affht.pdf

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7670110/

https://www.who.int/emergencies/diseases/novel-coronavirus-2019/technical-guidance/naming-the-coronavirus-disease-(COVID-2019)-and-the-virus-that-causes-it

https://www.rcpjournals.org/content/futurehosp/9/2/113

https://doi.org/10.1016%2Fj.icte.2020.10.002

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9151356/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7908833/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8285156/

https://pubmed.ncbi.nlm.nih.gov/32665978

https://doi.org/10.1016%2Fj.ijin.2022.05.002

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8669585/

https://scholar.google.com/scholar_lookup?journal=Med.+Image+Anal.&title=Transformers+in+medical+imaging:+A+survey&author=F.+Shamshad&author=S.+Khan&author=S.W.+Zamir&author=M.H.+Khan&author=M.+Hayat&publication_year=2023&pages=102802&pmid=37315483&doi=10.1016/j.media.2023.102802&

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8421632/

https://www.who.int/docs/defaultsource/documents/gs4dhdaa2a9f352b0445bafbc79ca799dce4d.pdf

https://www.bbc.com/news/health-42357257

https://www.ibm.com/blogs/research/2017/1/ibm-5-in-5-our-words-will-be-the-windows-to-our-mental-health/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10057336/

https://doi.org/10.48550%2FarXiv.2110.14731

https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124

https://scholar.google.com/scholar_lookup?journal=Proceedings+of+the+IEEE+15th+International+Symposium+on+Biomedical+Imaging&title=How+to+fool+radiologists+with+generative+adversarial+networks?+A+visual+turing+test+for+lung+cancer+diagnosis&author=M.J.M.+Chuquicusma&author=S.+Hussein&author=J.+Burt&author=U.+Bagci&pages=240-244&

https://pubmed.ncbi.nlm.nih.gov/23443421

https://www.nuffieldbioethics.org/assets/pdfs/Artificial-Intelligence-AI-in-healthcare-and-research.pdf

https://link.springer.com/article/10.1007/s10916-017-0760-1

Turing’s Vision: Navigating the Landscape of Ethical and Safe AI

First published 2022; revised 2023

In the dawn of the artificial intelligence era, there is an imperative need to navigate the complexities of AI ethics and safety. Ensuring that AI systems are both safe and ethically sound is no longer just a theoretical concern but a pressing practical issue that affects the global threads of industry, governance, and society at large. Drawing insights from Leslie, D. (2019) in “Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector”, published by The Alan Turing Institute, this essay explores the varied dimensions of AI’s responsible design and implementation.

The Alan Turing Institute forges its position as an aspirational, world-leading hub that examines the technical intricacies that underpin safe, ethical, and trustworthy AI. Committed to fostering responsible innovation and pioneering research breakthroughs, the Institute aims to go beyond mere theoretical discourses. It envisions a future where AI not only advances in capabilities but also upholds the core values of transparency, fairness, robustness, and human-centered design. Such an ambition necessitates a commitment to advancing AI transparency, ensuring the fairness of algorithmic systems, forging robust systems resilient against external threats, and cultivating AI-human collaborations that maintain human control.

However, the quest to realise this vision is not an isolated endeavour. It requires broad, interdisciplinary collaborations, connecting the dots between technical experts, industry leaders, policy architects, and the public. Aligning with the UK government’s Industrial Strategy and meeting the burgeoning global demand for informed guidance in AI ethics, the Institute’s strategy serves as a blueprint for those committed to the responsible growth of AI. Yet it is essential to remember that the responsible evolution of AI is not just about mastering the technology but about understanding its implications for the broader context of our society.

The dawn of the information age has been marked by an extraordinary convergence of factors: the expansive availability of big data, the unparalleled speed and reach of cloud computing platforms, and the maturation of intricate machine learning algorithms. This synergy has propelled us into an era of unmatched human potential, characterised by a digitally interwoven world where the power of AI stands as a beacon of societal improvement.

Already, we witness the profound impact of AI across various sectors. Essential social domains such as healthcare, education, transportation, food supply, energy, and environmental management have all been beneficiaries of AI-driven innovations. These accomplishments, however significant they may appear now, are perhaps only the tip of the iceberg. AI’s very nature, its inherent capability to evolve and refine itself with increased access to data and surging computing power, guarantees its continuous ascent in efficacy and utility. As we navigate further into the information age, it’s conceivable that AI will soon stand at the forefront, guiding the progression of critical public interests and shaping the contours of sustainable human development.

Such a vision, where AI aids humanity in addressing its most pressing challenges, is undeniably exhilarating. Yet, like any frontier technology that’s rapidly evolving, AI’s journey is fraught with pitfalls. A steep learning trajectory ensures that errors, misjudgments, and unintended consequences are not just possible but inevitable. AI, despite its immense promise, is not immune to these challenges.

Addressing these challenges is not a mere recommendation but a necessity. It is imperative to prioritise AI ethics and safety to ensure its responsible evolution and to maximise its public benefit. This means an in-depth integration of social and ethical considerations into every facet of AI deployment. It calls for a harmonised effort, requiring data scientists, product managers, data engineers, domain experts, and delivery managers to work in unison. Their collective goal? To align AI’s development with ethical values and principles that not only prevent harm but actively enhance the well-being of communities that come under its influence.

The emergence of the field of AI ethics is a testament to this necessity. Born out of a growing recognition of the potential individual and societal harms stemming from AI’s misuse, poor design, or unforeseen repercussions, AI ethics seeks to provide a compass by which we navigate the AI-driven future responsibly.

Understanding the evolution of AI and its implications requires us to first recognise the genesis of AI ethics. The eminent cognitive scientist and AI trailblazer, Marvin Minsky, once described AI as the art of enabling computers to perform tasks that, when done by humans, necessitate intelligence. This fundamental definition highlights a crucial aspect of the discourse surrounding AI: humans, when undertaking tasks necessitating intelligence, are held to standards of reliability, accuracy, and sound reasoning. We expect them to justify their decisions, and to act with fairness, equity, and reasonableness in their interactions.

However, the rise and spread of AI technologies have reshaped this landscape. As AI systems take over myriad cognitive functions, they introduce a conundrum. Unlike humans, these algorithmic processes aren’t directly accountable for their actions, nor can they be held morally responsible for the outcomes they produce. Essentially, while AI systems exhibit a form of ‘smart agency’, they lack inherent moral responsibility, creating a discernible ethical void.

Addressing this void has become paramount, giving birth to a host of frameworks within AI ethics. One such framework is the FAST Track Principles, which stands for Fairness, Accountability, Sustainability, and Transparency. These principles are designed to bridge the gap between AI’s capabilities and its intrinsic moral void. To foster an environment conducive to responsible AI development, it is vital that every stakeholder, from data scientists to policy experts, familiarises themselves with the FAST Track Principles. These principles should guide actions and decisions throughout the AI project lifecycle, underscoring the idea that creating ethical AI is a collective endeavour.

Delving deeper into the principle of fairness, one must remember that while AI systems might project a veneer of neutrality, they are ultimately products of human design. Humans, with all their inherent biases and contextual limitations, play a pivotal role in AI’s creation. At any stage of an AI project, from data extraction to model building, the spectres of human error, prejudice, and misjudgment can introduce biases. Moreover, AI systems often derive their accuracy by analysing data that might encapsulate age-old societal biases and discriminations, further complicating the fairness equation.

Addressing fairness in AI is far from straightforward. There isn’t a singular, foolproof method to eliminate biases or ensure fairness. However, by adopting best practices that focus on fairness-aware design and implementation, there’s potential to create systems that yield just and equitable outcomes. One foundational approach to fairness is the principle of discriminatory non-harm. It mandates that AI innovations should not result in harm due to biased or discriminatory outcomes. This principle, while seemingly basic, serves as a cornerstone, directing the development and deployment of AI systems towards a more equitable and fair future.

The Principle of Discriminatory Non-Harm sets forth that AI system designers and users should be deeply committed to reducing biases and preventing discriminatory outputs, especially when dealing with social or demographic data. This implies a few specific obligations. First, AI systems should be built upon data that is representative, accurate, and generalisable, ensuring “Data Fairness.” Second, the systems’ design should not include any variables, features, or processes that are morally objectionable or unjustifiable – this is “Design Fairness.” The systems should also be crafted to avoid producing discriminatory effects on individuals or groups – ensuring “Outcome Fairness.” Lastly, the onus is on the users to be adequately trained to use AI systems responsibly, embodying “Implementation Fairness.”

When considering the concept of Accountability in AI, the best practices for data processing as mentioned in Principle 6 of the Data Ethics Framework come to mind. However, the ever-evolving AI landscape brings forward distinct challenges, especially in public sector accountability. Two major challenges emerge: the “accountability gap” and the multifaceted nature of AI production processes. Automated decisions, inherently, are not self-explanatory. Unlike human agents, statistical models and AI’s underlying infrastructure don’t bear moral responsibility, creating a void in accountability. Coupled with this is the intricate nature of AI project deliveries involving a myriad of stakeholders, making it a daunting task to pinpoint responsibility if an AI system’s implementation has adverse consequences.

To address these challenges, it’s imperative to adopt a comprehensive approach to accountability that encompasses both Answerability and Auditability. Answerability stresses that human creators and users of AI systems should take full responsibility for the algorithmically-driven decisions. They should be ready to provide clear, coherent, and non-technical explanations for these decisions, ensuring that every stage of the AI process is accountable. Auditability, on the other hand, focuses on how to hold these AI system designers and implementers accountable. It emphasises the demonstration of both responsible design and use practices, and the justifiability of the outcomes.

Another critical pillar is Sustainability. AI system designers and users must be continually attuned to the long-term and transformative effects their technologies might have on individuals and society at large. This proactive awareness ensures that the systems not only address the immediate needs but also consider the long-term societal impacts.

In tandem with sustainability is Safety. Besides considering the broader social ramifications of an AI system, it’s essential to address its technical sustainability and safety. Given that AI operates in an unpredictable environment, achieving technical safety becomes a challenging task. However, the importance of building a safe and reliable AI system cannot be overstated, especially when potential failures could result in harmful consequences and erode public trust. To achieve this, emphasis must be placed on the core technical objectives of accuracy, reliability, security, and robustness. This involves rigorous testing, consistent validation, and frequent reassessment of the system. Moreover, effective oversight mechanisms need to be integrated into the system’s real-world operation to ensure that it functions safely and as intended.

The intrinsic challenges of accuracy in artificial intelligence systems can be linked to the inherent complexities and unpredictability of the real world. When trying to model this chaotic reality, it’s a significant task to ensure that an AI system’s predictions or classifications are precise. Data noise, which is unavoidable, combined with the potential that a model might not capture all aspects of the underlying patterns and changes in data over time, can all contribute to these challenges.

On the other hand, the reliability of an AI system rests on its ability to consistently function in line with its intended design and purpose. This means that if a system is deemed reliable, users can trust that its operations will adhere to its set specifications, bolstering user confidence in the safety and predictability of its outcomes.

AI systems also face threats on the security front. Security is not just about safeguarding an AI system from potential external threats but also ensuring that the system’s architecture remains uncompromised and that any data or information within it remains confidential. This integrity is paramount, especially when considering the potential adversarial threats that AI systems might face.

Robustness in AI, meanwhile, centres on an AI system’s ability to function effectively even under less than ideal conditions. Whether these conditions arise from intentional adversarial actions, human errors, or misalignments in automated learning objectives, the system’s ability to maintain its integrity is a testament to its robustness.

One of the more nuanced challenges that machine learning models face is the phenomenon of concept drift. When the historical data, which informs the model’s understanding, becomes outdated or misaligned with current realities, the model’s accuracy and reliability can suffer. Therefore, staying attuned to changes in the underlying data distribution is vital. Ensuring that the technical team is aware of the latest research on detecting and managing concept drift will be crucial to the continued success of AI projects.
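
A lightweight way to stay attuned to such shifts is to compare incoming data against a reference window from training time. The sketch below uses a two-sample Kolmogorov-Smirnov test on a single feature as an assumed monitoring rule; production systems would track many features alongside the model’s own error rates.

```python
# Drift check sketch: compare recent inputs with a reference window from training time.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
reference = rng.normal(loc=0.0, scale=1.0, size=5000)   # a feature as seen during training
incoming = rng.normal(loc=0.4, scale=1.0, size=1000)    # recent production data, subtly shifted

statistic, p_value = ks_2samp(reference, incoming)
if p_value < 0.01:
    print(f"Possible drift (KS statistic = {statistic:.3f}): review and consider retraining.")
else:
    print("No significant shift detected in this window.")
```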

Another pressing concern in the realm of AI is adversarial attacks. These attacks cleverly manipulate input data, causing AI models to make grossly incorrect predictions or classifications. The subtle nature of these perturbations can lead to significant ramifications, especially in critical systems like medical imaging or autonomous vehicles. Recognising these vulnerabilities, there has been a surge in research in the domain of adversarial machine learning, aiming to safeguard AI systems from these subtle yet disruptive inputs.
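
One well-studied attack of this kind is the fast gradient sign method (FGSM), which nudges every pixel a small step in the direction that increases the model’s loss. The sketch below applies it to a generic pretrained classifier and a random stand-in image, purely to show how small the perturbation budget can be.

```python
# FGSM sketch: a small pixel-wise perturbation that can flip a model's prediction.
# Generic pretrained model and a random stand-in image, for illustration only.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in input image
label = model(image).argmax(dim=1)                        # the model's current prediction

loss = F.cross_entropy(model(image), label)
loss.backward()                                           # gradient of the loss w.r.t. pixels

epsilon = 0.01                                            # perturbation budget per pixel
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", label.item())
print("perturbed prediction:  ", model(adversarial).argmax(dim=1).item())
```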

Equally concerning is the threat of data poisoning, where the very data that trains an AI system is tampered with, causing the system to generate inaccurate or harmful outputs. This kind of attack can be especially sinister as it might incorporate ‘backdoors’ into the system, which when triggered, can cause malfunctions. Therefore, beyond technical solutions, it becomes imperative to source data responsibly and ensure its integrity throughout the data handling process. The emphasis should be on responsible data management practices to ensure data quality throughout the system’s lifecycle.

In the world of artificial intelligence, the term “transparency” has taken on a nuanced and specialised meaning. While the everyday usage of the term typically evokes notions of clarity, openness, and straightforwardness, in AI ethics, transparency becomes even more multifaceted. One aspect of this is the capacity for AI systems to be interpretable. That is, those interacting with an AI system should be able to decipher how and why the system made a particular decision or acted in a certain way. This kind of transparency is about shedding light on the internal workings of the often enigmatic AI mechanisms, allowing for greater understanding and trust.

Furthermore, transparency isn’t limited to merely understanding the “how” and “why” of AI decisions. It also encompasses the ethical considerations behind both the design and deployment of AI systems. When AI systems are said to be transparent, it implies that they can be justified as ethical, unbiased, trustworthy, and safety-oriented both in their creation and their outcomes. This dual focus on process and product is vital.

In developing AI, teams are tasked with several responsibilities to ensure this two-tiered transparency. First, from a process perspective, there is a need to assure all stakeholders that the entire journey of creating the AI system was ethically sound, unbiased, and instilled with measures ensuring trust and safety. This includes not just designing with these values in mind but also ensuring auditability at every stage.

Secondly, when it comes to the outcome or product of AI, there’s the obligation to make sure that any decision made by the AI system is elucidated in ways that are understandable to non-experts. The explanations shouldn’t merely regurgitate mathematical or technical jargon but should be phrased in relatable terms, reflecting societal contexts. Furthermore, the results or behaviours of the AI should be defensible, fitting within parameters of fairness, trustworthiness, and ethical appropriateness.

In addition to these tasks, there’s a broader need for professional and institutional transparency. Every individual involved in the AI’s development and deployment should adhere to stringent standards that emphasise values like integrity, honesty, and neutrality. Their primary allegiance should be to the public’s best interests, superseding other considerations.

Moreover, throughout the AI development process, there should be an open channel for public oversight. Of course, certain information may need to remain confidential for valid reasons, like ensuring bad actors can’t exploit the system. But, by and large, the emphasis should be on openness.

Transitioning into the structural aspects of AI development, a Process-Based Governance (PBG) Framework emerges as a crucial tool. Such a framework is pivotal for integrating ethical considerations and best practices seamlessly into the actual development process. The guide might delve into specifics like CRISP-DM, but it’s worth noting that the principles of responsible AI development can be incorporated into other workflow models, including KDD and SEMMA. Adopting such a framework helps ensure that the values underpinning ethical AI are not just theoretical but find active expression in every phase of the AI’s life cycle.

Alan Turing’s simple sketch in 1936 was nothing short of revolutionary. With just a linear tape, symbols, and a set of rules, he demystified the very essence of calculation, giving birth to the conceptual foundation of the modern computer. His Turing machine wasn’t just a solution to the enigma of effective calculation; it was the conceptual forerunner of the digital revolution we live in today. This innovative leap, stemming from a quiet room at King’s College, Cambridge, is foundational to our digital landscape.

Fast forward to our present day, and we find ourselves immersed in a world where the lines between the physical and digital blur. The seamless interplay of connected devices, sophisticated algorithms, and vast cloud computing platforms is redefining our very existence. Technologies like the Internet of Things and edge computing are not just changing the way we live and work; they’re reshaping the very fabric of our society. AI is becoming more than just a tool or a technology; it is rapidly emerging as the fulcrum upon which our future balances. The possibilities it presents, both optimistic and cautionary, are monumental. It’s essential to realise that the trajectory of AI’s impact lies in our hands. The decisions we make today will shape the society of tomorrow, and the implications of these choices weigh heavily on our collective conscience.

It’s paramount to see that artificial intelligence isn’t just about codes and algorithms. It’s about humanity, our aspirations, our values, and our shared vision for the future. In many ways, the guide on AI ethics and safety serves as a compass, echoing Turing’s ethos by emphasising that the realm of AI, at its core, remains a profoundly human domain. Every line of code, every algorithmic model, every deployment carries with it a piece of human intention, purpose, and responsibility.

In essence, understanding the ethics and safety of AI isn’t just about mitigating risks or optimising outputs. It’s about introspection and realising that behind every technological advancement lie human choices. Responsible innovation isn’t just a catchphrase; it’s a call to action. Only by staying grounded in our shared ethical values and purpose-driven intentions can we truly harness AI’s potential. Let’s not just be passive recipients of technology’s gifts. Instead, let’s actively shape its direction, ensuring that our collective digital future resonates with our shared vision of humanity’s greatest aspirations.

Links

https://www.turing.ac.uk/news/publications/understanding-artificial-intelligence-ethics-and-safety

https://www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety.pdf

Use of Artificial Intelligence in the UK Police Force

First published 2022; revised 2023

Artificial Intelligence (AI) has emerged as a groundbreaking technology with immense potential to transform various sectors, including law enforcement. In the United Kingdom, the integration of AI into the police force has garnered both attention and scrutiny. Despite one report calling for police forces in the UK to end entirely their use of predictive mapping programs and individual risk assessment programs, AI in policing is growing and shows no signs of letting up; according to Deloitte, more than half of UK police forces had planned to invest in AI by 2020. The use of AI in the UK police force brings with it many benefits, challenges, and ethical considerations.

The adoption of AI technologies in the UK police force offers several tangible advantages. One of the primary benefits is enhanced predictive policing. AI algorithms can analyse vast amounts of historical crime data to identify patterns, trends, and potential crime hotspots. This predictive capability allows law enforcement agencies to allocate resources more effectively and proactively prevent criminal activities. Moreover, AI-powered facial recognition technology has been employed to aid in identifying suspects or missing persons. This technology can scan through large databases of images and match them with real-time surveillance footage, assisting officers in locating individuals quickly and efficiently.

However, the integration of AI in policing is not without its challenges. One of the major concerns is the potential for bias in AI algorithms. If the training data used to develop these algorithms is biased, the technology can inadvertently perpetuate and amplify existing biases, leading to discriminatory outcomes, particularly against minority groups. Ensuring fairness and equity in AI-driven law enforcement practices remains a significant hurdle.

Another issue is privacy infringement. The use of facial recognition technology and other surveillance methods can raise concerns about citizens’ right to privacy. Striking a balance between public safety and individual rights is crucial, as unchecked AI implementation could erode civil liberties. Ethical considerations surrounding AI implementation in the UK police force are paramount. Transparency in how AI algorithms operate and make decisions is essential to maintain public trust. Citizens have the right to understand how these technologies are used and what safeguards are in place to prevent misuse.

Additionally, accountability is crucial. While AI can aid decision-making, final judgments should remain within human control. Police officers should not blindly follow AI recommendations but rather use them as tools to support their expertise. Challenges such as bias, privacy concerns, and ethical considerations must be carefully addressed to ensure that AI is a force for positive change and does not infringe upon citizens’ rights or exacerbate societal inequalities. As the technology continues to evolve, it is imperative that the UK police force strikes a balance between harnessing AI’s capabilities and upholding fundamental principles of justice and fairness.

Links

https://committees.parliament.uk/event/18021/formal-meeting-oral-evidence-session/

https://www.nesta.org.uk/blog/making-case-ai-policing/

Quantum Computing and the Future of Cryptography

First published 2022; revised 2023

In recent years, there has been remarkable growth in the realm of quantum computing, signified by quantum computers possessing 13, 53, and even 433 qubits. This advancement is largely attributed to the notable influx of both public and private investments and initiatives. However, the efficacy of a quantum computer is not merely determined by the sheer number of qubits it houses. The quality of these qubits is equally paramount. Achieving the “quantum advantage” — where a quantum computer surpasses the capabilities of classical computers — hinges on both these factors. The possibility of quantum computers soon delivering this advantage raises the question: what implications does this have for our daily lives?

One of the most profound impacts is foreseen in the field of cryptography. In our modern, information-driven society, the importance of privacy cannot be overstated. Every day, vast quantities of confidential data traverse the internet. The bedrock ensuring the security of these exchanges is computational complexity. The encryption methods we rely upon today are founded on mathematical problems so intricate that, for any would-be interceptor, decoding this information would be a herculean task, taking an inconceivable number of years. A quintessential example of this security methodology is the RSA protocol, named after its inventors Ron Rivest, Adi Shamir, and Leonard Adleman.

The robustness of the RSA protocol is firmly rooted in the arduous task of factorising large numbers, particularly those that are the product of two large prime numbers. Consider, for example, the process of factorising a number like 15. On a basic level, this is relatively simple because 15 is the product of 3 and 5. However, when we delve into the realm of RSA, the numbers involved are exponentially larger, often hundreds of digits long. To decrypt a message encoded using RSA, one must break down such a colossal number into its prime factors. With the processing power of today’s classical computers, this task is equivalent to searching for a needle in a haystack the size of a planet. Even the world’s most powerful supercomputers would need several lifetimes to make a dent in this problem.

But with the emergence of quantum computing, the landscape of encryption is on the brink of a seismic shift. Quantum computers, leveraging the principles of quantum mechanics, can process multiple possibilities simultaneously. Shor’s algorithm, for instance, is a quantum algorithm that promises to factorise large numbers exponentially faster than classical methods. If an operational quantum computer were to implement Shor’s algorithm, what would currently take a supercomputer millions of years to compute could potentially be achieved by the quantum computer in just a few hours, or even minutes. This dramatic acceleration in processing capabilities not only threatens the RSA protocol but challenges the very foundation upon which much of our digital security rests.
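
Below is a toy sketch of the RSA mechanics described above, using deliberately tiny primes so the arithmetic stays visible. Real keys use primes hundreds of digits long, and the naive trial-division factoriser at the end is precisely the step that becomes hopeless at that scale, and that Shor’s algorithm would accelerate on a quantum computer. (The modular-inverse call assumes Python 3.8 or later.)

```python
# Toy RSA with deliberately tiny primes (real keys use primes hundreds of digits long).
p, q = 61, 53
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent: modular inverse of e (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)
recovered = pow(ciphertext, d, n)
print(n, ciphertext, recovered)   # recovered equals the original message, 65

# Breaking RSA amounts to factorising n. Trial division works instantly here,
# but its cost grows roughly with sqrt(n), which is hopeless for 600-digit moduli.
def factorise(n):
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1

print(factorise(n))               # (53, 61)
```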

This looming threat has spurred cryptographers into action, leading to the pursuit of “quantum-safe security.” Two primary strategies have emerged in this quest: post-quantum cryptography and quantum key distribution.

Post-quantum cryptography seeks to uphold the time-tested security paradigm of computational complexity. The challenge is to unearth mathematical problems that remain insurmountable, even for quantum computers. Researchers have fervently embarked on this mission, and in 2022, the National Institute of Standards and Technology (NIST) announced its selected candidates for these novel algorithms. A salient advantage of post-quantum cryptography is its software basis, making it cost-effective and seamlessly integrable with current infrastructures. However, it’s imperative to acknowledge its inherent risk. The durability of these algorithms against quantum onslaughts is yet unproven, and there remains the remote possibility that even classical computers might decipher them.

On the other hand, quantum key distribution diverges from complexity-based security, anchoring its strength in the fundamental laws of quantum physics. Here, secret keys are disseminated using qubits, and any unauthorised interference is instantly detectable due to quantum principles. While its reliability is validated by repeated experiments, the need for specialised quantum hardware makes it a costly endeavour and poses challenges for integration with existing systems. The debate between these two methods often polarises opinions. However, a holistic perspective suggests a symbiotic approach, harnessing the strengths of both post-quantum cryptography and quantum key distribution. Such a fusion would compel hackers to grapple simultaneously with intricate computational challenges and the unpredictable realm of quantum mechanics, fortifying our digital defences for the quantum era ahead.
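
The sketch below simulates the intuition behind quantum key distribution using the well-known BB84 intercept-and-resend scenario: when an eavesdropper measures the qubits in transit, roughly a quarter of the sifted key positions disagree, revealing the intrusion. This is a classical simulation of the statistics, not real quantum hardware, and the number of qubits is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Alice prepares random bits in randomly chosen bases (0 = rectilinear, 1 = diagonal).
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)

# An eavesdropper measures each qubit in a random basis and re-sends it.
eavesdrop = True
if eavesdrop:
    eve_bases = rng.integers(0, 2, n)
    # Where Eve guesses the basis wrong, the re-sent bit is effectively random.
    transmitted = np.where(eve_bases == alice_bases, alice_bits, rng.integers(0, 2, n))
    sent_bases = eve_bases
else:
    transmitted, sent_bases = alice_bits, alice_bases

# Bob measures in his own random bases; a wrong basis yields a random outcome.
bob_bases = rng.integers(0, 2, n)
bob_bits = np.where(bob_bases == sent_bases, transmitted, rng.integers(0, 2, n))

# Sifting: keep only the positions where Alice's and Bob's bases agree.
keep = alice_bases == bob_bases
error_rate = np.mean(alice_bits[keep] != bob_bits[keep])
print(f"error rate in sifted key: {error_rate:.2%}")  # ~25% with Eve, ~0% without
```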

Links

https://www.forbes.com/sites/forbestechcouncil/2023/04/18/15-significant-ways-quantum-computing-could-soon-impact-society/

https://www.digicert.com/blog/the-impact-of-quantum-computing-on-society

https://www.investmentmonitor.ai/tech/what-is-quantum-computing-and-how-will-it-impact-the-future/

Artificial Intelligence in Dentistry

First published 2021; revised 2023

Artificial intelligence (AI) has significantly expanded its role and importance across various sectors, including dentistry, where it can replicate human intelligence to perform intricate predictive and decision-making tasks within the healthcare field, especially in the context of endodontics. AI models, such as convolutional neural networks and artificial neural networks, have displayed a wide array of applications in endodontics. These applications encompass tasks like analysing root canal system anatomy, predicting the viability of dental pulp stem cells, determining working lengths, detecting root fractures and periapical lesions, and forecasting the success of retreatment procedures.

The potential future applications of AI in this domain have also been explored, encompassing areas such as appointment scheduling, patient care enhancement, drug-drug interaction assessments, prognostic diagnostics, and robotic-assisted endodontic surgery. In the realm of disease detection, assessment, and prognosis, AI has demonstrated impressive levels of accuracy and precision. It has the capacity to contribute to the advancement of endodontic diagnosis and therapy, ultimately improving treatment outcomes.

AI operates through two distinct phases: the “training” phase and the “testing” phase. During the training phase, the model’s parameters are established using training data drawn from previous instances, which could include patient data or data from diverse example datasets. The model, with these learned parameters, is then applied to separate test datasets to assess its performance.
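
A minimal sketch of this two-phase workflow is shown below, using synthetic data as a hypothetical stand-in for patient records; the classifier and split ratio are arbitrary illustrative choices.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for patient data: rows are cases, columns are clinical features.
X, y = make_classification(n_samples=500, n_features=12, random_state=42)

# Hold out a test set that plays no part in establishing the model's parameters.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)             # "training" phase: parameters set from training data
accuracy = model.score(X_test, y_test)  # "testing" phase: parameters applied to unseen cases
print(f"held-out accuracy: {accuracy:.2f}")
```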

In the past, artificial intelligence models were often described as “black boxes” because they produced outputs without providing any explanation regarding the rationale behind their decisions. However, contemporary AI can function differently. When given an input, such as an image, it can generate a “heatmap” alongside its prediction, like identifying the image as a “cat.” The heatmap visually represents the input variables, such as the pixels, that influenced the prediction. Consequently, it becomes possible to distinguish predictions based on dependable, pertinent features from those based on spurious ones. For instance, cat photos can be categorised on the basis of features like the cat’s ears and nose, making the process both safer and more relevant.
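
One simple way to produce the kind of heatmap described here is occlusion sensitivity: blank out one patch of the input at a time and record how much the model’s score drops. The sketch below uses a dummy scoring function as a stand-in for a real classifier; the saliency methods used in practice (such as gradient-based maps) are more elaborate.

```python
import numpy as np

def occlusion_heatmap(image, score_fn, patch=4):
    """Score drop when each patch is blanked out; a larger drop marks a more influential region."""
    baseline = score_fn(image)
    h, w = image.shape
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heatmap[i // patch, j // patch] = baseline - score_fn(occluded)
    return heatmap

# Dummy "model": responds only to brightness in the upper-left corner of the image.
def score_fn(img):
    return float(img[:8, :8].mean())

image = np.random.default_rng(0).random((32, 32))
heatmap = occlusion_heatmap(image, score_fn)
print(np.round(heatmap, 2))  # only the upper-left patches register a score drop
```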

Thanks to the rapid advancement of three key pillars of contemporary AI technology—namely, the proliferation of big data from digital devices, increased computational capacity, and the refinement of AI algorithms—AI applications have gained traction in enhancing the convenience of people’s lives. In the field of dentistry, AI has found its place across all dental specialties, including operative dentistry, periodontics, orthodontics, oral and maxillofacial surgery, and prosthodontics.

The majority of AI applications in dentistry have been channelled into diagnostic tasks that rely on radiographic or optical images. Tasks beyond image-based functions have seen less uptake, primarily due to challenges related to data availability, data consistency, and the computational capabilities required for handling three-dimensional (3D) data.

In the field of endodontics, artificial intelligence is becoming increasingly significant, with its importance rising in both treatment planning and disease diagnosis. AI-based networks have the capability to detect even the most subtle changes, down to the level of a single pixel, which might go unnoticed by the human eye. A few of its applications in endodontics include analysing root canal system anatomy, detecting root fractures, periapical lesions, and dental caries, locating the minor apical foramen, and predicting the success of retreatment.

Evidence-based dentistry (EBD) stands as the benchmark for decision-making in the dental profession, with AI machine learning (ML) models serving as complementary tools that learn from human expertise. ML can be viewed as an additional valuable resource to aid dental professionals across various stages of clinical cases. The dental industry experiences swift development and adoption of emerging technologies, and among these, AI stands out as one of the most promising. It offers significant advantages, including high accuracy and efficiency when trained on unbiased data with a well-optimised algorithm. Dental professionals can regard AI as an auxiliary resource to alleviate their workload and enhance the precision and accuracy of tasks related to diagnosis, decision-making, treatment planning, forecasting treatment results, and predicting disease outcomes.

Links

https://www.cbsnews.com/news/ai-artificial-intelligence-dentists-health-care/

https://instituteofdigitaldentistry.com/news/the-role-of-ai-in-dentistry/

https://adanews.ada.org/ada-news/2023/june/artificial-intelligence-and-dentistry/

https://www.frontiersin.org/articles/10.3389/fdmed.2023.1085251/full

https://head-face-med.biomedcentral.com/articles/10.1186/s13005-023-00368-z

Issues Surrounding Black Box Algorithms in Surveillance

First published 2021; revised 2023

The rapid advancement of technology has transformed the landscape of surveillance, enabling the collection and analysis of vast amounts of data for various purposes, including security and law enforcement. Black box algorithms, also known as opaque or inscrutable algorithms, are complex computational processes that generate outputs without offering clear insights into their decision-making mechanisms. While these algorithms have demonstrated impressive capabilities, their use in surveillance systems raises significant concerns such as issues of transparency, accountability, bias, and potential infringements on civil liberties.

One of the primary problems with black box algorithms is their lack of transparency. These algorithms make decisions based on intricate patterns and correlations within data, which makes it difficult for even their developers to fully comprehend their decision-making processes. This opacity prevents people under surveillance from understanding why certain actions or decisions are taken against them. This lack of transparency raises questions about the legitimacy of the surveillance system, as people have a right to know the basis on which they are monitored.

The complexity of black box algorithms also creates challenges in attributing responsibility for any errors or unjust actions. If a surveillance system using black box algorithms produces incorrect outcomes or infringes a person’s rights, it becomes challenging to hold anyone accountable. This accountability gap undermines the principles of justice and fairness and leaves people without recourse in case of harm.

Black box algorithms can inherit biases present in the data they are trained on. Surveillance systems using biased data can perpetuate and even amplify existing societal biases, leading to discriminatory outcomes. For example, if historical data reflects biased policing practices, a black box algorithm trained on such data might disproportionately target certain demographic groups, exacerbating social inequalities and eroding trust in law enforcement agencies.

The use of black box algorithms in surveillance also raises concerns about privacy and civil liberties. When these black box algorithms analyse and interpret personal information without clear guidelines, they may invade people’s privacy rights. As surveillance becomes more pervasive and intrusive, people might feel like their fundamental rights are being violated, which might cause societal unrest and resistance to the use of surveillance using black box algorithms.

The implementation of black box algorithms in surveillance often happens without enough public oversight or informed consent. This lack of transparency can lead to public mistrust because people are left in the dark about the extent and nature of the surveillance practices employed by authorities. Effective governance and democratic control over surveillance are compromised when decisions are made behind a shroud of complexity. To address these issues, it is essential to strike a balance between technological innovation and safeguarding individual rights. Policymakers, technologists, and civil society must collaborate to develop comprehensive regulations and frameworks that ensure transparency, accountability, and the protection of civil liberties in the ever-evolving landscape of surveillance technology.

Links

https://www.eff.org/deeplinks/2023/01/open-data-and-ai-black-box

https://towardsdatascience.com/black-box-theres-no-way-to-determine-how-the-algorithm-came-to-your-decision-19c9ee185a8

https://policyreview.info/articles/analysis/black-box-algorithms-and-rights-individuals-no-easy-solution-explainability

AI in Healthcare: Navigating Biases and Inequities During COVID-19

First published 2021

The COVID-19 pandemic has hit disadvantaged communities particularly hard, worsened by systemic racism, marginalisation, and structural inequality, leading to adverse health outcomes. These communities have faced increased economic instability, higher disease exposure, and more severe infections and deaths. In the realm of health informatics, AI technologies play a crucial role. However, inherent biases in their algorithms can unintentionally amplify these existing inequalities. This issue is of significant concern in managing COVID-19, where biased AI models might adversely affect underrepresented groups in training datasets, or deepen health disparities in clinical decision-making.

Health inequalities in AI systems are often due to issues like unrepresentative training data and development biases. Vulnerable groups are frequently underrepresented, owing to limited healthcare access or biases in data collection, and the result is AI systems that are not as effective for these groups. Furthermore, failing to include important demographic variables in AI models can lead to unequal performance across different subgroups, disproportionately affecting vulnerable populations. This highlights the importance of creating AI systems in healthcare that are inclusive and equitable. Addressing biases and ensuring fair use of AI is vital to reduce health inequalities, particularly during the pandemic.

In healthcare, AI technologies depend heavily on large datasets for their algorithms. Yet, these datasets can carry biases from existing practices and institutional norms. Consequently, AI models developed with these biased datasets often replicate existing inequities. In clinical and public health settings, a range of factors contribute to biases in AI systems. Biased judgment and decision-making, discriminatory healthcare processes, policies, and governance can influence various sources of data, including electronic health records, clinical notes, training curriculums, clinical trials, academic studies, and public health monitoring records. For example, during clinical decision-making, established biases against marginalised groups, such as African American and LGBTQ+ patients, can influence the clinical notes taken by healthcare workers.

Natural language processing AI, which reads written text and interprets it for coding, can also be a source of unconscious bias. It can process medical documents and code them as data, which is then used to make inferences from large datasets. If an AI system interprets medical notes containing well-established human biases, such as the disproportionate recording of particular questions towards African American or LGBTQ+ patients, it could learn spurious links involving these characteristics. Consequently, these real-world biases will be silently reinforced and multiplied in the AI system, potentially leading to systematic racial and homophobic biases.

When these notes, often in free text, are used by natural language processing technologies to identify symptom profiles or phenotypic characteristics, the biases inherent in them are also likely to be transferred into the AI systems. This cycle of bias from human to machine perpetuates and amplifies discriminatory patterns within AI applications in healthcare. A related problem arises when, during development, an AI model learns a poor approximation of the true relationships in the data: at least some of its outputs are then likely to be incorrect, an issue known as poor ‘Predictive Accuracy’. Additionally, biases originating from the raw data itself present another challenge: even with access to sufficient ‘Big Data’ during development, the algorithm’s use could lead to clinical errors.

These datasets, forming the foundation of data-driven AI and machine learning models, reflect complex and historically situated practices, norms, and attitudes. As a result, AI models used for diagnosis or prognosis might incorporate biases from previous inequitable practices. Using models trained on these datasets could reinforce or amplify discriminatory structures.

The risk of such discrimination is particularly acute during the COVID-19 pandemic. Hospitals are increasingly using natural language processing technologies to extract diagnostic information from various clinical documents. As these technologies are adapted for identifying symptoms of SARS-CoV-2 infection, the potential for embedding inequality in these AI models increases. If human biases are recorded in clinical notes, these discriminatory patterns are likely to infiltrate the AI models based on them. Moreover, if these models are trained on electronic health records that are unrepresentative or incomplete, reflecting disparities in healthcare access and quality, the resulting AI systems will likely perpetuate and compound pre-existing structural discrimination. This situation highlights the need for more careful and unbiased data collection and model training to ensure AI technologies in healthcare do not exacerbate existing inequalities. It is important to be aware of the potential for data biases, so that during the development of AI systems, the risks can be mitigated, and monitoring can take place to ensure that AI systems do not create biased outputs.

The datasets used to train, test, and validate Artificial Intelligence (AI) models often do not adequately represent the general public. This is especially evident in healthcare, where datasets like electronic health records, genome databases, and biobanks frequently miss out on capturing data from those who have sporadic or limited access to healthcare. This often includes minority ethnic groups, immigrants, and socioeconomically disadvantaged individuals. The issue of representativeness is further compounded by the increased reliance on digital technologies for health monitoring, such as smartphones and symptom tracking apps. In the UK, for example, a significant portion of the population lacks essential digital skills, and a notable percentage do not own smartphones. This digital divide means that datasets derived from mobile technologies and social media may not include or may under-represent individuals without digital access. If an AI system is trained on a specific dataset, and the algorithm is then applied to a dataset with a slightly different distribution of characteristics (skin colour, ethnicity, age, etc.), a phenomenon called ‘Dataset Shift’ can result, which in turn can lead to erroneous results. A recent study (Yan et al., 2020) found that dataset shift can even be caused by scans acquired on MRI machines from different manufacturers.
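
As a small illustration of how dataset shift might be flagged before deployment, the sketch below compares the distribution of a single hypothetical feature (for example, patient age) in the training cohort against the population the system actually encounters, using a two-sample Kolmogorov-Smirnov test. The populations and the significance threshold are invented for the example.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical feature (e.g. patient age) in the training cohort...
train_ages = rng.normal(loc=55, scale=12, size=5000)
# ...and in the population the deployed system actually encounters.
deploy_ages = rng.normal(loc=47, scale=15, size=5000)

stat, p_value = ks_2samp(train_ages, deploy_ages)
if p_value < 0.01:
    print(f"distribution shift detected (KS statistic {stat:.3f}); retraining or recalibration may be needed")
else:
    print("no significant shift detected for this feature")
```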

Biased datasets in biomedical AI, like those combining pervasive sensing with electronic health records from certain hospitals, exacerbate unrepresentativeness. This becomes critical when disease prevalence and risk factors, which vary across populations, are not adequately represented, leading to AI models with lower sensitivity and underdetection of conditions.

The COVID-19 pandemic highlights this issue, with health data silos in wealthier areas creating biased datasets that, if used for AI training, result in unfair outcomes. Such biases are further compounded by institutional racism and implicit biases in AI development, affecting design choices and leading to discriminatory health-related AI outcomes. This is particularly evident during the pandemic, where urgent solution-seeking and top-down decision-making amplify disparities.

In data handling, errors and improper consolidation can introduce biases against disadvantaged groups. Important design choices, such as including personal data like ethnicity, can significantly affect AI performance for these groups, sometimes embedding structural racism in clinical tools. For instance, consider an AI system developed in the US using a large volume of suitable Big Data and proven to be highly accurate for US patients. If this same system is employed in a UK hospital for patient diagnosis, it might result in a higher rate of misdiagnoses and errors for UK patients. This discrepancy could arise from differences between the AI’s training data and the data it processes subsequently. The training data from the US may include diverse groups of people with variations in ethnicity, race, age, sex, socioeconomic, and environmental factors, influencing the algorithm’s correlations. However, when this system is applied to the UK population, which has its own unique diversity, the AI might deliver less accurate results due to algorithmic bias. This inaccuracy could disproportionately affect certain minority groups in the UK, stemming from a mismatch between the representation in the AI’s training data sets and the data it is later used on.

These examples emphasise the importance of cautious and responsible AI implementation, particularly in critical public health scenarios. It is essential to balance innovative AI development with a keen awareness of health inequities and potential biases. Developing AI safely in healthcare requires a comprehensive approach that encompasses bias mitigation, clinical expertise, community involvement, and an understanding of social determinants such as race and socioeconomic status. Stakeholders must be vigilant about AI biases, utilising public health and ethical frameworks to verify the appropriateness and safety of AI applications. Policymaking in this realm should be inclusive, engaging all stakeholders and prioritising informed consent. Overall, addressing systemic racism and structural inequities is fundamental to ensuring that AI contributes to reducing inequalities rather than perpetuating them.

Links

Yan, W., Huang, L., Xia, L., Gu, S., Yan, F., Wang, Y., & Tao, Q. (2020). MRI Manufacturer Shift and Adaptation: Increasing the Generalizability of Deep Learning Segmentation for MR Images Acquired with Different Scanners. Radiology: Artificial Intelligence, 2(4), e190195. https://doi.org/10.1148/ryai.2020190195

https://doi.org/10.1073/pnas.1900654116

https://doi.org/10.1093/biostatistics/kxz041

https://doi.org/10.1136/bmj.n304

The Role of Artificial Intelligence in Predictive Policing

First published 2021; revised 2023

The advent of the 21st century has brought with it technological innovations that are rapidly changing the face of policing. One such groundbreaking technology is artificial intelligence (AI).

In the United Kingdom, the complexities of implementing AI-driven predictive policing models have been evident in the experiences of the Durham Constabulary. Their ambition was to craft an algorithm that could more accurately gauge the risk a potential offender might pose, guiding the police force in their bail decisions. However, it became evident that the algorithm had a potentially discriminatory bias against impoverished individuals.

The core of the issue lies in the data points chosen. One might believe these data-driven approaches are neutral, using pure, objective information to make decisions. But Durham Constabulary’s inclusion of postcodes as a data determinant raised eyebrows. Using postcodes was found to reinforce negative stereotypes associated with certain neighbourhoods, indirectly causing repercussions like increasing house insurance premiums and reducing property values for everyone in those areas. This revelation prompted the removal of postcode data from the algorithm.

Yet, despite these modifications, inherent issues with data-driven justice persist. Bernard Harcourt, a prominent US law professor, describes a phenomenon termed “the ratchet effect.” As police increasingly rely on AI predictions, individuals and communities that have previously been on their radar continue to be heavily scrutinised. This spirals into these individuals being profiled even more, leading to the identification of more offences within these groups. On the flip side, those who haven’t been heavily surveilled by the police continue their offences undetected, escaping this “ratchet effect.” A clear example of this is the disparity between a street drug user and a middle-class professional procuring drugs online. The former becomes more ensnared in this feedback loop, while the latter largely escapes scrutiny.

The allure of “big data” in policing is undeniable. The potential benefits are substantial. Police resources, increasingly limited, can be more strategically deployed. Bail decisions can be streamlined to ensure only high-risk individuals are incarcerated before trial. The allure rests in proactive rather than reactive policing – addressing crimes before they even occur, thereby saving both resources and the immeasurable societal costs associated with offences.

With advancements in technology, police forces worldwide are leaning heavily into this predictive model. In the US, intricate datasets, considering factors ranging from local weather patterns to social media activity, are used to anticipate potential crime hotspots. Some cities employ acoustic sensors, hidden throughout urban landscapes, to detect and predict gunfire based on background noises.

Recently, New Zealand police have integrated AI tools like SearchX to enhance their tactics. This tool, developed in light of rising gun violence and tragic events such as the death of police constable Matthew Hunt, is pivotal in instantly drawing connections between suspects, locations, and other risk-related factors, emphasising officer safety. However, the application of such tools raises serious questions concerning individual privacy, technological bias, and the adequacy of existing legal safeguards.

Given the clandestine nature of many of these AI programmes, New Zealanders possess only a fragmented understanding of the extent to which the police are leveraging these technologies. Cellebrite, a prominent tool that extracts personal data from smartphones and accesses a broad range of social media platforms, and BriefCam, a tool that synthesises video footage, including facial recognition and vehicle licence plates, are both known to be in use. With tools like BriefCam, the police have managed to exponentially speed up the process of analysing CCTV footage. Still, the deployment of certain tools, like Clearview AI, without the necessary approvals underscores the overarching concerns around transparency.

The major allure of AI in policing is its touted capability to foresee and forestall criminal activities. Yet, the utilisation of tools like Cellebrite and BriefCam comes with the palpable erosion of personal privacy. Current legislation, such as The Privacy Act 2020, permits police to gather and scrutinise personal data, sometimes without the knowledge or consent of the individuals in question.

Moreover, AI tools are not exempt from flaws. Their decisions are often influenced by the data they’re trained on, which can contain biases from historical practices and societal prejudices. Often, there’s a predisposition to place unwarranted trust in AI-driven decisions. For instance, in some US cities, AI-driven tools have mistakenly predicted higher crime rates in predominantly African-American neighbourhoods, reflecting historical biases rather than real-time threats. Even when they might be less accurate than human judgement, the allure of seemingly objective technology can override caution. This over-reliance can inadvertently spotlight certain individuals, like an innocent individual repeatedly misidentified due to algorithmic errors, sidelining other potential suspects.

Furthermore, biased algorithms have been observed to disproportionately affect the economically disadvantaged and ethnic minorities. A study from MIT Media Lab revealed that certain facial recognition software had higher error rates for darker-skinned individuals, leading to potential misidentifications. The use of AI in predictive policing, if informed by data from heavily surveilled neighbourhoods, can perpetuate existing prejudices. For example, if a neighbourhood with a high immigrant population is over-policed due to historical biases, AI might predict higher crime rates there purely based on past data, rather than the current situation. Such skewed data guides more police attention to these areas, further intensifying the disparities, creating a vicious cycle where over-policing results in more recorded incidents, which in turn results in even more policing.

Locally, concerns around transparency, privacy, and the handling of “dirty data”—information already tinged with human biases—have been raised in the context of the New Zealand government’s AI usage. Unfortunately, a legal structure tailored to the policing applications of AI is non-existent in New Zealand. While voluntary codes like the Australia New Zealand Police Artificial Intelligence Principles and the Algorithm Charter for Aotearoa New Zealand lay down ethical and operational guidelines, they fall short in establishing a robust, enforceable framework.

Despite the promise of constant AI system oversight and public channels for inquiry and challenges, there are evident lapses. The police’s nonchalance is evident from the absence of dedicated avenues for AI-related queries or concerns on their official website. With the police essentially overseeing themselves, coupled with the lack of an independent body to scrutinise their actions, the citizens are left in a precarious position.

As AI continues to gain momentum in the realm of governance, New Zealand, akin to its European counterparts, is confronted with the pressing need to introduce regulations. Such legal frameworks are paramount to ensuring that the police’s deployment of AI contributes constructively to society and doesn’t inadvertently exacerbate existing issues.

In conclusion, while the integration of AI into predictive policing offers an exciting frontier for enhancing law enforcement capabilities and efficiency, it is not without its challenges and ethical dilemmas. From the experiences of Durham Constabulary in the UK to the evolving landscape in New Zealand, the complexities of ensuring fairness, transparency, and privacy become evident. The juxtaposition of AI’s promise with its potential pitfalls underscores the imperative for stringent oversight, comprehensive legislation, and public engagement. As technology’s role in policing evolves, so must our approach to its governance, ensuring that in our quest for a safer society, we don’t inadvertently compromise the very values we aim to uphold.

Links

https://link.springer.com/article/10.1007/s00146-023-01751-9

https://www.deloitte.com/global/en/Industries/government-public/perspectives/urban-future-with-a-purpose/surveillance-and-predictive-policing-through-ai.html

https://daily.jstor.org/what-happens-when-police-use-ai-to-predict-and-prevent-crime/

https://www.npr.org/2023/02/23/1159084476/know-it-all-ai-and-police-surveillance

The Challenges of Employing Black Box Algorithms in Healthcare

First published 2021; revised 2023

In recent years, the healthcare industry has been rapidly adopting artificial intelligence (AI) and machine learning (ML) technologies to enhance diagnostics, treatment plans, and patient outcomes. Among these technologies, black box algorithms have gained attention because of their ability to process vast amounts of complex data. However, the use of black box algorithms in healthcare also presents a range of significant problems.

Black box algorithms, often based on deep learning models, have shown remarkable accuracy in various applications. They can autonomously learn patterns from data, allowing them to make predictions and decisions. Currently, most AI algorithms are referred to as ‘Black Box’ systems because these algorithms produce results without offering insight into the reasoning or logic behind their predictions. It is therefore impossible to know how the algorithm came to its conclusion. The algorithms are thought of as impenetrable black boxes: data is fed in and a result comes out. For example, when a person notices they have a friend recommendation on Facebook for somebody they don’t know, not even a Facebook engineer is able to explain how it happened. This introduces an inherent lack of transparency into the system.

While this opacity might be acceptable in certain domains, it poses challenges in healthcare. If a Black Box AI system makes a diagnosis, how it reached that diagnosis is unknown. By contrast, when a doctor reaches a diagnosis, they can justify their decision and show the working steps they took to reach it. Doctors swear the Hippocratic Oath (to do no harm) and are held professionally accountable for their actions when treating a patient. Just as each individual doctor is accountable for their actions, the NHS as a whole must be able to justify its actions in the treatment of patients. Black box systems pose a problem for accountability because their actions cannot be justified, and this in turn poses a problem for due process and medico-legal action.

One of the most pressing problems of employing black box algorithms in healthcare is their lack of transparency and interpretability. In the medical field, understanding the rationale behind an algorithm’s decision is crucial. To be used in a healthcare setting, AI systems must be transparent (opening the black box) and interpretable. Patients, doctors, and regulatory authorities require transparency in order to trust the recommendations made by AI systems. Black box algorithms, by their very nature, hinder the ability to explain why a certain diagnosis or treatment plan was suggested. This opacity can lead to scepticism, hinder accountability, and create ethical dilemmas. Transparency is necessary if healthcare workers and the public are to be able to trust the AI systems. Transparency is also required for good governance, and to ensure that healthcare professionals are not held liable for ‘mistakes’ made by AI if it is used to diagnose, or to help diagnose, conditions.

A proposed method of creating a more transparent system is to show the relative importance of different aspects of a specific piece of data using a saliency/attention map. The problem with attention maps is that they don’t explain anything other than where on the data the AI is looking. They often use colour shading to represent areas of importance, with red the most important and blue the least. If attention maps are used in a medical application, they could lead to misdiagnoses or missed diagnoses, and even to confirmation biases in high-stakes decision-making. Clearly, attention maps are not sufficient for AI justifiability, at least not on their own. Other methods of AI justifiability and interpretability are currently being explored, with the aim of producing a more robust solution for transparency.

Black box algorithms can inadvertently inherit biases present in the training data. In healthcare, biased algorithms could result in unequal treatment recommendations based on factors like gender, race, or socioeconomic status. This perpetuates disparities in patient care and can lead to severe consequences, both in terms of individual patient outcomes and broader societal implications. Ensuring fairness and equity becomes challenging when the decision-making process remains hidden within a black box. Ultimately, ‘Algorithmic Accountability’ is the recognition that AI mistakes or wrongdoings are fundamentally the result of human design and development. For autonomous and semi-autonomous systems to become widespread within a future NHS, there needs to be a clear legal course of action in the event of AI wrongdoing, protocols for use, and good governance in place.

Moreover, black box algorithms require large amounts of data for accurate predictions, and there’s a significant risk associated with data privacy and security. The healthcare industry is a prime target for cyber-attacks, given the sensitive nature of medical records. The integration of AI and ML into healthcare means an increasing amount of patient data is stored and processed digitally. While AI can greatly benefit diagnostics and treatment, there’s a potential for misuse, especially when the decision-making process is not transparent. Ensuring data integrity and patient confidentiality while using AI tools is paramount. It is vital for developers and the healthcare community to collaborate in establishing robust data protection measures, continuous monitoring, and swift response mechanisms to any breaches or irregularities.

In conclusion, while the promise and potential of AI and ML in revolutionising the healthcare landscape are undeniable, the challenges posed by black box algorithms cannot be overlooked. The key to their successful integration lies in ensuring transparency, accountability, and equity. As technology continues to evolve, a multi-disciplinary approach involving technologists, medical professionals, ethicists, and policymakers will be essential to harness the benefits of AI while mitigating its risks. A future where AI aids healthcare without compromising trust or ethical principles is not just desirable but essential for the continued advancement of medical science.

Links

https://www.facs.org/for-medical-professionals/news-publications/news-and-articles/bulletin/2023/july-2023-volume-108-issue-7/black-box-technology-shines-light-on-improving-or-safety-efficiency/

https://doi.org/10.5281/zenodo.3240529

https://www.investopedia.com/terms/b/blackbox.asp

https://doi.org/10.7326/0003-4819-124-2-199601150-00007